### Deterministic and Random Processes

Intuitively, we can distinguish between a deterministic event and a random one. A deterministic event is a process whose development we can precisely compute: the path of a stone in the air, the track of a car on a highway, or the sun rising in the east every morning at a predictable time.

A random process, by contrast, is linked to events we cannot compute in advance: the result of a coin toss, the next number drawn in a lottery, or the number and colour of the next roulette spin.

In probability theory, a random process is usually called a stochastic process (this has nothing to do with the Stochastic Oscillator).

### The Coin Toss

When tossing a coin, there is no way to know in advance which side will land face up. Over many tosses, however, we can compute some useful facts.

Pierre-Simon Laplace (1749-1827) defined the probability of an event as the ratio of the number of ways that event can happen to the total number of possible outcomes. For the coin toss, the number of ways heads can happen is one, and the total number of outcomes is two: heads or tails. Therefore, the probability of heads is 0.5. We also know that the probabilities of heads and tails must sum to 1, so the probability of tails is also 0.5.

A die has six sides, so the total number of possible outcomes is six. The probability of getting a 6 is therefore 1/6, or about 0.1667.
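Laplace's ratio is easy to check in code. Below is a minimal sketch; the function name `laplace_probability` is my own, not standard terminology:

```python
from fractions import Fraction

def laplace_probability(favorable, total):
    """Laplace's definition: the number of favourable outcomes
    divided by the total number of equally likely outcomes."""
    return Fraction(favorable, total)

# Coin toss: one way to get heads out of two possible outcomes
print(laplace_probability(1, 2))         # 1/2
# Die roll: one way to get a six out of six possible outcomes
print(float(laplace_probability(1, 6)))  # 0.1666...
```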

### Independent versus dependent processes

Random processes can be divided into two categories:

Independent processes: events whose probability of occurrence does not depend on previous events. Examples are the coin toss and the rolling of a die. Their probability of occurrence is always the same, regardless of past outcomes.

Dependent processes: events whose probability of occurrence is affected by past events. Card games such as poker are an example: the probability of getting a particular card depends on the cards already dealt. Every time a card is dealt, the composition of the deck changes, and so do the probabilities of every card remaining in it.

### Are Trade outcomes independent or dependent processes?

The majority of trading systems show independent outcomes, although some systems' results are dependent. Before treating a system as a dependent stochastic process, however, that dependence has to be analysed and confirmed, because treating results as dependent when they are not may cause a drop in performance.

Although we should treat our system's output as independent, we usually act as if it were not, for instance when a trader skips a trade after a loss or, in the same circumstance, doubles the trade size. That is why the trader should treat the trading strategy (entry, exit, stops) separately from the trade-size decision, unless there is a parameter in the system that computes the individual probability of winning. In that case, the trade size should be linked to it.

### Mathematical Expectation

Mathematical Expectation (ME) is a concept known to gamblers as the player's edge (if positive) or the house's advantage (if negative).

Let’s define P as the probability of winning and RR as the reward-to-risk ratio. Then,

**ME = (1+RR) x P – 1**

As an example, let’s compute the Mathematical Expectation of a trading strategy with a probability of winning P = 0.5 (50%) and RR = 2.

**ME = (1 + 2) x 0.5 – 1**

**= 3 x 0.5 – 1**

**= 1.5 – 1**

**= 0.5**

This means, on average, you’d get 50 cents for every dollar risked on this strategy.

This formula only works for the case of two possible outcomes. There is a general formula when there are more outcomes:

**ME = ∑(P_{i} x RR_{i}), for i = 1 to N**

Where N is the number of possible outcomes.

For the purpose of measuring the performance of a system, the simplified formula is enough.
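Both formulas can be sketched in a few lines of Python (the function names are my own):

```python
def me_two_outcomes(p, rr):
    """Simplified formula for a win/lose system: ME = (1 + RR) x P - 1."""
    return (1 + rr) * p - 1

def me_general(probs, rrs):
    """General formula: ME = sum of P_i x RR_i over all N outcomes."""
    return sum(p * r for p, r in zip(probs, rrs))

print(me_two_outcomes(0.5, 2))          # 0.5, the worked example above
# The same system as two explicit outcomes: win +2R with p = 0.5,
# lose -1R with p = 0.5
print(me_general([0.5, 0.5], [2, -1]))  # 0.5
```

Note that the general formula reproduces the two-outcome result when the losing outcome is written with RR = -1.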

## Random Sequences

We have learned that a fair coin-toss game has two possible outcomes: heads or tails. What happens if we use two coins?

With two coins we have four possible cases:

| Coin 1 | Coin 2 | Probability |
| ------ | ------ | ----------- |
| Heads  | Heads  | 0.25        |
| Heads  | Tails  | 0.25        |
| Tails  | Heads  | 0.25        |
| Tails  | Tails  | 0.25        |

We see that the chance of both heads is 0.25, both tails is 0.25, and a mix of heads and tails is double that, or 0.5.
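The table can be reproduced by enumerating every combination, as in this short sketch:

```python
from itertools import product

# All four equally likely combinations of two coins
outcomes = list(product(["Heads", "Tails"], repeat=2))
p_each = 1 / len(outcomes)  # 0.25 per combination

p_two_heads = sum(p_each for o in outcomes if o == ("Heads", "Heads"))
p_mixed = sum(p_each for o in outcomes if o[0] != o[1])

print(p_two_heads)  # 0.25
print(p_mixed)      # 0.5
```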

We could go on and use four coins; the distribution of results would be similar to the chart below:

If we continue this progression with more and more simultaneous coin tosses, we end up with a normal (Gaussian) distribution.

Using this analytical tool, we can plot the results of a trading system. Below is the histogram of the normalized losses of a system.

Or, based on its past history, plot the probability of a losing streak.

### Main Parameters of the Normal Distribution

The Normal Distribution can be described by two parameters: its mean and its standard deviation.

#### The Mean

The mean is the expected value of the distribution, its average value. For instance, in a coin-toss game winning 1 dollar on heads and losing it on tails, the expected gain after 100 bets is zero. If we played the same game with a 2:1 reward, as in our previous example, the expected value would be the expectancy of each bet multiplied by the number of bets. In this case, 0.5 x 100 = 50.

The mean of a distribution D is the sum of its components d_{i} divided by their number N.

**Mean(D) = 1/N x ∑ d_{i}**
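A quick sketch of the mean formula, applied to a hypothetical sequence of bets from the 2:1 game:

```python
def mean(data):
    """Mean(D) = 1/N x sum of d_i."""
    return sum(data) / len(data)

# Hypothetical outcomes of the 2:1 game: +2 on a win, -1 on a loss,
# alternating at the expected 50% win rate over 100 bets
bets = [2, -1] * 50
print(mean(bets))  # 0.5, matching the per-bet expectancy above
```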

#### Ways to Measure the Error, or Deviation from the Mean

This quantity measures the average error of the distribution. If every member of the distribution had the same value, the error would be zero.

There are many ways to measure deviation (dev). The most intuitive is to take the distance of each sample to the centre:

**dev = d_{i} – Mean**

Some values will be positive, for samples above the mean, and some negative, for samples below it. Therefore, we could take the absolute value of the deviation instead:

**Abs dev = Abs(d_{i} – Mean)**

But this measurement is hard to compare across distributions with different means. The relative dev corrects that:

**Relative dev = Abs dev / Mean**

One way to get rid of the absolute-value operation is to square the error:

**Square dev = (d_{i} – Mean)^{2}**
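The three measures can be compared side by side on a small, hypothetical data set:

```python
data = [4.0, 6.0, 5.0, 5.0]
m = sum(data) / len(data)  # mean = 5.0

signed = [x - m for x in data]         # plain deviations
absolute = [abs(x - m) for x in data]  # absolute deviations
squared = [(x - m) ** 2 for x in data] # squared deviations

print(sum(signed))                # 0.0: signed deviations cancel out
print(sum(absolute) / len(data))  # 0.5: mean absolute deviation
print(sum(squared) / len(data))   # 0.5: mean squared deviation
```

The first printout shows why the plain deviation is useless on its own: positive and negative errors cancel.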

#### The Variance and the Standard Deviation

Every member of the data set has its own Square dev value. The Variance is the expected value (the average) of the distribution of Square dev:

**Var(D) = E[Square dev]**

**Var(D) = 1/N x ∑(d_{i} – Mean)^{2}**

The **Standard Deviation (std)** is the square root of the Variance.

**std(D) = sqrt(Var(D) )**
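Both definitions translate directly into code. A sketch, using a hypothetical sequence of trade outcomes from the 2:1 system (note these formulas compute the population variance, dividing by N rather than N - 1):

```python
import math

def variance(data):
    """Var(D) = 1/N x sum of (d_i - Mean)^2."""
    m = sum(data) / len(data)
    return sum((x - m) ** 2 for x in data) / len(data)

def std(data):
    """std(D) = sqrt(Var(D))."""
    return math.sqrt(variance(data))

trades = [2, -1, 2, -1, 2, -1]  # hypothetical R-multiples of a 2:1 system
print(variance(trades))  # 2.25
print(std(trades))       # 1.5
```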

On normal distributions, about 68% of the values fall within 1 std of the centre, 95% within 2 std, and 99.7% within 3 std.
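These percentages can be verified with the standard library's `statistics.NormalDist`:

```python
from statistics import NormalDist

nd = NormalDist(mu=0.0, sigma=1.0)  # standard normal distribution
for k in (1, 2, 3):
    within = nd.cdf(k) - nd.cdf(-k)  # probability mass within k std
    print(f"within {k} std: {within:.4f}")
# within 1 std: 0.6827
# within 2 std: 0.9545
# within 3 std: 0.9973
```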

In the next article of this series, we will analyse the properties of the Normal Distribution and how it helps us measure our trading system’s performance.