
Random triangles

At a basic level, a random triangle is simply a triangle whose corners are three random points on a piece of paper.

Mathematically speaking, a few decisions have to be made to characterize exactly how the random points are selected. Think of it this way: should every spot on the piece of paper be equally likely, or should points near the middle of the page be more likely to be selected than points near the borders?

In this module, we assume that the points are drawn from a bivariate normal distribution with unit variances and correlation \rho.
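If you want to reproduce the sampling offline, here is a minimal Python/NumPy sketch under that assumption (this is not the code behind the interactive module, and the helper name random_triangle is our own choice):

```python
import numpy as np

def random_triangle(rho, rng=None):
    """Sample the three vertices of one random triangle.

    Each vertex is an independent draw from a bivariate normal with
    zero mean, unit variances, and correlation rho.  Returns a (3, 2)
    array of (x, y) coordinates.
    """
    rng = np.random.default_rng() if rng is None else rng
    cov = [[1.0, rho], [rho, 1.0]]  # unit variances, correlation rho
    return rng.multivariate_normal([0.0, 0.0], cov, size=3)

print(random_triangle(rho=0.5))
```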

Mathematical Overview and Video Lecture by Gil Strang (MIT)


Play with random triangles!

The following module generates bunches of random triangles using the bivariate normal distribution with correlation coefficient \rho. The red triangles are obtuse, and the green triangles are acute (the likelihood of seeing a right triangle is 0, so right triangles don’t get a color). You can change \rho with the slider under the module. What happens as \rho approaches -1 or 1?
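Here is a rough Python sketch of what the module is doing under these assumptions (the helper name is_obtuse and the use of NumPy are our own choices, not the module’s actual code). It samples a small batch of triangles for one value of \rho and labels each one the way the colors do:

```python
import numpy as np

def is_obtuse(vertices):
    """Classify a triangle from its (3, 2) array of vertex coordinates.

    With squared side lengths sorted as s0 <= s1 <= s2, the triangle is
    obtuse exactly when s2 > s0 + s1 (and right when s2 == s0 + s1,
    an event of probability zero for these random triangles).
    """
    a, b, c = vertices
    sq = np.sort([np.sum((b - c) ** 2),
                  np.sum((a - c) ** 2),
                  np.sum((a - b) ** 2)])
    return sq[2] > sq[0] + sq[1]

# Color a small batch the way the module does, for one setting of rho.
rho = 0.5
rng = np.random.default_rng(0)
cov = [[1.0, rho], [rho, 1.0]]
batch = [rng.multivariate_normal([0.0, 0.0], cov, size=3) for _ in range(10)]
print(["red (obtuse)" if is_obtuse(t) else "green (acute)" for t in batch])
```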


In Professor Strang’s lecture he discusses what the triangles look like in “triangle space”. The basic idea is that every triangle has three angles, call them \alpha, \beta, and \gamma, which sum to 180^{\circ}. Since the angles determine the triangle’s shape (up to scaling, rotation, and reflection) and satisfy that single constraint, every triangle is represented by a single point (\alpha, \beta, \gamma) in the “triangle space”, which is itself a triangular region. Further, the triangle space can be broken into four regions.

In the diagram below, the regions of the triangle space are colored according to the kinds of triangles which are “zoned” to those regions: the red regions represent obtuse triangles, and the green region represents acute triangles. Notice that as \rho approaches -1 or 1, all of the triangles get pulled towards the corners. Why?
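One way to see the pull towards the corners numerically is to estimate the obtuse fraction for several values of \rho. The self-contained Monte Carlo sketch below (the helper name obtuse_fraction is our own) should show that fraction climbing towards 1 as \rho approaches -1 or 1, since the vertices become nearly collinear and the triangles nearly degenerate:

```python
import numpy as np

def obtuse_fraction(rho, n=20000, seed=0):
    """Estimate the probability that a random triangle is obtuse when its
    vertices are i.i.d. bivariate normal with unit variances and correlation rho."""
    rng = np.random.default_rng(seed)
    cov = [[1.0, rho], [rho, 1.0]]
    count = 0
    for _ in range(n):
        a, b, c = rng.multivariate_normal([0.0, 0.0], cov, size=3)
        sides = sorted([np.sum((b - c) ** 2),
                        np.sum((a - c) ** 2),
                        np.sum((a - b) ** 2)])
        if sides[2] > sides[0] + sides[1]:  # largest squared side beats the other two: obtuse
            count += 1
    return count / n

for rho in (0.0, 0.5, 0.9, 0.99):
    print(f"rho = {rho:5.2f}   obtuse fraction ~ {obtuse_fraction(rho):.3f}")
```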


Statistics notation


Introduction

Probability and statistics is replete with all sorts of strange notation. In this module, we try to clarify some notation that we use in other modules. In doing so, we provide a very brief outline of the foundations of probability and statistics. We do this at various levels of mathematical sophistication. Feel free to peruse the levels to find the one which best fits where you’re at.

The experimental setup

Every statistics problem begins with an experiment denoted \mathcal{E}. It can be someone flipping a coin, determining the time it takes for a cell to divide, or determining whether a certain drug is effective – it doesn’t matter.

Of course, every experiment \mathcal{E} has an outcome. For example, when flipping a coin, there are two possible outcomes, heads H and tails T. The collection of all possible outcomes of an experiment we denote \mathcal{S} and call the sample space. Mathematically, \mathcal{S} is a set. For example, in the case of flipping a coin, \mathcal{S} = \{H, T\}.

The set-theoretic foundations of probability

Subsets of the sample space, i.e. collections of outcomes of the experiment \mathcal{E}, are called events. In most cases, it is not useful to simply assign a probability to every individual outcome in the sample space. Instead, we usually assign probabilities to events, that is, to subsets of \mathcal{S}.

At this point a little set theory helps and sets the stage for all of probability theory. In this article we just give the basic idea; for a more advanced exposition, look for books on measure-theoretic probability such as Resnick’s A Probability Path or Billingsley’s Probability and Measure. Both are advanced texts and assume at least an undergraduate level of mathematics. The Wikipedia probability outline is also a handy resource.

Onward! For any set \mathcal{A}, the power set of \mathcal{A} is the set of all subsets of \mathcal{A}; it’s denoted \mathcal{P}(\mathcal{A}). For example, the subsets of \mathcal{S} = \{H, T\} are \{H, T\} itself, \{H\}, \{T\}, and \emptyset = \{\}, the so-called empty set, which is by definition a subset of any set (as is the set itself). So the power set is \mathcal{P}(\mathcal{S}) = \big\{\{H,T\}, \{H\}, \{T\}, \{\}\big\}. In general, if a set has n elements, then its power set has 2^n elements. In the coin-flipping case, \mathcal{S} has 2 elements, so the power set has 2^2 = 4 elements.
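To make this concrete, here is a short Python sketch (the helper name power_set is our own) that enumerates the power set of a finite sample space and confirms the 2^n count for the coin flip:

```python
from itertools import combinations

def power_set(outcomes):
    """Return every subset of a finite sample space as a list of frozensets."""
    outcomes = list(outcomes)
    return [frozenset(c)
            for r in range(len(outcomes) + 1)
            for c in combinations(outcomes, r)]

S = {"H", "T"}                       # sample space for one coin flip
events = power_set(S)
print(events)                        # the 4 events: {}, {H}, {T}, {H, T}
print(len(events) == 2 ** len(S))    # True: n elements gives 2**n subsets
```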

We are now at a place where we can define a probability. A probability is a function, usually denoted P, which assigns a number to every element of the power set of the sample space. Of course, not just any function will do. The function P must satisfy the following three properties to be a probability:

  1. The probability of the sample space is 1: P(\mathcal{S}) = 1.
  2. Probabilities can’t be negative: for any event \mathcal{A} \in \mathcal{P}(\mathcal{S}), P(\mathcal{A}) \geq 0.
  3. If \mathcal{A} and \mathcal{B} are disjoint sets (they don’t contain any of the same elements), then P(\mathcal{A} \cup \mathcal{B}) = P(\mathcal{A}) + P(\mathcal{B}).
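As a sanity check on these three properties, here is a small Python sketch (the helper name is_probability and the fair-coin numbers are illustrative choices of ours) that tests a candidate assignment on the power set of \{H, T\}:

```python
from itertools import combinations

def is_probability(P, S):
    """Check the three properties for a candidate probability P, given as a
    dict mapping each event (a frozenset of outcomes) to a number."""
    events = list(P)
    total_mass = P[frozenset(S)] == 1                      # property 1
    nonneg = all(P[A] >= 0 for A in events)                # property 2
    additive = all(                                        # property 3
        abs(P[A | B] - (P[A] + P[B])) < 1e-12
        for A, B in combinations(events, 2)
        if not (A & B) and (A | B) in P)
    return total_mass and nonneg and additive

# A fair coin: each outcome gets mass 1/2.
P = {frozenset(): 0.0,
     frozenset({"H"}): 0.5,
     frozenset({"T"}): 0.5,
     frozenset({"H", "T"}): 1.0}
print(is_probability(P, {"H", "T"}))   # True
```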

Random variables and the relabeling of the elements of the sample space \mathcal{S}