The European Space Agency has just published a collection of Science Talks on the Planck and Herschel missions (and a few joint ones). These have been prepared by some of the leading scientists involved in the missions, and I think it’s good that they’re being made public. Talks like this can sometimes be a little hard to follow without the voice of the speaker explaining what each slide is about. In particular, there are quite a few graphs – which can be confusing to anyone unfamiliar with the subject.
One of the Planck Science Talks, titled “Planck: Understanding the Big Bang”, by George Efstathiou, shows how much more accurately Planck will measure the properties of the Cosmic Microwave Background (CMB for short) than previous missions. Most of the graphs which illustrate this show the various detections with an x-axis labelled simply “l” (an “ell” in a script font). This represents something called a “multipole“, which is a way of characterising the statistical properties of, say, the CMB. A higher multipole means a smaller angle, with a multipole of 200 corresponding to a size of 1 degree on the sky (twice the size of the full moon).
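As a rough rule of thumb, the multipole l corresponds to an angular scale of about 180 degrees divided by l, which is where the “multipole of 200 means about 1 degree” figure comes from. A minimal sketch of that conversion (the function name is just for illustration):

```python
def multipole_to_degrees(ell):
    """Approximate angular scale on the sky, in degrees, for multipole ell.

    Rule of thumb: theta ~ 180 degrees / ell.
    """
    return 180.0 / ell

print(multipole_to_degrees(200))  # 0.9 -- roughly 1 degree
print(multipole_to_degrees(2))   # 90.0 -- the very largest scales
```

So low multipoles probe the largest patches of sky, and high multipoles probe the fine detail.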
When the points are marked on these graphs, they almost all have error bars (the vertical lines through the points, or sometimes shaded areas), which indicate the uncertainty in the measurement. The key thing to note is that the error bars for Planck are much smaller than those for previous experiments. Such uncertainties are inherent to almost any scientific measurement, because there is always a certain (hopefully small) amount of random variation, or “noise”, in the measurement. To reduce the noise, an experiment has to measure the same thing many, many times. There are several ways in which Planck ensures it has small error bars relative to other experiments: the detectors are more sensitive, so they have less inherent noise; there are several detectors at each frequency, so each bit of sky is seen by several detectors; and Planck will scan the whole sky several times over the course of its lifetime, so that each bit of sky is measured many, many times by each detector.
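The reason repeated measurements help is that the scatter of an average of n noisy measurements shrinks roughly as one over the square root of n. A toy sketch (the numbers here are made up, purely to show the trend):

```python
import random

random.seed(1)

def measure(true_value=1.0, noise=0.5):
    """One noisy measurement: the true value plus random Gaussian noise."""
    return true_value + random.gauss(0.0, noise)

def averaged(n):
    """Average of n repeated measurements of the same quantity."""
    return sum(measure() for _ in range(n)) / n

# Repeat the whole experiment many times and see how much the
# averaged result scatters -- the scatter is the "error bar".
for n in (1, 100, 10000):
    estimates = [averaged(n) for _ in range(200)]
    mean = sum(estimates) / len(estimates)
    spread = (sum((e - mean) ** 2 for e in estimates) / len(estimates)) ** 0.5
    print(n, round(spread, 3))
```

Going from 1 to 100 measurements shrinks the error bar by a factor of about 10, which is why scanning the same sky over and over pays off.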
There’s an additional source of error which limits any experiment measuring the Universe on its largest scales: the fact that we only have one Universe. The theories of the Universe’s evolution say that the Universe should follow the rules on average. For example, the number of galaxies per square degree on the sky is “X”. We can’t just pick a square degree and count the galaxies, since galaxies exist in clusters, and we might just happen to pick the middle of a big cluster, or a space between two clusters, which would affect the value we measure. This is something called “sample variance”, because there is variation between the separate “samples” you pick. So to get a better idea of what value X is, you have to look at a lot of square degrees over the sky.
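To see why one patch isn’t enough, here is a toy model (entirely invented numbers) of a clustered sky: each square-degree patch lands either on a cluster or a void, so a single patch is misleading, but averaging many patches recovers the true value of X:

```python
import random

random.seed(2)

def patch_count():
    """Galaxy count in one square-degree patch of a toy clustered sky:
    half the time we hit a cluster (lots of galaxies), half the time
    a void (very few)."""
    if random.random() < 0.5:
        return random.randint(40, 80)   # landed on a cluster
    return random.randint(0, 10)        # landed on a void

one_patch = patch_count()
many_patches = sum(patch_count() for _ in range(1000)) / 1000

print(one_patch)     # could be anywhere from ~0 to ~80
print(many_patches)  # close to the true average of ~32.5
```

A single sample can be wildly off; the average over many samples converges on X.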
The CMB is very similar. We can measure how much the CMB varies over small scales by measuring a lot of these small scales over the entire sky. But what about the large scales, such as those close to the size of the Universe (or at least as much of it as we can see)? We can’t pick a lot of those scales to measure, because there are only a few of them to look at. So we don’t know if our bit of the Universe is a bit special – a bit more or less dense, for example. This is known as “cosmic variance”, because there is variation between the bits of the cosmos we can sample. It is unfortunately completely impossible to overcome.
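This floor can actually be quantified. At multipole l there are only 2l + 1 independent patterns on the whole sky, so even a perfect full-sky experiment can never measure the power spectrum at that multipole to better than a fractional uncertainty of sqrt(2 / (2l + 1)) – the standard full-sky cosmic-variance expression. A quick sketch:

```python
import math

def cosmic_variance_fraction(ell):
    """Minimum fractional uncertainty on the CMB power spectrum at
    multipole ell, from cosmic variance alone: sqrt(2 / (2*ell + 1))."""
    return math.sqrt(2.0 / (2 * ell + 1))

for ell in (2, 10, 200, 2000):
    print(ell, round(cosmic_variance_fraction(ell), 3))
```

At l = 2 (the very largest scales) the floor is over 60%, while at l = 2000 it is only a couple of percent – which is exactly why the large-scale points on those graphs will always have big error bars, no matter how good the detectors are.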
So the conclusion of this mini-diatribe is that Planck will do far better than past and current experiments at measuring the properties of the CMB. But it can only do so well, since there’s an inherent uncertainty in our Universe itself.