Getting the Measure of the Cosmos
Benjamin Skuse
Take away the satellites, telescopes and all the other modern technology (including Wikipedia and ChatGPT) we have at our beck and call, and could you apply logic and mathematics to the movement of the heavens and figure out the distance to the nearest stars, the Sun or the Moon? Could you calculate the circumference of the Earth? Measuring these cosmic distances is much more than tedious bookkeeping. It’s critical to understanding our place in the universe.
Around 240 BC, the Ancient Greek scholar Eratosthenes of Cyrene was the first to attempt one of these colossal measurements, estimating the Earth’s circumference. Perhaps surprisingly, it had by then been widely believed for some 250 years that the Earth was round, not flat, but no one had managed to devise a method to estimate how big our globe was. Nicknamed ‘beta’ by his contemporaries for being a jack of all trades, master of none, Eratosthenes used this quality to his advantage, combining accurate measurements with simple geometry to make his most famous accomplishment.
Eratosthenes had heard from travellers about a special well in Syene (now Aswan, Egypt). At noon on the summer solstice, 21 June, the entire bottom of this well was illuminated by the Sun without casting any shadows, indicating that the Sun was directly overhead (Syene roughly lay on the Tropic of Cancer). Knowing this, Eratosthenes waited until the next solstice and, using a gnomon (a measuring stick), measured the angle of the shadow cast by the Sun in Alexandria, where he lived, finding that it made an angle \(\alpha \) of about 1/50th of a full circle.
Assuming the Sun was so far away that its rays arrive at Earth essentially parallel, and that Alexandria was due north of Syene, all he needed was an accurate measure of the distance \( D \) between Alexandria and Syene to figure out the circumference of the Earth. For this, he called on the experts: bematists, professional surveyors trained to walk with equal-length steps. These bematists obtained a distance of 5000 stadia (~900 kilometres; though a stadion’s exact value is disputed, ranging from 150 to 210 metres, a reasonable guesstimate is 180 metres). Since \(\alpha \) was 1/50th of a full circle, the circumference is simply 50 \( \times \) \( D \): 5000 \( \times \) 50 = 250,000 stadia, or about 45,000 kilometres – pretty close to today’s accepted equatorial circumference of 40,075 km.
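The whole calculation fits in a few lines. Here is a minimal sketch in Python, assuming the disputed conversion of 180 metres per stadion:

shadow_fraction = 1 / 50        # Alexandria shadow angle as a fraction of a full circle
distance_stadia = 5000          # bematists' measured distance from Alexandria to Syene
metres_per_stadion = 180        # assumed conversion; anywhere from 150 to 210 m is plausible

circumference_stadia = distance_stadia / shadow_fraction            # 250,000 stadia
circumference_km = circumference_stadia * metres_per_stadion / 1000
print(circumference_km)         # 45000.0 km, versus the modern 40,075 km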

Fly Me to the Moon… and Sun
With the Earth’s circumference in your back pocket, could you then go on to measure the distance to the Moon, or even the Sun? Another Ancient Greek philosopher, Aristarchus of Samos, had already made huge strides towards making these calculations about 10 years before Eratosthenes’ epiphany. Though ignored by his peers, Aristarchus correctly assumed that the Sun lay at the centre of the Solar System with the Earth rotating and orbiting around it, and the Moon orbiting the Earth.

Continuing this reasoning, he deduced that lunar eclipses are caused by the Moon passing through Earth’s shadow, which he could assume was roughly as wide as Earth’s diameter. From these deductions, by observing lunar eclipses and drawing on timings tied to the lunar cycle, such as how long the Moon takes to set, Aristarchus judged that the Moon was roughly 1/3 the size of Earth.
This was the critical information he needed to calculate the distance to the Moon in terms of Earth’s diameter. If \( d = \) distance to the Moon, \( M = \) Moon diameter, and \( \theta = \) angular diameter of the Moon, which can be observed as the angle covered by the diameter of the Moon, then: \( d = M/\theta \).
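To see the formula in action with modern numbers (which Aristarchus, of course, did not have): the Moon’s diameter is \( M \approx 3{,}475 \) km and its average angular diameter is \( \theta \approx 0.52^{\circ} \approx 0.0090 \) radians, giving \( d \approx 3{,}475 / 0.0090 \approx 385{,}000 \) km, very close to the true average distance.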
Sadly for him, Aristarchus had no idea what the diameter of the Earth (and hence the Moon) was, and he seemingly made a poor estimate of the Moon’s angular diameter, resulting in a gross underestimation of the distance. Nevertheless, the likes of Hipparchus, Posidonius and Ptolemy later used similar principles to those of Aristarchus to estimate the Moon’s distance, and came up with numbers comparable to modern measurements. The distance is now known extremely accurately, on average 384,400 km, using the time it takes laser light to travel to the Moon and bounce back to Earth from reflectors that astronauts left on its surface.
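The arithmetic behind that laser measurement is simple in principle: light travels at \( c \approx 299{,}792 \) km/s and the round trip takes roughly 2.56 seconds, so \( d = c \times t / 2 \approx 299{,}792 \times 2.56 / 2 \approx 384{,}000 \) km.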

Aristarchus, Hipparchus and various other ancient astronomers were also interested in the distance to the Sun. However, none got close to measuring it with any accuracy. This would have to wait until a critical astrometric method was first wielded properly in 1716 by English astronomer Edmond Halley: parallax.
Parallax was actually proposed by Hipparchus centuries earlier as a way to measure the distance to the stars. He argued that if you know the distance to the Sun, you can measure how much the apparent position of a nearby star changes in relation to background stars between two opposite points of the Earth’s orbit (21 June and 21 December, for example). This gives you the parallax angle, from which you can calculate the distance to the star. But with just his eyes to go by, Hipparchus had no hope of measuring such a tiny angle, and the idea faded into obscurity.
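In modern terms, if the parallax angle \( p \) is half the apparent shift measured across the full 2 AU baseline of Earth’s orbit, then for small angles the distance is simply \( d \approx 1\,\mathrm{AU}/p \), with \( p \) in radians. In the units astronomers prefer: a star with a parallax of 1 arcsecond lies at a distance of 1 parsec (about 3.26 light-years), so the distance in parsecs is just \( 1/p \) with \( p \) in arcseconds.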

However, Halley revived it in a different context. Instead of using the Sun–Earth distance as a cosmic yardstick and tracking the apparent position of a nearby star against background stars, Halley proposed using the distances between different locations on Earth as his yardstick and the apparent position of Venus as it transited in front of the background Sun. Sadly for Halley, Venus only passes across the Sun’s face, from Earth’s perspective, twice every 120 years or so, and the next transits were not expected for another 45 and 53 years.

Halley would not live to see it, but the Venus Transit Expeditions of 1761 and 1769 are regarded as the first time astronomers from all over the world collaborated to measure an astronomical event. The combined data yielded a distance to the Sun of 153 \( \pm \) 1 million kilometres. This is within a few per cent of today’s accepted average distance of 149,597,871 kilometres, otherwise known as one astronomical unit (1 AU), established using modern methods like radar ranging and spacecraft telemetry.

Freed from the Solar System
With a reasonable estimate for the AU finally established, and telescope technology getting ever more impressive and accurate, parallax could be used for what Hipparchus originally intended: measuring distances to other stars. Friedrich Bessel is generally considered the first person to make a significant measurement of the distance to a star, 61 Cygni, which he published in 1838. His distance of 10.4 light-years (equivalent to about 657,700 AU or \( 9.8 \times 10^{13} \) km) is not too far off from measurements made today using the exact same parallax premise: 11.41 light-years.
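Using the relation above, Bessel’s result becomes a two-line calculation. A minimal sketch in Python (Bessel’s published parallax for 61 Cygni was roughly 0.31 arcseconds; the modern value is closer to 0.29):

PC_TO_LY = 3.2616    # 1 parsec in light-years

def parallax_to_light_years(p_arcsec):
    # distance in parsecs is 1 / parallax in arcseconds
    return (1.0 / p_arcsec) * PC_TO_LY

print(parallax_to_light_years(0.314))   # Bessel's 1838 value -> about 10.4 light-years
print(parallax_to_light_years(0.286))   # approximate modern value -> about 11.4 light-years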
Since this early result, the parallax method has been wielded in concert with ever more sophisticated telescopes. Most notable have been two recent satellite-borne telescopes from the European Space Agency. Between 1989 and 1993, the aptly named Hipparcos mission measured the parallax distances to more than 100,000 stars, 20,853 of which to better than 10% accuracy. More recently, Gaia operated from 2013 until its decommissioning earlier this year, providing extremely precise measurements of orders of magnitude more stars: it measured parallaxes for nearly 2 billion stars, around 150 million of which are expected to be accurate to better than 10%.

But even Gaia had its limits. An accuracy of 0.001% for the nearest stars plummeted to 20% for stars near the Milky Way’s centre 30,000 light-years away, and fell off even further for distant stars in other galaxies. These extragalactic stars are generally part of the fixed background for the parallax technique. Completely new methods would be needed to figure out their distances.
The key to the first of these new methods was discovered in the early 20th Century by US astronomer Henrietta Swan Leavitt. Leavitt observed a critical property in a type of star called a Cepheid variable, whose brightness rapidly increases and then slowly dims repeatedly. Observing Cepheids in the Small Magellanic Cloud, Leavitt uncovered a strong relation between these stars’ apparent brightness and the timescales over which their brightening and dimming pattern repeated: the brighter the star, the slower it blinked. With all the stars in the Small Magellanic Cloud presumably roughly the same distance from Earth, it could be inferred that each Cepheid’s average luminosity must be related to its period. The period–luminosity relationship, Leavitt’s law, was born.

If astronomers know the luminosity of a star and observe its apparent brightness here on Earth, the inverse square law of light immediately provides the distance to the star. However, at the time of Leavitt’s discovery, the period–luminosity relationship could not yet provide this luminosity measure, because it had not been calibrated against the actual distances of any Cepheid variable stars.
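Explicitly, a star of luminosity \( L \) at distance \( d \) spreads its light over a sphere of area \( 4\pi d^{2} \), so the flux (apparent brightness) we measure is \( F = L/(4\pi d^{2}) \), which rearranges to \( d = \sqrt{L/(4\pi F)} \).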

Just a year after Leavitt reported her results, Ejnar Hertzsprung (of Hertzsprung–Russell diagram fame) determined the parallax distances of several Cepheids in the Milky Way. This was all that was needed to wield Leavitt’s law to calculate the distance to any Cepheid. This achievement made Cepheids the first ‘standard candle’: an astronomical object with a known luminosity.
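In practice astronomers work with magnitudes rather than raw fluxes, but the logic is the same. Here is a minimal sketch in Python, using an illustrative, approximate period–luminosity calibration (not a modern one) and a hypothetical Cepheid:

import math

def cepheid_distance_parsecs(period_days, apparent_magnitude):
    # Leavitt's law, approximate illustrative calibration: absolute magnitude from the pulsation period
    absolute_magnitude = -2.8 * math.log10(period_days) - 1.4
    # distance modulus mu = m - M, then d = 10^(mu/5 + 1) parsecs (the inverse square law in magnitude form)
    distance_modulus = apparent_magnitude - absolute_magnitude
    return 10 ** (distance_modulus / 5 + 1)

# a hypothetical Cepheid with a 10-day period that appears at magnitude 10
print(cepheid_distance_parsecs(10, 10.0))   # roughly 7,000 parsecs (about 23,000 light-years)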
Different types of Cepheid were found after these early breakthroughs, slightly complicating measurements, but because Cepheids are so bright and common, they allow distance measurements today out to over 100 million light-years. Just as parallax becomes less accurate at greater distances, though, the same is true for Cepheids, even with the most advanced telescopes. To measure further, many astronomers use some of the most dramatic and energetic events in the universe: supernovae.
The Distant Universe
Supernovae are powerful stellar explosions that come at the end of a star’s life. “Type Ia” supernovae specifically are thought to be the explosions of white dwarfs – long-dead remnants of stars that had been slightly more massive than the Sun. Though they’re not quite standard candles like Cepheid variables, they are what could be described as really useful standardisable candles.

Type Ia supernovae are common throughout the universe, and therefore capable of being used as a yardstick wherever you look across the cosmos. More importantly, though, they have the useful property that the time it takes them to reach peak brightness and then to decline is correlated with their intrinsic luminosity. More luminous type Ia supernovae have broader light curves than less luminous type Ia supernovae, and there is a mathematical relation between the two properties.
From there, type Ia supernovae can be treated like Cepheids, inputting their luminosity and apparent brightness into the inverse square law of light to extract their distance. The most distant type Ia supernova observed to date is SN UDS10Wil, discovered with the Hubble Space Telescope and announced in 2013. Sitting 16.6 billion light-years away, SN UDS10Wil exploded in the universe’s early formative years, when it was just a third of its current size.
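A heavily simplified sketch of that two-step process in Python, in the spirit of the original ‘Phillips relation’ between light-curve width and luminosity (the coefficients and the supernova below are illustrative, not a real calibration; real analyses also correct for dust, redshift and other effects):

import math

def peak_absolute_magnitude(dm15):
    # Step 1: standardise the candle. dm15 is how much the supernova fades in the 15 days
    # after peak; broader (slower-fading) light curves mean more luminous explosions.
    return -21.7 + 2.7 * dm15          # approximate, illustrative coefficients

def supernova_distance_light_years(peak_apparent_mag, dm15):
    # Step 2: compare intrinsic and apparent brightness (the inverse square law in magnitude form)
    mu = peak_apparent_mag - peak_absolute_magnitude(dm15)   # distance modulus mu = m - M
    distance_parsecs = 10 ** (mu / 5 + 1)
    return distance_parsecs * 3.2616

# a hypothetical type Ia supernova fading by 1.1 magnitudes in 15 days and peaking at magnitude 19
print(supernova_distance_light_years(19.0, 1.1))   # roughly a billion light-years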
To the Edge of Reason
Could astronomers try any other tricks to measure even further than this? The answer is yes, and it all stems from Leavitt’s breakthrough over a century ago. In work published in 1929, Edwin Hubble combined distances measured using Leavitt’s law with galactic redshifts (a shift in the frequency of light emitted by a galaxy), which reveal the velocity at which galaxies are generally receding from Earth.
From this, he outlined his eponymous law, which says that this galactic recession happens at a speed \( v \) proportional to distance \( D \), \( v = {H_{0}} \times D \), and that the universe is therefore expanding. Like many of his forebears, Hubble got the theory right but the measurements wrong, grossly overestimating the factor \( H_{0} \), known as the Hubble constant, as 500 km/s/Mpc, when it is now known to be somewhere around 70 km/s/Mpc (we will not get into modern discrepancies in the value of the Hubble constant, known as the Hubble tension, in this post).
With a much better estimate of \( H_{0} \) today, Hubble’s law can be used to estimate the distance to galaxies out to the edge of the observable universe, simply by determining a galaxy’s recessional velocity (\( v \)) via observations of the redshift of the light it emits.
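For nearby galaxies (redshifts much smaller than 1), this is a one-line calculation. A minimal sketch in Python, assuming \( H_{0} = \) 70 km/s/Mpc:

C_KM_PER_S = 299_792.458   # speed of light
H0 = 70.0                  # assumed Hubble constant in km/s/Mpc

def hubble_distance_mpc(redshift):
    velocity = C_KM_PER_S * redshift   # recession velocity v ≈ c * z, valid only for z << 1
    return velocity / H0               # Hubble's law: D = v / H0, in megaparsecs

print(hubble_distance_mpc(0.01))       # z = 0.01 -> about 43 Mpc (roughly 140 million light-years)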
Amazingly, there are other techniques that offer a means of estimating great distances for extremely high-redshift objects. One is not really a separate technique but rather an extension of Hubble’s law to account for the changing expansion rate of the universe, using physicists’ current best understanding of how the universe formed and evolved, known as Lambda-CDM. The mathematics involved is too advanced to describe in a short blog post.
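Happily, standard software libraries hide that hard mathematics. As a sketch, assuming the astropy Python package is installed, its built-in Planck 2018 cosmology converts a redshift into the various flavours of cosmological distance in a couple of lines (1.914 is roughly the redshift of SN UDS10Wil mentioned above):

from astropy.cosmology import Planck18

z = 1.914
print(Planck18.luminosity_distance(z))   # distance inferred from how dim the object appears
print(Planck18.comoving_distance(z))     # the 'map' distance to where the object is today
print(Planck18.age(z))                   # how old the universe was at that redshift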
Another technique offers a glimpse into the very earliest phase of the universe’s creation. Baryon acoustic oscillations are the product of how the chaotic cosmic soup that was the early universe expanded and cooled, generating sound waves. These sound waves froze in place when atoms formed, around 380,000 years after the Big Bang. The imprint of these waves can be detected in the temperature power spectrum of the cosmic microwave background (the relic radiation left over from the Big Bang that fills all space in the observable universe) and even in wrinkles in the density distribution of clusters of galaxies spread across the universe today.
High-level mathematics, physics and astronomy combine to detect and interpret these oscillations, in the hope of peeking behind the cosmic curtain of the cosmic microwave background at the earliest and most distant structures imaginable, formed during the universe’s birth.

The post Getting the Measure of the Cosmos originally appeared on the HLFF SciLogs blog.