# Astronomer Discovers New Way to Measure the Distance to Other Galaxies

A research astronomer at Mount Wilson Observatory has discovered a new way to measure the distances to other galaxies.

E. P. Hubble is a former college athlete and US Army officer who somehow ended up in the stargazing business. Shortly after his discharge, he was hired by Mount Wilson, where he soon developed his innovative trick.

Hubble’s technique builds on the work of previous scientists, and uses a special type of variable star. Early on, astronomers recognized that the bright star Delta Cephei changes brightness about every five and a half days. Over time, quite a few similar stars have been discovered, with different periods and different brightnesses.

The brightness of Delta Cephei varies regularly over several days.

For a long time, it wasn’t clear whether brightness related to period, because how bright a star looks from Earth (known as its apparent magnitude) depends both on the star’s intrinsic luminosity (its absolute magnitude) and on how far away it is. Since scientists couldn’t measure the distance to any of these stars, called Cepheid variables after their prototype, the relationship between their brightness and period was a mystery.

That is, until an assistant astronomer named Henrietta Leavitt started studying them.

Leavitt focused on stars in the Small Magellanic Cloud, an irregular dwarf galaxy on the outskirts of the Milky Way’s neighborhood. The Cloud is so far away that its own size is negligible compared to its distance from the solar system, so every star in it is effectively equidistant from us. Leavitt reasoned that the Cepheid variables which looked brighter there must be more luminous not just in appearance but in fact, so she could determine whether luminosity had anything to do with period.

As it turns out, brighter Cepheid variables have a longer period, while dimmer ones have a shorter period. With a little math, astronomers can work out the true brightness of a Cepheid variable from how long it takes to complete a full cycle.
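With the relation calibrated, that “little math” is a distance-modulus calculation. Here is a sketch of how it works in modern terms; the period-luminosity coefficients below are rough illustrative values, not Leavitt’s or Hubble’s actual calibration:

```python
import math

def absolute_magnitude(period_days, a=-2.43, b=-4.05):
    """Luminosity (as absolute magnitude) from a Cepheid's period.

    Uses the linear form M = a*(log10 P - 1) + b; the coefficients
    here are rough modern V-band values, quoted only for illustration.
    """
    return a * (math.log10(period_days) - 1.0) + b

def distance_parsecs(apparent_mag, absolute_mag):
    """Invert the distance modulus m - M = 5 log10(d / 10 pc)."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# A hypothetical 10-day Cepheid observed at apparent magnitude 15:
M = absolute_magnitude(10.0)
print(round(distance_parsecs(15.0, M)))  # roughly 65,000 parsecs
```

Measure the period, read off the true brightness, compare it to the apparent brightness, and the distance falls out.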

Harlow Shapley, later director of the Harvard College Observatory, realized that this relationship could be used to determine true distances in space. Shapley calibrated the scale statistically, combining the proper motions of nearby Cepheids with the radial velocities indicated by the Doppler shift of their spectral lines.

Hubble was the first astronomer to apply this tool to measuring galactic distances. He made a dedicated search for Cepheid variables in other galaxies, the Andromeda Galaxy in particular.

The results were shocking. All the galaxies where Hubble found Cepheids turned out to be much further away than scientists had previously thought. Several other teams have already confirmed the findings, but it will probably take astronomers and physicists years to figure out all the implications.

Happy Amazing Breakthrough Day!

# Asimov on Entropy

Isaac Asimov’s science book View From A Height dedicates an entire chapter to explaining the concept of entropy. Assuming you have a decent background in the physical sciences, it does an excellent job, even better, I daresay, than my thermodynamics professor managed. So far the whole book has been a worthwhile read, but that essay in particular may be instructive to those interested in the topic.

Asimov concludes the chapter by presenting a very appealing hypothesis:

> [E]ven if the universe were finite, and even if it were to reach “heat-death,” would that really be the end?
>
> Once we have shuffled the deck of cards into complete randomness, there will come an inevitable time, if we wait long enough, when continued shuffling will restore at least a partial order.
>
> Well, waiting “long enough” is no problem in a universe at heat-death, since time no longer exists there. We can therefore be certain that after a timeless interval, the purely random motion of the particles and the purely random flow of energy in a universe at maximum entropy might, here and there, now and then, result in a partial restoration of order.
>
> It is tempting to wonder if our present universe, large as it is and complex though it seems, might not be merely the result of a very slight random increase in order over a very small portion of an unbelievably colossal universe which is virtually entirely in heat-death.
>
> Perhaps we are merely sliding down a gentle ripple that has been set up, accidentally and very temporarily, in a quiet pond, and it is only the limitation of our own infinitesimal range of viewpoint in space and time that makes it seem to ourselves that we are hurtling down a cosmic waterfall of increasing entropy, a waterfall of colossal size and duration.

This is an intriguing idea. It suggests an alternative possibility for the fate of the cosmos than “eternal coldness”. Presuming that black holes don’t end up consuming all the matter in the universe, and proton decay turns out to not occur, then it might be possible to square a sort of steady-state theory with the existence of entropy.

Entropy isn’t merely disorder—though disorder is certainly a part of it. The Second Law of Thermodynamics tells us that energy does not spontaneously flow from cold areas to hot areas. Only by applying work can we force the flow in the opposite direction. Work, however, can only be extracted from the flow of energy from a hot reservoir to a cold one. Thermal efficiency is based on the temperature difference between the two reservoirs:

$\eta_{th} = \frac{T_H - T_C}{T_H}$

Where $\eta_{th}$ is the thermal efficiency, and $T_H$ and $T_C$ are the temperature of the hotter and cooler reservoirs, respectively. As an example, if a warm reservoir is at 500 K and a cooler reservoir is at a mere 350 K, then the maximum thermal efficiency of a work-extracting cycle between these two reservoirs is 30%. (But don’t take my numbers for granted. Check it yourself!)
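Checking that arithmetic takes only a couple of lines:

```python
def carnot_efficiency(t_hot, t_cold):
    """Maximum thermal efficiency between two reservoirs (temperatures in kelvin)."""
    if t_cold > t_hot:
        raise ValueError("hot reservoir must be at least as warm as the cold one")
    return (t_hot - t_cold) / t_hot

# The example from the text: 500 K and 350 K reservoirs.
eta = carnot_efficiency(500.0, 350.0)
print(f"{eta:.0%}")  # 30%
```

Swap in your own reservoir temperatures (in kelvin) to check other cases.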

Note that the absolute quantity of energy in either reservoir is irrelevant. We are only concerned with their relative values. Work cannot be extracted by placing two equally hot reservoirs in contact, even if both are at 10,000 °C.

It is, of course, theoretically possible that the random motion of individual particles might provide a very small amount of usable work. However, this is exceedingly unlikely. Asimov gives an extreme example: could water in a pot freeze while the fire beneath grows hotter? Theoretically, yes. The laws of statistical thermodynamics do not forbid it. But even if the entire universe were filled with such pots, and we waited for eons and eons, we would not realistically expect to see a single pot significantly cool, let alone freeze.
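A toy calculation conveys the scale of the improbability. Treat each molecule as a coin flip that lands “ordered” or not; the chance that all N flips come up ordered at once is $2^{-N}$. This is a crude stand-in for the real statistical mechanics, not Asimov’s own math:

```python
import math

def log10_probability_all_ordered(n_molecules):
    """log10 of the chance that n independent coin-flip 'molecules'
    all land on the ordered side simultaneously."""
    return -n_molecules * math.log10(2)

# Just 100 molecules already gives odds of about 1 in 10^30;
# a real pot of water holds more than 10^25 molecules.
print(log10_probability_all_ordered(100))
```

Scale that exponent up to a pot’s worth of molecules and “exceedingly unlikely” starts to look like an understatement.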

As time goes on, we will approach universal thermal equilibrium. Extracting useful work, of any form, will become impossible. Work is energy, and life depends on a continuous source of energy, so all forms of life will perish.

This, naturally, can be a bit frightening to think about, especially if you are very young when you learn about it, as I was. A cosmological expiration date seemed like a very serious problem, because it meant that all of our efforts would necessarily be in vain. If the universe ends regardless, social morality seems farcical. Rank hedonism looked like the only alternative, and my early attempts to reject that conclusion went nowhere.

Objectivism helped me out of that trap, though its presumption of an inexhaustible universe remains problematic. But that doesn’t matter if morality is not social but personal, and the purpose of existence is Apollonian joy rather than a greater obligation.

Still, the possibility that the universe can randomly reorganize even after heat death offers the last mind some hope that joy won’t go out of existence forever. A silent eternity would pass, and then…something. A universe appears again from the darkness.

Doesn’t that sound familiar?

I’m tempted by this hypothesis not merely because it offers hope for the universe, but also because it helps get around one of the frequently-asked unanswered questions about the Big Bang: what happened beforehand?

So far, we don’t know. Did time even exist before the Big Bang? Did conservation of mass-energy already apply? I haven’t studied enough astrophysics to pretend to answer such questions.

Random reordering gets around this issue. The universe we know is a ripple in a wider ocean of equilibrium particles. Entropy is maximized; everything sits in the lowest available energy state. By chance, some of this matter happened to organize itself. Net entropy continues to increase from that point on, so, as far as I can tell, the Second Law allows it.

Since the time for a “dead” universe to randomly form an orderly patch of significant size must be incredible, it would be no surprise if the photonic evidence of previous ordered periods had been entirely absorbed or diffused. Photons, being massless, don’t decay, but over such a long period they would no doubt either be absorbed by the near-equilibrium particles or be spread so thin that they are simply swamped by more recent light. No instruments could possibly detect them.

But just because a hypothesis would be personally comforting does not forgive a lack of evidence. We should seek contradicting evidence for all hypotheses, regardless of our feelings toward them. Falsification is how science works.

Does random reordering fit the evidence we’ve already gathered about the early universe?

I’m little more than a layman, but generally speaking, the answer is: not really.

We have a pretty good picture of everything that happened more than a second after the Big Bang, and for a good while before that. A lot is based on astronomical data, such as the cosmic microwave background measurements gathered by COBE, WMAP, and Planck. The remainder comes from particle accelerator experiments, from which physicists can build models that extrapolate back even further.

The current theories don’t look very much like the result of randomly reshuffling baryons or leptons. It looks like a lot of matter being created ex nihilo, with antimatter somehow in the slight minority. Possibly a reshuffling at a much lower scale occurred, well after proton decay and the like had evaporated the particles we’ve come to expect—I have an idea about how that might work, but I won’t burden you with more unwarranted speculation.

More study is clearly needed: better space telescopes and more powerful particle accelerators to give us data, faster supercomputers to process it, maybe some mathematical breakthroughs. It will probably take a while to get a better estimate of the odds, but until then, I would put a low prior on the likelihood that our universe is a temporary reprieve from heat-death.

Full-sky image of the cosmic microwave background, gathered over nine years by the Wilkinson Microwave Anisotropy Probe.

Source: NASA/WMAP Science Team

However, there is a bigger philosophical question here: a reorganization hypothesis does not explain the origin of the universe, it just moves the cosmological problem up a step. Our universe being a mere ripple on a larger heat-dead ocean doesn’t tell us where that ocean came from. Did that universe have a Big Bang? Is it cyclical? Is it Steady-State? We still have to answer the same questions, and now we have less data!

(Of course, if it does turn out to be true, then we’ll just have to make do with less data. But that’s a methodological question.)

Trying to explain why there is something at all isn’t necessarily a hard question, but explaining why existence started to exist 13.8 billion years ago is a bit trickier. At this point, perhaps the simulation hypothesis is a decent pseudo-explanation. You can’t make very many predictions with it, so I wouldn’t call it a real explanation. That said, it does manage to constrain our anticipation to some degree. And there is some evidence for it.

Whatever reality is, we’ve still got at least some distance further to walk on the path to Truth. It’s tempting to take a short-cut through speculation and a priori arguments, but those are distractions. If we want to be sure, we have to do things right. Proposing hypotheses is part of that process—but so is rejecting them. As tempting as random reorganization is, I’d be happy to reject it with a little counterevidence.

# Book Review: Life in the Universe

Life in the Universe was the assigned textbook for the astronomy class I took in the fall of 2015. The course is titled the Quest for Extraterrestrial Life, and is sometimes taught by retired astronaut Dr. Steven Hawley. Another professor taught it that semester, but I really can’t complain about that—my issues with the course mainly came from the fact that it was open-enrollment. The majority of the term covered material I already knew, and we never got to the more interesting questions. But that’s my fault for dabbling in astrobiology.

The textbook is great for this purpose, however. Bennett and Shostak don’t assume much prior knowledge of astronomy, chemistry, or biology. The first several chapters are a crash course on the history of the field and the relevant background material. For those who want to learn more about science and don’t know where to begin, this book might be a good introduction.

But the book does get into the minutiae of estimating the odds of alien neighbors. It looks at the habitability of the planets and major moons of the Solar System, both now and in the past, and then moves on to discuss (what was then) the state of exoplanet research.

What makes a planet habitable is a complicated question. Even on our single planet, where every organism shares the same general biochemistry, environments in which one species thrives can be instantly fatal to another. We have to constrain the search space.

Bennett and Shostak look primarily at carbon-based life using water as its solvent, i.e. life as we know it. They offer justification for this focus, but I’m not entirely convinced. It’s especially weird when contrasted with their willingness to argue that we should take an optimistic stance towards the possibility of life overcoming any given obstacle.

To be clear, lifeforms using something other than carbon as a structural base would have more trouble evolving. Carbon is wonderful because it can readily bond with up to four other atoms, and it is conveniently small. Science fiction writers sometimes swap out our biochemical components for elements a row down on the periodic table, ignoring the fact that those elements are larger and thus less electronegative. In layman’s terms, such molecules aren’t going to form as easily and will be less stable. That’s not so good for life.

Similarly, solvents other than water present some issues, because water has an unusually wide liquid temperature range, even compared with other polar molecules. This is a product of hydrogen bonding, polarity, and molecular weight. Almost any other conceivable solvent trades away at least one of these advantages.

The same problem repeats for all the other chemicals in our bodies. There is good reason to focus on “life as we know it”. What I find harder to justify is the assumption that life won’t encounter very many major obstacles on the path to intelligence.

First and foremost is the fact that evolution doesn’t move in a straight line. Intelligence was not an end-goal for evolution. Even once intelligence developed, there was no rule that said we should develop modern technology. That has been the general trend since the development of farming, but it was by no means inevitable. Suppose a second round of glaciation had hit around 8,000 BCE. Agriculture would have been abandoned, and later selection pressures might have bred the genes for intelligence out of the population. Who knows how many species that’s happened to in the history of Earth, let alone across the galaxy.

Life isn’t pursuing complexity—complexity is often a cost which requires immediate justification. Inventing a wing is easy; evolving a wing is hard, because each intermediate step needs to provide a positive benefit to the organism, too.

Second, abiogenesis. We have just one example of life developing, and we’re not entirely sure how it happened. We have a good picture of the biochemistry that was probably involved, much of which would have occurred before the first cells came into existence. We’re pretty sure it involved RNA replicators, but that’s still an unclear picture.

On Earth, life developed about a billion years after planetary formation, which is pretty early in the grand scheme of things. This suggests that life can emerge from non-life pretty easily in the conditions that existed on Earth about 3.5 billion years ago.

But how common are those conditions? What percentage of rocky planets in the habitable zone will have that sort of environment for a sufficiently long time for life to develop? What percentage of those planets will stick around long enough for complex life to evolve? Intelligent life?

This is where I found the book’s analysis lacking. It took three billion years for life on Earth to develop complexity, and we still don’t entirely know why that changed so suddenly. There are a number of possible explanations: a mass-extinction eliminating competition, planetary glaciation eliminating competition, the development of eyes or teeth causing runaway competition, the evolution of aerobic organisms, the formation of the ozone layer making surface environments more habitable—the list goes on. It’s not even clear that the Cambrian Explosion was an explosion. Perhaps it’s just an artifact of an incomplete fossil record.

There is at least weak evidence, however, that complexity was a hard step along the road to intelligence. The sheer number of proposed explanations suggests to me that multiple factors were actually at play.

I would go so far as to postulate that an additional term should be added to the Drake Equation to account for this fact, but that post will have to wait a few months at least.

The book concludes with a discussion of the Fermi Paradox. For those who don’t know, the Fermi Paradox comes from a question asked by the famous nuclear physicist: “Where is everybody?” Considering all the evidence that Bennett and Shostak had marshalled above, why can they not point to any real examples of extra-terrestrial intelligence?

Italian-American physicist Enrico Fermi

Many answers are possible, which they explore in reasonable detail. Perhaps the strongest argument is that interstellar travel is extremely difficult (and we haven’t sent enough interplanetary probes to effectively assess the likelihood of non-intelligent life in our Solar System). The aliens are out there, but they won’t be coming here. Odds are that we won’t be going there, either.

These are intriguing ideas, which I’d hoped to learn a lot about. Life in the Universe is a decent introduction to them, but once again doesn’t discuss the topic in adequate depth. I would recommend it, then, only as an introductory text. There are probably better astrobiology books out there, and I look forward to reading them.

The single most frustrating aspect of the book, however, was the authors’ expectation that readers wouldn’t take the material seriously. Almost every chapter has a dedicated section for taking pot-shots, fair or not, at some famous works of science fiction. Now there’s certainly a degree to which we must remove misconceptions before the truth can succeed, but there’s also a risk of anchoring. I’m not convinced that the trade-off was worth it.

Even in the final chapter, by which point the reader is, presumably, sufficiently credulous to believe what the relevant PhDs are saying, they’re still on the defensive, even about phenomena as real as anti-matter. It was frankly embarrassing to read, but fits pretty well with the mood of the class. One of the rewards of reaching higher-numbered courses is that everyone takes the material halfway seriously. I don’t suppose I’ll ever study astronomy at that level, but who knows. The future is long and full of possibilities.

# On the Implications of Nonlinearity and Chaos

I picked up James Gleick’s book Chaos on the recommendation of a friend, mistakenly expecting to learn about physics. The cover misled me, conjuring visions of subatomic particles and string theory. There is physics in Chaos, and physicists playing major roles, but really it’s a book about mathematics. Specifically, nonlinear mathematics.

Nonlinear can mean different things depending on the context. For Chaos, we’re concerned with differential equations. Differential equations relate a variable and that variable’s derivative. For example:

$\frac{dx}{dt} = x(t) + C_1$

Nonlinear differential equations entangle the variable and its derivatives in the same term. A simple nonlinear equation would be:

$x\frac{dx}{dt} = x(t) + C_2$

This equation is relatively benign. $C_2$ is a constant, so we can separate the equation and rearrange it to a solvable form. We’re thrown this sort of thing in the first two weeks of diff eq, before moving on to harder problems.
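Concretely: solving the equation above for the derivative gives $\frac{dx}{dt} = \frac{x + C_2}{x}$, and separating variables yields the implicit solution $x - C_2 \ln|x + C_2| = t + K$. Here is a quick numerical sanity check of that solution, with a hand-rolled Runge-Kutta stepper (a sketch of mine, not anything from the book):

```python
import math

def rk4_step(f, t, x, h):
    """One classical fourth-order Runge-Kutta step for dx/dt = f(t, x)."""
    k1 = f(t, x)
    k2 = f(t + h / 2, x + h * k1 / 2)
    k3 = f(t + h / 2, x + h * k2 / 2)
    k4 = f(t + h, x + h * k3)
    return x + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

C2 = 1.0
f = lambda t, x: (x + C2) / x       # x dx/dt = x + C2, solved for dx/dt

# Separating variables gives the implicit solution x - C2*ln|x + C2| = t + K.
x, t, h = 1.0, 0.0, 1e-3
K = x - C2 * math.log(abs(x + C2))  # constant fixed by x(0) = 1
for _ in range(1000):
    x = rk4_step(f, t, x, h)
    t += h
residual = (x - C2 * math.log(abs(x + C2))) - (t + K)
assert abs(residual) < 1e-9         # numerics agree with the pencil-and-paper answer
```

The numerical trajectory tracks the separated solution to well below a part in a billion, which is what makes these first-week problems “benign”.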

Most conceivable differential equations are nonlinear. Certain nonlinear forms are solvable, such as the equation above. But the vast majority are not¹.

This is a bit of a problem for us humans, because the universe essentially runs on differential equations. Scientists of all disciplines spent decades mistakenly assuming that unpredictable systems were actually oscillating around unseen equilibria. Enough systems really do behave that way that it wasn’t an unreasonable hypothesis—but it turns out that most of them don’t.

As the Twentieth Century progressed, things began to change. Mechanical calculators and digital computers finally let men run the numbers fast enough to see that, no, the systems weren’t doing what they’d previously thought. Edward Lorenz’s meteorological simulations are the canonical example, but biology researchers studying population changes, electrical engineers building signal processing systems, and physicists trying to get a handle on fluid mechanics discovered related phenomena around the same time.

Researchers found patterns in the noise. Lorenz discovered his attractors². Mitchell Feigenbaum noticed period-doubling bifurcation. Benoit Mandelbrot did . . . honestly, what didn’t Mandelbrot do? A quartet of physics grad students at UC Santa Cruz calling themselves the Dynamical Systems Collective (among other names) did a lot of the work, fleshing out what became chaos theory and bringing it forward for publication.
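Period-doubling is easy to reproduce yourself. The logistic map, $x_{n+1} = r x_n (1 - x_n)$, is the standard illustration; it is the system Feigenbaum studied, though this particular snippet is mine, not Gleick’s:

```python
def logistic_attractor(r, n_settle=1000, n_sample=64):
    """Iterate the logistic map past its transient and collect the
    cycle (or chaotic scatter) that it settles into."""
    x = 0.5
    for _ in range(n_settle):
        x = r * x * (1 - x)
    seen = set()
    for _ in range(n_sample):
        x = r * x * (1 - x)
        seen.add(round(x, 6))
    return sorted(seen)

print(len(logistic_attractor(2.8)))  # 1: a stable fixed point
print(len(logistic_attractor(3.2)))  # 2: a period-2 cycle
print(len(logistic_attractor(3.5)))  # 4: a period-4 cycle
print(len(logistic_attractor(3.9)))  # dozens of values: chaos
```

As r climbs toward roughly 3.57 the cycle length doubles faster and faster, and beyond that the map is chaotic for most parameter values.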

Chaos started showing up everywhere. The Dynamical Systems Collective occasionally sat down in a public place and just looked for the nearest pattern of nonlinear behavior, what we now call strange attractors. Was it the dripping faucet in the coffeehouse kitchen? Even massively simplified models of dripping water are nonlinear. Was it that flag blowing in the breeze? One of the members even argued that the needle on his car’s speedometer bounced in a nonlinear fashion.

Once you notice the pattern, you’ll see nonlinear dynamics constantly. It’s easy to quell your curiosity about the world when you think everything has nice, simple governing equations. Some algebraic expression or trigonometric function, with a linear differential equation at worst. And surely that won’t be more than second order!

No. Chaotic systems are all around us. The electrons bouncing through your Ethernet cable behave nonlinearly. Do you know someone with an irregular heartbeat? That’s a nonlinear pattern. Medicine was slow to embrace chaos theory, but the human body is a massively nonlinear system.

Let me state that plainly: biological systems tend to be very nonlinear. They cannot be predicted with anything approaching the certainty of simple mechanical systems. And remember, there is no general solution even for a simple three-body problem.

No equation or set of equations can predict the location of just three lousy planets, approximated as point masses. We’ve known this since the 1880s, but it’s still only beginning to sink into the consciousness of modern civilization. Numerical integration can do wonders, but eventually the system necessarily becomes unpredictable. Only special arrangements can be described as “stable”. In fact, there’s a strong case to be made that the solar system did not form in its current configuration. This arrangement may be an equilibrium reached only after major disruptions, possibly including the ejection of multiple planets to interstellar space.
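What “eventually becomes unpredictable” means in practice is that nearby trajectories diverge exponentially. A real three-body integration needs more machinery, but Lorenz’s own three-variable system (standard parameters σ = 10, ρ = 28, β = 8/3) shows the same phenomenon in a few lines; this is a sketch, not a planetary simulation:

```python
def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz system."""
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(state, h):
    """One classical fourth-order Runge-Kutta step."""
    shift = lambda s, k, c: tuple(si + c * ki for si, ki in zip(s, k))
    k1 = lorenz(state)
    k2 = lorenz(shift(state, k1, h / 2))
    k3 = lorenz(shift(state, k2, h / 2))
    k4 = lorenz(shift(state, k3, h))
    return tuple(s + (h / 6) * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-9)  # identical except in the ninth decimal place
h, max_gap = 0.01, 0.0
for _ in range(3000):       # integrate out to t = 30
    a, b = rk4_step(a, h), rk4_step(b, h)
    gap = sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    max_gap = max(max_gap, gap)
# max_gap has grown by many orders of magnitude: the unmeasurably small
# initial difference has been amplified until the trajectories are unrelated.
```

No measurement is accurate to the ninth decimal place forever, so past a certain horizon the forecast is worthless no matter how good the integrator is.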

Now, try developing a semi-functional model of the human brain, neurotransmitters and all. I’ll wait.

When you begin to really think about these things, it can become truly terrifying. The size and degree of our ignorance is difficult to communicate. Engineering and science are considered hard when everything is linearized and simplified to death—the real deal makes that look nearly trivial. Economics and culture are probably even more complex³. After all, molecules don’t have minds of their own.

My biggest criticism of Chaos would probably be that the book doesn’t spend enough time emphasizing this point. There’s a lot of great factual information, but the full implications are barely sketched out. Equations are few and far between—but Mr. Gleick deserves a lot of credit for including equations at all! So despite that flaw, I would very highly recommend Chaos as an introduction to the higher mathematics which makes the world such an interesting place to live.

¹ Even the solvable ones can be real beasts. I still have this monstrosity bookmarked from an analytical homework problem. They told us not to attempt solving it ourselves, and I can see why.

² I took differential equations multiple times, and Lorenz attractors were the only nonlinear form we discussed in any real detail; even then, we avoided calculations.

³ Yet, so far as I can tell, the sociology program at my school requires nothing more than the bare minimum in mathematics. Most of the serious work ends up getting published in econ journals.

# Beware Scientific Metaphors

I’m about a quarter finished with Isabel Paterson’s The God of the Machine, which I’m finally reading after several years of intending to. So far, it’s been both pleasurable and interesting. My main reservation, however, has been an extended metaphor which both illustrates the central idea and potentially undermines it.

Paterson develops a notion of energy to describe the synthesis of material resources, cultural virtue, and human capital which results in creativity and production. As metaphors go, this is not a bad one. That said, my engineering background gives me cause for concern. It isn’t evident that Paterson has a firm grasp of energy as a scientific concept, and her analogy may suffer for it. Complicating matters, she sometimes phrases “energy” as if it were electricity, which is another can of worms entirely.

Mechanical energy is straightforward enough for human purposes: it is generally conserved between gravitational potential and kinetic energy, and dissipated through friction and heating. It emphatically does not spring ex nihilo into cars and trains. Coal and oil have chemical potential energy, which combustion releases as thermal energy; an internal combustion engine converts part of that heat into kinetic energy, and thus motion.

Electrical energy is even weirder. It’s been enough years since I finished my physics that I won’t attempt to explain the workings in detail. (My electronics class this spring bypassed scientific basis almost entirely.) Suffice to say that the analogy of water moving through a pipe is not adequate beyond the basics.

Atomic energy, the most potent source yet harnessed, comes closest to creating energy, but at a cost: a nuclear generating station physically destroys a small part of each uranium atom it splits, converting the lost mass to useful energy via Einstein’s famous relation $E = mc^2$. But more on that in later posts.

I won’t say that the “energy” metaphor is strictly-speaking wrong, because I haven’t done the work of dissecting it in detail. Paterson was a journalist and writer, but she was also self-educated, so we cannot easily assess the scope and accuracy of her knowledge of such phenomena. But I don’t think it matters: even if the metaphor is faulty, the concept it tries to communicate seems, on its face, quite plausible without grounding in the physical sciences.

I bring this up now, well before I’ve finished the book, because I’ve seen much worse analogies from writers with much less excuse to make them. The God of the Machine was published in 1943. Authors today have a cornucopia of factual knowledge at their fingertips and still screw it up. For instance, take this caption from my statics textbook:

Hibbeler, R. C., Engineering Mechanics: Statics & Dynamics, 14th ed., Pearson Prentice Hall, Hoboken, 2016.

There is no excuse for a tenured professor (or, more plausibly, his graduate students) to screw this up. The correct equation is on that very page and they couldn’t even be bothered to run the numbers and see that, no, you’re not significantly lighter in low Earth orbit. From my perspective, such a blatant error is unconscionable in the opening pages of a professional text.
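Running those numbers is a one-liner with the inverse-square law. Taking Earth’s mean radius as about 6,371 km and a representative low-orbit altitude of 400 km (my figures, not the textbook’s):

```python
def gravity_ratio(altitude_km, earth_radius_km=6371.0):
    """Gravitational acceleration at altitude, as a fraction of surface
    gravity (inverse-square law, spherical Earth)."""
    return (earth_radius_km / (earth_radius_km + altitude_km)) ** 2

# At ~400 km up you still feel about 89% of your surface weight;
# orbiting astronauts float because they are in free fall,
# not because gravity has meaningfully weakened.
print(f"{gravity_ratio(400.0):.0%}")  # 89%
```

Ten seconds with a calculator would have caught the caption’s error.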

Now that isn’t exactly a metaphor, but it illustrates the risks of discussing fields nominally close to your own which nevertheless you know very little about. Imagine the danger of using metaphors from totally different fields you’ve never formally studied.

So, I would advise writers to be sparing with scientific metaphors. If you can learn the science correctly, that’s great: you’ll construct metaphors that are both interesting and accurate. But as we’ve seen above, even PhDs make stupid mistakes. Err on the side of caution.

# Book Review: The Signal and the Noise

Supposedly Nate Silver’s credibility took a major hit last November, which will no doubt discourage many potential readers of his book. That interpretation is wrong, but perhaps it’s just as well, because the sorts of commentators who would reach it shouldn’t be trusted with the book anyway. This book is about how to make more intelligent predictions and be wrong less often. Such an attitude is not common—most “predictions” are political pot-shots or, as discussed previously, avaricious attempts to put the cart before the horse.

Let’s begin with a discussion of a few major tips. Most of these things should be taught in high school civics (how can you responsibly vote without a concept of base rates?!), but aren’t. Perhaps the most important thing is to limit the number of predictions made, so you can easily come back and score them. Calibration is recommended—nine out of ten predictions made with 90% confidence should come true.
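Scoring is mechanical once the predictions are written down. Here is a sketch, assuming you log each prediction as a (stated confidence, came true) pair:

```python
def calibration_table(predictions):
    """Group (confidence, outcome) pairs by stated confidence and
    report the observed hit rate for each bucket."""
    buckets = {}
    for confidence, came_true in predictions:
        hits, total = buckets.get(confidence, (0, 0))
        buckets[confidence] = (hits + came_true, total + 1)
    return {conf: hits / total for conf, (hits, total) in sorted(buckets.items())}

# A well-calibrated forecaster's 90% bucket should land near 0.9:
record = [(0.9, True)] * 9 + [(0.9, False)] + [(0.6, True)] * 3 + [(0.6, False)] * 2
print(calibration_table(record))  # {0.6: 0.6, 0.9: 0.9}
```

If your 90% bucket keeps coming in at 0.6, your confidence is miscalibrated, whatever your politics.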

Political pundits are terrible about these sorts of things. Meteorologists are actually great at it. Now, your local weatherman is regularly wrong, but the National Weather Service makes almost perfectly calibrated forecasts¹. This is, in part, because their models are under constant refinement, always seeking more accuracy. And it pays off: NWS predictions have improved drastically over the last few decades, due to improved models, more data collection, and faster computers. But more on that later.

Local meteorologists, on the other hand, are incentivized to make outlandish forecasts which drive viewership (and erode trust in their profession). One might see this as evidence that public entities make better predictions than private ones, but we quickly see that that is no panacea when we turn to seismology and epidemiology.

Part of the problem, in those fields, is that government and university researchers are under considerable pressure from their employers to develop new models which will enable them to predict disasters. This is a reasonable enough desire, but a desire alone does not a solution make. We can quite easily make statistical statements about approximately how frequently certain locations will experience earthquakes, for instance. But attempts beyond a simple logarithmic regression have so far been fruitless, not just failing to predict major earthquakes but specifically predicting that some of the most destructive earthquakes in recent memory would not occur.

Silver’s primary case study in this comes from the planning for Fukushima Daiichi Nuclear Power Plant. When engineers were designing it in the 1960s, it was necessary to extrapolate what sort of earthquake loads it might need to withstand. Fortunately, the sample size of the largest earthquakes is necessarily low. Unfortunately, there was a small dogleg in the data, an oh-so-tempting suggestion that the frequency of extremely large earthquakes was exceedingly low. The standard Gutenberg-Richter model suggests that a 9.0-magnitude earthquake would occur in the area about once every 300 years; the engineers’ adaptation suggested once every 13,000. They constructed fantastical rationalizations for their model and a power station able to withstand a magnitude 8.6. In March of 2011 a 9.0-magnitude earthquake hit the coast of Japan and triggered a tsunami. The rest, as they say, is history.
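The gap between those two return periods is just an exercise in exponentials. Here is an illustrative Gutenberg-Richter extrapolation; the rates and slopes are invented round numbers chosen to land near the figures above, not the actual Japanese seismic record:

```python
# Illustrative Gutenberg-Richter extrapolation. The base rate and
# slopes below are invented for demonstration, not real seismic data.

def return_period_years(rate_at_m75, b_slope, magnitude=9.0):
    """Gutenberg-Richter: the annual rate of earthquakes falls by a
    factor of 10**b per unit of magnitude. Extrapolate the rate of a
    given magnitude from an assumed magnitude-7.5 rate."""
    rate = rate_at_m75 * 10 ** (-b_slope * (magnitude - 7.5))
    return 1 / rate

# Straight-line fit (b ~ 1) vs. a "dogleg" fit that bends steeply
# downward above magnitude 7.5 (b ~ 2.1 on that upper segment).
straight = return_period_years(0.1, 1.0)  # on the order of centuries
dogleg = return_period_years(0.1, 2.1)    # on the order of ten millennia
print(f"straight fit: one M9 per ~{straight:,.0f} years")
print(f"dogleg fit:   one M9 per ~{dogleg:,.0f} years")
```

The dogleg buys a hundredfold reassurance from a handful of data points, which is exactly the temptation Silver describes.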

The problem in seismology comes from overfitting. It is easy, in the absence of hard knowledge, to underestimate the amount of noise in a dataset and end up constructing a model which predicts random outliers. Those data points don’t represent the underlying reality; rather, they are caused by influences outside the particular thing you’re wishing to study (including the imprecision of your instruments).
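Overfitting is easy to reproduce at home. A toy sketch (mine, not the book’s): sample a noisy straight line, then compare an honest least-squares fit against a polynomial flexible enough to pass through every point.

```python
# A minimal overfitting demonstration in pure Python: sample a noisy
# straight line, then compare an ordinary least-squares fit against a
# degree-7 polynomial that passes through every training point.
import random

random.seed(42)

xs = [float(i) for i in range(8)]
ys = [2 * x + random.gauss(0, 2) for x in xs]  # true signal is 2x

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def interpolate(xs, ys, x):
    """Lagrange polynomial through every training point: the overfit.
    It reproduces the noise perfectly, which is exactly the problem."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

slope, intercept = fit_line(xs, ys)
# Compare predictions of the true signal at a point beyond the data;
# the interpolating polynomial typically swings wildly out here.
x_new = 8.5
print(f"truth: {2 * x_new:.1f}, "
      f"line: {slope * x_new + intercept:.1f}, "
      f"polynomial: {interpolate(xs, ys, x_new):.1f}")
```

The polynomial scores perfectly on the data it has seen and badly on data it hasn’t, which is the signature of mistaking noise for signal.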

And it can take a while to realize that this is the case, if the model is partially correct or if the particular outlier doesn’t appear frequently. An example would be the model developed by Professor David Bowman at California State University-Fullerton in the mid-2000s, which identified high-risk areas, some of which then experienced earthquakes. But the model also indicated that an area which soon thereafter experienced a magnitude 8.5 earthquake was particularly low-risk. Dr. Bowman had the humility to retire the model and admit to its faults. Many predictors aren’t so honest.

On the other hand, we see overly cautious models. For instance, in January of 1976, Private David Lewis of the US Army died at Fort Dix of H1N1, the same flu virus which caused the Spanish Influenza of 1918. The flu always occurs at military bases in January, after soldiers have been spread across the country for Christmas and New Year’s. The Spanish Influenza had also first cropped up at a military base, and this unexpected reappearance terrified the Center for Disease Control. Many feared an even worse epidemic. President Ford asked Congress to authorize a massive vaccination program at public expense, which passed overwhelmingly.

The epidemic never materialized. No other cases of H1N1 were confirmed anywhere in the country and the normal flu strain which did appear was less intense than usual. We still have no idea how Private Lewis contracted the deadly disease.

Alarmism, however, broke public confidence in government predictions generally and on vaccines particularly. The vaccination rate fell precipitously in the following years, opening the way to more epidemics later on.

Traditionally, this category of error was known as crying wolf. Modern writers have forgotten it and have to be reminded not to do it. Journalists and politicians make dozens if not hundreds of “predictions” each year, few if any of which are scored, in no small part because most of them turn out wrong or even incoherent.

Sadly, the pursuit of truth and popularity are uncorrelated at best. As Mr. Silver has learned, striving for accuracy and against premature conclusions is a great way to get yourself berated2. Forecasting is not the field for those seeking societal validation. If that’s your goal, skipping this book is far better than trying to balance its lessons and the public’s whim.

But let’s suppose you do want to be right. If you do, then this book can help you in that quest, though it is hardly a comprehensive text. You’ll need to study statistics, history, economics, decision theory, differential equations, and plenty more. Forecasting could be an education in its own right (though regrettably is not). The layman, however, can improve vastly by just touching on these subjects.

First and foremost is an understanding of probability, specifically Bayesian statistics. Silver has the courage to show us actual equations, which is more than can be said for many science writers. Do read this chapter.

To steal an example from another book: suppose two taxi companies, distinguished by color, operate in a particular region. Blue Taxi has the larger market share. If you think you see a Green taxi, there’s a small chance that it’s really Blue and you’re mistaken (and a smaller chance, if you see Blue, that it’s really Green). The market share is the base rate, and you should adjust up or down based on the reasons you might feel uncertain. For instance, if the lighting is poor and you’re far away, your confidence should be lower than if you’re close by at midday. Try thinking up a few confounders of your own.
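With assumed numbers (the classic cab-problem values of an 85/15 market split and an 80%-reliable witness, not figures from Silver’s book), the arithmetic looks like this:

```python
# Bayes' rule on the taxi example. The 85/15 split and 80% witness
# reliability are assumed, illustrative values.

def posterior_green(prior_green, reliability):
    """P(taxi is Green | witness says Green), by Bayes' rule."""
    p_say_green = (reliability * prior_green
                   + (1 - reliability) * (1 - prior_green))
    return reliability * prior_green / p_say_green

# Base rate: 15% of taxis are Green. Witness correct 80% of the time.
p = posterior_green(0.15, 0.80)
print(f"P(Green | witness says Green) = {p:.2f}")

# Worse viewing conditions (witness only 60% reliable) pull the
# answer back toward the 15% base rate.
p_dim = posterior_green(0.15, 0.60)
print(f"with a less reliable witness: {p_dim:.2f}")
```

Even a fairly reliable witness leaves the posterior well below their stated reliability, because the base rate drags it down; that is the adjustment most people forget to make.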

To better develop your Bayesian probability estimate of a given scenario, you need to assess what information you possess and what information you don’t possess. These will be your Known Knowns and Known Unknowns. The final category is Unknown Unknowns, the things you aren’t even aware are a problem. A big part of rationality is trying to consider previously ignored dangers and trying to mitigate risk from the unforeseen.

This is much easier to do ex post facto. By that point, the signal you need to consider stands out against hundreds you can neglect. Beforehand, though, it’s difficult to determine which is the most important. Often, you’re not even measuring the relevant quantity directly but rather secondary and tertiary effects. Positive interference can create a signal where none exists. Negative interference can reduce clear trends to background noise. There’s a reason signal processing pays so well for electrical engineers.

The applications range from predicting terrorist attacks to not losing your shirt gambling. An entire chapter discusses the Poker Bubble and how stupid players make the game profitable for the much smaller pool of cautious ones. Along with the mechanics and economics of the gambling, it offers a decent explanation of how poker is played. Certainly interesting.

Another chapter tells the story of how Deep Blue beat Garry Kasparov. Entire books have been written on the subject, but Silver gives a good overview of the final match and what makes computers so powerful in the first place.

Computers aren’t actually very smart. Their strength comes from solving linear equations very, very quickly. They don’t make the kinds of arithmetic mistakes which humans make, especially when the iterations run into the millions. Chess, moreover, is a linear game, so it was really only a matter of time until algorithms could beat humans. There’s certainly a larger layer of complexity and strategy than many simpler games, but it doesn’t take a particularly unique intelligence to look ahead and avoid making mistakes in the heat of the moment.
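The “look ahead and avoid mistakes” approach is, at heart, minimax search. Deep Blue’s real machinery was vastly more elaborate; this toy version on one-pile Nim (take one to three stones, last stone wins) shows the core idea of evaluating a position by exhausting the opponent’s replies:

```python
# Toy minimax on one-pile Nim: players alternate taking 1-3 stones,
# and whoever takes the last stone wins. Not Deep Blue's algorithm,
# just the skeleton of game-tree search.
from functools import lru_cache

@lru_cache(maxsize=None)
def current_player_wins(stones):
    """A position is winning if some move leaves the opponent in a
    losing position. With zero stones, the previous player has just
    taken the last stone, so the current player has lost."""
    return any(not current_player_wins(stones - take)
               for take in (1, 2, 3) if take <= stones)

# Multiples of four are the losing positions: whatever you take,
# the opponent can restore a multiple of four.
print([n for n in range(1, 13) if not current_player_wins(n)])  # [4, 8, 12]
```

Chess replaces this exhaustive recursion with depth limits, heuristic evaluation, and aggressive pruning, but the shape of the reasoning is the same.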

Furthermore, the starting position of chess is always the same. This is not the case for many other linear systems, let alone nonlinear ones. Nonlinear systems exhibit extreme sensitivity to initial conditions; the weather is the classic example. The chapter on meteorology discusses this in detail—we have very good models of how the atmosphere behaves, but because we don’t know every property at every location, we’re stuck making inferences about the air in-between sampling points. Add to this finite computing power, and the NWS can only (only!) predict large-scale weather systems with extreme accuracy a few days ahead.
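Sensitivity to initial conditions is easy to demonstrate with the logistic map, a standard toy model of chaos (not an actual weather model): two starting points differing by one part in a billion soon bear no resemblance to each other.

```python
# Sensitive dependence on initial conditions via the logistic map
# x -> r*x*(1-x) with r=4, a standard chaotic toy system.

def logistic_trajectory(x0, steps, r=4.0):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Two starting points differing by one part in a billion.
a = logistic_trajectory(0.400000000, 50)
b = logistic_trajectory(0.400000001, 50)
print(f"after 10 steps, difference: {abs(a[10] - b[10]):.2e}")
print(f"after 50 steps, difference: {abs(a[50] - b[50]):.2e}")
```

The tiny initial error roughly doubles each step, so by step fifty the trajectories are unrelated. This is why better measurements and finer sampling buy only a few extra days of weather forecast, never unlimited ones.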

With more sampling points, more computing capacity, or more time, we could get better predictions, but all of these factors play off one another. This dilemma arises throughout prediction. More research will allow for more accurate results but delays your publication date. (This assumes that the data you need is even available: frequently, it isn’t3.)

Producing useful predictions is not about having the best data or the most computing power (though they certainly help). It is primarily about constraining your anticipation to what the evidence actually implies. Nate Silver lays out several techniques for pursuing this goal, with examples. It’s a good introduction for us laymen; experienced statisticians will probably find little they didn’t already know.

I would not recommend this book, however, unless you’re willing to do the work. Prediction is a difficult skill to master, and those without the humility to accept their inexperience can get into a lot of trouble. Should you want to test your abilities, try doing calibrated predictions and see how accurate you are. Julia Galef has a number of mostly harmless suggestions for trying this out.

If you are serious, however, The Signal and the Noise offers a quality primer on several important rationality techniques, and a good deal of information about a variety of other topics. I found it an enjoyable read and hope Nate Silver writes more books in the future.

1Major aggregators like the Weather Channel and AccuWeather tend to take the NWS predictions and paste an additional layer of modelling on top of them, for better or for worse.

2In the week before the 2016 election, several liberal commentators accused Mr. Silver of throwing the nation into unwarranted fear for only having Hillary Clinton’s odds of winning at ~70%. As it turns out, his model was one of the most balanced of mainstream predictions, yet everyone then acted as if he had reason to be ashamed for getting it wrong.

3The data may be concealed in confidential documents, nominally available but out of sight, or sitting right under your nose. Most often, however, it’s hiding in the noise. Economic forecasts suffer from this last problem. There’s econometric data everywhere, but basically no one has found more than rudimentary ways to make predictions with it. Perverse incentives complicate matters for private sector analysts, who often then ignore the few semi-reliable indicators we’ve got.

# What Constitutes Space?

I’ve been writing about the assorted difficulties faced in astronautical engineering, but this presupposes a certain amount of background knowledge and was quickly getting out of hand. So let’s start with a simpler question: what is space, anyway?

Generally speaking, space is the zone beyond Earth’s atmosphere. This definition is problematic, however, because there’s no clean boundary between air and space. The US Standard Atmosphere goes up to 1000 km. The exosphere extends as high as 10,000 km. Yet many satellites (including the International Space Station) orbit much lower, and the conventional altitude considered to set the edge of space is only 100 km, or 62.1 miles.

This figure comes from the Hungarian engineer Theodore von Kármán. Among his considerable aerodynamic work, he performed a rough calculation of the altitude at which an airplane would need to travel at orbital velocity to generate sufficient lift to counteract gravity, i.e. the transition from aeronautics to astronautics. It will vary moderately due to atmospheric conditions and usually lies slightly above 100 km, but that number has been widely accepted as a useful definition for the edge of space.

To better understand this value, we need to understand just what an orbit is.

Objects don’t stay in space because they’re high up. (It’s relatively easy to reach space, but considerably harder to stay there.) The gravity of any planet, Earth included, varies with an inverse square law, that is, the force which Earth exerts on an object is proportional to the reciprocal of the distance squared. This principle is known as Newton’s Law of Universal Gravitation. Its significance for the astronautical engineer is that moving a few hundred kilometers off the surface of Earth results in only a modest reduction of downward acceleration due to gravity.
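A quick check of that claim with standard values (the 400 km altitude is just an example, roughly that of the International Space Station):

```python
# Inverse-square check: how much weaker is gravity a few hundred
# kilometers up? Standard values for Earth.

MU = 3.986e14        # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6    # Earth's mean radius, m

def g_at_altitude(h_meters):
    """Gravitational acceleration at altitude h above the surface."""
    r = R_EARTH + h_meters
    return MU / r**2

surface = g_at_altitude(0)
at_400km = g_at_altitude(400e3)
print(f"surface: {surface:.2f} m/s^2, 400 km up: {at_400km:.2f} m/s^2")
print(f"reduction: {(1 - at_400km / surface):.0%}")  # only about 11%
```

Astronauts float not because gravity is gone, but because they and their spacecraft are falling together.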

To stay at altitude, a spacecraft does not counteract gravity, as an aircraft does. Instead, it travels laterally at sufficient speed that the arc of its fall matches the curvature of Earth itself. An orbit is a path that falls around an entire planet.

The classic example to illustrate this concept, which also comes from Newton, is a tremendous cannon placed atop a tall mountain (Everest’s height was not computed until the 1850s). As you can verify at home, an object thrown faster will land further away from the launch point, despite the downward acceleration being identical. In the case of our cannon, a projectile shot faster will land further from the foot of the mountain. Fire the projectile faster still, and it will travel around a significant fraction of Earth’s curvature. Fire it fast enough1, and after a while it will swing back around to shatter the cannon from behind.

Source: European Space Agency
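The speed the cannonball needs follows from the same inverse-square law: setting gravitational acceleration equal to centripetal acceleration gives v = √(μ/r). A quick check with standard constants (the altitudes are arbitrary examples):

```python
# Circular orbital speed at a given altitude: v = sqrt(mu / r),
# from equating gravitational and centripetal acceleration.
# Standard values for Earth.
import math

MU = 3.986e14      # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6  # Earth's mean radius, m

def circular_speed(h_meters):
    """Speed of a circular orbit at altitude h above the surface."""
    return math.sqrt(MU / (R_EARTH + h_meters))

for h_km in (200, 400, 1000):
    print(f"{h_km:>5} km: {circular_speed(h_km * 1e3):,.0f} m/s")
```

Low orbits come out near 7,700 meters per second, and the speed falls gently as altitude rises.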

In this light, von Kármán’s definition is genius. While there is no theoretical lower bound on orbital altitude2, below about 100 km travelling at orbital velocity will result in a net upwards acceleration due to aerodynamic lift. Vehicles travelling below this altitude will essentially behave as airplanes, balancing the forces of thrust, lift, weight and drag—whereas vehicles above it will travel like satellites, relying entirely on their momentum to stay aloft indefinitely.
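A crude version of von Kármán’s calculation can be sketched directly. The wing loading, lift coefficient, and single-scale-height atmosphere below are all assumed round numbers, so the crossover lands only in the right neighborhood; von Kármán’s more careful treatment of the upper atmosphere puts it near 100 km.

```python
# Rough Kármán-line estimate: below what altitude can wings moving at
# circular orbital speed still support an aircraft's weight? The wing
# loading, lift coefficient, and isothermal atmosphere are assumed
# round numbers, so the result is only ballpark.
import math

MU = 3.986e14      # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6  # Earth's mean radius, m
RHO_0 = 1.225      # sea-level air density, kg/m^3
SCALE_H = 8500.0   # isothermal scale height, m (a big simplification)

def karman_altitude(wing_loading=500.0, c_lift=1.0):
    """Altitude where lift at circular orbital speed equals weight.

    wing_loading: aircraft mass per wing area, kg/m^2 (assumed).
    c_lift: lift coefficient (assumed).
    """
    for h in range(0, 200_000, 100):  # march upward in 100 m steps
        r = R_EARTH + h
        v_sq = MU / r                       # circular orbital speed^2
        rho = RHO_0 * math.exp(-h / SCALE_H)
        lift = 0.5 * rho * v_sq * c_lift    # per m^2 of wing
        weight = wing_loading * MU / r**2   # per m^2 of wing
        if lift < weight:
            return h
    return None

print(f"crossover near {karman_altitude() / 1000:.0f} km")
```

Changing the assumed wing loading or density profile moves the answer by tens of kilometers, which is exactly why the line is a convention rather than a physical boundary.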

But we should really give consideration to aerodynamic drag in our analysis, because it poses a more practical limit on the altitude at which spacecraft can operate. Drag is the reason you won’t find airplanes flying at orbital speeds in the mesosphere, and the reason satellites don’t orbit just above the Kármán line. Even in the upper atmosphere, drag reduces a spacecraft’s forward velocity and therefore its kinetic energy, forcing it to orbit at a lower altitude.

This applies to all satellites, but above a few hundred kilometers is largely negligible. Spacecraft in low Earth orbit will generally decay after a number of years without repositioning; the International Space Station requires regular burns to maintain altitude. At a certain point, this drag will deorbit a satellite within a matter of days or even hours.

The precise altitude will depend on atmospheric conditions, orbital eccentricity, and the size, shape, and orientation of the satellite, but generally we state that stable orbits are not possible below 130 kilometers. This assumes a much higher apoapsis: a circular orbit below 150 km will decay just as quickly. To stay aloft indefinitely, either frequent propulsion or a much higher orbit will be necessary3.

On the other hand, it is exceedingly difficult to fly a conventional airplane above the stratosphere, and even the rocket-powered X-15 had trouble breaking 50 miles, which is the US Air Force’s chosen definition. Only two X-15 flights crossed the Kármán line.

Ultimately, then, what constitutes the edge of space? From a strict scientific standpoint, there is no explicit boundary, but there are many practical ones. Which one to choose will depend on what purposes your definition needs to address. However, von Kármán’s suggestion of 100 km has been widely accepted by most major organizations, including the Fédération Aéronautique Internationale and NASA. Aircraft will rarely climb this high and spacecraft will rarely orbit so low, but perhaps having few flights through the ambiguous zone helps keep things less confusing.

1For most manned spaceflights, this works out to about 7,700 meters per second. The precise value will depend on altitude: higher spacecraft orbit slower, and lower spacecraft must orbit faster4. In our cannon example, it would be a fair bit higher, neglecting air resistance.

2The practical lower bound, of course, is the planet’s surface. The Newtonian view of orbits, however, works on the assumption that each planet can be approximated as a single point. This isn’t precisely true—a planet’s gravitational force will vary with the internal distribution of its mass, which astrodynamicists exploit to maintain the orbits of satellites. That, however, goes beyond the scope of this introduction.

3The International Space Station orbits so low in part because most debris below 500 km reenters the atmosphere within a few years, reducing the risk of collision. This is no trivial concern—later shuttle missions to service the Hubble Space Telescope, which orbits at about 540 km, were orchestrated around the dangers posed by space junk.

4Paradoxically, we burn forward to raise an orbit, speeding up to eventually slow down. This makes perfect sense when we consider the reciprocal relationship between kinetic and potential energy, but that’s another post.

# Book Review: Your Inner Fish

This book is not what I expected, but quite pleasurable to read nonetheless. Your Inner Fish does not detail the ichthyologic nature of the human body. Rather, it explores how fish moved onto land, where many now-ubiquitous adaptations came from, and how scientists figured it out.

Dr. Shubin begins with the story we all came to hear: how his team of paleontologists discovered Tiktaalik roseae, an ancient, shallow-water fish. Tiktaalik is an important transitional fossil because it was one of the first discovered with rudimentary hands. Biologists comparing the limbs of different species noticed a pattern in the limbs of land animals as far back as the mid-1800s. This pattern held only for land-adapted species—reptiles, amphibians, mammals (including aquatic mammals that returned to the seas).

For a long time, it was believed that fish don’t exhibit this pattern. Then lungfish were discovered: living fossils which exemplify, in some ways, the transition from ocean to land. As their name implies, they possess basic lungs, and, interestingly, the beginnings of limbs.

Tiktaalik was an improvement on the lungfish. It had a flat head, for swimming in shallow water, and fin bones that show the beginning of a wrist. Together, we see why fins evolved into arms: shallow water fish needed to do pushups. In their fish-eat-fish world, the ability to push oneself through extra-shallow patches was likely a critical advantage.

Let me tell you, exercising seems a lot less mundane when you consider that your lungfish ancestors did it to survive. That’s what your arms evolved to do. It’s only more recently we found further applications for them.

Throughout this book, Shubin is trying to explain how scientists managed to figure out our evolutionary history. He has a perhaps unique perspective for explaining this process, as a paleontologist turned anatomy professor. Knowing what came before helps explain the ways in which earlier species were contorted to become the ones we see today.

Comparative anatomy and the fossil record tell us a lot about how modern species came to be. But genetics also offers considerable insight. Looking at the differences between genomes can tell us a lot about how recently certain categories of features evolved. In many cases, we can take genes from mice or fish and insert them into the DNA of invertebrates like fruit flies and get the same result. Such experiments are strong evidence that features like body plans and eyes evolved a really long time ago.

To be clear, there’s a lot of uncertainty which can probably never be resolved. We can prod algae in tanks to evolve the beginnings of multicellular bonding, but we have no idea if that particular direction is the one that our forerunners took.

Nevertheless, Your Inner Fish gives a good overview of how bacteria became bugs and fish, and how those bugs and fish became the bugs, fish, and people alive today. I certainly came away with an improved picture of how weird our bodies are and their many imperfections, though far from the whole picture. My curiosity is fairly sated, however—I’ve no plans to read the kinds of human anatomy texts I would need to really appreciate the magnitude of making men from microbes.

All told, I’d recommend Your Inner Fish as an entertaining and informative read about how human beings came to be. Neil Shubin has packed a lot of interesting scientific research into it, and with the exception of an example about hypothetical clown people in the final chapter, does a pretty good job of explaining it clearly. Definitely worth your time if the history of life on Earth intrigues you.