Wrong, Wrong, Wrong

Aside

[You’ll have to take my word that this post was planned before Megan left a comment to much the same effect on Facebook. I still wrote it, because I love sharing space trivia, but because it took several more days to finish, my summer schlamperei cannot be questioned.]

Last week I discussed a mistaken explanation in one of my engineering textbooks. The specific explanation was wrong, but there are two other issues with that innocuous caption in chapter one.

Hibbeler attributes a NASA image to a Photoshop artist on Shutterstock. Now there’s nothing wrong with getting images from there. I find them a bit over-the-top personally, but when has that ever been an impediment to the authors of textbooks on serious topics? Never, that’s when.

But why would you go through a stock image company when NASA images are public domain? It seems unnecessary, and it strongly suggests that the author or graphic artist did a quick Google search rather than having a clear image in mind. That brings us to the other problem.

This is a very specific, very recognizable image of Bruce McCandless operating the Manned Maneuvering Unit during STS-41B. You can even see the same cloud patterns in the version on the NASA website. And the last I heard, Bruce McCandless is a man.

Misgendering McCandless was obviously done in the name of inclusion, but would it have been so hard to get a picture of a woman astronaut? It certainly would go further toward fixing the image of female space travelers in the reader’s mind.

Unfortunately for Hibbeler, there aren’t any pictures of women EVAing without a tether because no woman has ever operated the MMU. If that seems problematic, consider that the MMU was an experimental piece of technology that was really too dangerous to use regularly. Only six astronauts ever wore them, the last in November of 19841. After the Challenger disaster, NASA cancelled all the potential flights which might use the MMU and unofficially retired them from use.

But, we don’t really need a picture of a female astronaut operating untethered, because Hibbeler got it wrong. Astronauts aren’t weightless in orbit because they’re far removed from Earth’s gravity; they’re weightless because they’re falling around Earth at the same rate as their spacecraft and everything in it2. We can use an interior photo, this classic picture of Mae Jemison aboard Endeavour being a logical choice:

[Image: Mae Jemison aboard Endeavour]

Now was that really so hard?


1To be explicit about it: including that final mission, only four American women had reached orbit, and only one had performed an EVA. The Soviet Union, meanwhile, only ever sent up two female cosmonauts. Tereshkova, in 1963, was basically a publicity stunt: she never returned to space and the Soviets resumed all-male crews until Svetlana Savitskaya launched in 1982, by which point it was clear that American women were going to space to stay. Nevertheless, Savitskaya beat Kathryn Sullivan to the first female spacewalk by 78 days.

2If we wanted pictures where the inverse-square law has kicked in, our selection is limited to the deep-space EVAs during Apollo 15, 16, and 17. These were performed during the coast back to Earth, where terrestrial gravity had lessened appreciably, though the astronauts’ weightlessness was still attributable to free fall. However, our selection is all-male: Al Worden, Ken Mattingly, and Ron Evans.


Beware Scientific Metaphors

I’m about a quarter finished with Isabel Paterson’s The God of the Machine, which I’m finally reading after several years of intending to. So far, it’s been both pleasurable and interesting. My main reservation, however, has been an extended metaphor which both illustrates the central idea and potentially undermines it.

Paterson develops a notion of energy to describe the synthesis of material resources, cultural virtue, and human capital which results in creativity and production. As metaphors go, this is not a bad one. That said, my engineering background gives me cause for concern. It isn’t obvious that Paterson has a firm understanding of energy as a scientific concept, and her analogy may suffer for it. Complicating matters, she sometimes writes about “energy” as if it were electricity, which is another can of worms in and of itself.

Mechanical energy behaves oddly enough for human purposes, being generally conserved between gravitational potential and kinetic energy, and dissipated through friction and heating. It emphatically does not spring ex nihilo into cars and trains. Coal and oil carry chemical potential energy, which an internal combustion engine releases as thermal energy and then converts into kinetic energy, and thus motion.

Electrical energy is even weirder. It’s been enough years since I finished my physics that I won’t attempt to explain the workings in detail. (My electronics class this spring bypassed the scientific basis almost entirely.) Suffice it to say that the analogy of water moving through a pipe is not adequate beyond the basics.

Atomic energy, the most potent source yet harnessed, comes the closest to creating energy outright, but at a cost. A nuclear generating station physically destroys a small part of a uranium atom, converting it via Einstein’s famous relation into useful energy. But more on that in later posts.
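
To get a sense of the scale involved (my arithmetic, not Paterson’s), Einstein’s relation says that converting a single gram of matter yields

    E = mc^2 = (10^{-3}\ \mathrm{kg}) \times (3 \times 10^{8}\ \mathrm{m/s})^2 = 9 \times 10^{13}\ \mathrm{J} \approx 25\ \mathrm{GWh},

roughly a day’s output from a gigawatt-class power plant.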

I won’t say that the “energy” metaphor is, strictly speaking, wrong, because I haven’t done the work of dissecting it in detail. Paterson was a journalist and writer, but she was also self-educated, so we cannot easily assess the scope and accuracy of her knowledge of such phenomena. But I don’t think it matters: even if the metaphor is faulty, the concept it tries to communicate seems, on its face, quite plausible without grounding in the physical sciences.

I bring this up now, well before I’ve finished the book, because I’ve seen much worse analogies from writers with much less excuse to make them. The God of the Machine was published in 1943. Authors today have a cornucopia of factual knowledge at their fingertips and still screw it up. For instance, take this caption from my statics textbook:

[Image: astronaut caption from Hibbeler’s statics text]

Hibbeler, R. C., Engineering Mechanics: Statics & Dynamics, 14th ed., Pearson Prentice Hall, Hoboken, 2016.

There is no excuse for a tenured professor (or, more plausibly, his graduate students) to screw this up. The correct equation is on that very page and they couldn’t even be bothered to run the numbers and see that, no, you’re not significantly lighter in low Earth orbit. From my perspective, such a blatant error is unconscionable in the opening pages of a professional text.
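
The arithmetic is quick (assuming an ISS-like altitude of about 400 km; the numbers are mine, not the textbook’s). Gravitational acceleration falls off with the square of the distance from Earth’s center, so

    \frac{g_{\text{orbit}}}{g_{\text{surface}}} = \left(\frac{R_E}{R_E + h}\right)^2 = \left(\frac{6378\ \mathrm{km}}{6778\ \mathrm{km}}\right)^2 \approx 0.89.

An astronaut in low Earth orbit still feels about 89% of surface gravity; the apparent weightlessness comes from free fall, not from distance.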

Now that isn’t exactly a metaphor, but it illustrates the risks of discussing fields nominally close to your own about which you nevertheless know very little. Imagine the danger of using metaphors from totally different fields you’ve never formally studied.

So, I would advise writers to be sparing with scientific metaphors. If you can learn the science correctly, that’s great: you’ll construct metaphors that are both interesting and accurate. But as we’ve seen above, even PhDs make stupid mistakes. Err on the side of caution.

Book Review: The Signal and the Noise

Supposedly Nate Silver’s credibility took a major hit last November, which will no doubt discourage many potential readers of his book. This interpretation is wrong, but palatable, because the sorts of commentators who would come to such a conclusion shouldn’t be trusted with the book anyway. This book is about how to be more intelligent when making predictions and wrong less often. Such an attitude is not common—most “predictions” are political pot-shots or, as discussed previously, avaricious attempts to put the cart before the horse.

Let’s begin with a few major tips. Most of these should be taught in high school civics (how can you responsibly vote without a concept of base rates?!), but aren’t. Perhaps the most important is to limit the number of predictions you make, so you can easily come back and score them. Calibration is the measure to aim for—nine out of ten predictions made with 90% confidence should come true.
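
Scoring yourself is mostly bookkeeping. Here’s a minimal sketch of the idea; the prediction list is invented for illustration:

    # Group predictions by stated confidence and compare the claimed
    # probability with the observed hit rate.
    from collections import defaultdict

    # (stated confidence, did it come true?) -- hypothetical entries
    predictions = [
        (0.9, True), (0.9, True), (0.9, False), (0.9, True),
        (0.7, True), (0.7, False), (0.7, True),
        (0.6, False), (0.6, True),
    ]

    buckets = defaultdict(list)
    for confidence, outcome in predictions:
        buckets[confidence].append(outcome)

    for confidence in sorted(buckets):
        outcomes = buckets[confidence]
        hit_rate = sum(outcomes) / len(outcomes)
        print(f"claimed {confidence:.0%}, observed {hit_rate:.0%} over {len(outcomes)} predictions")

A well-calibrated forecaster’s claimed and observed columns converge as the sample grows.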

Political pundits are terrible about these sorts of things. Meteorologists are actually great at them. Now, your local weatherman is regularly wrong, but the National Weather Service makes almost perfectly calibrated forecasts1. This is, in part, because their models are under constant refinement, always seeking more accuracy. And it pays off: NWS predictions have improved drastically over the last few decades, thanks to improved models, more data collection, and faster computers. But more on that later.

Local meteorologists, on the other hand, are incentivized to make outlandish forecasts which drive viewership (and erode trust in their profession). One might see this as evidence that public entities make better predictions than private ones, but we quickly see that that is no panacea when we turn to seismology and epidemiology.

Part of the problem, in those fields, is that government and university researchers are under considerable pressure from their employers to develop new models which will enable them to predict disasters. This is a reasonable enough desire, but a desire alone does not a solution make. We can quite easily make statistical statements about approximately how frequently certain locations will experience earthquakes, for instance. But attempts to go beyond a simple logarithmic regression have so far been fruitless, not just failing to predict major earthquakes but specifically predicting that some of the most destructive earthquakes in recent memory would not occur.

Silver’s primary case study here comes from the planning for the Fukushima Daiichi Nuclear Power Plant. When engineers were designing it in the 1960s, they needed to extrapolate what sort of earthquake loads it might have to withstand. Fortunately, the sample size of the largest earthquakes is necessarily small. Unfortunately, there was a small dogleg in the data, an oh-so-tempting suggestion that the frequency of extremely large earthquakes was exceedingly low. The standard Gutenberg-Richter model suggests that a 9.0-magnitude earthquake would occur in the area about once every 300 years; the engineers’ adaptation suggested once every 13,000. They constructed fantastical rationalizations for their model and built a power station able to withstand a magnitude 8.6. In March of 2011 a 9.0-magnitude earthquake hit the coast of Japan and triggered a tsunami. The rest, as they say, is history.
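
For reference, the Gutenberg-Richter relation is just a straight line on a logarithmic plot. If N is the annual number of earthquakes in a region at or above magnitude M, then

    \log_{10} N = a - bM,

with b typically close to 1, so each whole-magnitude step means roughly a tenfold drop in frequency. The Fukushima engineers’ adaptation amounted to bending that straight line around a kink in sparse data.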

The problem in seismology comes from overfitting. It is easy, in the absence of hard knowledge, to underestimate the amount of noise in a dataset and end up constructing a model which predicts random outliers. Those data points don’t represent the underlying reality; rather, they are caused by influences outside the particular thing you’re wishing to study (including the imprecision of your instruments).
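
A toy example of the trap (mine, not Silver’s): fit the same noisy data with a straight line and with a high-degree polynomial, and the polynomial will happily chase the noise.

    # Overfitting in miniature: a degree-9 polynomial matches the noisy
    # training points better, but typically tracks the true signal worse.
    import numpy as np

    rng = np.random.default_rng(0)
    x_train = np.linspace(0, 1, 12)
    y_train = 2 * x_train + 1 + rng.normal(0, 0.2, x_train.size)  # true signal: y = 2x + 1

    line = np.polyfit(x_train, y_train, 1)     # two parameters
    wiggle = np.polyfit(x_train, y_train, 9)   # ten parameters

    x_test = np.linspace(0, 1, 200)
    y_true = 2 * x_test + 1
    for name, coeffs in [("straight line", line), ("degree-9 polynomial", wiggle)]:
        mse = np.mean((np.polyval(coeffs, x_test) - y_true) ** 2)
        print(f"{name}: mean squared error against the true signal = {mse:.4f}")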

And it can take a while to realize that this is the case, if the model is partially correct or if the particular outlier doesn’t appear frequently. An example would be the model developed by Professor David Bowman at California State University-Fullerton in the mid-2000s, which identified high-risk areas, some of which then experienced earthquakes. But the model also indicated that an area which soon thereafter experienced a magnitude-8.5 earthquake was particularly low-risk. Dr. Bowman had the humility to retire the model and admit to its faults. Many predictors aren’t so honest.

On the other hand, some models err toward excessive caution, sounding the alarm for disasters that never arrive. For instance, in January of 1976, Private David Lewis of the US Army died at Fort Dix of H1N1, the same flu virus which caused the Spanish Influenza of 1918. The flu always occurs at military bases in January, after soldiers have been spread across the country for Christmas and New Year’s. But the Spanish Influenza had also first cropped up at a military base, and this unexpected reappearance terrified the Center for Disease Control. Many feared an even worse epidemic. President Ford asked Congress to authorize a massive vaccination program at public expense, which passed overwhelmingly.

The epidemic never materialized. No other cases of H1N1 were confirmed anywhere in the country and the normal flu strain which did appear was less intense than usual. We still have no idea how Private Lewis contracted the deadly disease.

The alarmism, however, broke public confidence in government predictions generally and in vaccines particularly. The vaccination rate fell precipitously in the following years, opening the way to more epidemics later on.

Traditionally, this category of error was known as crying wolf. Modern writers have forgotten the fable and have to be reminded not to do it. Journalists and politicians make dozens if not hundreds of “predictions” each year, few if any of which are ever scored, in no small part because most of them turn out wrong or even incoherent.

Sadly, the pursuit of truth and popularity are uncorrelated at best. As Mr. Silver has learned, striving for accuracy and against premature conclusions is a great way to get yourself berated2. Forecasting is not the field for those seeking societal validation. If that’s your goal, skipping this book is far better than trying to balance its lessons and the public’s whim.

But let’s suppose you do want to be right. If you do, then this book can help you in that quest, though it is hardly a comprehensive text. You’ll need to study statistics, history, economics, decision theory, differential equations, and plenty more. Forecasting could be an education in its own right (though regrettably is not). The layman, however, can improve vastly by just touching on these subjects.

First and foremost is an understanding of probability, specifically Bayesian statistics. Silver has the courage to show us actual equations, which is more than can be said for many science writers. Do read this chapter.

To steal an example from another book: suppose two taxi companies, distinguished by color, operate in a particular region. Blue Taxi has the larger market share. If you think you see a Green taxi, there’s a small chance that it’s really Blue and you’re mistaken (and a smaller chance that a taxi you see as Blue is really Green). The market share is the base rate, and you should adjust up or down based on the reasons you might feel uncertain. For instance, if the lighting is poor and you’re far away, your confidence should be lower than if you’re close by at midday. Try thinking up a few confounders of your own.
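
To put numbers on it (mine, not Silver’s): say 85% of the region’s taxis are Blue and your eye correctly identifies a taxi’s color 80% of the time.

    # Bayes' theorem for the taxi example. The 85/15 market split and the
    # 80% witness reliability are made-up numbers for illustration.
    def posterior_green(base_rate_green=0.15, witness_accuracy=0.80):
        p_green_and_says_green = witness_accuracy * base_rate_green
        p_blue_and_says_green = (1 - witness_accuracy) * (1 - base_rate_green)
        return p_green_and_says_green / (p_green_and_says_green + p_blue_and_says_green)

    print(f"P(actually Green | you think you saw Green) = {posterior_green():.2f}")

Even though you “saw” Green, the base rate drags the posterior down to about 41%. Worse lighting means lower witness accuracy, which drags it down further still.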

To better develop your Bayesian probability estimate of a given scenario, you need to assess what information you possess and what information you don’t. These will be your Known Knowns and Known Unknowns. The final category is Unknown Unknowns, the things you aren’t even aware are a problem. A big part of rationality is trying to notice previously ignored dangers and to mitigate risk from the unforeseen.

This is much easier to do ex post facto. By that point, the signal you need to consider stands out against hundreds you can neglect. Beforehand, though, it’s difficult to determine which is the most important. Often, you’re not even measuring the relevant quantity directly but rather secondary and tertiary effects. Positive interference can create a signal where none exists. Negative interference can reduce clear trends to background noise. There’s a reason signal processing pays so well for electrical engineers.

The applications range from predicting terrorist attacks to not losing your shirt gambling. An entire chapter discusses the poker bubble and how bad players make the game profitable for the much smaller pool of cautious ones. In addition to the mechanics and economics of gambling, I got a decent explanation of how poker is played. Certainly interesting.

Another chapter tells the story of how Deep Blue beat Garry Kasparov. Entire books have been written on the subject, but Silver gives a good overview of the final match and what makes computers so powerful in the first place.

Computers aren’t actually very smart. Their strength comes from solving linear equations very, very quickly. They don’t make the kinds of arithmetic mistakes which humans make, especially when the iterations run into the millions. Chess is a linear game, however, so it was really a matter of time until algorithms could beat humans. There’s certainly a larger layer of complexity and strategy than many simpler games, but it doesn’t take a particularly unique intelligence to look ahead and avoid making mistakes in the heat of the moment.

Furthermore, the starting position of chess is always the same. This is not the case for many other linear systems, let alone nonlinear ones. Nonlinear systems exhibit extreme sensitivity to initial conditions; the weather is the classic example. The chapter on meteorology discusses this in detail—we have very good models of how the atmosphere behaves, but because we don’t know every property at every location, we’re stuck making inferences about the air in between sampling points. Add to this finite computing power, and the NWS can only (only!) predict large-scale weather systems with extreme accuracy a few days ahead.
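
Sensitivity to initial conditions is easy to demonstrate with a standard textbook toy (not one of Silver’s examples): iterate the logistic map from two starting values that differ by one part in a billion.

    # Two nearly identical starting conditions diverge completely after a
    # few dozen iterations of the chaotic logistic map x -> 4x(1 - x).
    def logistic(x, r=4.0):
        return r * x * (1 - x)

    a, b = 0.400000000, 0.400000001
    for step in range(1, 51):
        a, b = logistic(a), logistic(b)
        if step % 10 == 0:
            print(f"step {step:2d}: {a:.6f} vs {b:.6f}")

Weather models face the same arithmetic with millions of coupled variables, which is why forecast skill falls off so quickly beyond a few days.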

With more sampling points, more computing capacity, or more time, we could get better predictions, but all of these factors play off one another. This dilemma arises throughout prediction. More research will allow for more accurate results but delays your publication date. (This assumes that the data you need is even available: frequently, it isn’t3.)

Producing useful predictions is not about having the best data or the most computing power (though they certainly help). It is primarily about constraining your anticipation to what the evidence actually implies. Nate Silver lays out several techniques for pursuing this goal, with examples. It’s a good introduction for us laymen; experienced statisticians will probably find little they didn’t already know.

I would not recommend this book, however, unless you’re willing to do the work. Prediction is a difficult skill to master, and those without the humility to accept their inexperience can get into a lot of trouble. Should you want to test your abilities, try doing calibrated predictions and see how accurate you are. Julia Galef has a number of mostly harmless suggestions for trying this out.

If you are serious, however, The Signal and the Noise offers a quality primer on several important rationality techniques, and a good deal of information about a variety of other topics. I found it an enjoyable read and hope Nate Silver writes more books in the future.



1Major aggregators like the Weather Channel and AccuWeather tend to take the NWS predictions and paste an additional layer of modelling on top of them, for better or for worse.

2In the week before the 2016 election, several liberal commentators accused Mr. Silver of throwing the nation into unwarranted fear for only having Hillary Clinton’s odds of winning at ~70%. As it turns out, his model was one of the most balanced of mainstream predictions, yet everyone then acted as if he had reason to be ashamed for getting it wrong.

3The data may be concealed in confidential documents, nominally available but out of sight, or sitting right under your nose. Most often, however, it’s hiding in the noise. Economic forecasts suffer from this last problem. There’s econometric data everywhere, but basically no one has found more than rudimentary ways to make predictions with it. Perverse incentives complicate matters for private sector analysts, who often then ignore the few semi-reliable indicators we’ve got.

What Constitutes Space?

I’ve been writing about the assorted difficulties faced in astronautical engineering, but this presupposes a certain amount of background knowledge and was quickly getting out of hand. So let’s start with a simpler question: what is space, anyway?

Generally speaking, space is the zone beyond Earth’s atmosphere. This definition is problematic, however, because there’s no clean boundary between air and space. The US Standard Atmosphere goes up to 1000 km. The exosphere extends as high as 10,000 km. Yet many satellites (including the International Space Station) orbit much lower, and the conventional altitude considered to set the edge of space is only 100 km, or 62.1 miles.

This figure comes from the Hungarian engineer Theodore von Kármán. Among his considerable body of aerodynamic work, he performed a rough calculation of the altitude at which an airplane would need to travel at orbital velocity to generate sufficient lift to counteract gravity, i.e. the transition from aeronautics to astronautics. The answer varies moderately with atmospheric conditions and usually lies slightly above 100 km, but that number has been widely accepted as a useful definition for the edge of space.
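
In outline (my paraphrase of the idea, not von Kármán’s original working), the calculation balances the lift available at orbital speed against the vehicle’s weight:

    L = \tfrac{1}{2} \rho v^2 S C_L, \qquad v_{\text{orbit}} = \sqrt{GM/r},

where ρ is the air density, S the wing area, and C_L the lift coefficient. Requiring L = mg with v = v_orbit leaves ρ as the only free quantity, and the altitude where the atmosphere gets that thin comes out near 100 km. Above it, wings are useless; below it, you are flying an airplane whether you like it or not.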

To better understand this value, we need to understand just what an orbit is.

Objects don’t stay in space because they’re high up. (It’s relatively easy to reach space, but considerably harder to stay there.) The gravity of any planet, Earth included, varies with an inverse square law; that is, the force which Earth exerts on an object is proportional to the reciprocal of the square of its distance from Earth’s center. This principle is known as Newton’s Law of Universal Gravitation. Its significance for the astronautical engineer is that moving a few hundred kilometers off the surface of Earth results in only a modest reduction of the downward acceleration due to gravity.

To stay at altitude, a spacecraft does not counteract gravity the way an aircraft does. Instead, it travels laterally at sufficient speed that the arc of its fall matches the curvature of Earth itself. An orbit is a path that falls around an entire planet.
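
For a circular orbit, the bookkeeping is one line: gravity supplies exactly the centripetal acceleration needed to keep bending the path around the planet,

    \frac{GMm}{r^2} = \frac{m v^2}{r} \quad\Longrightarrow\quad v = \sqrt{\frac{GM}{r}},

which for Earth works out to roughly 7.7 km/s a few hundred kilometers up—the figure quoted in the first footnote below.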

The classic example to illustrate this concept, which also comes from Newton, is a tremendous cannon placed atop a tall mountain (Everest’s height was not computed until the 1850s). As you can verify at home, an object thrown faster will land further away from the launch point, despite the downward acceleration being identical. In the case of our cannon, a projectile shot faster will land further from the foot of the mountain. Fire the projectile fast enough, and it will travel around a significant fraction of Earth’s curvature. Fire it faster still1, and after a while it will swing all the way back around to shatter the cannon from behind.

[Animation: Newton’s cannonball]

Source: European Space Agency

In this light, von Kármán’s definition is genius. While there is no theoretical lower bound on orbital altitude2, below about 100 km travelling at orbital velocity will result in a net upwards acceleration due to aerodynamic lift. Vehicles travelling below this altitude will essentially behave as airplanes, balancing the forces of thrust, lift, weight and drag—whereas vehicles above it will travel like satellites, relying entirely on their momentum to stay aloft indefinitely.

But we should really give consideration to aerodynamic drag in our analysis, because it poses a more practical limit on the altitude at which spacecraft can operate. Drag is the reason you won’t find airplanes flying at orbital speeds in the mesosphere, and the reason satellites don’t orbit just above the Kármán line. Even in the upper atmosphere, drag reduces a spacecraft’s forward velocity and therefore its kinetic energy, forcing it down to a lower altitude.
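
Drag scales the same way lift does, which is why even the wispy air above the Kármán line matters:

    F_D = \tfrac{1}{2} \rho v^2 C_D A,

with ρ the local air density, C_D the drag coefficient, and A the frontal area. At 7.7 km/s the v² term is enormous, so a density that would be utterly negligible to an airplane still bleeds a measurable amount of energy from a satellite on every pass.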

This applies to all satellites, but it is largely negligible above a few hundred kilometers. The orbits of spacecraft in low Earth orbit will generally decay after a number of years without reboosts; the International Space Station requires regular burns to maintain altitude. Below a certain point, drag will deorbit a satellite within a matter of days or even hours.

The precise altitude will depend on atmospheric conditions, orbital eccentricity, and the size, shape, and orientation of the satellite, but generally we say that stable orbits are not possible below 130 kilometers. Even that assumes a much higher apoapsis: a circular orbit below 150 km will decay just as quickly. To stay aloft indefinitely, either frequent propulsion or a much higher orbit is necessary3.

On the other hand, it is exceedingly difficult to fly a conventional airplane above the stratosphere, and even the rocket-powered X-15 had trouble breaking 50 miles, which is the US Air Force’s chosen definition. Only two X-15 flights crossed the Kármán line.

Ultimately, then, what constitutes the edge of space? From a strict scientific standpoint, there is no explicit boundary, but there are many practical ones. Which one to choose will depend on what purposes your definition needs to serve. However, von Kármán’s suggestion of 100 km has been widely accepted by most major organizations, including the Fédération Aéronautique Internationale and NASA. Aircraft will rarely climb this high and spacecraft will rarely orbit so low, but perhaps having few flights through the ambiguous zone helps keep things less confusing.


1For most manned spaceflights, this works out to about 7,700 meters per second. The precise value will depend on altitude: higher spacecraft orbit slower, and lower spacecraft must orbit faster4. In our cannon example, it would be a fair bit higher, neglecting air resistance.

2The practical lower bound, of course, is the planet’s surface. The Newtonian view of orbits, however, works on the assumption that each planet can be approximated as a single point. This isn’t precisely true—a planet’s gravitational force will vary with the internal distribution of its mass, which astrodynamicists exploit to maintain the orbits of satellites. That, however, goes beyond the scope of this introduction.

3The International Space Station orbits so low in part because most debris below 500 km reenters the atmosphere within a few years, reducing the risk of collision. This is no trivial concern—later shuttle missions to service the Hubble Space Telescope, which orbits at about 540 km, were orchestrated around the dangers posed by space junk.

4Paradoxically, we burn forward to raise an orbit, speeding up to eventually slow down. This makes perfect sense when we consider the reciprocal relationship between kinetic and potential energy, but that’s another post.

Book Review: Ignition!

Subtitled “An Informal History of Liquid Rocket Propellants”, Ignition! is John D. Clark’s personal account of working with rocket fuels from 1949 until his retirement in 1970.

Dr. Clark is introduced to us by Isaac Asimov. Clark was roommates with L. Sprague de Camp during his undergrad years at Caltech, and wrote a pair of science fiction stories before deciding the market wasn’t for him, though he remained active in the community. Dr. Asimov met him during the war, when he came to work with de Camp and Heinlein at the Philadelphia Naval Yard.

John Clark, like Asimov, was a chemist, working on the problem of chemical rockets for the majority of his career. He writes this book, he tells us, both “for the interested layman” and for:

[T]he professional engineer in the rocket business. For I have discovered that he is frequently abysmally ignorant of the history of his own profession, and, unless forcibly restrained, is almost certain to do something which, as we learned fifteen years ago, is not only stupid but is likely to result in catastrophe.

For the layman, he attempts (and, I think, manages) to write in a manner which is nevertheless very accessible. The sections with heavy technical content can be skimmed over without losing too much of the overall picture, though a little background knowledge certainly helps. I’m not sure you could use this book as a reference without a basic understanding of engineering thermodynamics, but if you haven’t studied that, what business do you have designing rocket engines?

Unfortunately, Dr. Clark gives relatively little in the way of citations or suggestions for further reading. This is both an artifact of the era—when technical reports and journal articles were essentially inaccessible to the general public if your local library didn’t have a copy—and a consequence of the fact that much of the source material was at the time still officially classified. At several points the discussion is cut short because he’s not at liberty to discuss the matter. He acknowledges these difficulties and makes no pretense of this being an authoritative textbook.

On a related note, the content is heavily focused on the research done in America and the United Kingdom, with a chapter devoted to what information came out of the Soviet Union in later years. Due to the date of publication, this book does not cover modern developments (though the final chapter makes a series of predictions I might come back and grade).

Nor does Clark address solid propellants or hybrid combinations in any significant detail, which is slightly disappointing given my current studies, but would have made for a much longer and more complicated read. Not that I would have particularly minded; Dr. Clark is an engaging storyteller, frequently giving us various background information on the scientists and organizations trying to develop early rockets, first for abstract research, later for the military, and finally for the National Aeronautics and Space Administration.

These anecdotes keep the reading fun even through the most tedious of minutiae on monoprops and halogen fuels. Clark frequently (if unpredictably) goes into detail on the chemistry of a particular propellant and how the molecules interact with one another. Such interludes eventually rekindled my interest in chemistry as a subject, which is fortunate since I need another credit hour of it to graduate. Hopefully some of the material I learn this summer will be relevant to aerospace propulsion work.

Overall, I found this to be a good introduction to rocket fuels and the history of that field. While useful for beginners such as myself and as a refresher, it probably shouldn’t be treated as any sort of reference guide or definitive citation.

[Image: back cover of Ignition!]

An engraving by Dr. Clark’s wife, Inga Pratt, presented to NARTS in 1959.

Hopefully one day Ignition! will be in print again, but for now most of us are stuck reading it from PDFs found online. Hard copies went for hundreds of dollars before the likes of Elon Musk and Scott Manley began publicly praising the book.

Bike-Shedding and Bottomless Pits

I see a pair of failure modes in social activism. One is bike-shedding. The other is trying to empty bottomless pits.

Bike-shedding refers to the tendency to focus on insignificant but comprehensible tasks, the canonical example being materials selection for the bike shed at a nuclear power plant. Everyone can understand bike sheds; only nuclear engineers are qualified to comment on the minutiae of reactor design. The latter is clearly more important than the former, but the former will likely get more discussion time in a layperson’s committee.

The same goes for social activism, where thousands of wannabe intellectuals fixate on relatively trivial issues because that’s what everyone can wrap their heads around.

If the true intellectuals spent their time on tractable problems, then this wouldn’t be a particularly troubling failure mode, because at least the wannabes aren’t getting in the way of serious work. Unfortunately, the leaders of any particular movement tend to be pursuing status within their community rather than the movement’s supposed goals.

The usual way the status competitions play out is through purity signalling. In this context, purity refers to loyalty toward the movement’s beliefs. Whoever believes in the cause the most garners the most respect and acclaim. Intentionally or not, members begin to argue over less and less actionable questions and to make increasingly impractical demands upon the movement as a whole.

I’ve experienced this first-hand during my time in the libertarian movement. Many libertarians have such an affective death spiral around the non-aggression principle that they argue voting for third-party candidates is an act of violence. NAP uber alles was my reason for ultimately leaving the movement.

This phenomenon is almost synonymous with the far left. Constant in-fighting and purity debates hamstring many socialist, communist, and left-anarchist organizations, which I can’t say is necessarily a bad thing. But conservatives experience it, too; neoreaction was essentially the invention of right-wing impossibilism.

My speculation is that so many people tolerate impossibilism because it accelerates the transition from movement to community. As ideological questions detach from any sort of actionable agenda, there’s less urgency and more time for friendship and non-central discussion. Preference for a better world is a largely philosophical question (though social status makes it easier to accept an objectively unpleasant situation), while even self-described individualists recognize the joy of finding like minds. Even when the movement fails as such, it has provided a significant benefit to its adherents.

The pattern repeats itself time and time again. I’ve seen it with the libertarian movement, with the rationalists, and with all sorts of less pleasant groups. I see no clear solution, beyond trying to decouple community from activism. Whether this will work remains to be seen.

Book Review: How to Live on Mars

I first read this book in high school, flush with newfound philosophy and bristling with plans for life as a commercial astronaut. SpaceX was just ramping up their ISS resupply program; Bigelow Aerospace was planning to launch another module before 2014. The possibilities seemed limitless.

That’s not the world we ended up living in. Astronauts haven’t launched from the United States in over five years. Virgin Galactic lost a spacecraft and a pilot during a 2014 test flight and put its space tourism plans on hold while fixing the vehicle’s control system. The biggest leap forward has been landing Falcon 9 first stages, but it’s only in the last week that a used stage flew again. Falcon Heavy still hasn’t flown.

As such, the overall mood of Zubrin’s book feels… overconfident. Misplaced. Premature.

Our narrator is a congenial Martian colonist, giving us the down-low on what it takes to survive on Mars. It’s quite easy, he informs us, provided you follow his advice.

From choosing the correct transfer method to how to start a family, Zubrin (the Martian, not the 20th century astronautical engineer) walks us through the steps of becoming an economic and social success on the red planet. While many of the specifics are tailored to a fictional future history, the basic science is strictly factual.

It ranges from the mundane to the transcendental. At the more everyday end of things, we learn how to make plastics and almost every other raw material from the Martian soil and atmosphere. Through this avatar, Dr. Zubrin is making the case that living on Mars is entirely feasible. Steel and cement for construction, oxygen for breathing, nitrates for food—it’s all there. A few things would be a challenge (fictional Zubrin recommends stealing rocket parts as the best way to obtain aluminum), but the low-gravity environment greatly reduces the difficulty imposed by all sorts of engineering projects.

On the other end of the scale, we’re walked through the general process of terraforming Mars into a habitable planet (and how to profit off it in the meantime). Quite a few of these suggestions rely on a fairly specific potential architecture for the project, but the technical information holds.

This future history is amusing, though evokes a more cynical reaction from me after the last few years. I’m less optimistic about the odds of us reaching Mars before 2040, and less skeptical of NASA’s ability to get things done. To me, the issue seems to be less one of organizational competence and more of insufficient dedication at the highest levels (mostly Congress). While I’d like to believe that the private sector can fill that gap, it seems increasingly unlikely that they can achieve those ends at a plausible cost as the march of 21st century politics continues.

One thing he’ll probably have gotten right: the decay of terrestrial society into atomized, post-modern nihilism. I hope he’ll be proven wrong, but there are no strong signals to suggest that that trend is slowing.

On the whole, though, it is an optimistic book about the capacity for human ingenuity to conquer new frontiers and expand our understanding of the universe. Those interested in the project of space colonization, but unsure where to begin learning about it, would be well advised to start with How to Live on Mars.


German Researcher Discovers Most Efficient Path to Mars

A civil engineer in Essen, Germany has determined the transfer orbit which will get astronauts to Mars on the least fuel.

Walter Hohmann spent several years studying physics and astronomy before publishing his book The Attainability of the Celestial Bodies. It may become required reading for NASA mission planners.

Fuel requirements will be central to the architecture of interplanetary spaceflights, Dr. Hohmann expects. To account for this, he solved for the trajectory which requires the least amount of velocity change, or what scientists call “delta-v”. Spacecraft produce this change in velocity by firing rocket engines.

The most efficient orbit between two planets turned out to be an ellipse that lies tangent to the planets’ orbital paths.

[Diagram: Hohmann transfer orbit]

Source: University of Arizona

Such an orbit requires the least amount of energy to achieve when starting from Earth, but has a serious drawback. Least-energy trajectories are also the slowest. For a crewed mission, taking along enough food and oxygen could make a less efficient path ultimately cheaper.

Another problem is waiting for planets to be in the right place for launch. Because Earth orbits the sun faster than the outer planets and slower than the inner planets, the possible alignment for such a transfer trajectory only occurs occasionally. The window to leave for Mars only opens every two years, for example. Launching interplanetary spacecraft at other times would require vastly more fuel.
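
Rough numbers for the Earth-to-Mars case, assuming circular, coplanar planetary orbits (a simplification, but a standard one):

    # Hohmann transfer to Mars: one-way travel time, plus the wait between
    # launch windows (the synodic period of Earth and Mars).
    import math

    MU_SUN = 1.327e20          # solar gravitational parameter, m^3/s^2
    AU = 1.496e11              # astronomical unit, m
    DAY, YEAR = 86400.0, 365.25 * 86400.0

    r_earth, r_mars = 1.0 * AU, 1.524 * AU
    a_transfer = (r_earth + r_mars) / 2                          # semi-major axis of the ellipse
    transfer_time = math.pi * math.sqrt(a_transfer**3 / MU_SUN)  # half of one elliptical period

    period_earth = 2 * math.pi * math.sqrt(r_earth**3 / MU_SUN)
    period_mars = 2 * math.pi * math.sqrt(r_mars**3 / MU_SUN)
    synodic = 1 / abs(1 / period_earth - 1 / period_mars)        # time between alignments

    print(f"one-way transfer time: {transfer_time / DAY:.0f} days")
    print(f"launch window every {synodic / YEAR:.1f} years")

That works out to roughly 260 days in transit and a launch window about every 2.1 years, matching the every-two-years rule of thumb above.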

Nevertheless, astronomers and aerospace engineers find Dr. Hohmann’s discovery extremely useful for designing space missions.


Happy Amazing Breakthrough Day!

Book Review: Your Inner Fish

This book is not what I expected, but quite pleasurable to read nonetheless. Your Inner Fish does not detail the ichthyologic nature of the human body. Rather, it explores how fish moved onto land, where many now-ubiquitous adaptations came from, and how scientists figured it out.

Dr. Shubin begins with the story we all came to hear: how his team of paleontologists discovered Tiktaalik roseae. This ancient, shallow-water fish is an important transitional fossil because it was one of the first discovered with rudimentary hands. Biologists comparing limbs across species had noticed a common pattern in land animals as far back as the mid-1800s. The pattern held only for land-adapted species—reptiles, amphibians, mammals (including aquatic mammals that returned to the seas).

For a long time, it was believed that fish don’t exhibit this pattern. Then lungfish were discovered: living fossils which exemplify, in some ways, the transition from ocean to land. As their name implies, they possess basic lungs, and, interestingly, the beginnings of limbs.

Tiktaalik was an improvement on the lungfish. It had a flat head, for swimming in shallow water, and fin bones that show the beginnings of a wrist. Together, these features suggest why fins evolved into arms: shallow-water fish needed to do pushups. In their fish-eat-fish world, the ability to push oneself through extra-shallow patches was likely a critical advantage.

Let me tell you, exercising seems a lot less mundane when you consider that your lungfish ancestors did it to survive. That’s what your arms evolved to do. It’s only more recently we found further applications for them.


Throughout this book, Shubin is trying to explain how scientists managed to figure out our evolutionary history. He has perhaps a unique perspective to explain this process, as a paleontologist turned anatomy professor. Knowing what came before helps explain the ways in which earlier species were contorted to become the ones we see today.

Comparative anatomy and the fossil record tell us a lot about how modern species came to be. But genetics also offers considerable insight. Looking at the differences between genomes can tell us a lot about how recently certain categories of features evolved. In many cases, we can take genes from mice or fish and insert them into the DNA of invertebrates like fruit flies and get the same result. Such experiments are strong evidence that features like body plans and eyes evolved a really long time ago.

To be clear, there’s a lot of uncertainty which can probably never be resolved. We can prod algae in tanks to evolve the beginnings of multicellular bonding, but we have no idea if that particular direction is the one that our forerunners took.

Nevertheless, Your Inner Fish gives a good overview of how bacteria became bugs and fish, and how those bugs and fish became the bugs, fish, and people alive today. I certainly came away with an improved picture of how weird our bodies are and of their many imperfections, though far from the whole picture. My curiosity is fairly sated, however—I’ve no plans to read the kinds of human anatomy texts I would need to really appreciate the magnitude of making men from microbes.

All told, I’d recommend Your Inner Fish as an entertaining and informative read about how human beings came to be. Neil Shubin has packed a lot of interesting scientific research into it, and with the exception of an example about hypothetical clown people in the final chapter, does a pretty good job of explaining it clearly. Definitely worth your time if the history of life on Earth intrigues you.


Do You Hate Math or Just Counting?

Last weekend KU held their annual Engineering Expo, a fairly typical sort of event for trying to get little kids interested in science and engineering. If you’re unsure what that might entail, come drop by next year.

Presumably this is fun for the little kids—most of them seemed pretty enthusiastic, and I never went to such events myself. And it’s fun for most of the students putting it on. We get to hang out with our friends, tell people about our majors, and get a day off class (though could you maybe have not scheduled it during midterms this year?). Half the fun is getting set up beforehand.

Because we’re nerds, we covered most of the leftover whiteboard space in the SEDS room with various engineering equations: thermo, flight dynamics, orbital mechanics, even the definition of a derivative. This predictably elicited a number of comments, mostly from parents but also from the kids themselves, about how much they hate(d) math.

This was particularly striking to me because when I was that age, I hated math. Or rather, hated what I thought was math. In elementary school, I didn’t know very much math. Pretty much all I knew how to do was glorified counting.

Addition? Counting. Subtraction? Backwards counting. Multiplication? Fast counting. Division? Backwards fast counting. Medians and means? Counting!

This pattern holds when we move on to the secondary school subjects. Algebra is almost entirely about counting graphically. Not all of geometry is counting, but a good fraction of it—vectors, for example—is. The same goes for a lot of trigonometry.

Only when you get to calculus, a senior year subject if it’s on the table at all, do you really start focusing on relationships versus the raw numbers. Calculus was the first math class which I enjoyed. That was the first time math became a tool rather than an obstacle. (Consistently, at least.)

A lot of students abandon mathematics long before they get to the stage where it becomes useful and meaningful. This concerns me. While there are plenty of people whose brains just aren’t made for math, there are also a lot of great minds being lost because they aren’t made for endless dull arithmetic.

Spend years struggling away with contrived problems, the truth held out of reach, and, well, yes, giving up is certainly a reasonable response. You spent the last fifty years making science and math inaccessible and now, dear politicians and school administrators, you lament that more kids aren’t studying engineering. Behold the completely predictable consequences of your actions.

But this is not just an intellectual problem, or even an economic one. It’s a political nightmare.

Learning integral calculus completely rebuilt my perception of government finance. Just gaining an appreciation for statistics made me rethink dozens of policy positions. And yet I’d bet half our sitting Senators don’t even know what “differentiation” means. This should scare us shitless when they’re trying to plan for the next year, let alone the next ten or twenty.

I hope that we changed a few minds last weekend, or at least planted a seed of doubt.  If technological civilization is going to survive the next 100 years, we need to rethink the way our culture approaches math and science. I’m severely tempted to do pro bono math education out of fear, but a few college kids can’t do it alone.

So, for your children’s sake: Do you hate math, or just counting?