Prediction and Primacy of Consciousness

I finished Leonard Peikoff’s Objectivism: The Philosophy of Ayn Rand in 2015, and on the whole, didn’t get that much out of it. It took a long time to slog through, and didn’t answer some of my longstanding questions about Rand’s intellectual history. I’d recommend it as a reference text, but not as an introduction to Objectivism.

This isn’t a review of OPAR; I’ve discussed it elsewhere. Today we’re going to discuss one of the few good new ideas I learned reading it: primacy of consciousness.

Objectivism advocates a worldview based on primacy of existence. Rand holds that consciousness has no power over reality in and of itself—consciousness is the process of identifying existents, not creating them. Now a conscious mind can decide to alter existents through physical action, or extrapolate the possibility of not-yet-existing existents, but the mere act of thinking cannot produce physical phenomena.1

Primacy of consciousness puts the cart before the horse. Perception can neither create a percept, nor modify it, nor uncreate it.2 Sufficiently invasive methods of inquiry may do that, but the mental process of observation does not.

Let us consider a technical example. When solving engineering assignments, I'm often tempted to skip checking my work. The correct answer is independent of whether I've made an exhaustive search for mistakes. Yet, on a certain level, it might seem that not looking will make an error go away.

But it won’t. As my structures professor often says, in aerospace engineering we have a safety factor of 1.5. In school, that’s just a target to aim for—if I screw up, the grade will point it out and I’ll feel silly for missing easy points. On the job, that’s not the case. If your work has a serious mistake, you’re going to kill people.
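That factor of 1.5 shows up in a standard margin-of-safety check. A minimal sketch, with hypothetical stresses (these numbers are illustrative, not from any real part):

```python
# Margin of safety against ultimate failure, using the aerospace
# factor of safety of 1.5. All stresses here are made up.
ultimate_stress = 66_000.0   # material ultimate strength, psi
applied_stress = 40_000.0    # limit stress from the load case, psi
factor_of_safety = 1.5

# A positive margin means the part passes; negative means redesign.
margin = ultimate_stress / (factor_of_safety * applied_stress) - 1
print(margin)  # ~0.10 for these numbers
```

The point is that 1.5 is not generosity; it's a thin buffer that an unchecked arithmetic error can eat instantly.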

Or wreck the global economy.

Since starting Nate Silver’s book, perhaps the most interesting thing I’ve learned so far (besides an actually intuitive explanation of Bayes’ Theorem, contra Yudkowsky) was just how stupid the root causes of the housing crisis were.
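Bayes' Theorem itself fits in a few lines. A minimal sketch using made-up medical-test numbers (my own illustration, not an example from Silver's book):

```python
# Bayes' Theorem: P(condition | positive test).
# All probabilities below are hypothetical, chosen for illustration.
prior = 0.01        # P(condition): the condition is rare
sensitivity = 0.80  # P(positive | condition)
false_pos = 0.10    # P(positive | no condition)

# P(positive) over both hypotheses, then Bayes' rule.
evidence = sensitivity * prior + false_pos * (1 - prior)
posterior = sensitivity * prior / evidence

print(posterior)  # ~0.075: even after a positive test, the condition is unlikely
```

The counterintuitive part is how much the low prior dominates: a test that's right 80% of the time still leaves you with only about a 7.5% chance of having the condition.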

I’d recommend reading the book if you’d like a properly comprehensive explanation, but the executive summary would be that, starting in the late 1990s, the value of houses began to skyrocket in what we now know was a real estate bubble. This was basically unprecedented in US history, which should have been a wake-up call in itself, but the problem was compounded by the fact that many investors assumed that housing prices would keep going up. They wanted to bet on these increasingly risky properties, inventing all sorts of creative “assets” to bundle specious loans together. Rating agencies were happy to rate all of these AAA, despite their being totally new and untested financial instruments. Estimated risk of default proved to be multiple orders of magnitude too low. And yet everyone believed them.

Silver describes this as a failure of prediction, of epistemology. Assessors made extremely questionable assumptions about the behavior of the economy and the likelihood of mortgage default, which are legitimate challenges in developing predictive models. Going back to my examples of structural engineering, it’s easy to drop the scientific notation on a material property when crunching the numbers. If you say that aluminum has a Young’s Modulus of 10.7, the model isn’t going to know that you meant 10.7 × 10^6 psi or 10.7 Msi. It’s going to run the calculations regardless of whether your other units match up, and may get an answer that’s a million times too big. Remember that your safety margin is 1.5.
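Here's what that dropped exponent does in practice, sketched with hypothetical numbers for a cantilever beam:

```python
# Tip deflection of a cantilever beam: delta = P * L^3 / (3 * E * I).
# Load, length, and section properties are made up for illustration.
P = 500.0    # tip load, lbf
L = 120.0    # beam length, in
I = 12.0     # area moment of inertia, in^4

E_correct = 10.7e6   # Young's modulus of aluminum, psi
E_wrong = 10.7       # the same number with the exponent dropped

delta_correct = P * L**3 / (3 * E_correct * I)
delta_wrong = P * L**3 / (3 * E_wrong * I)

print(delta_correct)                 # ~2.24 in
print(delta_wrong / delta_correct)   # ~1e6: a million times too big
```

The model happily produces both answers; nothing in the arithmetic flags the wrong one.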

I don’t think economic forecasters have explicit margins of error, but the same general principle applies. Using the wrong Young’s Modulus is an honest mistake, an accident, which is easily rectified if found early. Lots of errors in the rating agencies’ models weren’t so honest. They made what looked like big allowances for unknowns, but didn’t question a lot of their key assumptions. This speaks to a real failure of epistemic humility. They didn’t ask themselves, deep down, if their model was wrong. Not the wrong inputs, but the wrong equations entirely.
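One way to see how a wrong equation, not a wrong input, makes risk estimates come out orders of magnitude low: assume pooled mortgage defaults are independent when they actually share a common cause. A minimal sketch with illustrative numbers (not the agencies' actual models):

```python
# Probability that all five mortgages in a pool default.
# p is a made-up per-mortgage default probability.
p = 0.05

# Wrong model: defaults are independent events.
p_all_independent = p ** 5   # ~3.1e-07

# If a nationwide housing decline makes defaults move together,
# one default is strong evidence the rest will follow; in the
# fully-correlated limit, the pool fails about as often as one loan.
p_all_correlated = p

print(p_all_correlated / p_all_independent)  # ~160,000x
```

The inputs are identical in both cases; only the structural assumption about correlation differs, and it moves the answer by five orders of magnitude.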

For instance, say I model an airplane’s wing as a beam, experiencing bending and axial loads, but no axial torsion. That’s a very big assumption. Say there are engines mounted on the wing—now I’m just ignoring Physics 101. If I ran the numbers and decided that propulsive and aerodynamic twisting moments were insignificant for the specific question I’m considering, then it might be an acceptable assumption. But I would need to run the numbers.
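Running the numbers can be as simple as a back-of-the-envelope comparison. A sketch with entirely hypothetical wing and engine values:

```python
# Is torsion negligible next to bending? All values are made up
# for illustration; a real check would use actual load cases.
lift = 50_000.0   # total lift on one wing, lbf
span = 200.0      # wing half-span, in

# Rough root bending moment, treating lift as acting at mid-span.
bending_moment = lift * span / 2       # in-lbf

thrust = 8_000.0  # engine thrust, lbf
offset = 30.0     # offset of thrust line from the elastic axis, in
twisting_moment = thrust * offset      # in-lbf

ratio = twisting_moment / bending_moment
print(ratio)  # if this isn't small, "no torsion" is not acceptable
```

Even with these rough figures the twisting moment is a few percent of the bending moment—small, but not obviously ignorable against a safety factor of 1.5. That's the kind of thing you find out only by checking.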

Many people, at many organizations, didn’t run the numbers in the years leading up to the financial crisis. Now not all of them were given an explicit choice—many were facing managerial pressure to meet deadlines or get specific outputs. That’s an organizational issue, but really just bumps the responsibility up a level.3 Managers should want the correct answer, not the one that would put the biggest smile on their face.

In aerospace engineering, we have examples of what happens when you do that.


Just because the numbers look good on paper doesn’t mean they correspond to the real world. That’s where empirical testing comes in. Engineers test all the time, but even that doesn’t prevent organizational incentives from bungling the truth. If the boss wants to hear a particular answer, she may keep looking until she finds it.

Economists have it worse, trying to predict a massively nonlinear system and, Silver reports, doing quite badly at it. Objectivism is very strong on the importance of saying “I know,” but rationality also depends on saying “I don’t know” when you legitimately don’t. Try to find out, but accept that some truths are harder to obtain than others.

Existence exists, and existents don’t care what you think.

1Outside of your body, that is. This is where the line between body and mind becomes pertinent, and about where I give up in the face of reducibility problems. Suffice to say that if you can create matter ex nihilo, there are a lot of people who would be interested in speaking with you.

2Those of you with itchy fingers about quantum mechanics are politely invited to get a graduate degree in theoretical physics. We’re talking about the macroscale here.

3Not that responsibility is a thing that can truly be distributed:

Responsibility is a unique concept… You may share it with others, but your portion is not diminished. You may delegate it, but it is still with you… If responsibility is rightfully yours, no evasion, or ignorance or passing the blame can shift the burden to someone else. Unless you can point your finger at the man who is responsible when something goes wrong, then you have never had anyone really responsible.
 —Admiral Hyman G. Rickover, USN