My Kind of Reflection
❦
In “Where Recursive Justification Hits Bottom,” I concluded that it’s okay to use induction to reason about the probability that induction will work in the future, given that it’s worked in the past; or to use Occam’s Razor to conclude that the simplest explanation for why Occam’s Razor works is that the universe itself is fundamentally simple.
Now I am far from the first person to consider reflective application of reasoning principles. Chris Hibbert compared my view to Bartley’s Pan-Critical Rationalism (I was wondering whether that would happen). So it seems worthwhile to state what I see as the distinguishing features of my view of reflection, which may or may not happen to be shared by any other philosopher’s view of reflection.
- All of my philosophy here actually comes from trying to figure out how to build a self-modifying AI that applies its own reasoning principles to itself in the process of rewriting its own source code. So whenever I talk about using induction to license induction, I’m really thinking about an inductive AI considering a rewrite of the part of itself that performs induction (the first sketch after this list is a toy version of this move). If you wouldn’t want the AI to rewrite its source code to not use induction, your philosophy had better not label induction as unjustifiable.
- One of the most powerful principles I know for AI in general is that the true Way generally turns out to be naturalistic—which for reflective reasoning means treating transistors inside the AI just as if they were transistors found in the environment, not an ad-hoc special case. This is the real source of my insistence in Recursive Justification that questions like “How well does my version of Occam’s Razor work?” should be considered just like an ordinary question—or at least an ordinary very deep question. I strongly suspect that a correctly built AI, in pondering modifications to the part of its source code that implements Occamian reasoning, will not have to do anything special as it ponders—in particular, it shouldn’t have to make a special effort to avoid using Occamian reasoning.
- I don’t think that “reflective coherence” or “reflective consistency” should be considered as a desideratum in itself. As I say in “The Twelve Virtues” and “The Simple Truth,” if you make five accurate maps of the same city, then the maps will necessarily be consistent with each other; but if you draw one map by fantasy and then make four copies, the five will be consistent but not accurate. In the same way, no one is deliberately pursuing reflective consistency, and reflective consistency is not a special warrant of trustworthiness; the goal is to win. But anyone who pursues the goal of winning, using their current notion of winning, and modifying their own source code, will end up reflectively consistent as a side effect—just like someone continually striving to improve their map of the world should find the parts becoming more consistent among themselves, as a side effect. If you put on your AI goggles, then the AI, rewriting its own source code, is not trying to make itself “reflectively consistent”—it is trying to optimize the expected utility of its source code, and it happens to be doing this using its current mind’s anticipation of the consequences.
- One of the ways I license using induction and Occam’s Razor to consider “induction” and “Occam’s Razor” is by appealing to E. T. Jaynes’s principle that we should always use all the information available to us (computing power permitting) in a calculation. If you think induction works, then you should bring it to bear at full strength, including when you’re thinking about induction itself.
- In general, I think it’s valuable to distinguish a defensive posture, where you’re imagining how to justify your philosophy to a philosopher who questions you, from an aggressive posture, where you’re trying to get as close to the truth as possible. The point of inspecting Occam’s Razor with your current mind and intelligence is not to show that you’re being fair and defensible by questioning your foundational beliefs. Rather, the reason you would inspect Occam’s Razor is to see whether you could improve your application of it, or because you’re worried it might really be wrong. I tend to deprecate mere dutiful doubts.
- If you run around inspecting your foundations, I expect you to actually improve them, not just dutifully investigate. Our brains are built to assess “simplicity” in a certain intuitive way that makes Thor sound simpler than Maxwell’s Equations as an explanation for lightning. But, having gotten a better look at the way the universe really works, we’ve concluded that differential equations (which few humans master) are actually simpler, in an information-theoretic sense, than heroic mythology (which is how most tribes explain the universe). This being the case, we’ve tried to import our notions of Occam’s Razor into math as well (the second sketch after this list shows one crude way to cash out “simplicity” as description length).
- On the other hand, the improved foundations should still add up to normality; 2 + 2 should still end up equalling 4, not something new and amazing and exciting like “fish.”
- I think it’s very important to distinguish between the questions “Why does induction work?” and “Does induction work?” The reason why the universe itself is regular is still a mysterious question unto us, for now. Strange speculations here may be temporarily needful. But on the other hand, if you start claiming that the universe isn’t actually regular, that the answer to “Does induction work?” is “No!”, then you’re wandering into 2 + 2 = 3 territory. You’re trying too hard to make your philosophy interesting, instead of correct. An inductive AI asking what probability assignment to make on the next round is asking “Does induction work?”, and this is the question that it may answer by inductive reasoning (the third sketch after this list is a toy version of exactly this calculation). If you ask “Why does induction work?” then answering “Because induction works” is circular logic, and answering “Because I believe induction works” is magical thinking.
- I don’t think that going around in a loop of justifications through the meta-level is the same thing as circular logic. I think the notion of “circular logic” applies within the object level, and is something that is definitely bad and forbidden, on the object level. Forbidding reflective coherence doesn’t sound like a good idea. But I haven’t yet sat down and formalized the exact difference—my reflective theory is something I’m trying to work out, not something I have in hand.
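To put the AI goggles on more concretely: here is a minimal sketch, in deliberately toy Python, of an agent that uses its current inductive rule to score candidate rewrites of that same rule. Nothing below is a real architecture; the rules and the names (laplace, anti_induction, expected_log_score) are all invented for illustration. The point is only the shape of the move: the evaluation of the rewrite runs on the very machinery being rewritten, with no special case.

```python
# Toy sketch only: an agent scoring rewrites of its own induction rule,
# using the rule it currently runs. All names here are invented.
import math
from typing import Callable, List

Predictor = Callable[[List[int]], float]  # history of bits -> P(next bit is 1)

def laplace(history: List[int]) -> float:
    """The agent's current rule: Laplace's rule of succession."""
    return (sum(history) + 1) / (len(history) + 2)

def anti_induction(history: List[int]) -> float:
    """A candidate rewrite: expect the opposite of what's been observed."""
    return 1.0 - laplace(history)

def expected_log_score(candidate: Predictor, current: Predictor,
                       history: List[int]) -> float:
    """Expected log score of `candidate` on the next bit, where the
    expectation is taken under the CURRENT rule's beliefs. This is the
    reflective step, treated as an ordinary calculation."""
    p = current(history)                              # current belief in a 1
    q = min(max(candidate(history), 1e-9), 1 - 1e-9)  # clamp to avoid log(0)
    return p * math.log(q) + (1 - p) * math.log(1 - q)

history = [1, 1, 0, 1, 1, 1, 0, 1]
for name, rule in [("laplace", laplace), ("anti_induction", anti_induction)]:
    print(name, round(expected_log_score(rule, laplace, history), 3))
# The inductive rule endorses keeping induction and rejects the rewrite;
# reflective consistency falls out as a side effect of optimizing.
```

Note what the agent is not doing: it never steps outside itself to some neutral vantage point. The endorsement is computed by the code under review, which is the whole point of the naturalistic treatment above.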
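The claim that differential equations are simpler “in an information-theoretic sense” can be cashed out, very crudely, as description length. Kolmogorov complexity is uncomputable, so this second sketch (my illustration, with invented data) substitutes a general-purpose compressor as a loose upper bound: a stream generated by a tiny rule compresses enormously, while patternless noise does not, whatever intuition says about which is “simpler.”

```python
# Crude sketch: compressed length as a stand-in for description length.
# Kolmogorov complexity is uncomputable; zlib gives a loose upper bound.
import random
import zlib

lawful = bytes((i * i) % 251 for i in range(10_000))  # tiny rule, long output
noise = bytes(random.Random(0).randrange(256) for _ in range(10_000))  # no rule

for name, data in [("lawful", lawful), ("noise", noise)]:
    print(f"{name}: {len(data)} bytes -> {len(zlib.compress(data, 9))} bytes")
# The lawful stream collapses to a short description; the noise barely
# shrinks. "Simple" here means "short description," which is how compact
# differential equations can beat sprawling mythologies.
```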
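Finally, a toy version of the distinction between “Does induction work?” and “Why does induction work?”: the third sketch (again invented for illustration, with made-up likelihoods) weighs a “regular world” hypothesis against a “lawless world” hypothesis on some observed bits, then mixes them into a probability assignment for the next round. The calculation answers “Does induction work?” inductively; nothing in it even touches why the universe is regular.

```python
# Toy sketch: answering "Does induction work?" by weighing evidence.
def regular_likelihood(history):
    """P(history | regular world): each bit repeats its predecessor
    with probability 0.9; the first bit is 50/50."""
    p = 0.5
    for prev, cur in zip(history, history[1:]):
        p *= 0.9 if cur == prev else 0.1
    return p

def lawless_likelihood(history):
    """P(history | lawless world): every bit is an independent fair flip."""
    return 0.5 ** len(history)

history = [1] * 6 + [0] * 6                    # a regular-looking record
odds = regular_likelihood(history) / lawless_likelihood(history)
p_regular = odds / (1 + odds)                  # posterior from even prior odds
p_next_same = p_regular * 0.9 + (1 - p_regular) * 0.5
print(f"odds for a regular world: {odds:.1f} : 1")
print(f"P(next bit repeats the last): {p_next_same:.3f}")
# "Does induction work?" gets a number out of this; "Why is the world
# regular?" is left untouched by the calculation.
```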