Brain, Mind and Computers by Stanley Jaki
Introduction
If you’ve ever heard of Stanley Jaki, it’s probably because Douglas Hofstadter briefly mentions him in Gödel, Escher, Bach, his famous book on AI and philosophy of mind. On page 524 Hofstadter describes Jaki’s position like this:
CHURCH-TURING THESIS, SOULISTS’ VERSION: Some kinds of things which a brain can do can be vaguely approximated on a computer but not most, and certainly not the interesting ones. But anyway, even if they all could, that would still leave the soul to explain, and there is no way that computers have any bearing on that.
That’s a decent summary of Jaki’s thesis in Brain, Mind and Computers, and Hofstadter’s skepticism is a decent barometer for how it’s seen today. Calling Jaki’s position a “version” of the Church-Turing Thesis– which holds that every quantity a person might calculate can also be calculated by an idealized computer– is a bit of a sour joke on Hofstadter’s part. Most modern AI researchers would agree with Hofstadter in adopting a much more sweeping version of the Church-Turing Thesis. In essence, they hold that everything a human mind does could in principle be run on a computer. Jaki doesn’t.
This debate between the “Soulists” and the “Mechanists”– Jaki’s preferred term for Hofstadter and company– has vast implications for how we think about computers, AI, and intelligence in general. To be sure, it’s not the same as the more pressing question of whether, when, and how a “general” AI might be created. (Roughly speaking, a two-year-old has a mind but isn’t a general-purpose optimizer, while evolution is a general-purpose optimizer but doesn’t have a mind.) Yet there’s a certain family resemblance between the two questions, and they tend to attract similar audiences split along similar lines. Understanding the Soulist-Mechanist debate can clarify what we should and shouldn’t expect from AI research, which in turn can help us approach it in a fruitful, honest, and humane way.
Jaki’s book was recommended to me several years ago as an outstanding treatment of this topic. Unfortunately, having finished it, I can’t really recommend Brain, Mind and Computers as reading material. Jaki is polemical, meandering, and at some points clearly wrong. He has obvious ideological commitments and is uninterested in discussing even the possibility that they might not be valid. In what follows I’m sometimes going to be steelmanning him in the interest of charity. Even so, I ended up more sympathetic to Jaki’s position than I expected. On many points he reasoned better than the AI advocates of his day, and we can learn a lot from his understanding of scientific history.
Brain, Mind and Computers is organized into four chapters (later editions add a fifth) which approach the question “Can machines think?” through the lens of various scientific fields: physics, neuroscience, psychology, and mathematical logic respectively. It’s a thought-provoking structure but in the end not a very compelling one. Instead, I’d like to present Jaki’s work as a combination of three different kinds of arguments: one based on scientific history, one based on practical considerations of engineering or experimentation, and one based on pure philosophy of science.
1. The Historical Argument
Jaki’s account of the idea of a thinking machine starts, surprisingly, with physics. He notes that Mechanism amounts to accounting for thought purely in terms of predictable physical laws. If causes and effects in a mental system are fully quantified, the thinking goes, their behavior should be deterministic (and controllable), just like that of the physical universe. Jaki further notes that this isn’t a new idea. In fact, as science advances in its successful prediction of physical processes, the idea of similarly taming mental processes follows like a fad.
Jaki manages to find ridiculous examples of Mechanism in even the greatest scientific triumphs. Lucretius, the great ancient Roman proponent of atoms, imagined that vision was caused by objects emitting miniature physical replicas of themselves to be absorbed by the eyes and retained by the brain. Descartes thought the brain worked like a hydraulic machine; Newton speculated that it ran on “vibrations” in the “aether”. 19th-century writers seized on electricity and magnetism as possible mechanisms. Pavlov baselessly claimed that the physical mechanism for his famous conditioned reflexes was “irradiation” of the brain, of all things. In a brilliant bit of intellectual jujitsu, Jaki describes all these theories as forms of vitalism; they didn’t explain thought but only asserted, without further detail or understanding, that it obeyed certain mechanistic forces. In modern times, Jaki suggests, the pattern doesn’t change; we’ve just replaced the old, discredited candidates with the new mysterious force of computation.
As foils for these villains Jaki also introduces a series of scientific heroes. He paints them as the less zealous, more responsible group– the ones who had more direct experience with the calculating devices and physical phenomena in question. His list includes Pascal, inventor of the first mechanical calculator; Leibniz, Newton’s more admirable rival in the race to invent calculus; Ampère and Maxwell, the titans of classical electromagnetism; Kelvin and Babbage, inventors of groundbreaking computing devices; Vannevar Bush, a crucial early developer of electrical computers; and John von Neumann. All of them, Jaki says, strongly rejected interpretations of their devices as “thinking”, in opposition to the inevitable hype from less grounded and disciplined observers.
Jaki’s point here isn’t that his heroes were right and his villains were wrong. In hindsight everyone agrees about that: the human brain doesn’t run on aether or magnetism, and Leibniz’s or Babbage’s machines weren’t minds. Jaki’s point is that there’s a pattern of flawed reasoning here– a cognitive bias. That pattern is something like “unsophisticated observers of mechanical processes assume that the mind can easily be explained in terms of those processes as well.” Jaki wants readers to understand viscerally that the cognitive leap from “look at this cool mechanical process!” to “thought must work just like that!” is tempting but fundamentally invalid. He thinks AI proponents in his day are making the same mistake.
Jaki applies similar reasoning from another direction when considering the influence of physical science on psychology. Here his villains are Skinner and– more surprisingly– Freud, whom he quotes as saying mental energies must be treated like quantities and flows “in the same sense as a physicist employs the concept of a fluid electric current”. His main heroes are Francis Galton, William James, Carl Jung, and Alfred Adler, all of whom he cites in support of the claim that mechanism in psychology is fundamentally limited. Psychologists can observe statistical regularities in a population, but when it comes to an individual, conclusions have to be guided by the fact of that person’s own experience, which is not reproducible and therefore not purely mathematical or mechanical. In this sense Jaki makes the psychologists, too, a study in contrasts between overreaching evangelizers and hard-nosed experimentalists; in this case, though, the relevant experimental data is exactly the unruliness of the human mind.
Even when talking about psychology, though– or neuroscience, or mathematics– Jaki emphasizes physics as the central, motivating metaphor. He sees Hilbert’s program to systematize mathematics (historically important, but wrecked by Gödel) and neuroscientists’ efforts to localize memory in the brain (broadly unsuccessful as of his writing) as attempts to transplant the predictability and controllability of physics to places it fundamentally can’t thrive. Against this, Jaki asserts that the explanatory power of mechanism in physics became so inspiring precisely because physicists refused to apply it to problems where thought was involved. He cites Eddington, Einstein, Schrödinger, and Bohr as physicists who consciously disclaimed any effort to explain thought or similarly metaphysical concepts in terms of the physical phenomena they studied. Jaki claims that straying beyond this to explain mental phenomena in terms of mechanical rules is a recipe for scientific nonsense.
Is Jaki right?
He certainly makes some effective points. Jaki has a talent for catching, and citing, intellectual giants at their most embarrassing moments of wrongheadedness. This doesn’t prove anything in itself, but it’s a useful reminder that scientific progress isn’t the smooth arc that hindsight shows us. It’s full of bumps and wrong turns and nasty arguments over how to read the map. If we project the easy certainty of a textbook forward onto our own era, we give ourselves a false confidence in the strength of our paradigms. The paradigm of treating thought as computation, and vice versa, could easily be one such example of that very real cognitive bias.
What about Jaki’s positive examples? One does wonder if his sources might be cherry-picked. I decided to spot-check him by going to an absolute authority– Wikipedia’s List of pioneers in computer science– and trying to assign Mechanist or Soulist views to everyone on the list up through 1945, roughly the period Jaki covers. Most of Jaki’s heroes are on it. The pioneers he doesn’t cover have mixed or unclear views in many cases, but generally tend toward supporting Jaki’s thesis. (For example, in 1913 Leonardo Torres y Quevedo enthusiastically described even simple feedback devices as “knowing” facts, but also made clear he didn’t think a machine could ever “ponder” something.) The list also includes a number of logician-philosophers– Boole, Russell, Frege, Brouwer, and so forth– whom Jaki might class as villains. Since their contributions are seriously complicated by the later work of Gödel, Church, and Turing, I’m willing to treat them as a separate group and say Jaki was in the right on this one. Turing himself is of course the great exception (Gödel and Church were generally Soulist) but one Jaki gives considerable, though unsympathetic, attention to.
I’m not fully persuaded by Jaki’s sharp division of experimentalist heroes from enthusiast villains, but I can’t deny that he’s onto something in his observation. Even in the 21st century it’s fair to observe that scientific popularizers and cheerleaders– Nye, Tyson, maybe Yudkowsky– are much more universally and insistently Mechanist than successful practitioners. Even Gödel, Escher, Bach, now that I read it with Jaki’s critiques in mind, has a certain frustrating vagueness about the details of its Mechanist arguments exactly where precision would be most valuable. When excitement about the power of machines leads Mechanism to be simply assumed in its justification– Jaki is ruthless about noting the circularity – it’s easy to lose sight of whatever reasons might or might not exist for believing Mechanism over Soulism in the first place.
2. The Practical Argument
Jaki’s second main way of addressing the question “Can computers think?” is essentially to point out: “No, they can’t.” He observes that science and engineering haven’t successfully created thinking machines and contends that their progress so far is unlikely to carry them to that goal. In this argument Jaki again adopts an aggressively experimentalist position in opposition to the theoreticians: “Show me the results!” He argues that the results so far have demonstrated neither mechanized thought nor a viable path toward it.
Again Jaki draws on his expertise in scientific history to make this argument. This time, though, his narrative isn’t one of hype versus nuance but one of simple failure. Science, he says, has tried to understand brain function and reasoning in a purely mechanistic way, and it has consistently failed. I’ve already mentioned two of the scientific efforts Jaki covers this way: the neuroscience of memory, and Hilbert’s program.
“Today, no less than two hundred or three hundred years ago,” Jaki writes, “two main facts dominate the over-all field of brain research. The first is the relatively meager extent of our knowledge of the processes taking place in the brain; the other is the complete impasse at which the question of the brain-mind relationship finds itself from the physicalist viewpoint.” In support of the latter claim he emphasizes the failure of scientists over many decades to find the hypothesized physical equivalents of stored memories. He also stresses the brain’s redundancy and plasticity– thousands of neurons die and are replaced every hour, and large chunks can be removed without seriously compromising thought. Furthermore there doesn’t seem to be a straightforward connection between cognitive ability and brain size, and mechanical interpretations of brain activity like EEGs don’t correlate to cognitive experience except in very loose ways.
The real sting in Jaki’s argument, though, comes from the first part of his claim: that no progress has been made on the fundamental issues involved. In this history-of-science narrative there are no heroes or villains, only fallen soldiers. (That includes William James and Francis Galton, whom Jaki treats as heroes elsewhere!) As an explanation of why no progress is being made, Jaki brings in more abstract considerations: the huge number of brain components, the difficulty (intractable, he thinks) of fully quantifying the physical state of cells, the reactive and anti-inductive quality of many organic systems. He considers these sufficient obstacles to ever fully understanding brain processes in a purely mechanistic way.
Similarly, when discussing Hilbert’s program, Jaki emphasizes the centuries-old hope of philosophers that all intellectual truths could be mathematized and then ground out by some kind of machine. Logicians like Bertrand Russell brought this hope to apparent near-fruition in the early 20th century before it was dashed– at least in principle– by Gödel’s incompleteness theorems and Turing’s work on the halting problem in the 1930s. To this basic narrative Jaki adds, as he did with the brain, arguments as to why certain types of progress in mechanistic logic will never be made; for example, he believes the inherent flexibility and ambiguity of language will prevent it from ever being successfully processed by a machine.
Is Jaki right?
Well, no. In this line of argument he makes several missteps, the first and least forgivable of which is assuming that sheer weight of numbers will be enough to save it. He estimates the number of components in the brain at 10^16, a number so massive that… hang on… I’m being told that in 2023 this many bits (roughly 1 petabyte) can be enumerated on a device the size of a breadbox, and that GPT-4 is within an order of magnitude or so of having this many parameters. (Not coincidentally, GPT-4 also does pretty well on the language-processing problems that Jaki thinks are impossible for machines.) From the other direction, Jaki also claims that making a machine play consistently-better-than-human chess is a “practical impossibility” due to the huge search space (he estimates a lower bound of 10^40)– a prediction that’s been blown out of the water by AI techniques that found far more efficient ways to decide on a move.
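The scale comparison is easy to check. Here is my own back-of-envelope arithmetic (not a figure from Jaki), treating each of his estimated components as a single bit of storage:

```python
# Jaki's estimate of the number of components in the brain
components = 10**16

# If each component stored just one bit:
bits = components
bytes_needed = bits / 8            # 1.25e15 bytes
petabytes = bytes_needed / 1e15    # using the decimal petabyte, 10^15 bytes

print(petabytes)  # 1.25 -- i.e., "roughly a petabyte"
```

A number that sounded astronomical in 1969 now describes an amount of storage you can buy off the shelf.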
Besides betting against computer engineering’s incredible progress, Jaki also seems to have been mistaken in counting out neuroscience. Modern experiments have demonstrated, in basic form, the ability to express specific memories by activating small clusters of brain cells. They’ve also provided plausible, though not certain, low-level biological descriptions of memory formation (synapse growth and increased excitability of neurons). Finally, modern tools like MRI and PET scans provide more precise (though still impressionistic) mechanical descriptions of brain activity.
The harder question is whether all this would actually satisfy Jaki’s criteria for full mechanistic understanding. He has a memorable passage on this aspect of psychology that’s worth quoting at length:
Boring’s phrase, “Psychology, if it is to be a science, must be like physics,” is still a phrase, not a reality. What James said of psychology is still true: “psychology is like physics before Galileo’s time– not a single elementary law yet caught a glimpse of.” Whether psychology will ever have its Newton is anybody’s guess. James himself anticipated with confidence such an eventuality, but was also quick to add that the subject matter of psychology will inevitably make its Newton sound rather metaphysical.
That still seems accurate today. If we go looking for modern psychological Newtons we find candidates like Friston with “free energy”, who does indeed “sound rather metaphysical” to say the least (or, to say the most, sounds like a crackpot). Modern neuroscience is still in the proof-of-concept stage– it’s not necessarily providing actionable intelligence about the workings of the mind, as Jaki might demand from a true Mechanist account.
Actually, the same might even be said of AI research. It’s interesting to note that even the large neural nets that compose sonnets and trounce grandmasters don’t really support internal inspection to see why or how they’re making decisions. They’re undeniably mechanical, but the mechanism is curiously non-portable and non-reducible and so arguably doesn’t advance any Mechanist understanding of what a mind might be. If we take Jaki’s predictions in this narrowly experimentalist sense– a bet against practical demonstrations of how mechanistic thought actually operates– then they’ve held up surprisingly well over the past fifty-some years.
3. The Philosophical Argument
Jaki argues that attributing thinking to machines, or machinery to thought, is a well-attested fallacy of scientific discourse (section 1). He also argues that scientific efforts to demonstrate mechanized thought in practice haven’t made, and won’t make, meaningful progress (section 2). But these empirical, or I suppose meta-empirical, arguments– though they constitute the more novel, fascinating, and persuasive portions of Brain, Mind and Computers– aren’t really the main point for him. Jaki believes that Mechanism is an impossibility even in principle, and at various points throughout the book he gives his reasons why.
Unfortunately, many of the reasons are disappointing. Some are outright silly. Jaki claims machines can’t actually understand anything because they only operate on symbols; assumes that quantum mechanics requires the minds of observers to be non-physical; and even suggests that ESP might be evidence of minds acting independent of matter. (To be fair, Turing took this possibility seriously too, but it’s not a good look.) Others are merely unoriginal. One passage summarizes in quick succession Soulist arguments by John Lucas from Gödel’s incompleteness theorem, by Michael Polanyi from infinite regress in formal logic, and by Neville Moray from qualia. There’s little point in discussing any of these further, since Jaki isn’t really adding anything to the discussion. Works like Gödel, Escher, Bach and Scott Aaronson’s Quantum Computing Since Democritus are good sources for standard Mechanist responses.
There are two major arguments Jaki makes, though, that I wasn’t familiar with. First, Jaki offers a striking description of why math works: because its units are homogeneous. The 2s in 2+2=4 are assumed to be interchangeable, and you can only subtract apples from oranges if you don’t care about the difference between them. When we compare operations in a computer to thoughts in a brain, where does the homogeneity come from? Sure, we could go down to the level of atoms if necessary, though Jaki claims this might not work even in principle due to quantum mechanics. Or we could raise the computation to a purely probabilistic level: Jaki has no problem with statistical regularities that apply across a sample of many minds. But for anything in between, the process of mathematization has to assume a structural equivalence that might simply not be there. The individuality of the mind might get in the way.
Second, Jaki emphasizes the steep obstacles to a mechanistic account of what he calls an “organic” system. He doesn’t mean this in a grossly biological sense, but more as a gesture toward what we might call “anti-inductive” or maybe “irreducible” behavior. This is behavior that reacts to probing, and systematic study, and attempted control, by changing– behavior, in other words, that can’t easily be disentangled from its context, including the context of its own mechanization. As Douglas Adams joked: “There is a theory which states that if ever anyone discovers exactly what the Universe is for and why it is here, it will instantly disappear and be replaced by something even more bizarre and inexplicable.” The joke is funny because the “Universe” of classical physics is about the only thing in our experience that doesn’t do that. Anti-inductiveness is a distinctive property– maybe a fundamental property– of minds.
Between these two arguments, Jaki has the beginnings of a philosophical case for the human mind as an irreducible and non-interchangeable whole that inherently resists being captured in mere numbers. But that case depends on facts that aren’t in evidence. In practice, there do seem to be regularities– what Hofstadter calls isomorphisms– that apply well below the level of the whole mind, but high enough that they can be tractably mapped onto thought processes. If there’s a viable argument here it’s a variant of the practical argument, not a philosophical one.
The problem is that Jaki really wants his argument to be philosophical. He has a bad habit of essentializing patterns of thought– taking phenomena like sense perception or meaning or consciousness and assuming that they’re fundamental ontological entities. With this assumption in place, the conclusions he wants just naturally fall out of it. For example, in several places he argues that isomorphisms between reason and the physical world in a Mechanist model are impossible– because reason is absolute while sense data are statistical; because brain patterns are “just” electrical signals which couldn’t reproduce physical qualities like the wavelength of light; because physical objects can have properties that aren’t shared by the symbols for them, like the indivisibility of atoms. Jaki seems not to grasp the way concepts like the map-territory distinction, or symbols taking on meaning through interpretation– bread and butter for Hofstadter and for most who work with computers today– might make his comparisons seem ridiculous. To him they indicate a fundamental incompatibility.
Is Jaki right?
I’ve already given away my overall answer: no. When talking about the philosophy of science, rather than its history, Jaki is outside his expertise and it shows. He gets more bombastic and less disciplined, grasps at any intellectual straw he can find, and summarizes others’ arguments rather than making his own. For all I know it might be possible to prove Soulism from first principles, but Jaki’s style of philosophical assertion is nowhere near robust enough to bear the weight of such a proof.
On the other hand, if the goal of all this isn’t to prove Soulism unconditionally, but just to show that Mechanism breaks down at some point, then Jaki is in better shape. Compare Jaki’s more developed arguments about individuality and irreducibility with the limitations Isaac Asimov placed on his fictional “psychohistory”. Without spoiling the Foundation series in detail: psychohistory is envisioned as a successful mechanization of thought, a way of predicting, even controlling, the operation of human minds on strictly scientific principles. However, to make it plausible and interesting, Asimov has to stipulate that psychohistorical mechanization works only in aggregate (millions of people over long time periods); only if all outside influences are accounted for; and only if the subjects aren’t aware of the predictions! That’s a decent model for the ways Jaki thinks real-life Mechanism is likely to be limited as well. And limitations on what Mechanism is able to do even in principle create space for Soulism to have its say.
It’s worth pausing here to understand Jaki’s positive beliefs about the nature of the mind: his actual reasons for rejecting Mechanism out of hand. They explain a lot of what he says, they’re coherent in their own way, and they’re very different from anything most of us have ever been seriously exposed to. Mechanism is such a pervasive assumption of the modern world that it’s hard for us to even consider any other possibility. If we do try to dissent from it, we tend to take refuge in metaphysical hairsplitting– P-zombies, Chinese rooms, subjective experiences– that tries to preserve consciousness as an independent category while leaving external behavior entirely predictable.
Jaki doesn’t subscribe to any of this. He’s both a sincere experimentalist– a firm believer in science proceeding by observable tests of hypotheses– and a Jesuit committed to a standard Catholic philosophical account of the mind, which treats it as a characteristic and perhaps definitive quality of the human soul. In contrast even to dualists like Descartes, Jaki treats the body and soul as an inseparable union. (Fascinatingly, he links this to the Christological doctrine that Jesus was a single person uniting the full human and divine natures.) He insists that the physical state and actions of the body are not to be separated from the non-physical– spiritual, in fact– state and qualities of the mind.
From this view several things follow. First, mind isn’t simply an emergent property but its own ontological entity, God-given and in some ways God-resembling. Second, mental faculties like knowledge, creativity, understanding, and meaning are to some extent metaphysical and absolute. Jaki believes a computer couldn’t “really know” something it stores, or “really mean” something it outputs, because there’s no rational grounding for that knowledge or meaning; in contrast the human mind is capable of grounding itself, producing sure self-knowledge and self-understanding. Third, this capacity cashes out as a non-subjective difference in behavior, which is not fully quantifiable (since otherwise, Jaki acknowledges, a computer could in principle reproduce it) but nevertheless is publicly observable.
This is a jarringly high view of the mind, and Jaki presents no particular evidence to make us believe it. On the other hand, he sketches a plausible way in which Mechanism might break down beyond a certain point and leave us with no other viable alternative. Showing us the shape of something we thought was a philosophical impossibility– an approach to the mind that’s evidential and compatible with our intuitions, yet non-mechanist– is an achievement in itself.
Conclusion
Why did Jaki feel the need to write a book about all this, anyway? Why should we care if he’s right or wrong?
In a few places in Brain, Mind and Computers Jaki hints at the answer. Mechanism, he says, isn’t just wrong but dehumanizing. Believing that computers will never rise to the current standard of humans, he worries that humans will instead be lowered to the standard of computers– that they’ll be seen as mere mechanisms, suitable for control and use as cogs rather than accepted as having inherent dignity. The false promise of Mechanism could poison the real promise of human thought.
This also explains why Jaki was so concerned about analyzing and predicting the future path of progress in AI. Given his experimental outlook, to satisfy him intellectually it would probably have been enough to note that computers in his day didn’t produce behavior characteristic of thought, and leave it at that. But for Jaki the real concern was always the projection of hypothetical future progress onto current circumstances in ways that distorted understanding.
Mechanist AI advocates are wont to accuse their Soulist opponents of “moving the goalposts”. As soon as the Soulists provide a concrete prediction about what computers won’t be able to do, the Mechanists get to work doing it– and so far have tended to succeed sooner or later. Then when the Soulists object that they still aren’t satisfied, the Mechanists understandably cry foul. But understanding Jaki’s position has given me a different perspective on this dynamic. The most difficult intelligent behaviors for AI to reproduce are also the most difficult to specify, and therefore the least likely to be set as a concrete goal. The fact that AI keeps achieving such goals doesn’t really tell us much about where it will end up. The real goalpost-moving happened when the Mechanists latched onto a concrete prediction, because the real goalposts are at the end of the field. AI is only a mind if it can reproduce all aspects of mind to humans’ satisfaction, not just the easy ones. This has been obvious since Turing.
To see why this matters, suppose that instead of trying to mechanize intelligence we’re “only” trying to mechanize life. Specifically, we want a machine that will act as “alive” as a puppy does. As a first step toward this we’ll try to see if all the behavior we want can be explained by Newtonian mechanics at a macroscopic scale. We observe that under certain conditions such mechanics do explain the behavior of the puppy. Specifically, if we pick it up and give it a kick, it travels in a perfect parabolic arc until it hits the floor. But if we end our investigation there– if we treat predicting only certain kinds of puppy movement as our goalpost, and say that those kinds of movement sufficiently capture life– then something will have gone wrong. Our subsequent treatment of “living” things will be not only factually mistaken, but extremely unpleasant for any puppies that we happen to meet.
So the question of whether we’re correct in projecting the current course of AI all the way forward to the horizon labeled “intelligence”, or whether our expectations will collide with insurmountable limitations, is an intensely relevant one for our behavior in the here and now. One can observe that AI keeps succeeding and succeeding at incremental goals, or that it keeps failing and failing to come to grips with its ultimate task. To figure out what path future development is likely to follow, we need to reason critically about the grounds of our beliefs about that development– and that’s exactly where Jaki comes in. He claims that the hype around mechanized thought is blinding us to the serious limitations that will prevent us from bringing it to fruition.
So: Is Jaki right?
At this point I have to admit I've been misleading you a little. "Is Jaki right?" is a bad way to put the question. It probably doesn't have an answer; if it did, that answer would probably have to be "No". After all, Jaki wrote over fifty years ago. A huge gap separates him from current AI research and there's no particular reason to expect he could speak to it. A more relevant and interesting question would be: "was Jaki right?" Did he reason successfully about the AI research, and AI hype, of his own day?
I believe he did.
Jaki wrote Brain, Mind and Computers in 1969 and updated it with an additional chapter (which I’ve mostly skipped over in this review) in 1989. Hofstadter wrote Gödel, Escher, Bach in 1979. Deep Blue beat Garry Kasparov– the first big falsification of Jaki’s concrete predictions– in 1997. The periods 1974-1980 and 1987-1993 are known in hindsight as the “AI winters”– stretches when existing paradigms sputtered out, funding dried up, and progress appeared to slow or stop.
See what’s going on here?
However incomplete his arguments might be as applied to our day and age, from the perspective of AI research in Jaki’s day they were absolutely on point. AI was overhyped; progress was stalling; there were fundamental obstacles, like the irreducibility of cognition and the combinatorial complexity of language, that contemporary approaches couldn’t grapple with. Furthermore, if we penalize modern Soulism for Jaki’s misconceptions we should also penalize 1960’s Mechanism for these failings! Yes, Jaki essentialized cognition in a way that seems ridiculous to us, with Gödel, Escher, Bach and another forty years of development on its themes behind us. But so did Jaki’s Mechanist contemporaries! They were all operating under a now-abandoned AI paradigm that really did expect to recreate the structures of high-level cognition as explicit objects in a computer program, in a more or less isomorphic way. Small wonder that Jaki thought of AI using the same paradigm! Even Hofstadter, ten years later, was confused on this point. On page 572 of Gödel, Escher, Bach he writes:
Of course, Artificial Intelligence research is not aimed at simulating neural networks, for it is based on another kind of faith: that probably there are significant features of intelligence which can be floated on top of entirely different sorts of substrates than those of organic brains.
This expectation of “significant features” above the neural level, independent of the organic substrate, is precisely the assumption that Jaki identified as an unsupported article of AI “faith” and soundly refuted in Brain, Mind and Computers. The modern paradigm of neural nets stacked layer on layer and trained in unsupervised ways is almost directly opposite to Hofstadter’s expectation here. In that sense, Jaki was entirely right!
In other words, the main lesson I learned from reading Brain, Mind and Computers was one of reading intellectual history backwards. The development of scientific paradigms is rarely as simple or overdetermined as it looks in hindsight, and projecting it forward as uncomplicated progress in those same paradigms is asking for trouble. The old AI paradigm had to die a nasty death, and decay over the course of the AI winter, before a new one based on reasoning as an emergent rather than explicit capability could rise to take its place. Jaki’s only real mistake was confusing the doom of the paradigm with the doom of the entire project.
Or was it? This was the question that still bothered me after all my research on Jaki: what could, or would, Jaki have put forward as a Soulist objection to the wildly successful (but increasingly overhyped) modern paradigm of AI research? Not being an expert I don’t have a complete answer, but I’ll wrap up this review by offering a few thoughts on how Jaki’s brand of Soulism– historically informed and experimentally focused, with a high view of the human mind– might remain viable today.
Let’s start by stating the dilemma. With the problems of scale and structure increasingly solved by AI developers, Soulism urgently needs some coherent way to answer one particular objection: “Okay, but what do you think would actually happen if you scanned a brain and simulated the physics of all the neurons?” To build intuition for how this might get answered, let’s start with a simple dualism based on the relationship between a character in a video game (body) and the person playing that character (mind). Try to imagine what experiments one might perform within the physics of a video-game world to determine the existence of player characters in that world. The game could make the physical characteristics of PCs identical to those of NPCs down to an arbitrarily subtle level. However, we’d still expect the behavior of PCs to remain fundamentally more complex and less predictable than that of NPCs in ways that could be observed and validated– though not fully understood– within the game.
That’s not a fully satisfying analogy, though, because it treats the mind-body division as purely artificial and doesn’t really explain the nature of the border where the two interact. Fortunately, it’s plausible that the mind might have physical effects that physics is simply incapable of predicting. Quantum mechanics, though it’s probably not the actual mechanism for them, at least proves that such effects can exist. Physics before QM assumed that particles, like billiard balls, had fully measurable positions and velocities; QM made it clear that there were fundamental limits to the determinism of those properties, and the strongest truth physics could express was a probability distribution. This was– as Jaki emphasizes– a rude shock to mechanists. Their retroactive attempts to redefine baseline reality as consisting of just the probability distributions seem to seriously beg the question.
Finally, even if it’s fully exposed to physical interaction, mind might stay practically distinct from matter by being in some relevant sense computationally irreducible. Scott Aaronson, in Quantum Computing Since Democritus and elsewhere, fascinatingly reframes philosophical questions like the mind-body problem as questions of computational complexity. The classic unproven conjecture of complexity theory is “P != NP”: behaviors can be exponentially more difficult to reproduce than to verify. If consciousness is such a behavior, as seems likely, then being experimentally observable but infeasible to reconstruct is no contradiction. Jaki wouldn’t like this answer, and it doesn’t resolve the question of how consciousness works so much as defer it, but it does increase the plausibility of a scenario where AI R&D continues for an arbitrarily long time without producing anything acceptably mind-like.
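To make the verify-versus-reproduce asymmetry concrete, here’s a minimal sketch of my own (not from Jaki or Aaronson) using Boolean satisfiability, the canonical NP problem: checking a proposed solution takes time linear in the formula, while the obvious way to find one tries exponentially many assignments.

```python
from itertools import product

# Illustrative sketch of the P-vs-NP asymmetry via Boolean satisfiability.
# A formula is a list of clauses; each clause is a list of literals,
# written as (variable_index, is_positive) pairs.

def verify(formula, assignment):
    """Checking a candidate solution is cheap: linear in formula size."""
    return all(
        any(assignment[var] == positive for var, positive in clause)
        for clause in formula
    )

def search(formula, num_vars):
    """Finding a solution by brute force is exponential: up to 2**num_vars tries."""
    for bits in product([False, True], repeat=num_vars):
        assignment = dict(enumerate(bits))
        if verify(formula, assignment):
            return assignment
    return None

# (x0 OR x1) AND (NOT x0 OR x1): satisfied whenever x1 is True.
formula = [[(0, True), (1, True)], [(0, False), (1, True)]]
solution = search(formula, num_vars=2)
print(solution)  # → {0: False, 1: True}
```

If P != NP, no clever algorithm closes this gap in general: a behavior could be straightforward to recognize from the outside yet infeasible to reconstruct from scratch, which is exactly the loophole the paragraph above describes.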
Of course, neither my speculations here nor Jaki’s in Brain, Mind and Computers are a definitive refutation of Mechanism. If such a refutation were possible, the world would look very different. However, I can’t endorse Hofstadter’s wholesale dismissal of Soulism either. Reading Jaki taught me a healthy skepticism about the course and character of scientific progress. We can all too easily get drawn into hasty conclusions about it that neither history nor our current assumptions actually support. If that skepticism could be applied to modern AI research, we might be better able to grapple with the problems it presents in a truly thoughtful way.