Last time, we imagined that a cognitive “confabulation” process (and therefore all intelligence) happens in the brain as an interference phenomenon, or a sort of nonlinear convolution, among complicated modes of oscillation on a neural network.
But this idea is immature and unfunded, and no experiment is currently prepared to rigorously test any hard prediction it might make.
So instead, let us wave our hands, consider the typical living person as an empirical phenomenon, and attempt to sketch a basic theory of idea genesis by thinking about it/him/her. A spoken sentence is commonly defined in English class as a “complete thought,” and we hypothesize that this definition corresponds closely to some specific thing that might be called an “understood idea” as it enters or exits a conscious person, given the following conditions:
1) The person is arriving spontaneously at each output word, i.e. composing sentences as they are spoken. This is different from a “memorized idea,” which could instead be modeled as a sort of large word in a person’s vocabulary. It is also different from a “perceived idea” like this sentence you are reading: in that case, a large percentage of the processing devoted to “finding” each word is cut out and replaced with less-intensive processing devoted to parsing each word and, typically, “sounding it out” internally as your eye scans the page. Incidentally, that is why it takes much longer to write a book than to read it.
2) The person really understands each input word. Verifying this is a philosophical dead end; it can only be inferred from the person’s replies.
So where do these understood ideas come from? We tend to agree on what makes a coherent sentence, and far chaining mellow peninsula no binder what. But how do we arrive at the correct series of words for each idea? It is not really possible to identify the physical source of any particular word I might say myself, because doing so would require me to say new and different words (at least internally), and so on. But it is still possible to theorize a general mechanism by which this can happen, one that is consistent with the principles of analog confabulation.
A good place to start is with the acknowledgment that words are not guaranteed to mean exactly the same things to different people; it is only by assuming a considerable amount of shared experience that we can rely on these labels to signify approximately what we intend to communicate. It would also be wise to acknowledge that most things intelligent beings can understand are not easily translated into words: creatures with “large” vocabularies arrived only recently, so we have a rather naive understanding of what a “large” vocabulary actually is.
With that in mind, let’s get right to the core of the matter: what makes a certain word or pattern part of a person’s vocabulary? What is its function in relation to other words, and the people who use them? I consider it logical and correct to describe each word as a reminder of some shared experience. Why does the word “apple” mean what it does? Because it has been associated with the experience of an apple since before any one of us was alive. I know what the word means because I have experienced it so many times in the presence of apples. I can communicate this to other people, because when dealing with apples, I am strongly inclined to spontaneously arrive at that word, and externalize it.
The paradox, then, is this: if every word in a given vocabulary has to refer to some common feature of experience, how do people communicate new things? Well, there are several other factors to consider. First, it is possible to arrange familiar words in a way that reveals some previously unfamiliar aspect of the relationship between them. When these arrangements are particularly witty or profound, they are often called “jokes.”
Second, it is sometimes possible and even necessary to create new, completely unfamiliar words when they are required by a new idea. In these cases, if the new words are particularly appropriate or useful, they must refer to some common feature of experience that has not been named, and so they are assimilated into the shared vocabulary of those who understand the new idea. That is how language evolves.
Third, human communication has always been imprecise at best and useless at worst, so there is hardly any guarantee that listeners will ever understand anything I say in the same way that I do. This imprecision is usually ignored by humans, yet it causes communicated ideas to evolve in unpredictable and not necessarily unhelpful directions. On the other hand, when we read and write precise, executable computer code, we often find that simply reading the code as one would read a book provides no useful insight. To rigorously understand a computer program or a mathematical proof, one must essentially construct a perfect imitation of some discrete state of mind achieved by its original creator, and it is not a coincidence that our relatively primitive machines can be readily configured to make use of these same ideas. Nor should we be surprised that drilling children in the efficient execution of algorithms does little to produce creative adults.
Luckily, none of these factors leads to contradiction when a neural network is imagined as an analog phenomenon; in fact, the reality seems much more consistent with this framework than with typical digital, discrete-time neural networks. The idea requires a rather uncompromising philosophy once it is extrapolated far enough, but that is a common problem with any broad scientific theory. The most difficult point to accept will be that in this view, there is no further control system or homunculus that sits “behind” the interference phenomenon in any sense; the phenomenon itself is the only control mechanism present. This challenging idea might lead some to conclude that insanity is only one unfortunate circumstance away, or even that free will itself does not exist. I would caution those who go that far to be aware of exactly what it is they are defining: if “free will” means the capacity for human beings to make decisions that contradict every rational force or expectation in the known universe, then explaining in scientific terms how this condition arises only serves to reinforce its reality.
It is trivial to cover edge cases (read: far from the cortex) with this model. For example, medical science already knows that the force conveyed through a muscle is proportional to the frequency of the incoming nerve pulses, not to their amplitude or anything like that. Considering this, “reflex actions” and “muscle memories” can be explained as progressively longer signal paths that penetrate farther toward the cortex proper but are quickly reflected back at the muscles that will perform the response. The difficulty comes with explaining more sophisticated animal behaviors, and finally with accounting for the nature of introspective consciousness. The signal paths for these actions are certainly orders of magnitude more complex than any we can directly observe at present, but it is not impossible, or even implausible, that the underlying physical mechanism should essentially be the same.
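To make the rate-coding point concrete, here is a minimal sketch in Python, assuming a simple saturating relationship between firing rate and steady-state force. The curve and both constants are invented for illustration, not taken from physiology:

```python
# Toy sketch, not a physiological model: muscle force is graded by the
# *rate* of incoming nerve pulses, while each pulse itself is all-or-nothing.
# The saturating curve and both constants are assumptions for illustration.

def muscle_force(rate_hz, f_max=1.0, rate_half=30.0):
    """Steady-state force as a saturating function of firing rate.

    f_max     -- maximal (tetanic) force, arbitrary units (assumed)
    rate_half -- firing rate that produces half of f_max (assumed)
    """
    return f_max * rate_hz / (rate_hz + rate_half)

for rate in (5, 10, 20, 40, 80):
    print(f"{rate:3d} Hz -> relative force {muscle_force(rate):.2f}")
```

Doubling the pulse rate roughly doubles the force at low rates and has a diminishing effect near saturation, which is all the point requires.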
The central hypothesis linking analog confabulation with intelligence suggests that, in reality, conscious thought is only ever quantized or digitized in the sense that a given signal either “resonates with” or “does not resonate with” the rest of the brain. These signals could not simply be added or multiplied in a linear fashion, as the space of human ideas is not affine. Thus, a set of words grouped together in a specific order can encode much more information than the same words considered individually. Furthermore, ideas beyond a certain elementary complexity are never 100% guaranteed to persist. A common annoyance called a “brain fart” typically happens when one word or phrase from an idea that “should” be resonating with the others fails to enter the feedback condition, due to unexpected interference from any number of sources. This condition is not usually permanent, but people can and do permanently forget ideas that don’t resonate with anything for the rest of their lives.
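As a crude counting illustration of the claim that order carries extra information (plain arithmetic, nothing specific to brains): a bag of n distinct words has n! possible orderings, so word order alone can contribute up to log2(n!) bits beyond the identities of the words themselves.

```python
# Back-of-envelope: information carried by word *order* alone.
# A bag of n distinct words has n! orderings, i.e. up to log2(n!)
# extra bits on top of the identities of the words themselves.

from math import factorial, log2

for n in (3, 5, 10):
    print(f"{n:2d} words: {factorial(n):>9,} orderings "
          f"(~{log2(factorial(n)):.1f} extra bits from order alone)")
```

Ten words already admit over three million orderings, about 22 extra bits, before any nonlinear interaction between the words is even considered.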
Is it really possible to understand intelligence if this much ambiguity is required? Analog systems have characteristics that make them very useful for certain tasks related to intelligence, so it is in our best interest to try. After it has stabilized, a neural network arrives at a sort of “temporary solution” in which the weightings of its connections are configured so that no (or few) weightings change on the next recurrence of network activity. It would seem that an analog system could be stabilized in this manner to much more significant precision, and possibly in much less time, especially if any “aliasing” effect of the digitized neurons causes disruptive oscillatory behavior to persist longer than it otherwise would. The improvement over coarse digital algorithms would likely be significant, as evidenced by the fact that bees can reliably discover the best routes to maximize food collection using very little available hardware. A digital simulation of physically precise or effectively “continuous” neural networks is possible and has been attempted, but the complexity and price of such a system are still prohibitive, to say the least. The alternatives would appear to be either an enormously complicated analog computer, or the convenient discovery of some mathematical trick that makes efficient modeling with Turing machines possible.
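For a concrete, if drastically simplified, picture of a network settling until nothing changes, here is a minimal sketch of a classic Hopfield-style binary network. It is only a digital cartoon of the analog process imagined above; the network size, stored patterns, and noise level are arbitrary choices:

```python
# Minimal Hopfield-style sketch: recurrent feedback settles into a stored
# pattern (an attractor), loosely analogous to a signal that "resonates"
# with the rest of the network. Sizes and patterns are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
N = 64                                        # binary (+1/-1) units
patterns = rng.choice([-1, 1], size=(3, N))   # three stored "ideas"

# Hebbian weights: each stored pattern digs its own energy well.
W = sum(np.outer(p, p) for p in patterns) / N
np.fill_diagonal(W, 0)

def settle(state, max_steps=100):
    """Recirculate until one pass of feedback changes nothing."""
    for _ in range(max_steps):
        new_state = np.sign(W @ state)
        new_state[new_state == 0] = 1
        if np.array_equal(new_state, state):  # stabilized
            break
        state = new_state
    return state

# Corrupt a stored pattern, then let feedback clean it up.
noisy = patterns[0].astype(float)
flipped = rng.choice(N, size=12, replace=False)
noisy[flipped] *= -1

recovered = settle(noisy)
print("overlap with stored pattern:", int(recovered @ patterns[0]), "of", N)
```

The loop stops exactly when a pass of feedback changes nothing, a discrete analog of the “temporary solution” described above; a corrupted input typically falls back into the nearest stored attractor, while an input unrelated to anything stored may settle somewhere meaningless.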
Therefore, at present this perspective on high-level behavior and intelligence might be developed further in a qualitative field like psychology. One intriguing theory of mind, originally published by Julian Jaynes in 1976 in The Origin of Consciousness in the Breakdown of the Bicameral Mind, suggests that humans went through a phase of “bicameral” mentality in which one part of the brain that generated new ideas was perceived by the other part as a member of the external universe. Jaynes suggests that this “bicameralism” was similar in principle to what we call “schizophrenia” today, and can account for all sorts of historical oddities that we call religions, myths, and legends. The theory is based on the core epiphany that logical and learned behaviors predate consciousness and indeed provide some of the necessary conditions for its existence. This is used to push the idea that the human “phenomenon” emerged from interactions between sophisticated, organized animals and the external environment, after a special phase of “bicameral society” in which most humans were not even self-aware.
Jaynes’s historical analysis touches on many interesting ideas and provides enough evidence to demand serious consideration, but its most obvious shortcoming is the way it skips from an initial, abstract consideration of the separation between behavior and consciousness straight to a discussion of Gilgamesh and the Iliad. We pick up the story of mankind there, and nothing is said of the millions of years of evolution leading to that point. Any complete theory of intelligence has to account for canine and primate societies as well as early human ones, and Jaynes’s bold assertions leave the reader wondering whether there are any self-aware apes leading their mindless troops through the jungle.
In the framework of analog confabulation, we can ignore some of these hairier philosophical challenges for the moment, as the bicameral mind simply bears striking similarities to one intuitive model of a general pre-conscious condition. When a stimulus enters the kitty cat, it responds immediately and predictably. This is the behavior of a system that is not considerably affected by feedback. It can be characterized as involving a sort of linear path from the senses into the cortex, with one or two “bounces” against the cortex and then back out through the muscles as a reaction. It’s really quick and works wonderfully for climbing and hunting, but it means that the cat will never sit down and invent a mousetrap.
Self-aware creatures, on the other hand, behave as if there is significant feedback, at least while introspecting, and their brains might be characterized as having a great number of “loops” in the neural pathways. This means that the resonances theorized by analog confabulation can be extremely sophisticated, though naturally, sophisticated resonating structures would have to develop before any of that could happen. The critical threshold must involve enough complexity to process a vocabulary of a certain size, but it could include communication of any kind, using any of the senses.
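As a cartoon of the difference, and nothing more, the sketch below contrasts a single feed-forward “bounce” with a loop that recirculates its own output until it settles. The weights are random stand-ins, and no claim is made about real neural circuitry:

```python
# Toy contrast: a reflex-like feed-forward bounce versus an
# introspection-like feedback loop. All weights are random stand-ins,
# scaled small enough that the loop settles to a fixed point.

import numpy as np

rng = np.random.default_rng(1)
n = 8
W_in = rng.normal(size=(n, n)) / np.sqrt(n)           # input coupling
W_loop = 0.5 * rng.normal(size=(n, n)) / np.sqrt(n)   # weak feedback

stimulus = rng.normal(size=n)

# "Cat" mode: one bounce from senses toward muscles, no recirculation.
reflex = np.tanh(W_in @ stimulus)

# "Introspective" mode: output is fed back in until activity stops changing.
state = reflex.copy()
for step in range(200):
    new_state = np.tanh(W_in @ stimulus + W_loop @ state)
    if np.allclose(new_state, state, atol=1e-6):
        break
    state = new_state

print("reflex output: ", np.round(reflex, 2))
print("settled output:", np.round(state, 2), f"(after {step} iterations)")
```

The reflex answer is available immediately; the looped answer takes time and differs from it, which is the whole trade-off attributed to the cat above.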
The question of when or whether bicameral human societies existed is unaffected by any of this, but at the same time the possibility cannot be ruled out. It might even be valid to say that, for example, dogs have “bicameral minds” like Jaynes claims ancient humans did, except that their vocabulary is limited and not fully understood by us. Much of it could be roughly translated into simple, impulsive ideas like “I’m hungry!” or “come play!” or “squirrel!” like the dogs in Up, but a dog could never say “I’m thinking, so therefore I exist!” in the same manner. Most dogs have not discovered that their brains are the source of their own ideas, and even if they did, they would not have any good word for “think.”
So what solid logic supports this theory in the end?
– Wernicke’s area and Broca’s area are two topologically complex parts of the brain that are active in understanding and forming words, respectively. A high-bandwidth neural loop, the arcuate fasciculus, connects them.
– A large body of circumstantial evidence, some of which is included here:
  – Uniquely “human” behaviors like laughter, dancing, singing, and aesthetic choices can all be said to have a certain “rhythmic” component that describes the behavior intuitively and at a low level. Each behavior would then involve periodic signals, by definition.
  – More specifically, if laughter really does betray some “inner” resonance that happens involuntarily when a human encounters the right kind of new idea, that phenomenon suddenly makes a whole lot more sense in an evolutionary context.
  – Meditation reveals how new ideas arrive as unexpected, sudden, and sharp feedback loops, which often take some time to deconstruct and translate into the appropriate words, but are nevertheless very difficult to erase or forget. That is, of course, unless an idea arrives in the middle of the night, in which case the noise of REM sleep can overwrite anything that is not written down.
  – The fact that words have to “happen” to a person several times before they become useful means that each has a periodicity, even if it is irregular. Some words, like “day” and “night,” even occur with regular periodicity.
  – Music has profound effects on the mind. Duh.
  – Light also affects mood, and too much of the wrong spectrum can make you SAD.
I’ll try to keep this list updated as I remember more circumstantial evidence that should already be written down in a notebook, but it seems like there would be a lot. In any case, if you *ahem* have thoughts about this theory, please do share them. Nobody really knows the answer to any of these questions yet, so all ideas are appreciated.