Here’s an updated version of a paper I wrote for Amit Ray’s class last quarter.
ABSTRACT
We assume that intelligence can be described as the result of two physical processes, diffraction and resonance, occurring within a complex topology of densely and recurrently connected neurons. A general analog operation representing the convolution of these processes might then be sufficient to perform each of the functions of the thinking brain. In this view of cognition, thoughts are represented as sets of “tuned” oscillating circuits within the neural network, and emerge as discrete symbolic units only in external reality. This view poses several challenges to anyone interested in more efficient simulation of brain functions.
INTRODUCTION
In the early years of computer science, intelligence was theorized to be an emergent property of a sufficiently powerful computer program. The philosophy of Russell and Wittgenstein suggested that conscious thought and mathematical truths were both reducible to a common logical language, an idea that spread and influenced work in fields from linguistics to computational math. Early programmers were familiar with this idea, and applied it to the problem of artificial intelligence. Their programs quickly reproduced and surpassed the arithmetical abilities of humans, but the ordinary human ability to parse natural language remained far beyond even the most sophisticated computer systems. Nevertheless, the belief that intelligence can be achieved by advanced computer programs persists in the science (and science fiction) community.
Later in the twentieth century, others began to apply designs from biology to computer systems, building mathematical simulations of interacting neural nodes in order to mimic the physical behavior of a brain instead. These perceptrons were only able to learn a limited set of behaviors and perform simple tasks (Rosenblatt 1958). More powerful iterations with improved neural algorithms have been designed for a much wider range of applications (like winning at Jeopardy!), but a model of the human brain at the cellular level is still far from being financially, politically or scientifically viable. In response to this challenge, computationalists have continued the search for a more-efficient way to represent brain functions as high-level symbol operations.
Confabulation Theory is a much newer development: it proposes a universal computational process that can reproduce the functions of a thinking brain by manipulating symbols (Hecht-Nielsen 2007). The process, called confabulation, is essentially a winner-take-all battle that selects symbols from each cortical substructure, called a cortical module. Each possible symbol in a given module is population-encoded as a small set of active neurons, representing one possible “winner” of the confabulation operation. Each cortical module addresses one attribute that objects in the mental world can possess. The mental-world objects are not separate structures, but are rather encoded as the collection of attributes that consistently describe them. Ideas are encoded in structures called knowledge links, which are formed between symbols that consistently fire in sequence. It is proposed that this process can explain most or all cognitive functions.
THEORY
The confabulation operation happens as each cortical module receives a thought command encoded as a set of active symbols and stimulates each of its knowledge links in turn, activating the target symbols in each target module that exhibit the strongest response. This operation then repeats as the conscious entity “moves” through each word in a sentence, for example. Confabulation Theory seems to affirm the Chomskyan notion of an emergent universal grammar, but the specific biological mechanism that enables the process is not fully understood. Whatever that mechanism is, it must be efficient enough to achieve intelligence with the resources available to the brain.
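As a toy sketch of this operation (every module, symbol and link weight below is invented for illustration, not drawn from Hecht-Nielsen’s model), one confabulation step reduces to a winner-take-all sum over knowledge links:

```python
# Toy winner-take-all confabulation step. All symbols, weights and module
# contents here are invented placeholders, not values from the theory.

def confabulate(active_symbols, knowledge_links, target_module):
    """Return the symbol in target_module receiving the strongest summed input."""
    excitation = {symbol: 0.0 for symbol in target_module}
    for source in active_symbols:
        for target, weight in knowledge_links.get(source, []):
            if target in excitation:
                excitation[target] += weight
    # Winner-take-all: the most excited candidate symbol wins the module.
    return max(excitation, key=excitation.get)

# Hypothetical knowledge links from attribute symbols to object symbols.
links = {
    "red":   [("apple", 0.8), ("fire_truck", 0.6)],
    "fruit": [("apple", 0.9), ("banana", 0.9)],
}
module = ["apple", "banana", "fire_truck"]
print(confabulate({"red", "fruit"}, links, module))  # -> apple
```

A real module would hold thousands of population-encoded symbols, and the link weights would be learned from co-occurrence, but the selection step itself is this simple.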
Research indicates that population encoding by itself cannot account for the bandwidth of cognitive functions when considering the available hardware, and some have proposed the idea that information must also be encoded within the relative timing of neural events (Jacobs et al. 2007). Recent experimental data suggests that some information is encoded in the “phase-of-firing” when an input signal interferes with ongoing brain rhythms (Masquelier 2009). These rhythms are generated by resonant circuits of neurons that fire in a synchronized fashion, and some circuits can be effectively “tuned” to resonate at a range of frequencies between 40 and 200 Hz (Maex 2003).
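The phase-of-firing idea can be made concrete with a small numerical sketch (the 40 Hz rhythm and the spike times are arbitrary choices for illustration, not data from the cited studies):

```python
import numpy as np

def spike_phases(spike_times, rhythm_hz):
    """Phase (radians, in [0, 2*pi)) of an ongoing rhythm at each spike time."""
    return np.mod(2 * np.pi * rhythm_hz * np.asarray(spike_times), 2 * np.pi)

# Two spike trains with identical spike counts, shifted by 10 ms. Against
# a 40 Hz rhythm (25 ms period) that shift is 0.4 of a cycle, so the two
# inputs are distinguishable by phase even though a pure rate code sees
# the same three spikes in both.
early = spike_phases([0.100, 0.125, 0.150], rhythm_hz=40.0)  # seconds
late = spike_phases([0.110, 0.135, 0.160], rhythm_hz=40.0)
print(early)  # all spikes land at (numerically near) phase 0
print(late)   # all spikes land about 2.51 rad later in the cycle
```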
We now consider the possibility that these “tuned” neural circuits are an essential condition for language processing and other intelligent behavior: their dynamics implement an analog operation similar to the hypothesized confabulation. Sensory information arrives as a wave of excitation that propagates through the topology of the nervous system, and is sorted into harmonic components as it is diffracted by the structure of the brain. Each configuration of neurons exhibits resonance with specific frequencies, or with longer patterns of activation, when exposed to this input, and can therefore process a continuous signal from any or all of the senses at once. The frequency content of the incoming signal is, in a rough sense, transposed onto the specific resonant characteristics of various neural populations, which can then be trained to reproduce the cascade of resonances that the original signal caused, and thus recall the information from memory. In the case of a spoken word, the resonance cascade encoded in the brain structure is activated, and the waves of excitation move down the nerves to the muscles in a way that generates the sound. Speaking a word and listening to the same word would necessarily activate some common circuitry, as suggested by the existence of mirror neurons (Rizzolatti 2004).
The confabulation operation, and all cognition, could then be understood as an emergent property of suitably resonant neural populations, activated by and interfering with appropriate sensory signals. It has been hypothesized for some time that frequency-tuned circuits are essential to the process of sensory data binding and memory formation (Buzsaki 1995). If thoughts are indeed generated by an analog of the confabulation process, these “tuned” configurations would probably correspond to the hypothesized symbols more closely than simple populations of neurons. This points to a few looming challenges. First, the exact resonant properties differ across each of a neuron’s 1,000-plus connections, and vary with both time and interference in an analog fashion. Second, these resonances would need to be “sharp” enough to accurately identify and mimic the minute variations in sensory data generated by slightly different objects in the real world. Cochlear hair cells do exhibit behavior that suggests such a finely tuned response, each one activating within a constant, narrow band of input frequencies (Levitin 2006).
If confabulation is to emerge from a resonating neural network, this network must be able to process arbitrary periodic activation patterns, along with simpler harmonic oscillations, and arrive at meaningfully consistent results each time it is exposed to the sensory data generated by a real-world object. Considering the mathematical properties of analog signals, this does not seem like an impossible task. As Joseph Fourier demonstrated in his seminal work on the propagation of heat, any periodic signal can be represented as the sum of an infinite series of harmonic oscillations. This result suggests that it is at least possible to “break down” periodic or recurring sensory signals into a set of harmonic oscillations at specific frequencies, and within this framework, those frequencies would determine exactly where the sensory data is encoded on the topology of the brain. We can imagine that recurring harmonic components of the signals generated by the appearance, sound, smell, taste or texture of an apple would all contribute to the mental-world category of “apple-ness,” but that hypothesis doesn’t immediately suggest a method to determine exactly which frequencies persist, where they originate or where they are encoded in the cortex (aside from the “red” or “green” frequencies, I suppose).
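Fourier’s result is easy to demonstrate numerically. In the sketch below (the 50 Hz and 120 Hz components and their amplitudes are arbitrary stand-ins for “recurring harmonic components,” not measured quantities), a summed waveform is decomposed and its constituent frequencies recovered:

```python
import numpy as np

fs = 1000.0                    # sample rate, Hz
t = np.arange(0, 1.0, 1 / fs)  # one second of "sensory" signal

# A periodic signal built from two hidden harmonic components.
signal = 1.0 * np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

# Decompose into harmonic components with the FFT and recover amplitudes.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), 1 / fs)
amplitudes = 2 * np.abs(spectrum) / len(signal)

# The two strongest components identify the original constituents.
dominant = freqs[np.argsort(amplitudes)[-2:]]
print(sorted(dominant.tolist()))  # -> [50.0, 120.0]
```

Within the framework above, these recovered frequencies are the coordinates that would determine where on the cortical topology the signal is encoded.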
Within this purely qualitative framework, thought is simply produced by the unique configuration of neural circuits that exhibit the strongest resonance as signals from the senses interfere with concurrent network activity. Dominant circuits can even “shut off” other circuits by altering the firing threshold and delay of their component neurons, thus destroying the resonant behavior. This phenomenon would seem to prevent an overwhelming number of ideas from destructively interfering with each other, making normal linear thought generation possible.
Memory is reliable because the recurring harmonic components of experience are gradually “tuned” into the brain structure as sensory signals mold the emerging resonance into the cortex, effectively integrating the new idea with a person’s existing knowledge base. The overwhelming majority of information is eventually discarded in this process, as it does not recur in the decomposed signal. Only those harmonic components that persist are remembered.
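This winnowing can also be sketched numerically (the stable 80 Hz component, the 200–400 Hz stray tones and the 0.5 retention threshold are all invented for illustration): averaging spectra over repeated exposures keeps the recurring component and discards the one-off ones.

```python
import numpy as np

fs, n = 1000.0, 1000
t = np.arange(n) / fs
rng = np.random.default_rng(0)

def exposure():
    """One experience: a stable 80 Hz component plus a one-off stray tone."""
    stray = rng.uniform(200, 400)  # non-recurring component, new each time
    return np.sin(2 * np.pi * 80 * t) + np.sin(2 * np.pi * stray * t)

# Average amplitude spectra over many exposures: the recurring 80 Hz
# component accumulates at one bin, while the stray tones smear out
# across hundreds of bins and average toward zero.
mean_amp = np.mean(
    [2 * np.abs(np.fft.rfft(exposure())) / n for _ in range(50)], axis=0)
freqs = np.fft.rfftfreq(n, 1 / fs)

remembered = freqs[mean_amp > 0.5]  # keep only the persistent component
print(remembered)  # -> [80.]
```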
IMPLICATIONS
A quantitative description of this framework is beyond the scope of this paper, but it would probably include a way to discretely represent as many of the individual resonances as possible. A generalized model could be built in a higher-dimensional “circuit space,” but it is unclear whether this approach would prove significantly faster than, or meaningfully different from, the latest data-compression and signal-processing algorithms. Programming truly intelligent behavior into this kind of machine would probably require considerable effort, as humans generally learn things like basic arithmetic over a long period of time, eventually processing simple equations by associating categories of numbers with their sums and products.
An investigation of human and animal behavior within this framework might yield better results for now. The obvious place to start is with music cognition, as Daniel Levitin has done in This Is Your Brain on Music. Further research on the connections between music theory and induced brain rhythms is advisable.
The framework is also interesting because it would require very little demarcation between the mental and physical realms, information entering and exiting the body seamlessly with sensory perception and behavior, respectively. If we imagine that the signal generated by an individual’s name might exhibit a characteristic interference with the signal generated by the personal pronouns “I” and “me,” then self-awareness might only emerge in a community that passes around the right messages. Philosophically, all conscious thought would then be intimately dependent on reality, as trained intelligent brains would only be able to reproduce the various harmonic patterns that recur in reality.
As broad pattern-matching devices, humans perform precise computations rather inefficiently, and Turing machines will probably remain the most appropriate tool for that specific job. However, imprecise computations like those required for effective facial recognition might be greatly optimized by the subtle oscillatory characteristics of neural circuitry. Those attempting to achieve artificial intelligence will benefit from a careful evaluation of the data that their models can represent and preserve.
SOURCES
Buzsáki, György and James Chrobak. “Temporal structure in spatially organized neuronal ensembles: a role for interneuronal networks.” Current Opinion in Neurobiology, 5(4):504-510, 1995.
Fourier, Jean Baptiste Joseph. The Analytical Theory of Heat. New York: Dover Publications, 1955.
Gibson, William. Neuromancer. New York: Ace, 1984.
Hecht-Nielsen, Robert. “Confabulation theory.” Scholarpedia, 2(3):1763, 2007.
Jacobs, Joshua, Michael Kahana, Arne Ekstrom and Itzhak Fried. “Brain Oscillations Control Timing of Single-Neuron Activity in Humans.” The Journal of Neuroscience, 27(14):3839-3844, 2007.
Levitin, Daniel. This Is Your Brain on Music. New York: Plume, 2006.
Maex, Reinoud and Erik De Schutter. “Resonant synchronization in heterogeneous networks of inhibitory neurons.” The Journal of Neuroscience, 23(33):10503-10514, 2003.
Masquelier, Timothée, Etienne Hugues, Gustavo Deco and Simon J. Thorpe. “Oscillations, Phase-of-Firing Coding, and Spike Timing-Dependent Plasticity: An Efficient Learning Scheme.” The Journal of Neuroscience, 29(43):13484-13493, 2009.
Rizzolatti, Giacomo and Laila Craighero. “The Mirror-Neuron System.” Annual Review of Neuroscience, 27:169-192, 2004.
Rosenblatt, Frank. “The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain.” Psychological Review, 65(6):386-408, 1958.