According to Karl Popper, any scientific theory must be falsifiable. What does this entail?

A falsifiable theory leads to predictions which would be invalidated by some conceivable observation. For example, Newtonian dynamics predicts that in a uniform gravitational field, two objects with different masses will have the same acceleration due to gravity. It implies that if we drop a feather and a ball bearing inside a vacuum chamber, the ball bearing will not fall faster, as long as the theory is valid. This is in fact what happens.

Newton’s theory was very successful at describing the celestial motion of the known planets, but in the 19th century it failed to correctly predict the orbit of Uranus. This discrepancy threatened to falsify the theory, or at least to greatly limit its precision. However, Urbain Le Verrier realized that the gravitational field of an unknown planet could be pulling Uranus into its observed orbit, and predicted where such a planet would be and how massive it would have to be. Astronomers pointed their telescopes at the expected location and discovered Neptune. If no planet had existed at that location, this prediction would have been wrong, and the inconsistency with Newtonian dynamics would have stood.

A planet orbits in an ellipse, and the point where it moves closest to the sun is called the perihelion of its orbit. This point gradually precesses around the sun due to gravitational forces exerted by other planets. The same Le Verrier compared observations of Mercury to the perihelion precession rate derived from Newton’s theory, and found a discrepancy of nearly half an arcsecond per year. He predicted the existence of another planet closer to the sun to explain his result, but no planet was ever observed and this problem remained open.

To explain other puzzling observations, Albert Einstein abandoned the Galilean transformations of Newtonian mechanics for a framework built on Lorentz transformations in four-dimensional spacetime. General Relativity describes gravitation as an effect of spacetime curvature, and it reduces to Newtonian dynamics in the limit of weak gravitational fields and low velocities. According to this theory, the perihelion of Mercury’s orbit should precess by an additional 0.43 arcseconds per year, matching the observed value.

Still, the intuitive simplicity of Einstein’s theory did not automatically make it a valid replacement for Newtonian dynamics. During the total solar eclipse of 1919, the measured deflection of starlight agreed with the value derived from General Relativity, a highly publicized result. Decades passed before additional predictions were conclusively validated.

Global Whining

The scientific method is the greatest invention of the modern age. For centuries, its practitioners have transformed civilization using rational systems revealed through careful observation. Theories which have succeeded not by virtue of their popularity but because they correctly predicted unknown phenomena are especially awe-inspiring. However, predictions without rational justification, or those vague enough to be confirmed by any number of observations, should not earn the same recognition. I’m a huge fan of Karl Popper’s criterion of falsification, the idea that a scientific theory must predict some observable event which would prove it wrong (if the theory is wrong). This principle eliminates uncertainty regarding how specific a valid theory must be. Unfortunately, it has been ignored by some academics who call themselves scientists so that people won’t laugh at their ideas. You might have already guessed that today I’m targeting the low-hanging fruit of global warming alarmism. Prepare to be offended.

I won’t waste your attention picking apart the various temperature series, criticizing the IPCC models, or citing evidence of misconduct, not only because those arguments have already been made by more qualified individuals, but because they shouldn’t even be necessary. Fundamental problems with any apocalyptic hypothesis make the whole enterprise seem ridiculous. This is what Popper says about scientific theory:

1) It is easy to obtain confirmations, or verifications, for nearly every theory – if we look for confirmations.
2) Confirmations should count only if they are the result of risky predictions; that is to say, if, unenlightened by the theory in question, we should have expected […] an event which would have refuted the theory.
3) Every “good” scientific theory is a prohibition: it forbids certain things to happen. The more a theory forbids, the better it is.
4) A theory which is not refutable by any conceivable event is non-scientific. Irrefutability is not a virtue of a theory (as people often think) but a vice.
5) Every genuine test of a theory is an attempt to falsify it, or to refute it. Testability is falsifiability; but there are degrees of testability: some theories are more testable, more exposed to refutation, than others; they take, as it were, greater risks.

The scenarios published by climate modelers don’t qualify as scientific predictions because there is no way to falsify them – updated temperature measurements will inevitably correlate with some projections better than others. And fitting curves to historical data isn’t a valid method for predicting the future. Will the IPCC declare the CO2-H2O-feedback warming model invalid and disband if the trend in the last decade of HadCRUT3 data continues for another decade or two? How about if the Arctic ice cap survives the summer of 2014? I’m supposed to trust these academics and politicians with billions of public dollars, before their vague predictions can be tested, because the global warming apocalypse they describe sounds more expensive? This riotous laughter isn’t meant to be insulting; we all say stupid things now and then.

Doesn’t the arrival of ScaryStorm Sandy confirm our worst environmental fears? Not if we’re still talking about Karl Popper’s science. Enlightened by the theory of Catastrophic Anthropogenic Global Warming, academics were reluctant to blame mankind for “exceptional events” only two months ago. They probably didn’t expect a hurricane to go post-tropical and converge with a cold front as it hit New York Bight on the full moon at high tide less than six weeks later, because that kind of thing doesn’t happen very often. Informed news readers might have expected some coy suggestion that global warming “influences” weather systems in the rush to capitalize on this disaster. But in a caricature of sensationalism, Bloomberg splashes “IT’S GLOBAL WARMING, STUPID” across a bright orange magazine cover, and suddenly enormous storm surges threaten our infrastructure again while the seas keep rising slowly and inevitably, all because of those dirty fossil fuels.

I don’t mean to say that we should actually expect scientific integrity from a stockbrokers’ tabloid, but Mike Bloomberg has really sunk to a new low. He spoke at TechCrunch Disrupt a few years ago and seemed like an average business-friendly mayor, not a shameless propagandist. I guess the soda ban was a bad omen. It’s a bit discouraging to see another newspaper endorse the panic, but then the organizers of our climate crusade have been pushing their statist agenda on broadcasters for a long time.

On Sunday, the New York Times doubled down with this ridiculous melodramatic lament, written by one talented liberal artist. Where’s a prediction about the next exceptional event, folks? Is it going to be a tornado or earthquake, or does it matter? Are there actually any rules for preaching obnoxious hindsight to believers? Can anyone suggest an observation that would falsify the theory?

What will the temperature anomaly or the concentration of carbon dioxide be in ten years? How about one solid date for the eradication of a low-lying coastal city? If you must predict the apocalypse, it is only somewhat scientific if you can rationally argue for a deadline. And the science is only settled when the world ends (or doesn’t end) on time.

Plus ça change, plus c’est la même chose. Happy 2012.


Today I want to talk about comedy, because it is an absolutely amazing subject. The fact that an entire dynastic profession exists to make groups of (hopefully drunken) strangers laugh on command just seems kind of unbelievable. Clearly there is something deep and transformative about laughter, but what does it really mean when a person is compelled to laugh at something? For any aspiring jokesters out there, how can a comedian create this situation and get paid?

Well, in scientific terms, laughter is probably caused by something that behaves like a central pattern generator in the nervous system. These neural structures generate rhythmic output patterns without relying on any external feedback, so it is a bit strange to apply this concept to laughter (a person has to hear or see every joke, for example). However, the laughter usually happens only after a person gets the joke, at which point the “joke input” has ended in almost every case.

Therefore we should probably be conceptualizing laughter as an internal rhythmic feedback loop that can be started by some “funny” input. The challenge then is to define a “funny” input. I’ll pause for a second here so you can try that…
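While you try, here is one naive toy sketch of the loop itself, entirely my own illustration (the threshold, the decay rate, and the very idea of scoring an input with a single “funny” number are all assumptions):

```python
# Toy model: an internal feedback loop that a strong enough "funny" input
# can kick off, after which it sustains itself and gradually dies out,
# even though the input itself has already ended.
def laughter_response(funny_input, threshold=1.0, decay=0.8, steps=10):
    """Return the loop's activity over time after a single input pulse."""
    activity = funny_input if funny_input >= threshold else 0.0
    history = []
    for _ in range(steps):
        history.append(activity)
        activity *= decay  # self-sustained, slowly fading oscillation
    return history

print(laughter_response(0.5))  # below threshold: nothing happens
print(laughter_response(2.0))  # above threshold: a decaying burst
```

Of course, this just hides the entire mystery inside the `funny_input` argument, which is exactly the problem.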

But wait! Doesn’t the very incomprehensibility of the challenge suggest something profound about how we should understand humor? Everyone knows that jokes are hard to write because an original comedian has to be the first person to notice that a certain thing is funny. The whole art of comedy revolves around having some of that uncommon and funny knowledge, and choosing to reveal it in the most entertaining way possible. Knowing this, is it possible to imagine something that all funny things must have in common?

Well, sure. They’re all “correct” in some abstract sense. Comedy is the process of being so profoundly correct that other people are compelled to laugh as soon as they realize what is going on. We college-educated folk can scoff at low-brow humor, but almost any example of “bad” comedy still reveals more than a few simple truths to more than a few tragically underinformed people, and therefore it can still make a lot of money. The fact that a thing is not funny to every person does not mean that it is not “funny” in some platonic sense. Somewhat disappointingly, though, nothing is funny to everyone. That makes good comedy very hard work, but at least we never have to fear the funniest joke in the world.

(From this perspective, slapstick humor is a special case where the truth being revealed is basically how badly it must suck for the victim…)

Generally speaking, this is not a new idea at all. A government document says this:

The American comedian Will Rogers was asked how he conceived his jokes. He answered: “I don’t make jokes. I just watch the government and report the facts.” See what I mean? Sometimes the truth is funnier than “comedy.”

Several Woody Allen bits are included as example one-liners, like this one:

I can’t listen to that much Wagner. I start getting the urge to conquer Poland.

It’s funny because it combines and reveals several truths in a clever and efficient way:

  • Wagner was a German imperialist.
  • Music conveys emotion.
  • Germany conquered Poland (and murdered millions of Jews) in World War II.
  • Woody Allen is Jewish.

The joke actually depends on the audience already knowing all of these things, and the “trick” is that he alludes to each in such an efficient and thought-provoking way, in the space of two short sentences. When we realize, all at once, the absurdity contained in the idea of a modern American Jew savoring hypnotic war hymns that ushered in the Third Reich, the effect is very funny for a lot of people, even if they don’t want to think about it!

I’m particularly interested in this method of “humor analysis” because it seems to emerge so naturally from a feedback-dominated model of intelligence. Laughter happens when a person notices something that is interpreted as “true enough” to activate an unconscious neural feedback loop, forcing them to externalize their acknowledgement and understanding. That is the sole evolutionary function of laughter, a phenomenon which almost certainly had a pivotal role in the building of every human civilization.

This is not saying that Adam Sandler is the greatest American ever, or even that we should all start studying Internet memes for the sake of science. But it does mean that we should take a moment and bow our heads in respect to every person who has ever wanted to make another person laugh, and in recognition of the great things they have accomplished for the sake of humanity. Because when a country of people stop what they are doing and start laughing (against their will) at the same idea at the same time, you can probably trust it a bit more than usual.

How would I define a “funny” thing? Funny things are true enough to make people laugh.

Here is someone else’s definition:

There is no simple answer to why something is funny… Something is funny because it captures a moment, it contains an element of simple truth, it is something that we have always known for eternity and yet are hearing it now out loud for the first time.

Science: Analog Intelligence

Last time, we imagined that a cognitive “confabulation” process (and therefore all intelligence) happens in the brain as an interference phenomenon, or a sort of nonlinear convolution, among complicated modes of oscillation on a neural network.

But this idea is immature and unfunded, and no experiment is currently prepared to rigorously test any hard prediction it might make.

So instead, let us wave our hands, consider the typical living person as an empirical phenomenon, and attempt to describe a basic theory of idea genesis by thinking about it/him/her. A spoken sentence is commonly defined in English class as a “complete thought,” and we hypothesize that this definition can be closely correlated with some specific thing that might be called an “understood idea” as it enters or exits a conscious person, given the following conditions:

1) The person is arriving spontaneously at each output word, i.e. composing sentences as they are being spoken. This is different from a “memorized idea” which could instead be modeled as a sort of large word in a person’s vocabulary. It is also different from a “perceived idea” like this sentence that you are reading, because in this case a large percentage of the processing devoted to “finding” each word is cut out and replaced with less-intensive processing devoted to parsing each word and, in a typical case, “sounding it out” internally as your eye scans the page. Incidentally, that is why it takes much longer to write a book than it takes to read it.

2) The person really understands each input word, a philosophical dead end which can only be assumed from a given reply.

So where do these understood ideas come from? We tend to agree on what is a coherent sentence, and far chaining mellow peninsula no binder what. But how do we arrive at the correct series of words for each idea? It is not really possible to identify the physical source of any particular word that I might say myself, because to do so would require me to say new and different words (at least internally), and so on. But it is still possible to theorize a mechanism by which this can happen in a general sense, one that is consistent with the principles of analog confabulation.

A good place to start is with the acknowledgment that words are not guaranteed to mean exactly the same things to different people, and it is only by assuming a considerable amount of shared experience that we can rely on these labels to signify approximately what we intend to communicate. It would also be wise to acknowledge the fact that most things that can be understood by intelligent beings aren’t easily translated into words, as the arrival of creatures with “large” vocabularies was not very long ago, and therefore we have a rather naive understanding of what a “large” vocabulary actually is.

With that in mind, let’s get right to the core of the matter: what makes a certain word or pattern part of a person’s vocabulary? What is its function in relation to other words, and the people who use them? I consider it logical and correct to describe each word as a reminder of some shared experience. Why does the word “apple” mean what it does? Because it has been associated with the experience of an apple since before any one of us was alive. I know what the word means because I have experienced it so many times in the presence of apples. I can communicate this to other people, because when dealing with apples, I am strongly inclined to spontaneously arrive at that word, and externalize it.

The paradox, then, is this: if every word in a given vocabulary has to refer to some common feature of experience, how do people communicate new things? Well, there are several other factors to consider. First, it is possible to arrange familiar words in a way that reveals some previously unfamiliar aspect of the relationship between them. When these arrangements are particularly witty or profound, they are often called “jokes.”

Second, it is sometimes possible and even necessary to create new, completely unfamiliar words when they are required by a new idea. In these cases, if the new words are particularly appropriate or useful, they must refer to some common feature of experience that has not been named, and so they are assimilated into the shared vocabulary of those who understand the new idea. That is how language evolves.

Third, human communication has always been imprecise at best and useless at worst, so there is hardly any guarantee that listeners will ever understand anything I say in the same way that I do. This imprecision is usually ignored by humans, yet it causes the evolution of communicated ideas in unpredictable and not necessarily unhelpful directions. On the other hand, when we are inclined to read and write precise, executable computer code, it is often found that simply reading the code like one would read a book does not provide any useful insight. To rigorously understand a computer program or a mathematical proof, one must essentially construct a perfect imitation of some discrete state of mind achieved by its original creator, and it is not a coincidence that our relatively primitive machines can be readily configured to make use of these same ideas. We should also not be surprised that drilling children in the efficient execution of algorithms does little to produce creative adults.
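As a tiny demonstration of that last point (my own example, not anything from a curriculum): reading the following function like prose tells you almost nothing about its result; to know what it returns, you must reproduce its state transitions exactly, the way the machine does.

```python
def collatz_steps(n):
    """Count the steps the Collatz rule takes to reduce n to 1."""
    steps = 0
    while n != 1:
        # odd numbers are tripled and incremented; even numbers are halved
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

# Skimming gives no intuition for this value; tracing the state does.
print(collatz_steps(27))
```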

Luckily, none of these factors lead to contradiction when imagining a neural network as an analog phenomenon, and in fact the reality seems much more consistent with this framework than with typical digital and discrete-time neural networks. The idea requires a rather uncompromising philosophy once it is extrapolated far enough, but that’s a common problem with any broad scientific theory. The most difficult point to accept will be that in this view, there is no further control system or homunculus that sits “behind” the interference phenomenon in any sense, as the phenomenon itself is the only control mechanism present. This challenging idea might lead some to conclude that insanity is only one unfortunate circumstance away, or even that free will itself does not exist. I would caution those who go that far to be aware of exactly what it is they are defining – if “free will” means the capacity for human beings to make decisions that contradict every rational force or expectation in the known universe, then explaining in scientific terms how this condition arises only serves to reinforce its reality.

It is trivial to cover edge cases (read: far from the cortex) with this model, because for example, medical science already knows that the force conveyed through a muscle is proportional to the frequency of the nerve pulses, not amperage or anything like that. Considering this, “reflex actions” and “muscle memories” can be explained as progressively longer signal paths that penetrate farther toward the cortex proper, but are quickly reflected back at the muscles that will perform the response. The difficulty comes with explaining more sophisticated animal behaviors, and finally with accounting for the nature of introspective consciousness. The signal paths for these actions are certainly orders of magnitude more complex than any of those which we can directly observe at present, but it is not impossible or even implausible that the underlying physical mechanism should essentially be the same.

The central hypothesis linking analog confabulation with intelligence suggests that in reality, conscious thought is only ever quantized or digitized in the sense that a given signal either “resonates with” or “does not resonate with” the rest of the brain. It would not be elementary to add or multiply these signals in a linear fashion, as the space of human ideas is not affine. Thus, a set of words grouped together in a specific order can encode much more information than the set of information gathered from each word when considered on its own. Furthermore, ideas beyond a certain elementary complexity level are never 100% guaranteed to persist. A common annoyance called a “brain fart” happens typically when one word or phrase from an idea that “should” be resonating with the others fails to enter the feedback condition, due to unexpected interference from any number of sources. This condition is not usually permanent, but people can and do permanently forget ideas that don’t resonate with anything for the rest of their lives.

Is it really possible to understand intelligence if this much ambiguity is required? Analog systems have characteristics that make them very useful for certain tasks related to intelligence, so it is in our best interest to try. After it has stabilized, a neural network arrives at a sort of “temporary solution” where the weightings of its connections are each configured such that no (or few) weightings change on the next recurrence of network activity. It would seem that an analog system could be stabilized in this manner to much more significant precision, and possibly in much less time, especially if any “aliasing” effect of the digitized neurons causes disruptive oscillatory behavior to persist longer than it would otherwise. The improvement over coarse digital algorithms would likely be significant, as evidenced by the fact that bees can reliably discover the best routes to maximize food collection using very little available hardware. A digital simulation of physically precise or effectively “continuous” neural networks is possible and has been attempted, but the complexity and price of such a system are still prohibitive, to say the least. The alternatives would appear to be either an enormously complicated analog computer, or the convenient discovery of some mathematical trick that makes efficient modeling with Turing machines possible.
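That settling process can be sketched with a toy Hopfield-style network (a minimal illustration of my own, using crude digital neurons precisely because the analog version is the hard part):

```python
# A stored pattern and Hebbian weightings with zero self-connections.
pattern = [1, -1, 1, -1, 1, -1, 1, -1]
n = len(pattern)
weights = [[0 if i == j else pattern[i] * pattern[j] for j in range(n)]
           for i in range(n)]

# Start from the stored pattern with two bits flipped, then let the
# network recur until nothing changes -- the "temporary solution."
state = [1, -1, 1, 1, -1, -1, 1, -1]
for _ in range(20):
    new_state = [1 if sum(w * s for w, s in zip(row, state)) >= 0 else -1
                 for row in weights]
    if new_state == state:  # a fixed point: no change on the next recurrence
        break
    state = new_state

print(state)  # the corrupted input has settled back onto the stored pattern
```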

Therefore, at present this perspective on high-level behavior and intelligence might be developed further in a qualitative field like psychology. One intriguing theory of mind, originally published by Julian Jaynes in 1976, suggests that humans went through a phase of “bicameral” mentality in which one part of the brain that generated new ideas was perceived by the other part as a member of the external universe. Jaynes suggests that this “bicameralism” was similar in principle to what we call “schizophrenia” today, and can account for all sorts of historical oddities that we call religions, myths and legends. The theory is based on the core epiphany that logical and learned behaviors predate consciousness and indeed provide some of the necessary conditions for its existence. Jaynes uses this to advance the idea that the human “phenomenon” emerged from interactions between sophisticated, organized animals and the external environment after a special phase of “bicameral society” in which most humans were not even self-aware.

Jaynes’s historical analysis touches on many interesting ideas, and provides enough evidence to demand serious consideration, but its most obvious shortcoming is the manner in which it skips from an initial, abstract consideration of the separation between behavior and consciousness, to a discussion of Gilgamesh and the Iliad. We pick up the story of mankind there, and nothing is said of the millions of years of evolution leading to that point. Any complete theory of intelligence has to account for canine and primate societies as well as early human ones, and Jaynes’s bold assertions leave the reader wondering if there are any self-aware apes leading their mindless troops through the jungle.

In the framework of analog confabulation, we can ignore some of these hairier philosophical challenges for the moment, as the bicameral mind simply bears striking similarities to one intuitive model of a general pre-conscious condition. When a stimulus enters the kitty cat, it responds immediately and predictably. This is the behavior of a system that is not considerably affected by feedback. It can be characterized as involving a sort of linear path from the senses into the cortex, with one or two “bounces” against the cortex and then back out through the muscles as a reaction. It’s really quick and works wonderfully for climbing and hunting, but it means that the cat will never sit down and invent a mousetrap.

Self-aware creatures, on the other hand, behave as if there is significant feedback, at least while introspecting, and their brains might be characterized as having a great number of “loops” in the neural pathways. It means that the resonances theorized by analog confabulation can be extremely sophisticated, but naturally sophisticated resonating structures would have to develop before any of that could happen. The critical threshold must obviously involve enough complexity to process a vocabulary of a certain size, but it could include communication of any kind, using any of the senses.

The question of when or whether bicameral human societies existed is unaffected by any of this, but at the same time that possibility cannot be ruled out. It might even be valid to say that, for example, dogs have “bicameral minds” like Jaynes claims ancient humans did, only that their vocabulary is limited and not fully understood by us. Much of it could be roughly translated into simple, impulsive ideas like “I’m hungry!” or “come play!” or “squirrel!” like the dogs in Up, but a dog could never say “I’m thinking so therefore I exist!” in the same manner. Most dogs have not discovered that their brains are the source of their own ideas, and even if they did they would not have any good word for “think.”

So what solid logic supports this theory in the end?

– Wernicke’s area and Broca’s area are two topologically complex parts of the brain that are active in understanding and forming words, respectively. A high-bandwidth neural loop, the arcuate fasciculus, connects them.

– A large body of circumstantial evidence, some of which will be included here:

– Uniquely “human” behaviors like laughter, dancing, singing, and aesthetic choices all can be said to have a certain “rhythmic” component that describes the behavior intuitively and at a low level. Each behavior would then involve periodic signals, by definition.

– More specifically, if laughter really does betray some “inner” resonance that happens involuntarily when a human encounters the right kind of new idea, that phenomenon suddenly makes a whole lot more sense in an evolutionary context.

– Meditation reveals how new ideas arrive as unexpected, sudden, and sharp feedback loops, which often take some time to deconstruct and translate into the appropriate words, but are nevertheless very difficult to erase or forget. That is of course, unless an idea arrives in the middle of the night, in which case the noise of REM sleep can overwrite anything that is not written down.

– The fact that words have to “happen” to a person several times before they are useful means that each has a periodicity, even if it is irregular. And some words like “day” and “night” occur with regular periodicity.

– Music has profound effects on the mind. Duh.

– Light also affects mood, and too much of the wrong spectrum can make you SAD.

I’ll try to keep this list updated as I remember more circumstantial evidence that should be written down in a notebook already, but it seems like there would be a lot. In any case, if you *ahem* have thoughts about this theory, please do share them. Nobody really knows the answer to any of these questions yet so all ideas are appreciated.


What is the most important thing that a programmer should do? The textbook answer, “comment everything”, ensures only a minimum viable reusability, especially when nonsense like this appears:

//UTF-8 Kludge

causing unsuspecting developers to detour for what could be hours (are office distractions NP-complete?) and jeopardizing the sanity of everyone involved. Don’t get me wrong, great comments are indispensable works of art, but to be able to write them, you must first write a great program.

Then what is a great program? It depends on who you ask. Rails users might argue that a great program says everything it needs to say exactly once in unreasonably concise Ruby, and includes modular unit tests for 100% of the code base. LISP enthusiasts might praise a program for its mind-boggling recursive structure, and its unreasonable efficiency. However, in both of these cases, and always for any novice programmer, the most important feature of a great program is its consistent and correct use of variables.

Put another way: a computer program’s codebase is unsustainable if its variable identifiers can’t be interpreted correctly by developers. This means that calling a variable “temp” or “theAccountNumber” is always a bad idea unless it is actually correct to think of that variable as “the temp variable”, or the only “account number” that that particular program (or section of the program) uses. We are at a point where nearly every bottleneck I encounter in everyday software development is between my mind and an unfamiliar block of code. If there is a chance of confusing anything else with whatever you’re naming, it’s the wrong name.

What is the right name? That’s another question with a multitude of answers. CamelCase with the basic, accurate name of a variable is a good place to start, meaning that if I were to create a variable to hold the text of some person’s additional info popup, it might be a good idea to start with one of the following:
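Something along these lines, say (both the names and the popup itself are hypothetical, purely to illustrate the convention):

```python
# Plain, accurate CamelCase candidates for the popup-text variable:
AdditionalInfoPopupText = "Born in 1912; enjoys sailing."
PersonAdditionalInfoText = "Born in 1912; enjoys sailing."
```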


depending on the context, i.e. what the program (or section) is supposed to do. Most developers I have met use the first letter of every variable to encode a bit of extra information, like if I decided to use lower case letters for an instance-level variable:
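For instance (hypothetical names again), the lower-case first letter marking the variable as instance-level:

```python
class PersonCard:
    def __init__(self):
        # lower-case first letter: an instance-level variable
        self.additionalInfoPopupText = "Born in 1912; enjoys sailing."

print(PersonCard().additionalInfoPopupText)
```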


or possibly prepend a special character for private variables:
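For example (hypothetical as before), a leading underscore marking the variable as private:

```python
class PersonCard:
    def __init__(self):
        # leading underscore: private to the class by convention
        self._additionalInfoPopupText = "Born in 1912; enjoys sailing."
```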


As with everything, the key is to use whatever makes the most sense to the people who need to work on it. If we are to sidetrack into a bit of philosophy, accurate naming is what makes all intelligent communication possible. The only thing that allows you to understand my word (“chicane”) is the fact that it has been used in public to refer to some persistent feature of the universe, and therefore anyone else can imagine what I imagine (or check Wikipedia) when I use it. This applies to all formal and mathematical language too: the only thing that is required to understand a given theorem is an accurate understanding of the words that it contains. Be careful, though: accurately understanding some of the words that mathematicians use is a lot harder than it sounds.

Are any non-programmers still paying attention? This part applies to you too. As the old 20th-century paradigm of “files, folders and windows” starts to look more and more passé, why not ditch that menacing behemoth that you call your “backup” and start organizing files by naming them correctly? (Spelling is important!) If you do that, just Gmail them all to yourself, and then they will be archived as reliably as Google is, forever. If you picked the right name for a file, you’ll know what to search for when you need it.

Celebrity Science

In 1692, experts in Salem, Massachusetts arrived at the conclusion that certain members of the town were witches endowed with dark powers, and successfully convinced the community to convict and execute people for an imaginary crime. At the turn of the 20th century, Thomas Edison ran a smear campaign to discredit AC power and promote his own DC power, portraying AC as lethally dangerous even though it held a decisive advantage in transmission distance. At the end of the Apollo program, moonwalker Gene Cernan went through extended and publicized training in geology, and was partnered with Harrison Schmitt, a geologist, not because they would really need to analyze rocks while on the moon, but because it made a powerful image.

These are all people whom I’d like to call “celebrity scientists” – scholars with reputations that break through into the publicly visible, politically significant mainstream. Some of these individuals use that power for good, like the late Richard Feynman with his scathing critique of America’s textbook system. However, too many of the others use their incredible gift to cling to whatever wealth or power they can force out of the scientific and political community and the public at large.

I suspect this is not a new problem, and not one confined to any particular discipline. The most visible example today would be modern popular economics (if you are inclined to call that a science). Apparently, prestigious and short-sighted academics have crowded the media with flighty abstractions about easing and lending and rates and fees, and made the preposterous claim that the economy can be “solved” by their (and only their) intelligent authority. All this while ignoring the simple fact that every unexpected turn of the market during our three-year “recovery” was an observation working to invalidate their theory! Regrettably, the Times doesn’t care.

It gets a lot worse: roving celebrity scientists popularized the mythical discipline of “climate science” (ostensibly requiring experience in atmospheric and ocean physics, solar physics, particle physics, geology, biology, chemistry, zoology, botany…) in the 1970s, and they rode their fame all the way to a Nobel prize and a super-scary movie featuring the former vice president of America! Years later, courtesy of some anonymous whistleblower, we find that back in reality those folks might not have had a single clue what they were doing, but regardless were mining their celebrity for as much money and power as it was worth.

People who know me might be suspicious of my reflexive animosity toward the wannabe environmentalist cult, but the evidence is all there for anyone to see. Sure, it’s telling that after swallowing this rubbery green worm hook, line and sinker, the media prefers a quiet, mealy-mouthed retreat. But isn’t the bigger problem that these scientists have funding at all? Isn’t the biggest problem that these junk theories had to be discredited by retired bloggers on the Internet? How much more of this sponsored ignorance can our culture possibly endure?

Thankfully, not all the news is so discouraging. NASA, one of the all-time great “celebrity science” outfits, has outgrown its childlike glee and subsequent fascination with floating around in spaceships, and launched its first rover toward Mars in eight years, the one that Spirit and Opportunity got them the budget to build. The celebrity for this mission will be Curiosity, a robot with unbelievable, super-human endurance, one that will revolutionize what we know about alien worlds yet again, and one that will become rightly famous for this despite not caring one bit.

Science is constantly recovering from its past mistakes. I’ll quote Michael Crichton in reverence:

In past centuries, the greatest killer of women was fever following childbirth. One woman in six died of this fever. In 1795, Alexander Gordon of Aberdeen suggested that the fevers were infectious processes, and he was able to cure them. The consensus said no. In 1843, Oliver Wendell Holmes claimed puerperal fever was contagious, and presented compelling evidence. The consensus said no. In 1849, Semmelweiss demonstrated that sanitary techniques virtually eliminated puerperal fever in hospitals under his management. The consensus said he was a Jew, ignored him, and dismissed him from his post. There was in fact no agreement on puerperal fever until the start of the twentieth century. Thus the consensus took one hundred and twenty five years to arrive at the right conclusion despite the efforts of the prominent “skeptics” around the world, skeptics who were demeaned and ignored. And despite the constant ongoing deaths of women.

The silver lining is that the truth does emerge, however eventually. As long as science is regarded as the quest to understand what we can observe, no amount of philandering with politicians will change the sobering reality that is revealed by history and perspective. And if we can get there in less than half of a century, that isn’t even so bad, considering! The point is, most celebrity science is garbage. The sooner mainstream Internet realizes this and ignores the garbage, the better for everyone involved. Including the polar bears.

P.S. It’s possible that I’m slightly limiting my options for, ahem, “graduate education” by taking such a confrontational tone, but the weird thing is that it doesn’t matter and I don’t care! As far as I’m concerned, any school that would object to this perspective is going to be irrelevant rather soon.

Science: Analog Confabulation

Here’s an updated version of a paper I wrote for Amit Ray’s class last quarter.


We assume that intelligence can be described as the result of two physical processes: diffraction and resonance, which occur within a complex topology of densely and recurrently connected neurons. Then a general analog operation representing the convolution of these processes might be sufficient to perform each of the functions of the thinking brain. In this view of cognition, thoughts are represented as a set of “tuned” oscillating circuits within the neural network, only emerging as discrete symbolic units in reality. This would pose several challenges to anyone interested in more-efficient simulation of brain functions.


In the early years of computer science, intelligence was theorized to be an emergent property of a sufficiently powerful computer program. The philosophy of Russell and Wittgenstein suggested that conscious thought and mathematical truths were both reducible to a common logical language, an idea that spread and influenced work in fields from linguistics to computational math. Early programmers were familiar with this idea, and applied it to the problem of artificial intelligence. Their programs quickly reproduced and surpassed the arithmetical abilities of humans, but the ordinary human ability to parse natural language remained far beyond even the most sophisticated computer systems. Nevertheless, the belief that intelligence can be achieved by advanced computer programs persists in the science (and science fiction) community.

Later in the twentieth century, others began to apply designs from biology to computer systems, building mathematical simulations of interacting neural nodes in order to mimic the physical behavior of a brain instead. These perceptrons were only able to learn a limited set of behaviors and perform simple tasks (Rosenblatt 1958). More powerful iterations with improved neural algorithms have been designed for a much wider range of applications (like winning at Jeopardy!), but a model of the human brain at the cellular level is still far from being financially, politically or scientifically viable. In response to this challenge, computationalists have continued the search for a more-efficient way to represent brain functions as high-level symbol operations.
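As a rough illustration of the idea (not Rosenblatt’s exact formulation), a minimal perceptron can be sketched in a few lines of Python; here it learns the logical AND function, with an arbitrary learning rate and epoch count:

```python
# Minimal perceptron sketch: a single neuron with two inputs, a bias,
# and the classic error-correction learning rule. Integer learning rate
# keeps the arithmetic exact for this tiny example.

def train_perceptron(samples, epochs=10, lr=1):
    w = [0, 0]
    b = 0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # -1, 0, or +1
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

A perceptron like this can only learn linearly separable functions, which is exactly the limitation that stalled the first wave of neural-network research.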

Confabulation Theory is a much newer development: it proposes a universal computational process that can reproduce the functions of a thinking brain by manipulating symbols (Hecht-Nielsen 2007). The process, called confabulation, is essentially a winner-take-all battle that selects symbols from each cortical substructure, called a cortical module. Each possible symbol in a given module is population-encoded as a small set of active neurons, representing one possible “winner” of the confabulation operation. Each cortical module addresses one attribute that objects in the mental world can possess. The mental-world objects are not separate structures, but are rather encoded as the collection of attributes that consistently describe them. Ideas are encoded in structures called knowledge links, which are formed between symbols that consistently fire in sequence. It is proposed that this process can explain most or all cognitive functions.


The confabulation operation happens as each cortical module receives a thought command encoded as a set of active symbols, and stimulates each of its knowledge links in turn, activating the target symbols in each target module that exhibit the strongest response. This operation then repeats over and over as the conscious entity “moves” through each word in a sentence, for example. Confabulation Theory seems to affirm the Chomskian notion of an emergent universal grammar, but the specific biological mechanism that enables the process is not fully understood. However, it must be efficient enough to achieve intelligence with the resources available to the brain.
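A toy sketch of this winner-take-all selection might look like the following Python; the symbols and link weights are invented for illustration and do not reflect Hecht-Nielsen’s actual formulation:

```python
# Toy winner-take-all "confabulation" step: one cortical module picks the
# symbol receiving the strongest total excitation over its knowledge links
# from the currently active symbols. All weights are made-up numbers.

def confabulate(module_symbols, active_symbols, links):
    """Return the winning symbol in one module.

    links maps (source_symbol, target_symbol) -> excitation strength.
    """
    def excitation(symbol):
        return sum(links.get((src, symbol), 0.0) for src in active_symbols)
    return max(module_symbols, key=excitation)

# Hypothetical word-level module: given the active context, pick the next word.
links = {
    ("the", "cat"): 0.6, ("black", "cat"): 0.9,
    ("the", "car"): 0.5, ("black", "car"): 0.2,
}
winner = confabulate(["cat", "car"], {"the", "black"}, links)
```

Repeating this step module by module is the proposed mechanism for “moving” through a sentence.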

Research indicates that population encoding by itself cannot account for the bandwidth of cognitive functions when considering the available hardware, and some have proposed the idea that information must also be encoded within the relative timing of neural events (Jacobs et al. 2007). Recent experimental data suggests that some information is encoded in the “phase-of-firing” when an input signal interferes with ongoing brain rhythms (Masquelier 2009). These rhythms are generated by resonant circuits of neurons that fire in a synchronized fashion, and some circuits can be effectively “tuned” to resonate at a range of frequencies between 40 and 200 Hz (Maex 2003).

We now consider the possibility that these “tuned” neural circuits are an essential condition for language processing and other intelligent behavior: their dynamics implement an analog operation similar to the hypothesized confabulation. Sensory information arrives as a wave of excitation that propagates through the topology of the nervous system, and is sorted into harmonic components as it is diffracted by the structure of the brain. Each configuration of neurons exhibits resonance with specific frequencies or longer patterns of activation when exposed to this input, and can therefore process a continuous signal from any or all of the senses at once. The frequency-domain content of the incoming time-domain signal is, in a rough sense, transposed into the specific resonant characteristics of various neural populations, which can then be trained to produce the same cascade of resonances that was caused by the original signal and thus recall the information from memory. In the case of a spoken word, the resonance cascade encoded in the brain structure is activated, and the waves of excitation move down the nerves to the muscles in a way that generates the sound. Speaking a word and listening to the same word would necessarily activate some common circuitry, as suggested by the existence of mirror neurons (Rizzolatti 2004).

The confabulation operation, and all cognition, could then be understood as an emergent property of suitably resonant neural populations, activated by and interfering with appropriate sensory signals. It has been hypothesized for some time that frequency-tuned circuits are essential to the process of sensory data binding and memory formation (Buzsaki 1995). If thoughts are indeed generated by an analog of the confabulation process, these “tuned” configurations would probably correspond to the hypothesized symbols more closely than simple populations of neurons would. This points to a few looming challenges. First, the exact resonant properties are different for each of a neuron’s 1,000-plus connections, and vary with both time and interference in an analog fashion. Second, these resonances would need to be “sharp” enough to accurately identify and mimic the minute variations in sensory data generated by slightly different objects in the real world. Cochlear hair cells do exhibit behavior that suggests such a finely-tuned response, each one activating within a constant, narrow band of input frequencies (Levitin 2006).

If confabulation is to emerge from a resonating neural network, this network must be able to process arbitrary periodic activation patterns, along with simpler harmonic oscillations, and arrive at meaningfully consistent results each time it is exposed to the sensory data generated by a real-world object. Considering the mathematical properties of analog signals, this does not seem like an impossible task. As Joseph Fourier demonstrated in his seminal work on the propagation of heat, any periodic signal can be represented as the sum of an infinite series of harmonic oscillations. This result suggests that it is at least possible to “break down” periodic or recurring sensory signals into a set of harmonic oscillations at specific frequencies, and within this framework, those frequencies would determine exactly where the sensory data is encoded on the topology of the brain. We can imagine that recurring harmonic components of the signals generated by the appearance, sound, smell, taste or texture of an apple would all contribute to the mental-world category of “apple-ness,” but that hypothesis doesn’t immediately suggest a method to determine exactly which frequencies persist, where they originate or where they are encoded in the cortex (aside from the “red” or “green” frequencies, I suppose).
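To make Fourier’s result concrete, here is a naive discrete Fourier transform in pure Python that recovers the dominant harmonic of a simple periodic signal; the signal itself is an arbitrary example, and a real analysis would use an FFT:

```python
import math

# Build a periodic signal from two sine waves, then recover their
# frequencies with a brute-force DFT (O(N^2), fine for illustration).

N = 64                                                 # samples per period
signal = [math.sin(2 * math.pi * 3 * n / N)            # 3 cycles
          + 0.5 * math.sin(2 * math.pi * 7 * n / N)    # 7 cycles, weaker
          for n in range(N)]

def dft_magnitudes(x):
    n_samples = len(x)
    mags = []
    for k in range(n_samples // 2):
        re = sum(x[n] * math.cos(2 * math.pi * k * n / n_samples)
                 for n in range(n_samples))
        im = -sum(x[n] * math.sin(2 * math.pi * k * n / n_samples)
                  for n in range(n_samples))
        mags.append(math.hypot(re, im))
    return mags

mags = dft_magnitudes(signal)
dominant = max(range(len(mags)), key=lambda k: mags[k])  # strongest harmonic
```

In the framework above, each recovered frequency bin would correspond, very loosely, to a place on the brain’s topology where that component of the sensory signal resonates.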

Within this purely qualitative framework, thought is simply produced by the unique configuration of neural circuits that exhibit the strongest resonance as signals from the senses interfere with concurrent network activity. Dominant circuits can even “shut off” other circuits by altering the firing threshold and delay of their component neurons, thus destroying the resonant behavior. This phenomenon would seem to prevent an overwhelming number of ideas from destructively interfering with each other, making normal linear thought generation possible.

Memory is reliable because the recurring harmonic components of experience are gradually “tuned” into the brain structure as sensory signals mold the emerging resonance into the cortex, effectively integrating the new idea with a person’s existing knowledge base. The overwhelming majority of information is eventually discarded in this process, as it does not recur in the decomposed signal. Only those harmonic components that persist are remembered.


A quantitative description of this framework is beyond the scope of this paper, but it would probably include a way to discretely represent as many of the individual resonances as possible. A generalized model could be built in a higher-dimensional “circuit space,” but it is unclear whether this approach would prove significantly faster than, or meaningfully different from, the latest data-compression and signal-processing algorithms. Programming truly intelligent behavior into this kind of machine would probably require considerable effort, as humans generally learn things like basic arithmetic over a long period of time, eventually processing simple equations by associating categories of numbers with their sums and products.

An investigation of human and animal behavior within this framework might yield better results for now. The obvious place to start is with music cognition, as Daniel Levitin has done in his book. Further research on the connections between music theory and induced brain rhythms is advisable.

The framework is also interesting because it would require very little demarcation between the mental and physical realms, information entering and exiting the body seamlessly with sensory perception and behavior, respectively. If we imagine that the signal generated by an individual’s name might exhibit a characteristic interference with the signal generated by the personal pronouns “I” and “me,” then self-awareness might only emerge in a community that passes around the right messages. Philosophically, all conscious thought would then be intimately dependent on reality, as trained intelligent brains would only be able to reproduce the various harmonic patterns that recur in reality.

As broad pattern-matching devices, humans perform precise computations rather inefficiently, and Turing machines will probably remain the most appropriate tool for that specific job. However, imprecise computations like those required for effective facial recognition might be greatly optimized by the subtle oscillatory characteristics of neural circuitry. Those attempting to achieve artificial intelligence will benefit from a careful evaluation of the data that their models can represent and preserve.


References

Buzsaki, Gyorgy and Chrobak, James. “Temporal structure in spatially organized neuronal ensembles: a role for interneuronal networks.” Current Opinion in Neurobiology, 5(4):504-510, 1995.

Fourier, Jean Baptiste Joseph. The Analytical Theory of Heat. New York: Dover Publications, 1955.

Gibson, William. Neuromancer. New York: Ace, 1984.

Hecht-Nielsen, Robert. “Confabulation theory.” Scholarpedia, 2(3):1763, 2007.

Jacobs, Joshua, Michael Kahana, Arne Ekstrom and Itzhak Fried. “Brain Oscillations Control Timing of Single-Neuron Activity in Humans.” The Journal of Neuroscience, 27(14): 3839-3844, 2007.

Levitin, Daniel. This Is Your Brain on Music. New York: Plume, 2006.

Maex, R. and De Schutter, Erik. “Resonant synchronization in heterogeneous networks of inhibitory neurons.” The Journal of Neuroscience, 23(33):10503-14, 2003.

Masquelier, Timothee, Etienne Hugues, Gustavo Deco, and Simon J. Thorpe. “Oscillations, Phase-of-Firing Coding, and Spike Timing-Dependent Plasticity: An Efficient Learning Scheme.” The Journal of Neuroscience, 29(43): 13484-13493, 2009.

Rizzolatti, Giacomo and Craighero, Laila. “The Mirror-Neuron System.” Annual Review of Neuroscience, 27:169-192, 2004.

Rosenblatt, Frank. “The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain.” Psychological Review, 65(6), 1958.

Nuclear Meltdowns

There’s been a lot of media buzz in recent days about a few of Japan’s nuclear reactors, which are apparently suffering from a coolant failure and possible meltdown. The 2011 quake is already a disaster, but from the tone of some of these articles, reporters seem to be expecting it to “go nuclear” at any time now, even trotting out the dreaded “Chernobyl” name-drop. This is not honest reporting: it reflects either astonishing incompetence or cold deceptiveness, intended to scare the public out of supporting a promising technology. I’m not going to try to identify the unscrupulous characters who might be responsible (hint: look at the people who sell oil), but I can’t just sit here and watch a meme this deadly infect the world without doing anything about it. This will be like bailing out a cruise ship with a teaspoon, but hey, I gotta try something.

To understand why Chernobyl is a horrible point of reference, we need to start with a brief overview of the physics. Fission reactors generate heat energy by arranging sufficiently “heavy” materials (substances made of large atoms) in such a way that the atoms start splitting apart and a stable chain reaction can be achieved. Each fissile atom that splits releases two or three very-fast-moving neutrons that will eventually smash into other atoms in the fuel and break them apart in turn, thus continuing the reaction. However, according to the curious laws of the quantum universe, these neutrons are at first traveling too fast to reliably split another atom, and so they must be decelerated by what is called a “neutron moderator” until they are moving slowly enough to trigger the next fissions and continue the reaction.

The neutron moderator effectively determines the rate of the reaction by producing more or fewer of these slow-moving “thermal” neutrons, so its design is of critical importance (pardon the pun) to any supposedly safe reactor. Now the difference between Chernobyl and every other less-dangerous incident should be clear: the reactor at Chernobyl was designed with a notoriously unsafe choice of moderator, while every operational reactor in Japan uses a very safe choice. Specifically, Japan’s reactors use ordinary water as the moderator. Sure, it’s named as the coolant, but the genius of the design lies in the fact that the water also controls the rate of the reaction! If the reactor overheats and the coolant (which is also the moderator) boils away, the fast neutrons maintain their too-high velocity and the chain reaction slows. While this certainly doesn’t eliminate all hazards from the immediate area (did I mention the massive, speeding particles?), the engineers can monitor core pressure from farther away and release some irradiated (very slightly dangerous) steam if it climbs too high, which is exactly what has happened already.

At Chernobyl, the reactor used a graphite moderator instead. When it overheated, the loss of coolant allowed the reactor to go “prompt critical”: the chain reaction intensified uncontrollably and exponentially, rupturing the poorly-designed containment vessel, and the hot exposed graphite went up in a fireball, throwing tons of radioactive core materials into the atmosphere. One of these events is not like the other.
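The sign of this effect can be caricatured numerically. The following Python toy model uses entirely made-up numbers and is in no way real reactor physics, but it captures why boiling coolant slows a water-moderated reaction while speeding up a graphite-moderated one:

```python
# Toy illustration (made-up numbers, not real neutron transport) of the
# "void coefficient": what happens to the reaction rate when coolant boils.

def reaction_rate(moderation, absorption):
    # Rate rises with moderation (more thermal neutrons) and falls with
    # neutron absorption; a crude stand-in for the real physics.
    return moderation * (1.0 - absorption)

# Water-moderated design: water is both moderator and coolant, so steam
# voids remove moderation and the reaction slows.
water_full = reaction_rate(moderation=1.0, absorption=0.10)
water_boiled = reaction_rate(moderation=0.2, absorption=0.02)

# Graphite-moderated design: graphite keeps moderating regardless, and the
# water that boiled away had mostly been absorbing neutrons.
graphite_full = reaction_rate(moderation=1.0, absorption=0.10)
graphite_boiled = reaction_rate(moderation=1.0, absorption=0.02)
```

Under these toy numbers, losing coolant pushes the water-moderated rate down and the graphite-moderated rate up, which is the asymmetry the paragraph above describes.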

Of course, this is not the whole story, as today’s reactors are designed with many additional safety features, but the point is that this difference is indispensable to any intelligent consideration of the risks. Prestigious newspapers might be careful to brush off predictions of a radioactive explosion, and tend to identify dangerous materials leaking through containment into the groundwater as the most serious threat. Even so, there is a conspicuous absence of clarification on the physics whenever Chernobyl is mentioned. The public may expect this kind of fear-mongering and exaggeration from the media, but it is still staggeringly irresponsible when a genuine crisis is happening.

Chernobyl was a disaster of epic proportions, but it was caused by incompetence and greed, not technology. For those who know better, gleefully invoking its memory in order to scuttle safe nuclear power is a terribly evil thing to do.

I don’t want to sound as if I’m criticizing all cautious attitudes in light of the situation (I have one myself). On the other hand, I guarantee that those predicting a fallout doomsday or the end of nuclear power will be pressured to revise their opinions as the crisis is resolved. April 26th was a normal day ruined by irresponsible thugs who abused the scientific method. March 11th will be remembered as a catastrophic day saved by heroes who commanded it.

UPDATE: I really shouldn’t do this, but I’m going to go out on a very narrow limb here and guess that the psychological damage caused by the image of exploding reactor buildings will cause more devastation than a quiet but complete meltdown would have, which makes me wonder a bit about the decision to start pumping in seawater. However, I’m not even close to being qualified to operate one of those things, so don’t trust that opinion under any circumstances.

Where are the aliens?

Today, let’s look at the Drake Equation, probably the most useless formula in all of science. Essentially, through some sort of hocus-pocus bullshit, scientists “estimate” the likelihood of all the different criteria for intelligent life to evolve, and by multiplying this all together they can “calculate” the number of intelligent alien species we “should” be able to see. Typical answers range from about 10 to 1,000,000 civilizations. If I haven’t made it clear yet, I believe the whole thing is a thoroughly un-scientific exercise in futility. Without any observations to confirm these predictions, this equation will remain in the realm of religion forever. Using math doesn’t automatically make you a scientist.
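For the record, the equation itself is just a product of seven factors, N = R* · fp · ne · fl · fi · fc · L. In code it is trivial, which is rather the point; the “estimates” below are arbitrary guesses of exactly the kind being criticized:

```python
# The Drake Equation: multiply seven guessed factors together.
# Every input value here is an arbitrary, unverifiable guess.

def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """N = expected number of detectable civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

N = drake(R_star=1.0,   # star-formation rate (stars per year)
          f_p=0.5,      # fraction of stars with planets
          n_e=2,        # habitable planets per such star
          f_l=0.3,      # fraction of those where life appears
          f_i=0.1,      # ...that develops intelligence
          f_c=0.1,      # ...that broadcasts detectable signals
          L=10000)      # years a civilization keeps broadcasting
```

Change any guess by a factor of ten and the “answer” changes by a factor of ten, which is why published results span five orders of magnitude.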

This hasn’t stopped folks from trying to draw conclusions from these results, however. The “Fermi Paradox” goes something like this: if there “should” be at least 10 and possibly many more intelligent alien civilizations in our galaxy, why haven’t we picked up any extra-terrestrial signals yet?

Well, there are several things wrong with this “paradox.” In the first place, this is a bit like a slug wondering why it hasn’t ever picked up any extra-terrestrial slime trails. Our solar system is absolutely bathed in electromagnetic radiation, and there could easily be signals from thousands of different alien planets among that radiation that we simply don’t recognize as such. If aliens are at least as advanced as we are, they could easily be ten thousand times more so, and the window in which a civilization broadcasts decipherable messages (or messages that can be understood by humans) into space could easily be very short indeed, for any number of reasons.

Furthermore, there is no evidence indicating that life “should” evolve in other regions of the galaxy, as we’ve only seen it happen on one planet so far. This is partly why astrobiologists are so eager to identify life on Mars or the moons of Jupiter – without that evidence, their jobs technically don’t exist. As of right now, the idea that intelligent life has to be a “typical” case is an egregious example of anthropic bias, and until we actually make contact with an alien species there is absolutely no reason to assume something like this. It may not be a foolish belief, but it isn’t anything but a belief.

This kind of “contaminated” science bothers me a lot, and so I’ve been working on a little story that explores these issues and provides a hypothetical resolution to the Fermi Paradox. Hopefully it doesn’t take longer than a year or so to write, and it’ll be published here in full whenever it’s ready. Spoiler alert: the aliens actually do exist.