Review: Team Fortress 2

I’m writing about one of my all-time favorite games today, because last week I remembered that it recently switched over to a “free-to-play” model with a brand new market for in-game cosmetic and practical items. I downloaded it on yet another new computer because I supposed that there would be a veritable goldmine of free (inexperienced!) players to kill, a fresh selection of map updates, and brand-new play styles made possible by all the cool new weapons. Boy, was that supposition correct! I don’t remember the old game being this fun at all, and the old game was a whole lot of fun!

For those who don’t know what I’m talking about, Team Fortress 2 is a video game where one team tries to defend strategic points on a map, and the other team tries to infiltrate their defenses and capture whatever needs capturing. Players on either team get to choose from nine different “classes” of mercenary, which have different strengths, use different weapons and require very different play styles to see any success. I’m told the Pyro is a girl under the gas mask, but otherwise all the soldiers are male, which angers some people. Anyway, the game is the most fully realized competitive multiplayer experience ever to grace PC, Mac, or console. The strategic possibilities are mind-blowing and hardly anybody gets left out, even people who normally suck at “shootey” games. I loved this game when it came out and I still have to keep myself away from it to this day, like cigarettes for normal people.

The most amazing part is that, had the folks at Valve sat around and boozed away their profits from the Orange Box rather than going right back to work on the community, I would never have spent another dime to play the thing! As it stands, the $10 in my Steam wallet has been used up over the course of three days, and I will have to refill it soon. There is less of a “walled garden” feeling with all the new free players, but Valve never forgot how to treat its loyal customers, and as a result they’re simply printing money over there these days. Good for them.

I want to use part of this review to imagine what the wild success of this format means for the future of the entertainment industry. Valve has changed the world, by creating a virtual place that is completely free to access (barring hardware limitations), and selling virtual things that only work in that virtual place! Luddites will scoff and say that we’re all suckers for thinking that these ideas can really be valuable, but here’s a real hard serious question for them: does paying $2 at an arcade to pretend to race a motorcycle 3 times around a track give me more “fun” than owning a virtual rocket launcher would? I don’t think many people are prepared to argue that point.

Some gamers also complain that new weapons will introduce game balance issues, but that argument doesn’t hold up. The market will demand any weapon that turns out to be disproportionately effective, which means Valve can identify it and take whatever corrective action is necessary. Besides, with hundreds of slightly different “enhancements” to choose from, no player has the time to study every strength and weakness, let alone exploit one systematically.

What we should take away is that by sacrificing a dwindling source of revenue, Valve has turned intellectual property that might still be able to generate meager sales on a good day into a thriving economy that is driving their current profits. “Common sense” might say that giving away your product is a bad idea, but it doesn’t really have to be. This is hard evidence.

Celebrity Science

In 1692, experts in Salem, Massachusetts arrived at the conclusion that certain members of the town were witches endowed with dark powers, and successfully convinced the community to convict and execute people for an imaginary crime. At the turn of the 20th century, Thomas Edison ran a smear campaign painting AC power as deadly in order to promote his own DC power, even though AC held a decisive transmission-distance advantage. At the end of the Apollo program, moonwalker Gene Cernan went through extended and publicized training in geology, and was partnered with Harrison Schmitt, a geologist, not because they would really need to analyze rocks while on the moon, but because it made a powerful image.

These are all people whom I’d like to call “celebrity scientists” – scholars with reputations that break through into the publicly visible, politically significant mainstream. Some of these individuals use that power for good, like the late Richard Feynman with his scathing critique of America’s textbook system. However, too many of the others use their incredible gift to cling to whatever wealth or power they can force out of the scientific and political community and the public at large.

I suspect this is not a new problem, and not one confined to any particular discipline. The most visible example today would be modern popular economics (if you are inclined to call that a science). Apparently, prestigious and short-sighted academics have crowded the media with flighty abstractions about easing and lending and rates and fees, and made the preposterous claim that the economy can be “solved” by their (and only their) intelligent authority. All this while ignoring the simple fact that every unexpected turn of the market during our three-year “recovery” was an observation working to invalidate their theory! Regrettably, the Times doesn’t care.

It gets a lot worse: roving celebrity scientists popularized the mythical discipline of “climate science” (ostensibly requiring experience in atmospheric and ocean physics, solar physics, particle physics, geology, biology, chemistry, zoology, botany…) in the 1970s, and they rode their fame all the way to a Nobel prize and a super-scary movie featuring the former vice president of America! Years later, courtesy of some anonymous whistleblower, we find that back in reality those folks might not have had a single clue what they were doing, but regardless were mining their celebrity for as much money and power as it was worth.

People who know me might be suspicious of my reflexive animosity toward the wannabe environmentalist cult, but the evidence is all there for anyone to see. Sure, it’s telling that after swallowing this rubbery green worm hook, line and sinker, the media prefers a quiet, mealy-mouthed retreat. But isn’t the bigger problem that these scientists have funding at all? Isn’t the biggest problem that these junk theories had to be discredited by retired bloggers on the Internet? How much more of this sponsored ignorance can our culture possibly endure?

Thankfully, not all the news is so discouraging. NASA, one of the all-time great “celebrity science” outfits, has outgrown its childlike glee and subsequent fascination with floating around in spaceships, and launched its first Mars rover in eight years, the one that Spirit and Opportunity earned it the budget to build. The celebrity of this mission will be Curiosity, a robot with unbelievable, superhuman endurance, one that will revolutionize what we know about alien worlds yet again, and one that will become rightly famous for this despite not caring one bit.

Science is constantly recovering from its past mistakes. I’ll quote Michael Crichton in reverence:

In past centuries, the greatest killer of women was fever following childbirth. One woman in six died of this fever. In 1795, Alexander Gordon of Aberdeen suggested that the fevers were infectious processes, and he was able to cure them. The consensus said no. In 1843, Oliver Wendell Holmes claimed puerperal fever was contagious, and presented compelling evidence. The consensus said no. In 1849, Semmelweiss demonstrated that sanitary techniques virtually eliminated puerperal fever in hospitals under his management. The consensus said he was a Jew, ignored him, and dismissed him from his post. There was in fact no agreement on puerperal fever until the start of the twentieth century. Thus the consensus took one hundred and twenty five years to arrive at the right conclusion despite the efforts of the prominent “skeptics” around the world, skeptics who were demeaned and ignored. And despite the constant ongoing deaths of women.

The silver lining is that the truth does emerge, however long it takes. As long as science is regarded as the quest to understand what we can observe, no amount of fraternizing with politicians will change the sobering reality that is revealed by history and perspective. And if we can get there in less than half a century, that isn’t even so bad, considering! The point is, most celebrity science is garbage. The sooner the mainstream Internet realizes this and ignores the garbage, the better for everyone involved. Including the polar bears.

P.S. It’s possible that I’m slightly limiting my options for, ahem, “graduate education” by taking such a confrontational tone, but the weird thing is that it doesn’t matter and I don’t care! As far as I’m concerned, any school that would object to this perspective is going to be irrelevant rather soon.

Modern Economics

Here’s a curious pattern that I’ve noticed in the news, involving the eurozone and world economies during a typical week. I haven’t confirmed it scientifically, but at this point I’m convinced that the European economy requires a jump-start every weekend, and it can’t be too much longer before the machine breaks down completely.

I could include a chart of the weekly effect here, but the reason I’m bringing it up is to suggest what I think is going on. I believe that global telecommunication has reached a point where news, and now meaningful opinions, can saturate the world in a weekend. This has never, ever been remotely possible before, and it is destroying the system that dominated the market for the century preceding it. Internet is undermining the predictive capability that mathematical models have exhibited when applied to the macro-economy, and the experts’ failure to recognize this has allowed them to mistake the forest for its derivative trees, so to speak. On Friday, desperate bureaucrats try to manufacture optimism before the weekend. By Sunday, the facade has crumbled.

Any Fed chairman or ECB official should be rightly terrified by the horde of twittering E-mericans, but I’m tempted to take the crisis a step further here and declare the entire field of classical socioeconomics obsolete. I would have absolutely no authority in doing this, of course, so instead I would like to suggest an unfortunate reality that is unaffected by any market-vs-government flame war: if there really is a silver-bullet fiscal policy for Europe, it will not come from someone who was hired ten years ago, and it will not require price fixing every five days.

Silicon Bay Update: The Next Big Thing

Vacation is over, and now I live and work in San Francisco. That turned out better than expected.

Anyway, to celebrate this new job in the tech capital of the world, I feel like philosophizing on the future of the Internet. Specifically, I’m really baffled that debates about the “Next Facebook” or “Google Killer” still flare up whenever a new candidate starts buying PR. Google was supposed to be replacing Facebook with Buzz, and Groupon was supposed to be replacing Google with… Groupon (I guess?), only a year ago. Hasn’t anyone noticed that Facebook and Google aren’t actually going anywhere? I’ll be less surprised if AT&T splinters by the end of the year.

The idea of a Facebook or Google “killer” might just be a harmless cliche, but I would guess that it is driven by more than simple memetics. It requires a certain perspective on the Internet, one that strikes me as rather primitive. What I’m talking about is the implied assumption that one virtual community is going to “kill” the others and then gradually start sucking up all Internet traffic like a giant virtual singularity. Just today, a new wave of stories appeared suggesting that Facebook could “become” the Web.

This assumption makes a sort of sense if you consider what happened to the telephone networks, but it completely fails in the context of a packet-switched network like the Internet. AT&T (and a few others) still own all the wires, but they have no control over what I do online. Google can influence what I see, and Facebook can influence who I stay in contact with, but neither can ever prevent me from looking up desk repair tips on a woodworking forum, or even from starting a new desk-repair-based community! That is not how Internet works.

Internet has a long tail, as they say, but that idea is too often reduced to the obscure products on Amazon. In fact, The Long Tail includes all the other websites that do what Amazon won’t do! Internet is prepared for the new and unexpected, because it has unprecedented diversity. That is the real power of the online economy – and what scares established media companies so very much.

Science: Analog Confabulation

Here’s an updated version of a paper I wrote for Amit Ray’s class last quarter.

ABSTRACT

We assume that intelligence can be described as the result of two physical processes: diffraction and resonance, which occur within a complex topology of densely and recurrently connected neurons. Then a general analog operation representing the convolution of these processes might be sufficient to perform each of the functions of the thinking brain. In this view of cognition, thoughts are represented as a set of “tuned” oscillating circuits within the neural network, only emerging as discrete symbolic units in reality. This would pose several challenges to anyone interested in more-efficient simulation of brain functions.

INTRODUCTION

In the early years of computer science, intelligence was theorized to be an emergent property of a sufficiently powerful computer program. The philosophy of Russell and Wittgenstein suggested that conscious thought and mathematical truths were both reducible to a common logical language, an idea that spread and influenced work in fields from linguistics to computational math. Early programmers were familiar with this idea, and applied it to the problem of artificial intelligence. Their programs quickly reproduced and surpassed the arithmetical abilities of humans, but the ordinary human ability to parse natural language remained far beyond even the most sophisticated computer systems. Nevertheless, the belief that intelligence can be achieved by advanced computer programs persists in the science (and science fiction) community.

Later in the twentieth century, others began to apply designs from biology to computer systems, building mathematical simulations of interacting neural nodes in order to mimic the physical behavior of a brain instead. These perceptrons were only able to learn a limited set of behaviors and perform simple tasks (Rosenblatt 1958). More powerful iterations with improved neural algorithms have been designed for a much wider range of applications (like winning at Jeopardy!), but a model of the human brain at the cellular level is still far from being financially, politically or scientifically viable. In response to this challenge, computationalists have continued the search for a more-efficient way to represent brain functions as high-level symbol operations.
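To make the perceptron idea concrete, here is a minimal sketch in Python in the spirit of Rosenblatt’s model (a thresholded weighted sum with an error-driven weight update). The AND task, learning rate and epoch count are my own illustrative choices, not anything from the original paper:

    import numpy as np

    # A toy perceptron: a weighted sum of inputs passed through a hard
    # threshold, trained by nudging the weights toward any example it
    # misclassifies (Rosenblatt's error-correction rule, in spirit).
    def train_perceptron(inputs, targets, epochs=20, lr=0.1):
        weights = np.zeros(inputs.shape[1])
        bias = 0.0
        for _ in range(epochs):
            for x, t in zip(inputs, targets):
                prediction = 1 if np.dot(weights, x) + bias > 0 else 0
                error = t - prediction        # +1, 0, or -1
                weights += lr * error * x
                bias += lr * error
        return weights, bias

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    y = np.array([0, 0, 0, 1])                # logical AND: linearly separable
    w, b = train_perceptron(X, y)
    print([1 if np.dot(w, x) + b > 0 else 0 for x in X])  # -> [0, 0, 0, 1]

A single layer like this can only learn linearly separable behaviors, which is exactly the limitation that stalled the early perceptron program.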

Confabulation Theory is a much newer development: it proposes a universal computational process that can reproduce the functions of a thinking brain by manipulating symbols (Hecht-Nielsen 2007). The process, called confabulation, is essentially a winner-take-all battle that selects symbols from each cortical substructure, called a cortical module. Each possible symbol in a given module is population-encoded as a small set of active neurons, representing one possible “winner” of the confabulation operation. Each cortical module addresses one attribute that objects in the mental world can possess. The mental-world objects are not separate structures, but are rather encoded as the collection of attributes that consistently describe them. Ideas are encoded in structures called knowledge links, which are formed between symbols that consistently fire in sequence. It is proposed that this process can explain most or all cognitive functions.

THEORY

The confabulation operation happens as each cortical module receives a thought command encoded as a set of active symbols, and stimulates each of its knowledge links in turn, activating the target symbols in each target module that exhibit the strongest response. This operation then repeats over and over as the conscious entity “moves” through each word in a sentence, for example. Confabulation Theory seems to affirm the Chomskian notion of an emergent universal grammar, but the specific biological mechanism that enables the process is not fully understood. However, it must be efficient enough to achieve intelligence with the resources available to the brain.
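As a rough illustration of the winner-take-all character of this operation, here is a toy sketch in Python. The modules, symbols and knowledge-link weights are invented for this example; Hecht-Nielsen’s actual formulation is probabilistic and far richer:

    # Toy confabulation step: each active symbol excites candidate symbols
    # in a target module through weighted "knowledge links," and the single
    # most-excited candidate wins the module. All values are invented.
    knowledge_links = {
        ("the", "noun"):  {"dog": 0.6, "sky": 0.3},
        ("blue", "noun"): {"dog": 0.1, "sky": 0.8},
    }

    def confabulate(active_symbols, target_module):
        totals = {}
        for symbol in active_symbols:
            links = knowledge_links.get((symbol, target_module), {})
            for candidate, strength in links.items():
                totals[candidate] = totals.get(candidate, 0.0) + strength
        return max(totals, key=totals.get)    # winner takes all

    print(confabulate(["the", "blue"], "noun"))  # -> 'sky'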

Research indicates that population encoding by itself cannot account for the bandwidth of cognitive functions when considering the available hardware, and some have proposed the idea that information must also be encoded within the relative timing of neural events (Jacobs et al. 2007). Recent experimental data suggests that some information is encoded in the “phase-of-firing” when an input signal interferes with ongoing brain rhythms (Masquelier 2009). These rhythms are generated by resonant circuits of neurons that fire in a synchronized fashion, and some circuits can be effectively “tuned” to resonate at a range of frequencies between 40 and 200 Hz (Maex 2003).
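To illustrate what “tuning” buys such a circuit, here is a small Python sketch of a driven, damped oscillator with a natural frequency inside that 40–200 Hz band. The closed-form response formula is standard physics; the specific parameters are mine and are not fit to any biological data:

    import numpy as np

    # Steady-state amplitude of a damped oscillator driven at f_drive.
    # The response peaks sharply when the drive matches the natural
    # frequency -- the selectivity a "tuned" neural circuit could exploit.
    def steady_state_amplitude(f_drive, f_natural=80.0, damping=10.0):
        w, w0 = 2 * np.pi * f_drive, 2 * np.pi * f_natural
        return 1.0 / np.sqrt((w0**2 - w**2)**2 + (damping * w)**2)

    for f in (40, 60, 80, 100, 200):
        print(f"{f:4d} Hz -> relative response {steady_state_amplitude(f):.2e}")
    # The 80 Hz drive wins by more than an order of magnitude: the
    # circuit responds selectively to "its" frequency.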

We now consider the possibility that these “tuned” neural circuits are an essential condition for language processing and other intelligent behavior: their dynamics implement an analog operation similar to the hypothesized confabulation. Sensory information arrives as a wave of excitation that propagates through the topology of the nervous system, and is sorted into harmonic components as it is diffracted by the structure of the brain. Each configuration of neurons exhibits resonance with specific frequencies or longer patterns of activation when exposed to this input, and can therefore process a continuous signal from any or all of the senses at once. The time-domain representation of the frequency-domain representation of the data is, in a rough sense, transposed into the specific resonant characteristics of various neural populations, which can then be trained to produce the same cascade of resonances that was caused by the original signal and thus recall the information from memory. In the case of a spoken word, the resonance cascade encoded in the brain structure is activated, and the waves of excitation move down the nerves to the muscles in a way that generates the sound. Speaking a word and listening to the same word would necessarily activate some common circuitry, as suggested by the existence of mirror neurons (Rizzolatti 2004).

The confabulation operation, and all cognition, could then be understood as an emergent property of suitably resonant neural populations, activated by and interfering with appropriate sensory signals. It has been hypothesized for some time that frequency-tuned circuits are essential to the process of sensory data binding and memory formation (Buzsaki 1995). If thoughts are indeed generated by an analog of the confabulation process, these “tuned” configurations would probably correspond to the hypothesized symbols more closely than simple populations of neurons. This suggests a few looming challenges. First, the exact resonant properties are different for each of a neuron’s 1,000-plus connections, and vary with both time and interference in an analog fashion. Second, these resonances would need to be “sharp” enough to accurately identify and mimic the minute variations in sensory data generated by slightly different objects in the real world. Cochlear hair cells do exhibit behavior that suggests a finely-tuned response, each one activating in a constant, narrow band of input frequencies (Levitin 2006).

If confabulation is to emerge from a resonating neural network, this network must be able to process arbitrary periodic activation patterns, along with simpler harmonic oscillations, and arrive at meaningfully consistent results each time it is exposed to the sensory data generated by a real-world object. Considering the mathematical properties of analog signals, this does not seem like an impossible task. As Joseph Fourier demonstrated in his seminal work on the propagation of heat, any periodic signal can be represented as the sum of an infinite series of harmonic oscillations. This result suggests that it is at least possible to “break down” periodic or recurring sensory signals into a set of harmonic oscillations at specific frequencies, and within this framework, those frequencies would determine exactly where the sensory data is encoded on the topology of the brain. We can imagine that recurring harmonic components of the signals generated by the appearance, sound, smell, taste or texture of an apple would all contribute to the mental-world category of “apple-ness,” but that hypothesis doesn’t immediately suggest a method to determine exactly which frequencies persist, where they originate or where they are encoded in the cortex (aside from the “red” or “green” frequencies, I suppose).
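Fourier’s result is easy to demonstrate numerically. In this Python sketch (with invented frequencies standing in for the recurring harmonic components discussed above), a noisy periodic signal is decomposed and its two dominant harmonics recovered:

    import numpy as np

    rate = 1000                             # samples per second
    t = np.arange(0, 1.0, 1.0 / rate)       # one second of signal
    # A periodic signal built from two harmonics, plus noise.
    signal = (np.sin(2 * np.pi * 50 * t)
              + 0.5 * np.sin(2 * np.pi * 120 * t)
              + 0.2 * np.random.randn(t.size))

    # Discrete Fourier transform: the spectrum peaks at the frequencies
    # the signal was built from, despite the noise.
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(t.size, d=1.0 / rate)
    top_two = sorted(freqs[np.argsort(spectrum)[-2:]])
    print(top_two)                          # -> [50.0, 120.0]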

Within this purely qualitative framework, thought is simply produced by the unique configuration of neural circuits that exhibit the strongest resonance as signals from the senses interfere with concurrent network activity. Dominant circuits can even “shut off” other circuits by altering the firing threshold and delay of their component neurons, thus destroying the resonant behavior. This phenomenon would seem to prevent an overwhelming number of ideas from destructively interfering with each other, making normal linear thought generation possible.

Memory is reliable because the recurring harmonic components of experience are gradually “tuned” into the brain structure as sensory signals mold the emerging resonance into the cortex, effectively integrating the new idea with a person’s existing knowledge base. The overwhelming majority of information is eventually discarded in this process, as it does not recur in the decomposed signal. Only those harmonic components that persist are remembered.

IMPLICATIONS

A quantitative description of this framework is beyond the scope of this paper, but it would probably include a way to discretely represent as many of the individual resonances as possible. A generalized model could be built in a higher-dimensional “circuit space,” but it is unclear whether this approach would prove significantly faster or entirely different from the latest data-compression and signal-processing algorithms. Programming truly intelligent behavior into this kind of machine would probably require considerable effort, as humans generally learn things like basic arithmetic over a long period of time, eventually processing simple equations by associating categories of numbers with their sums and products.

An investigation of human and animal behavior within this framework might yield better results for now. The obvious place to start is with music cognition, as Daniel Levitin has done in his book. Further research on the connections between music theory and induced brain rhythms is advisable.

The framework is also interesting because it would require very little demarcation between the mental and physical realms, information entering and exiting the body seamlessly with sensory perception and behavior, respectively. If we imagine that the signal generated by an individual’s name might exhibit a characteristic interference with the signal generated by the personal pronouns “I” and “me,” then self-awareness might only emerge in a community that passes around the right messages. Philosophically, all conscious thought would then be intimately dependent on reality, as trained intelligent brains would only be able to reproduce the various harmonic patterns that recur in reality.

As broad pattern-matching devices, humans perform precise computations rather inefficiently, and Turing machines will probably remain the most appropriate tool for that specific job. However, imprecise computations like those required for effective facial recognition might be greatly optimized by the subtle oscillatory characteristics of neural circuitry. Those attempting to achieve artificial intelligence will benefit from a careful evaluation of the data that their models can represent and preserve.

SOURCES

Buzsaki, Gyorgy and Chrobak, James. “Temporal structure in spatially organized neuronal ensembles: a role for interneuronal networks.” Current Opinion in Neurobiology, 5(4):504-510, 1995.

Fourier, Jean Baptiste Joseph. The Analytical Theory of Heat. New York: Dover Publications, 1955.

Gibson, William. Neuromancer. New York: Ace, 1984.

Hecht-Nielsen, Robert. “Confabulation theory.” Scholarpedia, 2(3):1763, 2007.

Jacobs, Joshua, Kahana, Michael, Ekstrom, Arne and Fried, Itzhak. “Brain Oscillations Control Timing of Single-Neuron Activity in Humans.” The Journal of Neuroscience, 27(14):3839-3844, 2007.

Levitin, Daniel. This Is Your Brain on Music. New York: Plume, 2006.

Maex, R. and De Schutter, Erik. “Resonant synchronization in heterogeneous networks of inhibitory neurons.” The Journal of Neuroscience, 23(33):10503-14, 2003.

Masquelier, Timothee, Hugues, Etienne, Deco, Gustavo and Thorpe, Simon J. “Oscillations, Phase-of-Firing Coding, and Spike Timing-Dependent Plasticity: An Efficient Learning Scheme.” The Journal of Neuroscience, 29(43):13484-13493, 2009.

Rizzolatti, Giacomo and Craighero, Laila. “The Mirror-Neuron System.” Annual Review of Neuroscience, 27:169-192, 2004.

Rosenblatt, Frank. “The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain.” Psychological Review, 65(6):386-408, 1958.

Nuclear Meltdowns

There’s been a lot of media buzz in recent days about a few of Japan’s nuclear reactors, which are apparently suffering from a coolant failure and possible meltdown. The 2011 quake is already a disaster, but from the tone of some of these articles, reporters seem to be expecting it to “go nuclear” at any time now, even trotting out the dreaded “Chernobyl” name-drop. This is not honest reporting: it reflects either astonishing incompetence or cold deceptiveness, intended to scare the public out of supporting a promising technology. I’m not going to try to identify the unscrupulous characters who might be responsible (hint: look at the people who sell oil), but I can’t just sit here and watch a meme this deadly infect the world without doing anything about it. This will be like bailing out a cruise ship with a teaspoon, but hey, I gotta try something.

To understand why Chernobyl is a horrible point of reference, we need to start with a brief overview of the physics. Fission reactors generate heat energy by arranging sufficiently “heavy” materials (substances made of large atoms) in such a way that little bits of the atoms start “breaking off” and a stable chain reaction can be achieved. When struck by a neutron, each fissile atom splits and releases two or three very-fast-moving neutrons that will eventually smash into other atoms in the fuel and break them apart in turn, thus continuing the reaction. However, according to the curious laws of the quantum universe, these fresh neutrons are at first traveling too fast to be captured efficiently by other fissile atoms, and so they must be decelerated by what is called a “neutron moderator” until they are traveling slowly enough to split the next fissile atoms and continue the reaction.

The neutron moderator effectively determines the rate of the reaction by producing more or fewer of these slow-moving “thermal” neutrons, so its design is of critical importance (pardon the pun) to any supposedly safe reactor. Now the difference between Chernobyl and every other less-dangerous incident should be clear: the reactor at Chernobyl was designed with a notoriously unsafe choice of moderator, while every operational reactor in Japan uses a very safe choice. Specifically, Japan’s reactors use ordinary water as the moderator. Sure, it’s named as the coolant, but the genius of the design lies in the fact that the water also controls the rate of reaction! If the reactor overheats and the coolant (which is also the moderator) boils away, the fast neutrons keep their too-high velocity and the chain reaction slows on its own. While this certainly doesn’t eliminate all hazards from the immediate area (did I mention the massive, speeding particles?), the engineers can monitor core pressure from farther away and release some irradiated (very slightly dangerous) steam if it climbs too high, which is exactly what has happened already. At Chernobyl, the reactor used a graphite moderator instead, with water serving as coolant and, incidentally, as a neutron absorber. When it overheated and the coolant boiled away, the graphite kept right on moderating while the absorber disappeared, and the reactor went “prompt critical”: the chain reaction intensified uncontrollably and exponentially. The power surge ruptured the reactor vessel (the RBMK design had nothing like a modern containment building), and the hot exposed graphite went up in a fireball, throwing tons of radioactive core materials into the atmosphere. One of these events is not like the other.
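To see why the sign of that feedback matters so much, consider a deliberately crude toy model in Python. A neutron population is multiplied by a factor k each generation, and boiling coolant shifts k; every number here is invented for illustration, not real neutronics:

    # Toy chain reaction: neutron population multiplied by k per generation.
    # Water-moderated core: losing coolant removes the moderator, so k < 1.
    # RBMK-style core: the water absorbed neutrons while graphite kept
    # moderating, so losing coolant pushes k > 1.
    def run(k_normal, k_after_boiling, generations=20, boil_at=10):
        neutrons = 1.0
        for gen in range(generations):
            neutrons *= k_normal if gen < boil_at else k_after_boiling
        return neutrons

    print("water-moderated:", run(1.00, 0.90))  # dies down: ~0.35
    print("graphite (RBMK):", run(1.00, 1.15))  # runs away: ~4.05

In a real runaway the growth happens over milliseconds rather than ten tidy generations, which is exactly why the sign of this feedback is a design-or-die decision.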

Of course, this is not the whole story, as today’s reactors are designed with many additional safety features, but the point is that this difference is indispensable to any intelligent consideration of the risks. Prestigious newspapers might be careful to brush off predictions of a radioactive explosion, and tend to identify dangerous materials leaking through containment into the groundwater as the most serious threat. Even so, there is a conspicuous absence of clarification on the physics whenever Chernobyl is mentioned. The public may expect this kind of fear-mongering and exaggeration from the media, but it is still staggeringly irresponsible when a genuine crisis is happening.

Chernobyl was a disaster of epic proportions, but it was caused by incompetence and greed, not technology. For those who know better, gleefully invoking its memory in order to scuttle safe nuclear power is a terribly evil thing to do.

I don’t want to sound as if I’m criticizing all cautious attitudes in light of the situation (I have one myself). On the other hand, I guarantee that those predicting a fallout doomsday or the end of nuclear power will be pressured to revise their opinions as the crisis is resolved. April 26th was a normal day ruined by irresponsible thugs who abused the scientific method. March 11th will be remembered as a catastrophic day saved by heroes who commanded it.

UPDATE: I really shouldn’t do this, but I’m going to go out on a very narrow limb here and guess that the psychological damage caused by the image of exploding reactor buildings will cause more devastation than a quiet but complete meltdown would have, which makes me wonder a bit about the decision to start pumping in seawater. However, I’m not even close to being qualified to operate one of those things, so don’t trust that opinion under any circumstances.

Piracy: The New Sleepover

The moral questions regarding piracy (or file-sharing, as its practitioners like to call it) don’t seem to be going away anytime soon. Hardly a day passes without some new article surfacing, written from the perspective of a struggling artist, an industry insider, a “scene group” (nerds who rip movies), or countless others. The arguments are usually rather predictable – depending on which side is being argued, pirates are either swashbuckling Robin Hoods or sneaky communists.

I’m not interested in adding to this cacophony of opinions, so I’ll instead propose a simple thought experiment, which seems to render the whole debate obsolete:

Let’s imagine we’re in the future. The date isn’t important; let’s just assume that we’re in whatever year fully-immersive virtual reality goes mainstream. This future world is basically the Matrix, without evil scheming robots. People can leave the virtual “world” whenever they want to, but judging by how much time we already spend online, this might not happen too often. In any case, the point is that we’re suddenly free to experience life without spatial limitations (up to the speed of light, of course). Optimistic folks might assume that this would also mean freedom from poverty, war, and unhappiness, but we don’t even need to go that far. In a world where virtual interaction is simply a social norm, we’d all have hundreds of online “digital” friends, along with whoever we still hang out with in the fleshy world. Because the virtual world would be totally immersive, the distinction between the two “kinds” of friends would start to become meaningless.

If, at this point, it is still legal for our daughters to stay up all night watching John Hughes movies at their friends’ houses, a reasonable court could easily decide that this same activity should be perfectly fine in the virtual space; those “digital” friends should not be treated differently in the eyes of the law.

Now, the trouble is obvious: how can a court even qualitatively separate “normal” piracy from typical and legal interactions between friends? We’d need either a new “Jim Crow” law segregating real and virtual friendships, or a sweeping ban on consumption of other peoples’ media. In America, both of these situations seem laughably impossible.

This thought experiment uses the idea of “immersive” cyberspace to highlight the absurdity of the situation, but the exact same phenomenon is already happening in a cruder form. All over the world, digital communities are popping up, full of usually anonymous but often genuine friends who share media amongst themselves. Who is going to stop them?