Future-Driven Development

Have you used Yarn? Yarn is like NPM but it has a “lock file” to prevent conflicting versions in sub-dependencies and sub-sub-dependencies from breaking your bundles. It is blowing up for good reason – the lock file is a great feature.
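For the unfamiliar: a lock file records the exact resolved version of every dependency in the tree, transitive ones included, so that two installs of the same project produce the same bundle. A yarn.lock entry looks roughly like this (an illustrative sketch of the format – the version and hash here are invented):

    # yarn.lock – illustrative entry
    left-pad@^1.1.0:
      version "1.1.3"
      resolved "https://registry.yarnpkg.com/left-pad/-/left-pad-1.1.3.tgz#<hash>"

Without the lock file, that “^1.1.0” range could resolve to a different version on every install.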

But we all know lots of people are going to stick with NPM, and it will turn into a whole schism until the mainstream switches, or until NPM absorbs the feature (my prediction). Why? Because everyone gets so worn down chasing the de facto official JavaScript best practices year after year that new (maybe worthwhile) ideas get ignored with the rest of the cutting edge.

This is the sad result of too much of what we might call “future-driven development”, a pernicious malady affecting software projects near you. It comes in many forms; here are a few:

  • Building and re-building unit tests while a prototype is changing rapidly
  • Over-normalizing your database, adding abstractions and joins everywhere
  • Using ES2018 in 2017 with bloated polyfills because “someday they’ll be native”

In academia, you are expected to act like this. Researchers try zany architectures and plan for uncertain future scenarios because that is the whole point of research. If you’re doing R&D in any capacity this is somewhat true – it is dangerous to ignore anything promising and new for too long.

However, people also act like this way too much in the professional world. When building a dashboard for a customer, you are not trying to win Architecture of the Year, you are not building a reference implementation of All Current Best Practices, your dashboard might not always need to run for a hundred years, and you are not building the dashboard for yourself. The tools only matter so far as they continue to help you deliver an effective product.

Objection #1: How do you know what you’re going to need until it’s too late?

You don’t, and you never will. That’s where experience comes into play. You will never arrive at an ideal implementation for any particular set of requirements by following every last one of today’s best practices (most of which were developed to solve a Very Serious Problem that you might never have).

Objection #2: Doesn’t this make it harder to improve future standards?

I originally wrote this objection as a sarcastic caricature, which should sum up my feelings about it.

The Psychology Journal Ad-Hominem

Have you ever had someone tell you that liberalism is a mental disorder? Or that right-wingers vote for bad ideas just because they have an irrational world view? It’s a pretty common ad hominem in politics, and not a compelling one. The idea is a tautology: Find two people who have incompatible ways of looking at the world, and each will think the other’s way is somehow defective. But this silly tactic shows up in elite intellectual discourse way too often.

An early example is The Anti-Capitalistic Mentality by Ludwig von Mises, a book-length psychoanalysis of the author’s political opponents. The short version: people who resent their betters turn to communism since it means bringing everyone else down to their level. It’s not subtle.

More recently, academic examples skew leftward. To be clear, this is not a claim that left-wingers disproportionately rely on ad hominem. Nor is it a claim that there is some disproportionate weakness in the right-wing psyche. It’s probably because more psychologists are left-wing than ever before, and since the attack is based on psychoanalysis it shows up a lot in their literature.

Recent examples can be found with a quick internet search. Here are a couple:

Explaining the Appeal of Populist Right-Wing Parties in Times of Economic Prosperity

The traumatic basis for the resurgence of right-wing politics among working Americans

Although these papers are more nuanced and focused than some all-encompassing manifesto, they are still ad hominems. The authors can and will claim that their research is purely academic and not intended as any kind of attack, but let’s be honest, there is one big obvious way this kind of research will always be used. It will be served up on popular political websites and compiled in brightly colored books, ready to be used as Thanksgiving ammunition.

The literature often uses global warming as an example, and a few things are going on here. In many circles global warming is an indisputable fact, so if you want to deploy some political ad hominem against the people who tend to be skeptical, it’s a great starting point. Also, the global warming movement hasn’t succeeded politically, so this approach serves both as one more play to convince the voters after all, and as an explanation to offer environmentalists disappointed and confused by the lack of success.

The classic example in the subgenre is Lewandowsky et al.’s famous Recursive Fury, a paper psychoanalyzing those who reacted poorly to another paper psychoanalyzing global warming skeptics. How it got through more than five minutes of planning without being abandoned, we may never know. In any case, it was eventually retracted.

Another interesting example is On the relation between ideology and motivated disbelief by Campbell & Kay. To its credit, the paper does attempt to strike a balanced tone, supporting an ad hominem attack against both political parties. Still, they put a whole lot more effort into the global warming part, and the fourth study might be a strategic addition to give the impression of dispassionate science.

These ad hominems have been around for a long time and they aren’t going anywhere soon. But they are silly and don’t belong anywhere near academia. Even with the veneer of science, the tactic only convinces people who are inclined to accept the ad hominem anyway. It looks desperate and stupid to everyone else.

Fun with Tailgaters

Commuting in the car means a lot of time spent accidentally thinking about how it could be improved. I’ve come up with several ideas for accessories, and this first one is useless but very fun. For some time I was planning a system where buttons on the dashboard displayed messages on the rear bumper. Stuff like “C’MON, REALLY?” to display to tailgaters, and maybe amusing stuff like “WHAT’S WRONG WITH THIS GUY?” when someone up ahead is driving poorly. It would be a pretty straightforward bank of switches plugged into an Arduino, and either a screen with pictures or (better) a train-schedule type letter sign, if those can still be found.

A few weeks back, however, I remembered that dog picture from the Internet that says “deal with it” when the sunglasses drop (you know the one).

I realized there couldn’t be a classier way to respond to tailgaters than with a live-action version, so I decided to make one. It would be a simple Arduino project, with a servo that lowers the sunglasses over the eyes of a stuffed dog. Then a relay would light up the “Deal with it” text on cue.

Setting up the Arduino code and wiring the components didn’t take more than a few hours:

Arduino wired to button and relay
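The real code is on GitHub (linked at the end), but the whole behavior fits in a sketch like this one. Consider it a minimal approximation – the pin assignments, angles, and timings here are my guesses, not the actual values:

    // Minimal approximation of the build: one button in, one servo and
    // one relay out. Pin numbers and timings are illustrative guesses.
    #include <Servo.h>

    const int BUTTON_PIN = 2;   // big red dashboard button (with pull-down resistor)
    const int RELAY_PIN  = 7;   // drives the PowerSwitch Tail II for the neon sign
    const int SERVO_PIN  = 9;   // servo that lowers the sunglasses

    const int GLASSES_UP   = 10;    // resting servo angle, degrees
    const int GLASSES_DOWN = 100;   // sunglasses-on-dog angle, degrees

    Servo glasses;

    void setup() {
      pinMode(BUTTON_PIN, INPUT);
      pinMode(RELAY_PIN, OUTPUT);
      glasses.attach(SERVO_PIN);
      glasses.write(GLASSES_UP);
    }

    void loop() {
      if (digitalRead(BUTTON_PIN) == HIGH) {
        glasses.write(GLASSES_DOWN);    // drop the sunglasses
        delay(500);                     // let them land first
        digitalWrite(RELAY_PIN, HIGH);  // light up the sign
        delay(5000);                    // deal with it
        digitalWrite(RELAY_PIN, LOW);   // back to normal
        glasses.write(GLASSES_UP);
      }
    }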

Then there was the simple matter of printing out some sunglasses and attaching them to a servo (cardboard backing, super glue, and a zip tie for the arm):

The dog with its future sunglasses

Finally the sign had to be prepared. I decided to go all out and buy a real neon sign, since that is totally fantastic and you can get them custom-built. The sign arrived with this nice label:

Packaging for sign

I also opted to buy a pre-packaged relay to switch the sign, since I’m not a trained electrician and you don’t want to trifle with AC power from the wall outlet. The PowerSwitch Tail II is great: you just plug 5V and ground wires into the side, and it works like an extension cord with a switch. The rest of the wiring was just a couple of leads going to 5V and ground, and one pull-down resistor for the button. I also got a 300-watt inverter to provide power from the car battery, and a big red button to activate the sign. Wiring it all together for a test run, it looked pretty good:

Deal with it - Live Action

The sign turned out to be bigger than I had figured, and it takes up the whole back window of the car. Luckily it has a clear backing so my view isn’t obstructed. There’s still some polishing to go, but it’s working very well.

Nobody has tailgated me anywhere near the threshold level for sign-activation yet (perhaps this is rarer than I thought) but it’s bound to happen eventually. You know when you’re waiting in a line of cars to pass a slow-moving truck, and some chucklehead decides to tailgate you, so that maybe you’ll do the same to the car in front and so on (I assume)? The next time that happens to me, I’ll press this button on the dashboard:

The big red button, mounted on the dashboard

And I’ll take my sweet time to finish the pass. Meanwhile the offending driver will see this:

The Arduino code is on GitHub if you’re interested.

Architecture and Environment

Sometimes an architect has a wide-open space, a large budget, and unlimited freedom to design a structure. In these cases established best practices will lead to a technically impressive result. Looking at the Burj Khalifa, the appearance is of a very tall structure which required engineering expertise and advanced materials to build. With this sort of tower superproject, genuinely new and unexpected patterns are comparatively rare. The technology and academic literature certainly progress as in other fields, but the result of one of these projects is pretty much what you would expect: a very tall structure on a flat foundation.

Now consider a heavily constrained project, like Frank Lloyd Wright’s Fallingwater. In this case the client requested a house that fit on a unique piece of land, with forested hills and a waterfall running through the property. You might guess that limiting the possibilities available to the architect would produce limited results, but the reality is exactly the opposite. In those unique circumstances, Wright produced the famous cantilevered design and spectacular proportions which endure as iconic examples of the craft. I don’t want to suggest that one can design a skyscraper without any creativity, or build Fallingwater without any engineering skill. In fact both projects require a great deal of both abilities. I will suggest that the skyscraper requires comparatively more of the engineering part, and Fallingwater requires more of the creative part.

Here there is an analogy with other disciplines. Architectural concepts apply to any situation where complicated systems of interacting components have to be designed. Courses of education, business processes, and software systems are some examples.

Working programmers dream about having unlimited resources and freedom to build monumental skyscrapers of software design, but reality does not often accommodate these fantasies. Schedules and budgets are limited, often severely. Large systems usually include legacy infrastructure which was never intended to perform the task at hand. Instead of designing with a blank slate, working around these limitations makes up the bulk of a programmer’s day-to-day experience. I would argue that this reality makes professional programming a much more creative pursuit than it might seem after reading the textbooks and papers. This is good! It makes the job interesting and even fun at times.

The Ludum Dare competition is a great example of the same phenomenon. Each round the participants vote on a thematic constraint to impose on their game projects. Together with a deadline, this limitation acts as a catalyst for ideas. If I’m asked right now to come up with an arcade game, it probably won’t happen soon. Ask me for a game which has, say, a theme of “Escape” or “Tiny world” and the ideas come a lot faster.

The point is not just that constrained projects demand the creative side of architecture, but also that limiting the scope of a project can encourage creativity. The next time you find yourself at a loss for new ideas, try imposing some arbitrary constraints. It will probably help.

Non-Player Characters

Here’s an interesting idea. This article mentions “non-player characters” in the context of a role-playing game, and proposes something rather unsettling:

Many of us approach the other people in our lives as NPCs.

I’ve been thinking along similar lines. People often imagine strangers as incidental scenery, part of the social environment. This is understandable given the limits of human knowledge – there simply isn’t enough time or mental capacity to understand very much about very many people. However, we often forget that this perspective is only a necessary convenience that allows us to function as individuals. For example, if you’ve ever been in a rush to get somewhere on public transportation, you’ve probably felt that bit of guilty disappointment while waiting to accommodate a wheelchair-bound passenger. Here comes some person into my environment to take another minute of my time, right? If you use a wheelchair yourself, this delay happens every time you catch a ride, and that frustration simply does not exist. If anything, I would imagine that disabled passengers feel self-conscious every time they are in a situation where their disability affects other peoples’ lives, even in an insignificant way.

Has this always been true? Probably to some degree, but the modern media environment seems to especially promote it. Good fiction writing communicates the thoughts and motivations of relevant characters, unless they are complete unknowns. This means that any meaningfully observable character has some kind of hypothesized history and experience informing their participation in the story. Film is different, in that a script can describe an “evening crowd” in two words, but the realization of that idea can involve hundreds of extras, living entire lives and working day jobs that barely relate to their final appearance on the screen. We can assume that their real lives intersected with the production of that scene on that day, but it’s really the only significance that their identities have in context.

With interactive media, the idea of a “non-player character” has appeared in many forms, and academics study how they can design the best (read: most believable) fictional characters for interactive environments. Here the limited reality of these characters is even more pronounced. In video games, non-player characters have lower polygon counts, fewer animations, and generally use less code and data. This is a consequence of the limited resources available for building a virtual environment, but the effect is readily apparent and forced.

Does this mean video games shouldn’t include background characters? Not really. What I’m suggesting is that we should be careful to see this phenomenon for what it is: an information bias in favor of the protagonist, which necessarily happens while producing media. It shouldn’t ever be mistaken for a relevant characteristic of the real world. This holiday season, when you’re waiting an extra minute or two for a disabled stranger, or expecting better service from a tired professional, remember that he or she probably has lived a life as rich and complicated as your own, and try not to react as if he or she is just some kind of annoying scenery. Whoever it is might return the favor, even if you never realize it.

Falsifiability

According to Karl Popper, any scientific theory must be falsifiable. What does this entail?

A falsifiable theory leads to predictions which would be invalidated by some conceivable observation. For example, Newtonian dynamics predicts that in a uniform gravitational field, two objects with different masses will have the same acceleration due to gravity. It implies that if we drop a feather and a ball bearing inside a vacuum chamber, the ball bearing will not fall faster, as long as the theory is valid. This is in fact what happens.

Newton’s theory was very successful at describing the celestial motion of the known planets, but in the 19th century it did not correctly predict the orbit of Uranus. This fact would have falsified the theory, or at least greatly limited its precision. However, Urbain Le Verrier realized that the gravitational field of an unknown planet could be pulling Uranus into its observed orbit, and predicted where such a planet would be and how massive it would be. Astronomers pointed their telescopes at the expected location and discovered Neptune. If no planet had existed at that location, the prediction would have failed, and the inconsistency with Newtonian dynamics would have stood.

A planet orbits in an ellipse, and the point where it moves closest to the sun is called the perihelion of its orbit. This point gradually precesses around the sun due to gravitational forces exerted by other planets. The same Le Verrier compared observations of Mercury to the perihelion precession rate derived from Newton’s theory, and found a discrepancy of nearly half an arcsecond per year. He predicted the existence of another planet closer to the sun to explain his result, but no planet was ever observed and this problem remained open.

To explain other puzzling observations, Albert Einstein abandoned the Galilean transformations of Newton’s theory for a framework which uses Lorentz transformations in four dimensions. General Relativity describes gravitation as an effect of spacetime curvature, and it reduces to Newtonian dynamics in the limit of weak fields and low velocities. According to this theory, the perihelion of Mercury’s orbit should precess by an additional 0.43 arcseconds per year, matching the observed value.
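As a back-of-the-envelope check, the leading-order relativistic perihelion advance per orbit is 6*pi*G*M / (c^2 * a * (1 - e^2)). A small sketch plugging in standard published values for Mercury’s orbit recovers the famous 43 arcseconds per century (equivalently, 0.43 per year):

    #include <cstdio>

    int main() {
      const double PI = 3.141592653589793;
      const double GM = 1.32712e20;     // G * (mass of the Sun), m^3/s^2
      const double c  = 2.99792458e8;   // speed of light, m/s
      const double a  = 5.7909e10;      // Mercury's semi-major axis, m
      const double e  = 0.2056;         // Mercury's orbital eccentricity
      const double T  = 87.969;         // Mercury's orbital period, days

      // Leading-order GR perihelion advance per orbit, in radians.
      double per_orbit = 6.0 * PI * GM / (c * c * a * (1.0 - e * e));

      double orbits_per_century = 36525.0 / T;
      double arcsec_per_century = per_orbit * orbits_per_century * (180.0 / PI) * 3600.0;
      std::printf("%.1f arcsec per century\n", arcsec_per_century);  // prints ~43.0
      return 0;
    }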

Still, the intuitive simplicity of Einstein’s theory did not automatically mean that it was a valid replacement for Newtonian dynamics. During the total solar eclipse of 1919, measured deflection of starlight agreed with the value derived from General Relativity, a highly publicized result. Decades passed before additional predictions were conclusively validated.

Global Whining

The scientific method is the greatest invention of the modern age. For centuries, its practitioners have transformed civilization using rational systems revealed through careful observation. Theories which have succeeded not by virtue of their popularity but because they correctly predicted unknown phenomena are especially awe-inspiring. However, predictions without rational justification or those vague enough to be confirmed by any number of observations should not earn the same recognition. I’m a huge fan of Karl Popper regarding falsification, the idea that a scientific theory must predict some observable event which would prove it wrong (if the theory is wrong). This principle eliminates uncertainty regarding how specific a valid theory must be. Unfortunately, it has been ignored by some academics who claim to be scientists so that people won’t laugh at their ideas. You might have already guessed that today I’m targeting the low-hanging fruit of global warming alarmism. Prepare to be offended.

I won’t waste your attention picking apart the various temperature series, criticizing the IPCC models, or citing evidence of misconduct, not because those arguments have already been made by more qualified individuals, but because they shouldn’t even be necessary. Fundamental problems with any apocalyptic hypothesis make the whole enterprise seem ridiculous. This is what Popper says about scientific theory:

1) It is easy to obtain confirmations, or verifications, for nearly every theory – if we look for confirmations.
2) Confirmations should count only if they are the result of risky predictions; that is to say, if, unenlightened by the theory in question, we should have expected […] an event which would have refuted the theory.
3) Every “good” scientific theory is a prohibition: it forbids certain things to happen. The more a theory forbids, the better it is.
4) A theory which is not refutable by any conceivable event is non-scientific. Irrefutability is not a virtue of a theory (as people often think) but a vice.
5) Every genuine test of a theory is an attempt to falsify it, or to refute it. Testability is falsifiability; but there are degrees of testability: some theories are more testable, more exposed to refutation, than others; they take, as it were, greater risks.

The scenarios published by climate modelers don’t qualify as scientific predictions because there is no way to falsify them – updated temperature measurements will inevitably correlate with some projections better than others. And fitting curves to historical data isn’t a valid method for predicting the future. Will the IPCC declare the CO2-H2O-feedback warming model invalid and disband if the trend in the last decade of HadCRUT3 data continues for another decade or two? How about if the Arctic ice cap survives the Summer of 2014? I’m supposed to trust these academics and politicians with billions of public dollars, before their vague predictions can be tested, because the global warming apocalypse they describe sounds more expensive? This riotous laughter isn’t meant to be insulting, we all say stupid things now and then.

Doesn’t the arrival of ScaryStorm Sandy confirm our worst environmental fears? Not if we’re still talking about Karl Popper’s science. Enlightened by the theory of Catastrophic Anthropogenic Global Warming, academics were reluctant to blame mankind for “exceptional events” only two months ago. They probably didn’t expect a hurricane to go post-tropical and converge with a cold front as it hit New York Bight on the full moon at high tide less than six weeks later, because that kind of thing doesn’t happen very often. Informed news readers might have been expecting some coy suggestion that global warming “influences” weather systems in the rush to capitalize on this disaster. But in a caricature of sensationalism, Bloomberg splashes “IT’S GLOBAL WARMING, STUPID” across a bright orange magazine cover, and suddenly enormous storm surges threaten our infrastructure again while the seas are still rising slowly and inevitably, all because of those dirty fossil fuels.

I don’t mean to say that we should actually expect scientific integrity from a stockbrokers’ tabloid, but Mike Bloomberg has really sunk to a new low. He spoke at TechCrunch Disrupt a few years ago and seemed like an average business-friendly mayor, not a shameless propagandist. I guess the soda ban was a bad omen. It’s a bit discouraging to see another newspaper endorse the panic, but then the organizers of our climate crusade have been pushing their statist agenda on broadcasters for a long time.

On Sunday, the New York Times doubled down with this ridiculous melodramatic lament, written by one talented liberal artist. Where’s a prediction about the next exceptional event, folks? Is it going to be a tornado or earthquake, or does it matter? Are there actually any rules for preaching obnoxious hindsight to believers? Can anyone suggest an observation that would falsify the theory?

What will the temperature anomaly or the concentration of carbon dioxide measure in ten years? How about one solid date for the eradication of a low-lying coastal city? If you must predict the apocalypse, it is only somewhat scientific if you can rationally argue for a deadline. And the science is only settled when the world ends (or doesn’t end) on time.

Plus ça change, plus c’est la même chose – the more things change, the more they stay the same. Happy 2012.

Singing

If you like music at all, take some advice from me and sing whenever you have a chance. It’s good for your soul, whatever that means. I’ve played music for many years, but I recently discovered how much singing affects my mental health. Live shows don’t count, because the crowd and the amplifiers drown you out. Sing when you’re alone in a quiet room, and fill it with your voice. Sing whenever you’re driving alone. Sing at work, as long as you aren’t disturbing anyone who can punish you. Just sing – you can probably play your own vocal cords even if you can’t play another instrument. If you sound awful, don’t worry about it. Nobody is going to harass you for singing poorly in private. Many people are too insecure to try it at all.

I’m saying this because something wonderful happens when we hear ourselves sing. It helps the brain. Matching pitch forces us to listen to the sounds that we produce, and quite possibly to produce sounds that we are not entirely comfortable hearing. This is a good thing! It teaches us to be more comfortable whenever we use our voices, and more confident in general. The sooner you get accustomed to hearing yourself make unfamiliar noises, the sooner you will be able to make noise in public with absolute confidence. Sometimes, that is absolutely necessary. Why not be prepared?

Degrees and Freedom

Here’s my idea of a good math lesson. I want to explain Euler’s formula, the cornerstone of multidimensional mathematics, and one of the truly beautiful ideas from history. In school this formula appears as a useful trick, and is not commonly understood. I think that is because students are denied enough time to wonder what the formula actually means (it doesn’t describe how to pass an exam). Here is Euler’s formula:

e^(ix) = cos(x) + i*sin(x)

This idea was introduced to me after a review of imaginary and complex numbers. Once the history and definition were out of the way, we completely freaked out at the idea of putting ‘i’ in the exponent, then practiced how to use it in calculations. I might have had a brief moment of clarity in that first class, but by the AP exam Euler’s formula was nothing more than a black box for converting rectangular coordinates to polar coordinates.

Many years later, I came across the introduction to complex numbers from Feynman’s Lectures on Physics, and suddenly the whole concept clicked in a way that it never had in school. Explained that way, I don’t think it is really that difficult to understand – but then, I’ve already managed to understand it. So I’ll try to communicate my understanding, and you can tell me whether it makes sense.

We need to start by generalizing the concept of a numeric parameter. The number line from grade school is an obvious way to represent a system with one numeric parameter. If we label the integers along this line, each mark corresponds to a grouping of whole, countable things, and the value of our integer parameter must refer to one of these marks. If we imagine a similar system where our parameter can “slide” continuously from one integer to the next, the values that we can represent are now uncountable (start counting the numbers between 0.001 and 0.002 if you don’t believe me) but opening up this unlimited number of in-between values allows us to model continuous systems that are much harder to represent with chunks.

Each system has a single numeric parameter, even though the continuous floating-point parameter can represent numbers that the integer parameter cannot. In physics, the continuous parameter can represent what is called a “degree of freedom,” basically a quantity that changes independently of every other quantity describing the system. Sometimes a “degree of freedom” is just like one of the three dimensions that you can see right… now, but this is not always the case. Wavefunctions in particle physics can have infinite degrees of freedom, even though the objects described by these esoteric equations follow different laws when we limit our models to the four parameters of spacetime.

Anyway, the imaginary unit or ‘i’ is just some different unit that identifies a second numeric parameter. If we multiply an integer by ‘i’, we’re basically moving a second parameter along its own number line that same distance. Apply the “sliding” logic from before and we can use the fractional parts between each imaginary interval. If this sounds new and confusing, just remember that any “real” number is itself multiplied by the real unit, 1. Personally, I don’t think that the word “imaginary” should be used to describe any kind of number, because all numbers are obviously imaginary. However, this convention exists regardless of how I feel about it, and nobody would know what to put in Google if I used a different word.

Why do teachers use this system where one implicit unit is supplemented by a second explicit unit? Simple – it was added long before anyone fully understood what was going on. The imaginary unit was the invented answer to a question, that question being:

Which number yields -1 when multiplied by itself?

The first people to ask this question didn’t get much further than “A number called ‘i’ which is nowhere on the number line, and therefore imaginary.” If those scholars had described their problem and its solution in a different way, they might have realized some important things. First, this question starts with the multiplicative identity (1) and really asks “which number can we multiply 1 by twice, leaving -1?” Thinking about it like this, it soon becomes clear that the range of values we can leave behind after multiplying 1 by another value on the same number line, twice, cannot include -1! We can make 1 bigger, twice, by multiplying it by a larger integer, or smaller, by multiplying it by a value between 0 and 1. We can also negate 1 twice while scaling it up or down, but none of these options allow for a negative result!

A clever student might point out that this is a stupid answer and that we might as well say there is none, but we still learn about it because amazing things happen if we assume that some kind of ‘i’ exists. We can imagine a horizontal number line, and then a second number line going straight up at 90° (τ/4 radians, a quarter turn) from the first. Moving a point along one line won’t affect its value on the other line, so we can say that the value of our ‘i’ parameter is represented on the vertical line and the value of our first (“real”) parameter is represented on the horizontal line. That is, a complex number (a*1+b*i) imagined as a single point on a 2-dimensional plane. In this space, purely “real” or purely “imaginary” numbers behave just like complex numbers with zero for the value of one parameter.

Now think about the answer to that question again. If our candidate is ‘i’ or some value up “above” the real number line, it’s easy to imagine a vector transformation (which we assume still works like multiplication) that can change 1 to ‘i’ and then ‘i’ to -1 in this 2D number space. Just rotate the point around the origin by 90°. When our parameters are independent like this, multiplying by ‘i’ some number of times is exactly like rotating the imagined “point” a quarter turn around zero that many times. I don’t really know why it works, but it works perfectly!
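You can watch this happen with any complex-number library – here’s a minimal sketch using C++’s standard complex type:

    #include <complex>
    #include <cstdio>

    int main() {
      const std::complex<double> i(0.0, 1.0);
      std::complex<double> p(1.0, 0.0);   // start at 1 on the real line
      for (int n = 1; n <= 4; ++n) {
        p *= i;   // each multiplication by i is a quarter turn around zero
        std::printf("after %d quarter turn(s): (%g, %g)\n", n, p.real(), p.imag());
      }
      // Prints (0, 1), (-1, 0), (0, -1), (1, 0): that is, i, -1, -i, 1.
      // Two multiplications by i turn 1 into -1, answering the question above.
      return 0;
    }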

We’ve seen that imaginary units simply measure a second parameter, and how this intuitively meshes with plane geometry. Now let’s review what is actually going on. Numbers multiplied by ‘i’ behave almost exactly like numbers multiplied by 1, but the important thing about all ‘i’ numbers is that they are different from all non-‘i’ numbers and therefore can’t be meaningfully added into them. The ‘i’ parameter is a free parameter in the two-parameter system that is every complex number. It can get bigger or smaller without affecting the other parameter.

Bringing this all together, let’s try to understand what Euler was thinking when he wrote down his formula, and why it was such a smashing success. He noticed that the Taylor series definition of the exponential function:

e^x = 1 + x + x^2/2! + x^3/3! + x^4/4! + …

Becomes this:
e^(ix) = 1 + i*x - x^2/2! - i*x^3/3! + x^4/4! + i*x^5/5! - …

When ‘i*x’ is the exponent, because the integer powers of ‘i’ go round our complex circle from 1 to i to -1 to -i and back. Grouping the real terms and the ‘i’ terms together suddenly and unexpectedly reveals perfect Taylor series expansions of the cosine and sine:
e^(ix) = (1 - x^2/2! + x^4/4! - …) + i*(x - x^3/3! + x^5/5! - …) = cos(x) + i*sin(x)

As each expansion is multiplied by a different free parameter, the two expansions don’t add together, naturally separating the right side of our equation into circular functions! We can just conclude that those functions really are the cosine and sine of our variable, remembering that the sine is an ‘i’ parameter, and it works! Because these expressions are equivalent, having a variable in the exponent allows us to multiply our real base by ‘i’ any fractional number of times (review your exponentials), and thus rotate to any point in the imagined complex plane. There are other ways to prove this formula, but I still do not understand exactly why any of the proofs happen the way they do. It’s not really a problem, because Euler probably didn’t understand it either, but I’d still like to come across a good answer someday. What I know right now is that any complex number can be encoded as a real number rotated around zero by an imaginary exponent:

e^(ix) = cos(x) + i*sin(x)
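If you’d rather see the grouping trick happen numerically than take my word for it, here’s a minimal sketch that sums the series with the real terms and the ‘i’ terms kept separate, then compares the result against the library cosine and sine:

    #include <cstdio>
    #include <cmath>

    int main() {
      const double x = 1.0;   // any value works; try a few
      double term = 1.0;      // running value of x^n / n!
      double re = 0.0, im = 0.0;
      for (int n = 0; n < 20; ++n) {
        // Powers of i cycle 1, i, -1, -i: even terms are real, odd are 'i' terms.
        switch (n % 4) {
          case 0: re += term; break;
          case 1: im += term; break;
          case 2: re -= term; break;
          case 3: im -= term; break;
        }
        term *= x / (n + 1);  // next term of the series
      }
      std::printf("series:   %.9f + i*%.9f\n", re, im);
      std::printf("cos, sin: %.9f,  %.9f\n", std::cos(x), std::sin(x));
      return 0;
    }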

Here is proof that certain systems of two variables can be represented by other systems of one complex variable in a different form, and the math still works! Euler’s formula is a monumental, paradigm-shattering shortcut, and it made the modern world possible. I’m not overstating that point at all: everything from your TV to the Mars rover takes advantage of this trick.

St. Jobs

I’m sure there will be glowing biographies about Steve Jobs and his many accomplishments in time, and that guy deserves every single bit of the massive praise that is heaped upon him. Some of the most interesting comments come from the journalist disciples who all but compare Jobs to Jesus at every opportunity, and from the corresponding messiah-doubters who say that Jobs was nothing more than a savvy businessman who understood timing, manufacturing, and product placement. Other contrarians are understandably uncomfortable with his role in the commercialization of independent software and his control over the iUniverse, seeing him as a sort of charismatic software dictator. Many of these people have Apple on their “modern hypocrites” list and might tell you so if the conversation wanders in that direction.

Here’s maybe the one valid way I could compare Steve Jobs to Jesus: Jesus was all about ideas that could outlast and defeat humans, no matter how powerful they might seem at the time. Steve made computers and computer systems that will outlast their owners. I can’t possibly imagine a day when my iPad (2) is any less useful or amazing than it is today unless it smashes, no matter what the next ten versions look like. We are going to have to explain to our kids that this is a weird new thing! Computers used to be rickety, noisy boxes with all these wires and different parts sticking out everywhere, and they used to break all the time when a competent engineer wasn’t available to keep things working! All you early majority consumers of a certain influential desktop operating system know exactly what I am talking about…

Somewhat ridiculously, the very approach that allowed Steve and Apple to end this massive problem with casual computing was his uncompromising, even autocratic management of the platform. It feels like I might be stepping on the dreams of the free and open software communities a bit here, but I think I’m starting to understand the actual logic in favor of Apple’s paradoxical mecha-fascism, if only because I program sounds and other fast things. Not every computer has the luxury of being some genius freedom-fighter’s personal data management device. Many computers have to control cars, and medical machines, and all those other things that can’t break or otherwise present an end user with some unpredictable software issue that needs debugging. When my grandma is trying to call me on video chat, it has to be the same way. That was his reasoning, I think, and I have to agree that it makes a lot more sense than it used to.

One day, when we’re donating these old tablets to needy kids or whoever, we might remember Steve by understanding what he wanted to create: a world united by its magical and powerful technology – technology we can use to do formerly impossible things, without losing all of our time in the process.