Future-Driven Development

Have you used Yarn? Yarn is like NPM but it has a “lock file” to prevent conflicting versions in sub-dependencies and sub-sub-dependencies from breaking your bundles. It is blowing up for good reason – the lock file is a great feature.

But we all know lots of people are going to stick with NPM, and it will turn into a whole schism until the mainstream switches, or until NPM absorbs the feature (my prediction). Why? Because everyone gets so worn down chasing the de facto official JavaScript best practices year after year that new (maybe worthwhile) ideas get ignored along with the rest of the cutting edge.

This is the sad result of too much of what we might call “future-driven development”, a pernicious malady affecting software projects near you. It comes in many forms; here are a few:

  • Building and re-building unit tests while a prototype is changing rapidly
  • Over-normalizing your database, adding abstractions and joins everywhere
  • Using ES2018 in 2017 with bloated polyfills because “someday they’ll be native”

In academia, you are expected to act like this. Researchers try zany architectures and plan for uncertain future scenarios because that is the whole point of research. If you’re doing R&D in any capacity this is somewhat true – it is dangerous to ignore anything promising and new for too long.

However, people also act like this way too much in the professional world. When building a dashboard for a customer, you are not trying to win Architecture of the Year, you are not building a reference implementation of All Current Best Practices, your dashboard might not always need to run for a hundred years, and you are not building the dashboard for yourself. The tools only matter so far as they continue to help you deliver an effective product.

Objection #1: How do you know what you’re going to need until it’s too late?

You don’t, and you never will. That’s where experience comes into play. You will never arrive at an ideal implementation for any particular set of requirements by following every last one of today’s best practices (most of which were developed to solve a Very Serious Problem that you might never have).

Objection #2: Doesn’t this make it harder to improve future standards?

This objection started out as a sarcastic caricature, which pretty much sums up my feelings about it.

This Trump Thing

Many of my friends are really upset about the 2016 election. Rather than speak off the cuff, I’d prefer to point them here to read some tactfully prepared comments, which I hope will help. Disclaimer: I didn’t vote in this election and wouldn’t have voted for any of the known candidates. There was so much anger, fear, and bitterness motivating support for these candidates, and it contrasted so drastically with my personal experience of 2016, that I felt it best to stay out of it all.

However I do read and think about culture and politics a lot (perhaps too much), so maybe my perspective isn’t worthless.

Based on what you’ve seen, you might decide that old bitter racists got mad at brown people and voted in a Nazi. If this is what you believe then yeah, there is reason to be scared for the future. But I don’t think it is accurate. Here are some of the actual reasons why half the country voted for Trump, all pretty much unsourced and off the top of my head.

Reason 1: History and Demographics

The Industrial Revolution is long over, and now the next one is in full swing. Centuries ago, when machinery was invented to perform tasks that had required raw human strength, the displaced laborers caused a stir, burning down factories and I expect getting all up in the politics of the day. In part, this is the same effect on a much larger scale. These days the machines don’t only do the heavy lifting, they do the fine details too. And many industries that used to require physical manipulation simply don’t anymore, because pure information is easier to work with. This means there is a whole generation, the last industrial generation, with nothing left to do. It’s sad. While the young people were going to college and preparing for the new high-tech information service economy, these old people have had basically no prospects, and nobody in politics has been sympathetic to them in a long time. Whether or not he can do anything about this, Trump was speaking directly to them the whole time, and he flipped lots and lots of the industrial-generation folks who had voted for Democrats their whole lives. You can take issue with these people believing that protectionism is a viable solution to their problems, but nobody else was proposing anything new to them.

Reason 2: Racist (etc) White People

OK, yeah, some white people are bigots. Most of those people jumped right on board. You can find their writings online (…or maybe spray-painted on a wall in your neighborhood) if you know where to look, but keep in mind that trolls like to masquerade as those people too. More importantly, your garden-variety racist with a low opinion of Muslims has never had a Muslim friend. Their bigotry is usually born from fear, which is born from ignorance. The proper response is outreach and pity, not ostracism and smugness.

Reason 3: Smugness

On the flip side, many of you have never had an old working-class rust-belt friend. You don’t understand them, and neither do the media folks who have been trashing their culture for decades. For example, the media has played up the racism angle to an unfair degree. Factory workers, coal miners, and evangelicals are people too, many of them smart and interesting people, and while they might not know everything, neither do you. If you have the stomach for it, read this prophetic piece and try to imagine how these folks have seen the coastal elites behave for decades. A surprising number of young college-educated people voted for Trump too, and I’ll bet more than a few did so because they simply picked the side with less smug behavior.

Reason 4: Corruption

I mean come on, by 2016 the Bush-Clinton family was as incestuous and rotten as the last generations of the Holy Roman Empire. Most of the people I know who voted for Trump weren’t particularly fond of him; they just couldn’t stand another anointed Yale scion being paraded around in front of them by Turner Media and the National Broadcasting Corporation. Many of them would have voted for Bernie Sanders if he had been on the ticket.

OK, that explains a lot, but we still have an impolite egomaniac with no political experience as president. We’re doomed!

Maybe, yeah, but probably not. America has been through a lot, and most people (yes even the Trump voters) are still basically good people.

What can you do to help? First, make friends outside your comfort zone. This situation is partly the result of years of people shutting out everyone and everything that makes them uncomfortable. We cannot function in the long term as a society where everyone does this. Understand that other people can arrive at other decisions that you don’t like or even hate, and you can still respect them as people. Also once you know them better, you will understand where they’re coming from, and you’ll even have a chance to sell them your ideas.

Second, don’t get disheartened. If you believe in something keep fighting for it. But again always respect your opponents as people, and remember how badly the “smug bullying” tactic just backfired. Play the long game. Be polite, but annoying, and keep it up for a long time. If you are on to a good idea and you can rally long-term support it will win out in the end.

Third, if you are still worried about the nightmare Nazi scenario, exercise your right to bear arms. Never incite violence, but remember that the very best insurance against fascism is a well-armed populace. Regardless of who might actually attempt a fascist coup, the gun-owning basically good Americans will be right there with you fighting shoulder to shoulder if necessary (which we all hope and pray it will never be).

Finally, read more books from the past. Civilization has been around a long time, and in most ways things are better than they have ever been. That won’t change. Your fears about a Republican Supreme Court making over-the-counter contraception illegal might be justified, but keep in mind that we aren’t talking about throwing people in jail for adultery. Maybe some immigrant families will be broken up, which is very sad, but the public is only going to tolerate this policy if they prioritize dangerous criminals like any old law-and-order administration. So if you’re here in America illegally, uh, you probably want to drive exactly the posted speed limit for the next few years.

Future progress for minorities and women and gays needs to be made overwhelmingly in the social sphere, where government will not be involved no matter who is president. Don’t take this political event as a sign that things can only get worse from here. And one day, probably in your lifetime, that last glass ceiling will indeed be cracked.

The arc of the moral universe is long, but it bends towards justice.

– Martin Luther King Jr.

How to do Programming

TL;DR: Identify assumptions and write them down.

There’s a common (I think) misconception that the programming trade is all about knowing obscure facts and doing hard math. From this it follows that a person has to get really good at math or know a huge number of things before doing programming.

Unfortunately, people who have this misconception can be discouraged from trying in the first place. It might be unrealistic to think that every single person not only can but also will want to do it, but I believe lots of people who hold this misconception could do very well at programming once they understand what it is actually like and put some effort into learning it.

In reality, being good at math or knowing all about compilers and language features does not help with a large percentage of the day-to-day work that most programmers do. Instead, their effort goes into writing down all the assumptions made by the high-level description of a feature. In other words, the requirements will say “display a count of the current total” and the programming work is about finding the assumptions implied by that description (“what does ‘current’ actually mean?” etc.), then writing them down explicitly. Once the assumptions are written down in the right way, that explicit representation is your code and you are done. Getting everything written down correctly used to be much harder, but with modern programming tools it isn’t even possible to make some of the most problematic mistakes anymore, and the tools can catch lots of other mistakes automatically. For everything else you can get help from a more experienced colleague, or just ask strangers on Stack Overflow.
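
To make that concrete, here is a minimal sketch of what writing down the assumptions might look like for the “display a count of the current total” requirement. Every specific in it (the Order record, the thirty-day window, the decision to skip cancelled orders) is a made-up assumption for illustration, not something any real spec said:

    #include <chrono>
    #include <cstddef>
    #include <vector>

    // Hypothetical record type, invented purely for this example.
    struct Order {
        std::chrono::system_clock::time_point placed_at;
        bool cancelled;
    };

    // "Display a count of the current total" hides several assumptions.
    // Written down explicitly (all of them guesses for the sake of argument):
    //   1. "Current" means placed within the last 30 days.
    //   2. Cancelled orders do not count.
    //   3. We are counting orders, not summing dollar amounts.
    std::size_t currentTotal(const std::vector<Order>& orders) {
        using namespace std::chrono;
        const auto cutoff = system_clock::now() - hours(24 * 30);  // assumption 1

        std::size_t count = 0;
        for (const auto& order : orders) {
            if (order.cancelled) continue;           // assumption 2
            if (order.placed_at < cutoff) continue;  // assumption 1
            ++count;                                 // assumption 3
        }
        return count;
    }

The C++ itself is beside the point; what matters is that every commented line answers a question the one-sentence requirement left open.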

There are programmers who would take issue with this description. I’m using the terms “doing programming” and “day-to-day programming” strategically, really to mean commercial or hobbyist programming for which there are mature high-quality tools and well-understood best practices. On the cutting edge of academia, and within programming projects that have very demanding requirements, advanced math and knowing lots of obscure facts can be much more important.

Basically, what I’m saying is that people tend to confuse the larger industry with that super-difficult hacker nerd work they see in movies and TV shows. In fact, the vast majority of programming work going on right now is the other kind. There are huge numbers of those jobs to be done, because it is where all the theory and advanced knowledge finally can be applied to a real-world problem. Many large software teams have so much of this kind of work that they split out a whole department for “requirements engineering”, which is taking the really high-level descriptions with tons of assumptions, and breaking those out into statements with fewer or no assumptions left in each statement, so that the code writers can focus on making the final code work. The best requirements are harder to write than all the coding work that comes after!

Maybe someone has told you before that programming is all about breaking problems down into smaller problems. It’s another way of saying the same thing.

 

The Psychology Journal Ad-Hominem

Have you ever had someone tell you that liberalism is a mental disorder? Or that right-wingers vote for bad ideas just because they have an irrational world view? It’s a pretty common ad hominem in politics, and not a compelling one. The idea is a tautology: Find two people who have incompatible ways of looking at the world, and each will think the other’s way is somehow defective. But this silly tactic shows up in elite intellectual discourse way too often.

An early example is The Anti-Capitalistic Mentality by Ludwig von Mises, a book-length psychoanalysis of the author’s political opponents. The short version: people who resent their betters turn to communism since it means bringing everyone else down to their level. It’s not subtle.

More recently, academic examples skew leftward. To be clear, this is not a claim that left-wingers disproportionately rely on ad hominem. Nor is it a claim that there is some disproportionate weakness in the right-wing psyche. It’s probably because more psychologists are left-wing than ever before, and since the attack is based on psychoanalysis it shows up a lot in their literature.

Recent examples can be found with a quick internet search. Here are a couple:

Explaining the Appeal of Populist Right-Wing Parties in Times of Economic Prosperity

The traumatic basis for the resurgence of right-wing politics among working Americans

Although these papers are more nuanced and focused than some all-encompassing manifesto, they are still ad hominems. The authors can and will claim that their research is purely academic and not intended as any kind of attack, but let’s be honest, there is one big obvious way this kind of research will always be used. It will be served up on popular political websites and compiled in brightly colored books, ready to be used as Thanksgiving ammunition.

The literature often uses global warming as an example, and a few things are going on here. In many circles global warming is an indisputable fact, so if you want to deploy some political ad hominem against the people who tend to be skeptical, it’s a great starting point. Also, the global warming movement hasn’t succeeded politically. This approach serves both as one more play to convince voters after all, and as something to offer environmentalists who are disappointed with and confused by the lack of success.

The classic example in the subgenre is Lewandowsky et al.’s famous Recursive Fury, a paper psychoanalyzing those who reacted poorly to another paper psychoanalyzing global warming skeptics. How it got through more than five minutes of planning without being abandoned, we may never know. In any case it was eventually retracted.

Another interesting example is On the relation between ideology and motivated disbelief by Campbell & Kay. To its credit, the paper does attempt to strike a balanced tone, supporting an ad hominem attack against both political parties. Still, they put a whole lot more effort into the global warming part, and the fourth study might be a strategic addition to give the impression of dispassionate science.

These ad hominems have been around for a long time and they aren’t going anywhere soon. But they are silly and don’t belong anywhere near academia. Even with the veneer of science, the tactic only convinces people who are inclined to accept the ad hominem anyway. It looks desperate and stupid to everyone else.

Fun with Tailgaters

Commuting in the car means a lot of time spent accidentally thinking about how it could be improved. I’ve come up with several ideas for accessories, and this first one is useless but very fun. For some time I was planning a system where buttons on the dashboard displayed messages on the rear bumper. Stuff like “C’MON, REALLY?” to display to tailgaters, and maybe amusing stuff like “WHAT’S WRONG WITH THIS GUY?” when someone up ahead is driving poorly. It would be a pretty straightforward bank of switches plugged into an Arduino, and either a screen with pictures or (better) a train-schedule type letter sign, if those can still be found.

A few weeks back, however, I remembered that dog picture from the Internet that says “deal with it” when the sunglasses drop:

This one

I realized there couldn’t be a classier way to respond to tailgaters than with a live-action version, so I decided to make one. It would be a simple Arduino project, with a servo that lowers the sunglasses over the eyes of a stuffed dog. Then a relay would light up the “Deal with it” text on cue.

Setting up the Arduino code and wiring the components didn’t take more than a few hours:

Arduino wired to button and relay

Then there was the simple matter of printing out some sunglasses and attaching them to a servo (cardboard backing, super glue, and a zip tie for the arm):

The dog with its future sunglasses

Finally the sign had to be prepared. I decided to go all out and buy a real neon sign, since that is totally fantastic and you can get them custom-built. The sign arrived with this nice label:

Packaging for sign

I also opted to buy a pre-packaged relay to switch the sign, since I’m not a trained electrician and you don’t want to trifle with AC power from the wall outlet. The PowerSwitch Tail II is great, you just plug in 5V and ground wires to the side and it works like an extension cord with a switch. The rest of the wiring was just a couple of leads going to 5V and ground, and one pull-down resistor for the button. I also got a 300 watt inverter to provide power from the car battery, and a big red button to activate the sign. Wiring it all together for a test run, it looked pretty good:

Deal with it - Live Action

The sign turned out to be bigger than I had figured, and it takes up the whole back window of the car. Luckily it has a clear backing so my view isn’t obstructed. There’s still some polishing to go, but it’s working very well.

Nobody has tailgated me anywhere near the threshold level for sign-activation yet (perhaps this is rarer than I thought) but it’s bound to happen eventually. You know when you’re waiting in a line of cars to pass a slow-moving truck, and some chucklehead decides to tailgate you, so that maybe you’ll do the same to the car in front and so on (I assume)? The next time that happens to me, I’ll press this button on the dashboard:

The big red button on the dashboard

And I’ll take my sweet time to finish the pass. Meanwhile the offending driver will watch the sunglasses drop onto the dog as the neon sign lights up.

Arduino code is on Github if you’re interested.
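
If you’d rather just skim the gist here, this is roughly the shape of the sketch: button in, servo and relay out. The pin numbers, servo angles, and timing below are assumptions for illustration, so the actual code on Github may differ:

    #include <Servo.h>

    // Pin assignments are assumptions for illustration only.
    const int BUTTON_PIN = 2;   // big red dashboard button, pull-down resistor (pressed = HIGH)
    const int RELAY_PIN  = 7;   // PowerSwitch Tail II control input (HIGH = neon sign on)
    const int SERVO_PIN  = 9;   // servo that drops the cardboard sunglasses

    const int GLASSES_UP   = 10;            // servo angle with sunglasses raised (assumed)
    const int GLASSES_DOWN = 100;           // servo angle with sunglasses over the eyes (assumed)
    const unsigned long DISPLAY_MS = 8000;  // how long to hold the pose (assumed)

    Servo glassesServo;

    void setup() {
      pinMode(BUTTON_PIN, INPUT);
      pinMode(RELAY_PIN, OUTPUT);
      digitalWrite(RELAY_PIN, LOW);    // sign off
      glassesServo.attach(SERVO_PIN);
      glassesServo.write(GLASSES_UP);  // start with the sunglasses raised
    }

    void loop() {
      if (digitalRead(BUTTON_PIN) == HIGH) {
        // Drop the sunglasses first, then light the sign.
        glassesServo.write(GLASSES_DOWN);
        delay(500);                     // give the servo time to travel
        digitalWrite(RELAY_PIN, HIGH);  // "DEAL WITH IT"

        delay(DISPLAY_MS);              // hold while finishing the pass

        // Reset everything for the next tailgater.
        digitalWrite(RELAY_PIN, LOW);
        glassesServo.write(GLASSES_UP);
        delay(500);
      }
    }

The PowerSwitch Tail does all the dangerous AC switching internally, which is why the sketch only ever touches a 5V control pin.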

NSA Snoops

This whole NSA melodrama is rather interesting. Once again a disgruntled government employee has leaked juicy top secret documents, and the fallout has divided politicians roughly along that quirky libertarian/authoritarian axis. Meanwhile, the public is freaking out because privacy online is an illusion, and Big Brother has been watching all along. George Orwell was right! We didn’t listen!

Along with several other people, I’m not exactly surprised by this revelation. What does the NSA even do if not this? How else does a company like Palantir make money? Are people actually surprised that planning crimes on Facebook is not a solid business plan?

Less obviously, why would anyone expect an ad-supported media company to stand up for privacy rights or whatever the libertarians want? People are acting like this is the cyberspace equivalent of strong-arming banks in order to search everyone’s safety deposit boxes. I’d suggest that it’s closer to reading people’s mail and rooting through their garbage. If someone invented a cheap stealth robot to do exactly that, do you really think the NSA wouldn’t be their top customer?

Signals intelligence is a sketchy business, whether or not you agree with the tactics. Most contractors who are not fleeing the country probably figured that out a long time ago.

Hobgoblins

Here’s a quote from H.L. Mencken:

The whole aim of practical politics is to keep the populace alarmed (and hence clamorous to be led to safety) by menacing it with an endless series of hobgoblins, all of them imaginary.

I’ve always been a bit amused by hobgoblins, having seen quite a few of them influence American pop culture firsthand. This isn’t to say that none of these scare stories have any basis in reality, or that I have always seen through the hype. However, it will be fun to maintain an official List of Hobgoblins that this millennial remembers. Here it is:

Nuclear Winter, Pollution, Japanese Manufacturing, Ozone Depletion, Saddam Hussein’s WMDs, Ecological Collapse, Skynet, Global Warming, Y2K, Terrorists, SARS, Russian Hackers, Nazi George Bush, Immigrants, Illuminati Bankers, Genetically Modified Food, Hyperinflation, Climate Change, Nazi Barack Obama, Ocean Acidification, Chinese Hackers, Nuclear Meltdowns, Bath Salts, Climate Disruption, Pre-Existing Conditions, ISIS Caliphate, Immigrants, Nazi Donald Trump, Russian Hackers, Immigrants, Nazi Donald Trump, The Deep State, “China”, Nazi Donald Trump, SARS-CoV-2…

The list approximates a chronology, and it will be updated.

The Pixar Princess

A surprising number of animated classics have been created during my lifetime. Two important revolutions happened in America at the end of the 20th century: the “Disney Renaissance” and the arrival of computer-based animation. While the explosive impact of 3D was hard to miss, another important factor was subtle and sociopolitical.

The Little Mermaid was a regression in one sense, to the animated Broadway-musical royal romances that defined early Disney animation. In another sense, it broke new ground for the format. Ariel is the titular 19th-century ocean-dwelling heroine who makes a Faustian deal to win the heart of Prince Eric, a boring but beautiful human. As only the fourth official Disney Princess, she does something other than be in distress, which is notable. The box office returns were impressive, and the royal romance was back in a big way. However, that whole singing princess trope hasn’t transcended its gendered appeal to this day. Consider that Pixar out-earned Disney at the box office without featuring a princess for 17 years. Or consider Halloween costume sales.

Anyway, let’s continue. Belle is also an “interesting princess” with all the book learning and such, but her character has more depth in that she learns to love a hideous creature. There’s not much else that needs to be said here, but imagine if the genders were switched!

Jasmine is the bride of Aladdin. She is notable for being non-European and owning a tiger. Perhaps because boys would balk at Beauty and the Beast, the action in her movie revolves around the illegitimate title prince. The Lion King doesn’t actually marry an official Disney Princess, which is just as well because lions are polygynous cannibals.

Pocahontas is next, featuring a quasi-historical (Native) American. The wheels were coming off the royal romance gravy train by this time, and her movie was slightly overshadowed by Pixar’s explosive debut. Animation would be changed forever; 2D was suddenly old-fashioned and unmarketable (see The Iron Giant). While its shiny new rendering process got a lot of attention, Toy Story was also at the vanguard of a different narrative technique. Gone were the musical numbers and pretty princesses – the only romance in Toy Story is between Woody and Bo Peep, and the framing device literally casts them as role-playing toys.

That stroke of genius allowed the franchise to explore mature themes like jealousy, existential angst, and the social contract, while basing all the action around a child and his possessions. Perhaps there is some significance to Andy’s gender and the fact that his pretend play always involves aggressive conflict between good and evil. The neighbor Sid takes this to a perverted extreme, obliterating and mutilating toys, while his sister Hannah has them share idle gossip over tea.

In any case, Pixar’s movies have avoided the royal romance trope almost entirely. Shrek absolutely wallowed in fairy-tale nonsense, and eventually The Princess and the Frog and Tangled introduced Tiana and Rapunzel as the first Black and 3D Disney Princesses, respectively. Meanwhile, Finding Nemo celebrated guardianship and adventure, The Incredibles focused on ability and the family unit, and Ratatouille studied ambition and creative expression. The latter did have a romantic subplot and a peerage aspect which was subversive at best.

To get to the point, Merida the Brave is scheduled to become the first official Pixar Princess in July. This is interesting for several reasons: First, she stays single until the end of the movie! Her three would-be suitors are not very charming, to say the least. Second, she doesn’t sing, except for one Gaelic lullaby. Finally, Merida isn’t actually the first romantic female lead in a Pixar movie. That honor goes to EVE, a machine with a fusion reactor and a minimalist design supervised by Jony Ive. Clever.

I’m leaving out several other animated features of note, like Wallace and Gromit, Persepolis, and Coraline, and that isn’t even mentioning Miyazaki or the rest of Japan. Here’s to all these great artists, and congratulations to Princess Merida!

The Touchscreen Paradigm

Programming is evolving faster than ever. In recent years, mobile platforms have broken the software market wide open, and most implications of this disruption are yet to be discovered. However, some effects are already obvious. Software has transcended the limitations of mouse/keyboard/gamepad input, since mobile devices integrate touchscreens with cameras, microphones, speakers, and wireless connections. I call this “the touchscreen paradigm” but it refers to all of those now-standard inputs and outputs.

This hardware generalizes to an unprecedented number of applications. A typing keyboard can be simulated by key images on the touchscreen, although that experience has decidedly inferior ergonomics. Mouse clicks are replaced by touchscreen taps, and while this system has no provision for “hover” interactions, other forms of mouse control are improved. Drawing is very awkward with a mouse, since the brain has to map the mousepad surface to the display in real-time. Touchscreens eliminate this problem, and in fact they are functionally similar to high-end drawing tablets with integrated screens that have been available for some time. Wacom, a manufacturer of these computer accessories, now sells a high-end stylus that integrates with mobile software.

Other applications go beyond anything that is possible with a mouse and keyboard. Multiple finger touches can be processed at once, making “Minority Report” interfaces easy to build in software. Microsoft put significant capital into a tabletop touchscreen computer called the Surface Table (re-branded as the PixelSense). However, similar interfaces can be added to mobile devices with software, such as this Photo Table application. Fortunately for independent developers, the barriers to entry in mobile software are very low because standard hardware already exists.

These examples barely begin to fill the space of possible touchscreen applications. My phone is already a feature-rich camera, an “FM” radio, a guitar tuner, an SSH client, and a flashlight. Those products have been manufactured before with dedicated hardware, but mobile software is also being used to invent completely new technology. Products which require a touchscreen, audio/video input/output, or an internet connection can be built entirely in software and sold as mobile applications.

As a software engineer, this is obviously a good thing. However, the sheer number of new applications that are possible on mobile platforms presents an intimidating problem: what to build next? Customers might be able to describe what they’d pay for, but they don’t always know what they’ll want to buy in the future. The first generation of application programmers probably experienced a similar feeling. It’s inspiring and terrifying at the same time.

Non-Player Characters

Here’s an interesting idea. This article mentions “non-player characters” in the context of a role-playing game, and proposes something rather unsettling:

Many of us approach the other people in our lives as NPCs.

I’ve been thinking along similar lines. People often imagine strangers as incidental scenery, part of the social environment. This is understandable given the limits of human knowledge – there simply isn’t enough time or mental capacity to understand very much about very many people. However, we often forget that this perspective is only a necessary convenience that allows us to function as individuals. For example, if you’ve ever been in a rush to get somewhere on public transportation, you’ve probably felt that bit of guilty disappointment while waiting to accommodate a wheelchair-bound passenger. Here comes some person into my environment to take another minute of my time, right? If you use a wheelchair yourself, this delay happens every time you catch a ride, and that frustration simply does not exist. If anything, I would imagine that disabled passengers feel self-conscious every time they are in a situation where their disability affects other peoples’ lives, even in an insignificant way.

Has this always been true? Probably to some degree, but the modern media environment seems to especially promote it. Good fiction writing communicates the thoughts and motivations of relevant characters, unless they are complete unknowns. This means that any meaningfully observable character has some kind of hypothesized history and experience informing their participation in the story. Film is different, in that a script can describe an “evening crowd” in two words, but the realization of that idea can involve hundreds of extras, living entire lives and working day jobs that barely relate to their final appearance on the screen. We can assume that their real lives intersected with the production of that scene on that day, but it’s really the only significance that their identities have in context.

With interactive media, the idea of a “non-player character” has appeared in many forms, and academics study how they can design the best (read: most believable) fictional characters for interactive environments. Here the limited reality of these characters is even more pronounced. In video games, non-player characters have lower polygon counts, fewer animations, and generally use less code and data. This is a consequence of the limited resources available for building a virtual environment, but the effect is readily apparent and forced.

Does this mean video games shouldn’t include background characters? Not really. What I’m suggesting is that we should be careful to see this phenomenon for what it is: an information bias in favor of the protagonist, which necessarily happens while producing media. It shouldn’t ever be mistaken for a relevant characteristic of the real world. This holiday season, when you’re waiting an extra minute or two for a disabled stranger, or expecting better service from a tired professional, remember that he or she probably has lived a life as rich and complicated as your own, and try not to react as if he or she is just some kind of annoying scenery. Whoever it is might return the favor, even if you never realize it.