Pandemic Woo

Here we are part-way through 2020, the year humanity started a war against the common cold, and lost.

“How DARE you, it’s much worse than the flu!” someone howls.

Yes, sure, maybe in a certain context. It’s more contagious than a flu, it’s new so there is no vaccine, and it will lead to a great many lower respiratory tract infections and deaths, possibly more than we have seen in decades. Any virus that jumps to humans from another species has the potential to be quite a bit more dangerous until it evolves toward its attenuated endemic state. But contrary to what prominent experts still say, COVID-19 and all endemic human coronavirus diseases are colds. And although colds are bad, the long-term lethality of every new species is wildly over-estimated.

“You’re not a scientist, you know nothing about how SCIENCE works,” someone else heckles.

“I fucking LOVE science!” contributes another.

Here’s the problem: Science as you know it is dead. Unplug the ventilator because this was it, this was the last straw, there will be no coming back from this. The folks running the show do not know what they are doing, they have made a huge mistake, and the consequences will be too big to ignore. I’ll explore this claim in detail, and I did sit through the quarantine, so now it is only fair that you have to sit through my rant.

Science is dead

We’re witnessing the end state of a toxic culture that punishes disagreement, rewards sycophancy, and worships consensus. I’m an engineer but I do know how this culture works, and I’ve avoided it like… some sort of plague. Being a scientist today is about the worst possible job for someone with more than a teaspoon of curiosity, because most of what you know as “science” is a sham, run by the pettiest gang of bullies on the planet. Disagree with the favored consensus? Good luck applying for the zero projects that will talk to you, loser. Slog through six months of research? Congratulations, your boss will steal every discovery to get tenure.

Thomas Kuhn said basically the same thing in The Structure of Scientific Revolutions. It’s depressing to think that even if Popper was right about how science should work, Kuhn was right about how it actually did work – right up until it died. Which was just now.

Aren’t we talking about a virus?

Of course! The virus.

Respiratory viruses have been mutating and jumping between host species since before humans (or heck, mammals) evolved. Until the original SARS-CoV, there were four known species of coronavirus circulating in humans, and they’re all still around causing common colds year after year. We have no idea how many other species went extinct in the past. Remember that molecular biology is less than a century old, and the tools needed to analyze DNA “really well” have only existed for a few decades. With more humans on Earth, and more animals living alongside humans, viruses might be jumping to humans more frequently than before, but we still barely understand this phenomenon.

As mentioned above, a virus can be much more virulent (deadly, basically) in a new host species right after it jumps to that species. But if something can adapt enough to jump between species, that same flexibility means it can continue evolving, and probably in a less-dangerous direction, because the strains which do not kill or incapacitate their hosts right away are fitter, i.e. the host will have a chance to show up at work and cough onto other hosts. There are lots of similar theories in the literature, but I’ll add that out of seven examples, we have never seen a coronavirus that is both contagious enough to become endemic, and more deadly than a flu. While this new one may well become endemic, in that case I would be very surprised (and we would be in a lot of trouble) if the higher mortality estimates hold.

Personally I don’t think that millions of healthy people will ever drop dead from one of these diseases in the space of a year, like they once did from a flu in 1918. But that’s speculation – stay tuned for more in a bit.

Theory and experiment

This post-modern mess that is contemporary science bothers me so much because I come from an old school, one that doesn’t get along with Kuhn. He argues that science in its normal phase always operates with respect to a dominant paradigm, such as the Geocentric Universe or the Standard Model. Problems inherent to that paradigm will cause disagreement between scientists, and eventually a rebel faction will install a whole new dominant paradigm in a coup. There is no paradigm without problems (see incompleteness), and there is no objective way to judge one paradigm against another. Bad news all around!

If no-one can arbitrate whether a contending paradigm is better than the dominant one, then meaningful scientific progress can only happen by political revolution. And if scientific achievement is the process of winning a popularity contest, then nothing about science makes it special or different. Kuhn’s science is basically plain-old politics with silly customs. This was famously implied by Paul Feyerabend in Against Method, even though some academics have tried to interpret it otherwise.

Before all of the silliness, science actually was special. It was the light of modernism, finally illuminating the demon-haunted world. It was the one institution that was supposed to be immune to human nature, because it was not a government but a method, an idea:

  1. Imagine a theory about how the world works in some small way.
  2. Do an experiment to test your theory. If it fails, discard it.
  3. Repeat until you collect the complete Theory of Everything™ or you die, whichever comes first.

Karl Popper distilled “science as falsification”, the idea that every meaningful experiment must attempt to disprove a theory – a principle that survives in the statistical null hypothesis. In 2010 Moshe Vardi criticized the encroachment of numerical methods and computer models into the theory-and-experiment core of the scientific method. I believe strongly that science should be understood as this ideal, because it has no value as anything else.

The reputation of science was built by people who treated it like an idealized method, too. In 1925 religious modernists sold the method to the American public at the Scopes Trial. Prestige accumulated in various wars for most of the 20th century. By Kuhn’s time, the scientific method had been applied (with intent and at unprecedented scale) to help win freedom around the world using technology like radar, metal alloys, atomic bombs, and more.

Any random post-modernist theory couldn’t put a dent in that reputation, but Kuhn’s idea had a profound effect on the behavior of scientists themselves. Many more started cutting corners with the method, hustling their pet theories and paradigms without experiments to support them. Or arguably even worse, substituting computer models for the experiments and pretending that their papers were something other than fiction. We have to admit that Kuhn accurately described the behavior of a community of people. The tragedy is that the average response was not to work on improving that behavior, but to double down and abandon the only reason why anyone cared about the community in the first place.

Whether Kuhn’s vision was a self-fulfilling prophecy or just an inevitable result of natural sociological law, we cannot tell at this time. In either case, science has since become thoroughly post-modern, lost its reputation, and died.

Right, because of a virus!

Not quite. Perhaps you are wondering why this particular episode signals the end, and not the replication crisis, the global warming business, or something else. Well, it’s because this time people have actually gone ahead and done shocking and terrible damage to civilization on the recommendation of the scientists. That damage will be impossible to ignore, it will force us to investigate exactly what went wrong, and we will find out, uh, that the wizard behind the curtain has no clothes or whatever.

Surely this can’t be true! The only alternative to radical suppression and mitigation would be utterly devastating, that’s what the president said, right?

Well no, this is almost completely backwards. States and countries are pulling back on their lockdowns, and the peasants are rebelling against quarantine, but we will fail to see the “second wave” that was predicted (although cases may rise again in the winter). While I’m not going to try to unpack a bunch of preprints which are hinting in this direction, it’s basically because we’re closer to herd immunity than prominent experts’ readings of the available data suggest.

[NOTE: This turned out to be way off. I underestimated both the seasonality and the speed of mutation, meaning we never got to herd immunity for a variant before the next one evaded immunity enough to spread again.]

Meanwhile, the damage done by radical suppression and mitigation will be far worse than those same experts are willing to acknowledge. Let’s review some forms that this damage may take – most references are anecdotal news reports because these effects are not being studied by public health professionals yet:

  • Many people with heart attacks, strokes, and other medical emergencies are avoiding the hospital because they are afraid that catching COVID-19 is a bigger risk than staying home. Something like a stroke is very sensitive to how quickly it is treated, and these patients are dying much more often when they wait to seek treatment. Many of the survivors will suffer heart damage, brain damage, and shortened lives as well.
  • Similarly, people who are supposed to have elective procedures like cancer treatments or heart bypass surgery (yes, that’s usually an elective procedure) now have to wait until either hospitals will accept them again, or their condition becomes so bad that the procedure is no longer classified as elective. Delay in these cases will also lead to additional deaths.
  • Many people who are right now turning to drugs and alcohol in quarantine will relapse or become addicted for the first time. Many of those will then never recover from their addiction, and some will overdose on hard drugs or die of alcohol poisoning.
  • Many people are committing suicide, both because of economic hardship, and because this is by far the scariest and/or loneliest experience of their lives. Increased suicides will continue after the quarantine is over, because some people will not get their jobs back, and some people will slip into a rut of depression from which they never recover.
  • Panic attacks can cause real heart damage. In general, stress and fear can be very bad for health.
  • More people overall may be dying in car crashes. How can this be happening when there is less traffic on the roads? Well, more people are speeding because the roads are empty and the cops are busy arresting people for not wearing face masks!
  • Most victims of domestic abuse will not be killed by their abusers, but these people are now trapped in quarantine with their abusers and suffering. These people will bear physical and emotional scars for the rest of their lives, some will turn to substance abuse or suicide to deal with their trauma, and some abused children will grow up to continue the cycle of abuse.
  • Some families that would have managed to stay together otherwise will be stressed to the point of breaking by constant contact. Others will benefit from the time spent together, some families should break up, and many children probably will be conceived during quarantine, so it’s hard to say for sure whether this will be a net-negative.
  • School-age children in every household are staying home, of course. Many children from lower-income households rely on school-provided meals to stay healthy, and in general the disruption to their education will have downstream effects.
  • It won’t happen as much in the first world, but there will be surges in diseases like tuberculosis, as well as starvation, in poor countries due to the lockdown. I would bet that these additional deaths alone will at least double the (official) total killed by the virus globally.
  • As we move further out along this limb, we can guess that some parts of the world will see tyranny and the general breakdown of social order, or even outright war. Almost every country has slipped at least one small notch in this direction with the imposition of questionable and/or unconstitutional lockdowns. Some are already going much further – Hungary’s prime minister basically suspended parliament, for example.

Finally and most controversially, deaths from COVID-19 could end up being greater due to the lockdown and panic. This is for at least three possible reasons:

  • In several hotspots, nursing homes were compelled to accept recovering COVID-19 patients, due to fear of scarce hospital beds that never materialized (sadly ICU beds are a different story). Apparently the NY government wants you to forget that this happened.
  • Mitigation can only slow the spread. If the pandemic does not end with a vaccine, roughly the same number of people will be exposed to the virus regardless of mitigation efforts. This means that people not at risk but still in quarantine (mostly the young) will be susceptible or infectious for longer, and it is possible that many more vulnerable people (mostly the old) will therefore be exposed, before herd immunity is achieved.
  • Most people who are diagnosed with COVID-19 will be overcome with anxiety. As a result their prognosis will be worse than it would be if they thought they had a mild disease. It has even been hypothesized that fear can be the most significant cause of a person’s death. I am not claiming that the severity of COVID-19 is “all in our heads”, but ignoring this factor would be foolish.

Reckoning

Let’s imagine that the effects of radical mitigation do in fact turn out to be net-negative. The immediate question is then: What could we have done differently? How should scientists and policymakers have acted instead?

With the growing powers of hindsight come answers. First and most outrageously, prominent experts made zero effort to communicate the true limits of their understanding, and they should have known better. Most laypeople won’t dig through research papers looking for stated assumptions, but they’re not less intelligent than us, folks. We need to admit that we’re unsure about every one of our assumptions, every time. We need to interrupt and correct reporters when they get things wrong. If we do not do this, ordinary farmer-joe type citizens will be suspicious, and they will be right to be. The confidence game of pretending to know everything is over.

But even if we were looking honestly at the data, how could we have avoided a wave of sick people overwhelming hospitals, without extended lockdown? In that case, as soon as we learned that the elderly and immunocompromised were the people seriously endangered by this virus, we could have quarantined them and gotten to work exposing everyone else as fast as possible. Most professionals backing this idea suggest that it could be done in stages based on risk tiers, which would roughly approximate age groups from young to old (excluding the immunocompromised). Essentially, we could have been racing for herd immunity, against the spread into the most vulnerable populations.

Prominent experts and media people are not recommending this right now, and some have gone further to insult anyone who suggests that this risk could be acceptable and even strategic. These people are cowards. They should, and they soon will, lose all credibility in the domain of public health policy.

The fate of scientific discovery

Let’s take an abbreviated (and slightly unfair) tour through the corpse of the sciences, and conclude with an attempt to solve Kuhn’s conundrum.

  • Theoretical physics: We need another trillion dollars to build the next-size-up particle accelerator, so that it can probably just tell us the Standard Model works, again. What do the finely-tuned parameters in the Standard Model actually mean, anyway? And what is renormalization, other than a glorified fudge factor? Hey look at this clunky new alternative to String Theory, which also predicts nothing new, testable, and correct.
  • Applied physics: We don’t get the biggest budget, but we did develop some lasers and rockets which could be useful. Unfortunately however, our research is not likely to yield an answer for the meaning of life.
  • Astronomy and cosmology: Heyy, we got a bloop! It’s definitely, positively, absolutely, two black holes merging. One is, uhhh, 25 solar masses, and the other is, uhh, 17 solar masses. The data fits!
  • Chemistry and biology: Actually doing gangbusters over here. We just figured out how to chop up DNA and rearrange it, so the future is going to be terrifying.
  • Sociology: Kuhn proved that science is a social phenomenon. Also, everything else is a social phenomenon.
  • Climate science: Science is in the name, you can’t get more science than that! We are currently researching why white Republicans are in denial about it.
  • Economics: We’re just as much scientists as those theoretical physicists! Why don’t we get a trillion dollars too?

In the not-too-distant future, “science” in the popular understanding will revert to being just a method, and the academic community built around it will evaporate onto internet message boards. Much of the same work will happen, and different paradigms will remain incompatible with one another – Kuhn was right about that. The big prize, that prestige which had been bestowed on the former scientific community, will be claimed by the applied sciences and engineering disciplines, where it belongs. Everything else will be understood as something like “natural philosophy” (which is fine too).

We’re left with a serious problem: All paradigms are incomplete, and so the ideal scientific method must always have discontinuities in the real world. How can any field move from one paradigm to the next without the method falling apart? Are we doomed to flail forever in the eternal darkness of political quagmire?

No! In fact the applied sciences are precisely those which can always be applied, so for them at least there is a solution. We can evaluate every paradigm in the applied sciences, and indeed compare them against one another, by observing whether they are useful. Call this process “paradigm renormalization” if you like:

  1. Before a paradigm is useful, it can be superseded by any other paradigm which is already useful (if there is overlap).
  2. A useful paradigm can in turn only be superseded by another more useful paradigm (if there is overlap).

Look through history, and this same pattern is followed by successful paradigms until the mid-20th century. Ballistics is useful. Fourier analysis is useful. Oxygen theory is useful. Relativity and Quantum Mechanics are both useful. Evolution wasn’t useful for a long time, but today we use it to develop flu vaccines. Utilitarian science is not a brand-new concept, and it bears some similarity to the ideas of Imre Lakatos.

We have an interesting decade ahead of us. Will we actually lose this war against respiratory viruses? New treatments and cultural changes will make a big difference, which is great news. But I’m not so sure that we will be able to beat evolution at its own game anytime soon. Be safe, wash your hands, and enjoy the ride.

Disagreement

People disagree with each other, which is OK, but we’re very bad at it, which is not. Nowadays disagreement is commodified and amplified by “social” technology, so everyday arguments can lead to meaningful progress across the world. But if we’re bad enough at disagreeing, those same arguments are gonna lead to war!

Pop quiz: Does your side create problems? Does the opposing side solve problems? If you answered either question with “no” then you’re disagreeing badly. Most people aren’t sociopaths, and bad motives are not the cause of common disagreements. But most people aren’t stupid, either – at least no more than the usual amount. So if most people are basically good-natured, and most people are basically rational, why do they still disagree?

Obviously it’s because they have different priorities! Let’s say that your top priority is solving Problem X, but mine is solving Problem Y. We’re totally fine so far, let’s just solve both. But what if solving Problem X makes Problem Y worse, or vice-versa? Now we have a disagreement, even though neither of us is ignorant of either problem. Scale this up so that there are many millions of problems, and any action will solve some of them while making others worse. That’s the real world.

And don’t try any of that “the free market will raise the standard of care for poor people too” or “averting environmental crisis is good for the economy” bullshit. You’re picking one thing to prioritize over another and you’re rationalizing your choice. Get better at disagreeing, by being honest with everyone – starting with yourself.

Future-Driven Development

Have you used Yarn? Yarn is like NPM but it has a “lock file” to prevent conflicting versions in sub-dependencies and sub-sub-dependencies from breaking your bundles. It is blowing up for good reason – the lock file is a great feature.

But we all know lots of people are going to stick with NPM, and it will turn into a whole schism until the mainstream switches, or until NPM absorbs the feature (my prediction). Why? Because everyone gets so worn down chasing the de-facto official JavaScript best practices year after year, that new (maybe worthwhile) ideas get ignored with the rest of the cutting-edge.
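For anyone who hasn’t seen one, the whole point of a lock file is to record the exact resolved version of every package in the dependency tree, transitive dependencies included, so that two installs from the same manifest cannot silently drift apart. A minimal sketch of what that looks like – the package names, version ranges, and versions here are all invented for illustration:

```
# yarn.lock (illustrative fragment; names and versions are made up)

# Two different semver ranges for the same package, pinned to one resolution:
some-lib@^1.1.0, some-lib@^1.2.0:
  version "1.3.0"

other-lib@^2.0.0:
  version "2.4.1"
  dependencies:
    some-lib "^1.2.0"
```

With every range pinned to a concrete resolution, rebuilding node_modules on another machine yields the same tree, which is exactly what prevents the sub-dependency breakage described above.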

This is the sad result of too much, let’s call it “future-driven development”, a pernicious malady affecting software projects near you. It comes in many forms, here are a few:

  • Building and re-building unit tests while a prototype is changing rapidly
  • Over-normalizing your database, adding abstractions and joins everywhere
  • Using ES2018 in 2017 with bloated polyfills because “someday they’ll be native”

In academia, you are expected to act like this. Researchers try zany architectures and plan for uncertain future scenarios because that is the whole point of research. If you’re doing R&D in any capacity this is somewhat true – it is dangerous to ignore anything promising and new for too long.

However, people also act this way far too much in the professional world. When building a dashboard for a customer, you are not trying to win Architecture of the Year, you are not building a reference implementation of All Current Best Practices, your dashboard might not always need to run for a hundred years, and you are not building the dashboard for yourself. The tools only matter so far as they continue to help you deliver an effective product.

Objection #1: How do you know what you’re going to need until it’s too late?

You don’t, and you never will. That’s where experience comes into play. You will never arrive at an ideal implementation for any particular set of requirements by following every last one of today’s best practices (most of which were developed to solve a Very Serious Problem that you might never have).

Objection #2: Doesn’t this make it harder to improve future standards?

This objection was originally a sarcastic caricature, which sums up my feelings.

This Trump Thing

Many of my friends are really upset about the 2016 election. Rather than speaking off the cuff, I’d rather point them here to read some tactfully-prepared comments which I hope will help. Disclaimer: I didn’t vote in this election and wouldn’t have voted for any of the known candidates. There was so much anger, fear, and bitterness motivating support for these candidates, and it contrasted so drastically with my personal experience of 2016, that I felt it best to stay out of it all.

However I do read and think about culture and politics a lot (perhaps too much), so maybe my perspective isn’t worthless.

Based on what you’ve seen, you might decide that old bitter racists got mad at brown people and voted in a Nazi. If this is what you believe then yeah, there is reason to be scared for the future. But I don’t think it is accurate. Here are some of the actual reasons why half the country voted for Trump, all pretty much baseless and off the top of my head.

Reason 1: History and Demographics

The Industrial Revolution is long over, and now the next one is in full swing. Centuries ago, when machinery was invented to perform tasks that had required raw human strength, the displaced laborers caused a stir, burning down factories and I expect getting all up in the politics of the day. In part, this is the same effect on a much larger scale. These days the machines don’t only do the heavy lifting, they do the fine details too. And many industries that used to require physical manipulation simply don’t anymore, because pure information is easier to work with.

This means there is a whole generation, the last industrial generation, with nothing left to do. It’s sad. While the young people were going to college and preparing for the new high-tech information service economy, these old people have had basically no prospects, and nobody in politics has been sympathetic to them in a long time. Whether or not he can do anything about this, Trump was speaking directly to them the whole time, and he flipped lots and lots of the industrial-generation folks who had voted for Democrats their whole lives. You can take issue with these people believing that protectionism is a viable solution to their problems, but nobody else was proposing anything new to them.

Reason 2: Racist (etc) White People

OK, yeah, some white people are bigots. Most of those people jumped right on board. You can find their writings online (…or maybe spray-painted on a wall in your neighborhood) if you know where to look, but keep in mind that trolls like to masquerade as those people too. More importantly, your garden-variety racist with a low opinion of Muslims has never had a Muslim friend. Their bigotry is usually born from fear, which is born from ignorance. The proper response is outreach and pity, not ostracism and smugness.

Reason 3: Smugness

On the flip side, many of you have never had an old working-class rust-belt friend. You don’t understand them, and neither do the media folks who have been trashing their culture for decades. For example, the media has played up the racism angle to an unfair degree. Factory workers, coal miners, and evangelicals are people too, many of them smart and interesting people, and while they might not know everything, neither do you. If you have the stomach for it, read this prophetic piece and try to imagine how these folks have seen the coastal elites behave for decades. A surprising number of young college-educated people voted for Trump too, and I’ll bet more than a few did so because they simply picked the side with less smug behavior.

Reason 4: Corruption

I mean come on, by 2016 the Bush-Clinton family was as incestuous and rotten as the last generations of the Holy Roman Empire. Most of the people I know who voted for Trump weren’t particularly fond of him, they just couldn’t stand another anointed Yale scion being paraded around in front of them by Turner Media and the National Broadcasting Corporation. Many of them would have voted for Bernie Sanders if he was on the ticket.

OK, that explains a lot, but we still have an impolite egomaniac with no political experience as president, we’re doomed!

Maybe, yeah, but probably not. America has been through a lot, and most people (yes even the Trump voters) are still basically good people.

What can you do to help? First, make friends outside your comfort zone. This situation is partly the result of years of people shutting out everyone and everything that makes them uncomfortable. We cannot function in the long term as a society where everyone does this. Understand that other people can arrive at decisions you don’t like or even hate, and you can still respect them as people. Also, once you know them better, you will understand where they’re coming from, and you’ll even have a chance to sell them your ideas.

Second, don’t get disheartened. If you believe in something keep fighting for it. But again always respect your opponents as people, and remember how badly the “smug bullying” tactic just backfired. Play the long game. Be polite, but annoying, and keep it up for a long time. If you are on to a good idea and you can rally long-term support it will win out in the end.

Third, if you are still worried about the nightmare Nazi scenario, exercise your right to bear arms. Never incite violence, but remember that the very best insurance against fascism is a well-armed populace. Regardless of who might actually attempt a fascist coup, the gun-owning basically good Americans will be right there with you fighting shoulder to shoulder if necessary (which we all hope and pray it will never be).

Finally, read more books from the past. Civilization has been around a long time, and in most ways things are better than they have ever been. That won’t change. Your fears about a Republican Supreme Court making over-the-counter contraception illegal might be justified, but keep in mind that we aren’t talking about throwing people in jail for adultery. Maybe some immigrant families will be broken up, which is very sad, but the public is only going to tolerate this policy if they prioritize dangerous criminals like any old law-and-order administration. So if you’re here in America illegally, uh, you probably want to drive exactly the posted speed limit for the next few years.

Future progress for minorities and women and gays needs to be made overwhelmingly in the social sphere, where government will not be involved no matter who is president. Don’t take this political event as a sign that things can only get worse from here. And one day, probably in your lifetime, that last glass ceiling will indeed be cracked.

The arc of the moral universe is long, but it bends towards justice.

– Martin Luther King Jr.

How to do Programming

TL;DR: Identify assumptions and write them down.

There’s a common (I think) misconception that the programming trade is all about knowing obscure facts and doing hard math. From this it follows that a person has to get really good at math or know a huge number of things before doing programming.

Unfortunately people who have this misconception can be discouraged from trying in the first place. It might be unrealistic to think that every single person not only can but also will want to do it, but I think that lots of people with this misconception could do very well at programming, once they understand what it is actually like, and if they put some effort into learning it.

In reality, being good at math or knowing all about compilers and language features does not help with a large percentage of the day-to-day work that most programmers do. Instead, their effort goes into writing down all the assumptions made by the high-level description of a feature. In other words, the requirements will say “display a count of the current total” and the programming work is about finding the assumptions implied by that description (“what does ‘current’ actually mean?” etc), then writing them down explicitly. Once you write down the assumptions in the right way, the explicit representation of the assumptions is your code and you are done.

Getting everything written down correctly used to be much harder, but with modern programming tools it isn’t even possible to make some of the most problematic mistakes anymore, and the tools can catch lots of other mistakes automatically. For everything else you would want to get help from a more experienced colleague, or just ask strangers on Stack Overflow.
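As a toy illustration of what “writing down the assumptions” looks like in practice, here is that count-the-current-total requirement turned into code. This is a hypothetical sketch in JavaScript: the field names and the meaning chosen for “current” are invented assumptions for the example, not anything stated by the original requirement.

```javascript
// Requirement: "display a count of the current total".
// The code can only be written once the implicit assumptions are made explicit.

// Assumption 1 (invented): "current" means items that are not soft-deleted.
// Assumption 2 (invented): "total" is a count of items, not a sum of amounts.
// Assumption 3 (invented): an empty list is valid input and should count as 0.
function currentTotalCount(items) {
  return items.filter((item) => !item.deleted).length;
}

// Each assumption is now written down, visible, and testable.
console.log(currentTotalCount([
  { name: "a", deleted: false },
  { name: "b", deleted: true },
  { name: "c", deleted: false },
])); // prints 2
```

If the customer actually meant a sum of dollar amounts, the function would be one line different – which is the point: the assumption now lives somewhere a reviewer can see it and object to it.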

There are programmers who would take issue with this description. I’m using the terms “doing programming” and “day-to-day programming” strategically, really to mean commercial or hobbyist programming for which there are mature high-quality tools and well-understood best practices. On the cutting edge of academia, and within programming projects that have very demanding requirements, advanced math and knowing lots of obscure facts can be much more important.

Basically, what I’m saying is that people tend to confuse the larger industry with that super-difficult hacker nerd work they see in movies and TV shows. In fact, the vast majority of programming work going on right now is the other kind. There are huge numbers of those jobs to be done, because it is where all the theory and advanced knowledge finally can be applied to a real-world problem. Many large software teams have so much of this kind of work that they split out a whole department for “requirements engineering”, which is taking the really high-level descriptions with tons of assumptions, and breaking those out into statements with fewer or no assumptions left in each statement, so that the code writers can focus on making the final code work. The best requirements are harder to write than all the coding work that comes after!

Maybe someone has told you before that programming is all about breaking problems down into smaller problems. It’s another way of saying the same thing.

 

The Psychology Journal Ad-Hominem

Have you ever had someone tell you that liberalism is a mental disorder? Or that right-wingers vote for bad ideas just because they have an irrational world view? It’s a pretty common ad hominem in politics, and not a compelling one. The idea is a tautology: Find two people who have incompatible ways of looking at the world, and each will think the other’s way is somehow defective. But this silly tactic shows up in elite intellectual discourse way too often.

An early example is The Anti-Capitalistic Mentality by Ludwig von Mises, a book-length psychoanalysis of the author’s political opponents. The short version: people who resent their betters turn to communism since it means bringing everyone else down to their level. It’s not subtle.

More recently, academic examples skew leftward. To be clear, this is not a claim that left-wingers disproportionately rely on ad hominem. Nor is it a claim that there is some disproportionate weakness in the right-wing psyche. It’s probably because more psychologists are left-wing than ever before, and since the attack is based on psychoanalysis it shows up a lot in their literature.

Recent examples can be found with a quick internet search. Here are a couple:

Explaining the Appeal of Populist Right-Wing Parties in Times of Economic Prosperity

The traumatic basis for the resurgence of right-wing politics among working Americans

Although these papers are more nuanced and focused than some all-encompassing manifesto, they are still ad hominems. The authors can and will claim that their research is purely academic and not intended as any kind of attack, but let’s be honest, there is one big obvious way this kind of research will always be used. It will be served up on popular political websites and compiled in brightly colored books, ready to be used as Thanksgiving ammunition.

The literature often uses global warming as an example, and a few things are going on here. In many circles global warming is an indisputable fact, so if you want to deploy some political ad hominem against the people who tend to be skeptical, it’s a great starting point. Also, the global warming movement hasn’t succeeded. This approach serves both as a play to convince voters after all, and as something to offer environmentalists disappointed with and confused by a lack of success.

The classic example in the subgenre is Lewandowsky et al.’s famous Recursive Fury, a paper psychoanalyzing those who reacted poorly to another paper psychoanalyzing global warming skeptics. How it got through more than five minutes of planning without being abandoned, we may never know. In any case it was eventually retracted.

Another interesting example is On the relation between ideology and motivated disbelief by Campbell & Kay. To its credit, the paper does attempt to strike a balanced tone, supporting an ad hominem attack against both political parties. Still, they put a whole lot more effort into the global warming part, and the fourth study might be a strategic addition to give the impression of dispassionate science.

These ad hominems have been around for a long time and they aren’t going anywhere soon. But they are silly and don’t belong anywhere near academia. Even with the veneer of science, the tactic only convinces people who are inclined to accept the ad hominem anyway. It looks desperate and stupid to everyone else.

ReactJS Techniques: Version Tags

Version tags are a pretty simple technique I’ve used with ReactJS to optimize rendering in larger apps, without adding a dependency on an immutable data library. Right away I have to say that this is not something you want to choose all of the time. If you want easy shouldComponentUpdate optimizations by default, or if you have any reason to save and replay undo history, then you should seriously consider using an immutable data structure. But there are a couple of common reasons to skip that piece of architecture. First, if you simply cannot afford to load any more JavaScript after ReactJS and the other libraries you are already using, then the convenience of ImmutableJS or a similar library might not justify the overhead of loading it. Second, if it would be too difficult to learn and/or integrate ImmutableJS with your existing architecture, this is one alternative. In some ways it is much simpler, but in one important way it is more difficult to use.

If you want to do a very clean implementation of a flux-type architecture, your data might propagate down from a top-level “controller-view” through a whole big tree of React components. Each component in the tree will potentially render on every state change, so to avoid this you want certain nodes to implement shouldComponentUpdate and check whether their particular data has changed. The simplest way to do this is to flatten all props and check them one by one. While this can give a performance boost, the boilerplate required can get ridiculous. ImmutableJS and similar libraries eliminate this problem by making the state an immutable data structure. Basically this means that every time a node in the state updates, that node is copied and a new reference is made to hold the new node, along with every node directly above it in the state tree. Then any component which only needs a particular state node to render can check whether that reference has changed, and superfluous rendering is easily skipped. Having the old nodes around also makes undo features trivial to add.
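The reference-comparison idea can be sketched with plain objects, no ImmutableJS required. The setAppointment function and the state shape here are illustrative:

```javascript
// Updating a node replaces that node and every node above it, so a
// component can decide whether to re-render with one === comparison.
function setAppointment(state, index, appointment) {
  const appointments = state.appointments.slice(); // copy the changed node
  appointments[index] = appointment;
  return { ...state, appointments };               // ...and its parent
}

const state1 = {
  clients: [{ name: "Ann" }],
  appointments: [{ time: "09:00" }]
};
const state2 = setAppointment(state1, 0, { time: "10:30" });

// The changed branch gets a new reference; untouched branches keep theirs.
console.log(state2.appointments === state1.appointments); // false → re-render
console.log(state2.clients === state1.clients);           // true  → skip render
```

A real immutable library does this copying for you through controlled update methods; the comparison a component makes is the same either way.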

Here’s how version tags work. Instead of copying the nodes you need to update, you just change a version tag on those nodes. Then instead of checking an immutable reference, the components check whether the version has changed to decide whether to render. That’s it.

This means you’ll have to add a version tag to the same scope as each node of data you want to be able to optimize for, and it means you will have to manually update the version tag every time something in that node changes. This approach is a sort of middle ground optimization between checking every individual value type at one extreme, and working with and checking an immutable data structure at the other. However in some situations it has the potential to perform better than both.

As a simple example you might have an application that displays a list of clients and a list of appointments. You could add a version tag to each object in your state:


var _state = {
  clients: [...],
  clientsVersion: 1,
  appointments: [...],
  appointmentsVersion: 1
};

And then if you ever update something in appointments you’d do appointmentsVersion++; as well. The component that renders the appointments implements shouldComponentUpdate which just has to compare appointmentsVersion. Versions can be added to as many nodes of the tree as you like, to get the degree of optimization you need at the time.
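The comparison itself is tiny. Here it is as a plain function, with the React wiring sketched in a comment (the names are the illustrative ones from the example above):

```javascript
// Decide whether to render by comparing only the version tag.
function shouldUpdate(currentProps, nextProps) {
  return nextProps.appointmentsVersion !== currentProps.appointmentsVersion;
}

// Inside the component it would be wired up roughly like:
// shouldComponentUpdate(nextProps) {
//   return nextProps.appointmentsVersion !== this.props.appointmentsVersion;
// }

console.log(shouldUpdate({ appointmentsVersion: 1 }, { appointmentsVersion: 1 })); // false
console.log(shouldUpdate({ appointmentsVersion: 1 }, { appointmentsVersion: 2 })); // true
```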

Will updating versions all the time be in several ways less advantageous than using ImmutableJS? Absolutely. But it is a useful technique.

React.JS and Flux ideas for practical applications

Here’s a list of ideas and techniques I’ve used when working with ReactJS and Flux. There are two sections: general suggestions of how to get good results, and notes from optimizing for slower hardware. Hopefully they provide a little bit of insight for when you’re looking at all these new concepts and going:

Ah yes interesting, why not continue reading the web page?

It’s difficult to say which of these items, if any, would qualify as “best practices”, but they are available here for reference. Use at your own discretion, and happy coding. Several ideas are from [0] react primer draft and [1] tips and best practices, both good resources to introduce and guide learning on the subject.

General suggestions of how to get good results

  • This has been said elsewhere, but get as much state as possible into one place in your application. With Flux this is the store(s). The first diagram from [1] with one-way propagation of data through the tree of components is the ideal, meaning no this.state anywhere. However, there will inevitably be integrated components which have other state, and this should be as contained as possible using this.state and callbacks. Also, any state which is definitely only significant to the one component (such as the open/closed state of a dropdown menu in many cases) can be kept internally without a problem. With Flux this leads toward fewer or even no meaningful state updates happening outside the standard data flow, which is very nice.

  • Some action creators get data from AJAX and pass it along by dispatching an action. There is a way to do this with one REQUEST action and then one RECEIVE action, which is kinda nice because they can be tested and implemented separately, and the REQUEST action doesn’t need any callbacks. Another option could be for the action creators to attach a promise to each asynchronous action, and have the stores or a service unpack the data before updating the state. I haven’t actually used that method and there are cases where it doesn’t provide as much flexibility (simultaneous AJAX actions may cause a “cannot dispatch in the middle of a dispatch” error), so try at your own risk.
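The REQUEST/RECEIVE pattern might be sketched like this. The action names, the dispatch function, and the fetchAppointments helper are all illustrative, not from any particular Flux implementation:

```javascript
// Action creator: dispatch REQUEST synchronously (no callbacks needed),
// then RECEIVE once the AJAX call resolves. The store unpacks the data.
function loadAppointments(dispatch, fetchAppointments) {
  dispatch({ type: "REQUEST_APPOINTMENTS" });
  return fetchAppointments().then(data => {
    dispatch({ type: "RECEIVE_APPOINTMENTS", data });
  });
}
```

Because the two actions are separate, the store logic for “show a spinner” and “display the results” can be written and tested independently.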

Notes from optimizing for slower hardware

  • Another point from [1], but definitely use the pureRenderMixin and/or implement the same kind of behavior with shouldComponentUpdate (more on that below).

  • I didn’t realize for a while that you shouldn’t just run the dev source through a minifier for production. With so much relying on pure JS interpreter speed, all those great log messages and debug exceptions make quite a difference. Make sure to build or copy the actual production script when using React in production.

  • There are two basic ways to show or hide sub-components based on UI state. One is to leave them out of the render (I’ve used a ternary expression that returns null if the sub-component is invisible). The other is to render them with a “visibility” prop and have the sub-component set a display:none style internally. The first method keeps the DOM small and clean, but it means more DOM manipulation each time you show or hide the sub-component. The second is more efficient if the sub-component visibility changes often, but it means more DOM elements in the page at any one time (although some are invisible). When optimizing, use the strategy that is more appropriate for the situation.

  • Don’t put complex literal props inline. These cannot be used with shouldComponentUpdate, since === on a freshly created object or array returns false every time. Maybe you shouldn’t use literal props at all, but that means setting up constants for everything (usually not a bad idea).

  • Use shouldComponentUpdate. As far as I know there are four basic optimization strategies with shouldComponentUpdate:

    • One is to split up complex props into lots of flat value props, and just check through each one. I would think this can get verbose and maybe a little less efficient, but doing a bunch of === checks should get pretty good performance with little additional code complexity.

    • The second is to use a library like ImmutableJS or Mori to build an immutable state tree and update it through controlled methods. This adds a dependency and some not-very-serious overhead but enables pleasant management of an immutable state object that can be compared before re-rendering quickly and easily. The argument to use an immutable state object is much stronger if you have to implement some kind of undo/redo functionality, but either way it’s a great idea in many situations.

    • The third is to hand-roll some kind of version checking in each shouldComponentUpdate. I feel like this has the potential to be a very lean strategy CPU-wise but it is also a bit more complicated to maintain. Basically you add a UUID version tag to each node in the state tree you want to optimize, and whenever updating that node you also update the version tag. Then shouldComponentUpdate is just a matter of looking at the version tag for the node.

    • Finally there is the strategy of running shouldComponentUpdate fewer times. This is where you might use internal state in lower-level components, or use refs and uncontrolled inputs so that typing in a textbox doesn’t run through the full render flow. Another option is to set shouldComponentUpdate to always return false, and use componentDidMount and componentWillUnmount to set up and clean up DOM elements that are directly manipulated from componentWillReceiveProps. These are mentioned at the end of the list because, for all but trivial cases, moving state out of the standard flow makes code more complicated, since synchronization between the two state repositories is now a requirement. Take some time to look into alternative solutions before using them.
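For reference, the first strategy (flat value props checked one by one) is essentially a shallow comparison, which might be sketched like this. shallowDiffers is an illustrative name, not a React API:

```javascript
// Shallow comparison: any added, removed, or !== value means "render".
// This only works when all props are flat value types or stable references.
function shallowDiffers(prev, next) {
  const keys = Object.keys(next);
  if (Object.keys(prev).length !== keys.length) return true;
  return keys.some(key => prev[key] !== next[key]);
}

// Wired into a component, roughly:
// shouldComponentUpdate(nextProps) { return shallowDiffers(this.props, nextProps); }

console.log(shallowDiffers({ a: 1, b: "x" }, { a: 1, b: "x" })); // false
console.log(shallowDiffers({ a: 1, b: "x" }, { a: 2, b: "x" })); // true
console.log(shallowDiffers({ a: 1 }, { a: 1, b: [] }));          // true
```

Note the last case: a fresh inline literal like [] always differs by ===, which is why the “don’t put complex literal props inline” point above matters.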

Fun with Tailgaters

Commuting in the car means a lot of time spent accidentally thinking about how it could be improved. I’ve come up with several ideas for accessories, and this first one is useless but very fun. For some time I was planning a system where buttons on the dashboard displayed messages on the rear bumper. Stuff like “C’MON, REALLY?” to display to tailgaters, and maybe amusing stuff like “WHAT’S WRONG WITH THIS GUY?” when someone up ahead is driving poorly. It would be a pretty straightforward bank of switches plugged into an Arduino, and either a screen with pictures or (better) a train-schedule type letter sign, if those can still be found.

A few weeks back, however, I remembered that dog picture from the Internet that says “deal with it” when the sunglasses drop:

This one

I realized there couldn’t be a classier way to respond to tailgaters than with a live-action version, so I decided to make one. It would be a simple Arduino project, with a servo that lowers the sunglasses over the eyes of a stuffed dog. Then a relay would light up the “Deal with it” text on cue.

Setting up the Arduino code and wiring the components didn’t take more than a few hours:

Arduino wired to button and relay

Then there was the simple matter of printing out some sunglasses and attaching them to a servo (cardboard backing, super glue, and a zip tie for the arm):

The dog with its future sunglasses

Finally the sign had to be prepared. I decided to go all out and buy a real neon sign, since that is totally fantastic and you can get them custom-built. The sign arrived with this nice label:

Packaging for sign

I also opted to buy a pre-packaged relay to switch the sign, since I’m not a trained electrician and you don’t want to trifle with AC power from the wall outlet. The PowerSwitch Tail II is great, you just plug in 5V and ground wires to the side and it works like an extension cord with a switch. The rest of the wiring was just a couple of leads going to 5V and ground, and one pull-down resistor for the button. I also got a 300 watt inverter to provide power from the car battery, and a big red button to activate the sign. Wiring it all together for a test run, it looked pretty good:

Deal with it - Live Action

The sign turned out to be bigger than I had figured, and it takes up the whole back window of the car. Luckily it has a clear backing so my view isn’t obstructed. There’s still some polishing to go, but it’s working very well.

Nobody has tailgated me anywhere near the threshold level for sign-activation yet (perhaps this is rarer than I thought) but it’s bound to happen eventually. You know when you’re waiting in a line of cars to pass a slow-moving truck, and some chucklehead decides to tailgate you, so that maybe you’ll do the same to the car in front and so on (I assume)? The next time that happens to me, I’ll press this button on the dashboard:

The big red button on the dashboard

And I’ll take my sweet time to finish the pass. Meanwhile the offending driver will see this:

The Arduino code is on GitHub if you’re interested.

React: Why use it?

The latest JavaScript brand to hit critical mass is React. It’s a few years old, and still not at 1.0, but that hasn’t stopped it from making large waves in the community. If you’re just now getting proficient with Angular after switching from Knockout or Backbone, this might feel annoyingly familiar (and you might want to ignore React for a few years while the dust settles). But if you’re interested in where JavaScript at scale might be headed in the future, React deserves your attention for a few reasons.

The first is non-technical: teams at Facebook are using it internally to build very large systems like the chat application. The fact that the React brand of dog food makes up a large part of their diet means that it merits serious consideration. Other libraries like Knockout or Angular are not deployed at nearly the same scale within Microsoft or Google.

Of course, plenty of software tools see wide adoption because of fashion, so with React (and the proposed Flux architecture that goes along with it) we also have to look at the technical situation. Facebook has distributed some good media on this subject (see this video and the Flux documentation) but I get the sense that some of the ideas can be obscured by the vocabulary. I’ve seen “declarative” and “virtual DOM” and “immutable” too many times to count, but these might not communicate the relevant ideas to everyone. You might wonder: What makes React different if both Knockout and Angular use “declarative” markup too?

I think the central idea percolating through the community is stateless UI. React is designed to make it practical to move all state out of your UI components and into a separate, single place for each part of your problem domain (Flux calls these “Stores”). If you’ve ever encountered bugs with two-way view-model type stuff, eliminating these bugs while still delivering good performance is what differentiates this approach. For everyone else, let’s say you have UI components A and B that are bound to your application state (some view-model type stuff), and UI component A receives an update from the user. This propagates to the application state, and if component B is watching it will update as well. But what if component B also contains some state that component A is watching? Now A will update, and so on. In the real world more than two components are usually involved, and the potential for this behavior makes the program harder to reason about.
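The feedback loop described above can be sketched without any framework at all. Here two observables are wired to each other the way two-way bindings end up being (all names are illustrative), and a single user edit ricochets through both:

```javascript
// Minimal observable: notify subscribers when the value really changes.
function makeObservable(value) {
  const subscribers = [];
  return {
    get: () => value,
    set(next) {
      if (next === value) return; // equality guard: the only thing stopping a loop
      value = next;
      subscribers.forEach(fn => fn(next));
    },
    subscribe(fn) { subscribers.push(fn); }
  };
}

const firstName = makeObservable("Ada");
const fullName = makeObservable("Ada Lovelace");
let renders = 0;

fullName.subscribe(() => { renders++; }); // component B re-renders on fullName

// Component A pushes edits up: firstName change recomputes fullName...
firstName.subscribe(name => fullName.set(name + " Lovelace"));
// ...and component B writes back: fullName change updates firstName again.
fullName.subscribe(full => firstName.set(full.split(" ")[0]));

firstName.set("Grace"); // one user action triggers the whole cascade
// The write-back to firstName is absorbed by the equality guard;
// without that guard, this update cycle would never terminate.
```

With more components involved, reasoning about which guards are keeping which cycles from spinning gets hard fast; routing every change one-way through a single store removes the cycles entirely.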

So then why do we need a special library? Can’t we just move the state out to a separate module and route every action through there (effectively making all bindings one-way)? The reason is that most libraries weren’t designed with that in mind, so throwing away the old state and fetching new state from an external source on every action means a lot of DOM operations, making for a slow application. This is where React’s virtual DOM comes in: you can go ahead and overwrite the state from your external source, and React will optimize what is sent to the real DOM.

More details will have to wait since I’m still learning the technology, but look into reactive programming for an extreme perspective on stateless UI, and check out the Windows message loop to learn about a precursor to the Flux architecture. Again, the technical reasons are important, but the easier argument to understand is that large teams at Facebook and elsewhere have had success with React (and Flux). Happy coding.