The Touchscreen Paradigm

Programming is evolving faster than ever. In recent years, mobile platforms have broken the software market wide open, and most implications of this disruption are yet to be discovered. Some effects are already obvious, however. Software has transcended the limitations of mouse/keyboard/gamepad input, since mobile devices integrate touchscreens with cameras, microphones, speakers, and wireless connections. I call this “the touchscreen paradigm,” but the term covers all of those now-standard inputs and outputs.

This hardware generalizes to an unprecedented number of applications. A typing keyboard can be simulated with key images on the touchscreen, although that experience has decidedly inferior ergonomics. Mouse clicks are replaced by touchscreen taps, and while this system has no provision for “hover” interactions, other forms of mouse control are improved. Drawing is very awkward with a mouse, since the brain has to map the mousepad surface to the display in real time. Touchscreens eliminate this problem; in fact, they are functionally similar to the high-end drawing tablets with integrated screens that have been available for some time. Wacom, a manufacturer of these computer accessories, now sells a high-end stylus that integrates with mobile software.

Other applications go beyond anything that is possible with a mouse and keyboard. Multiple finger touches can be processed at once, making “Minority Report” interfaces easy to build in software. Microsoft put significant capital into a tabletop touchscreen computer originally called Surface (since re-branded as PixelSense). However, similar interfaces can be added to mobile devices with software alone, such as this Photo Table application. Fortunately for independent developers, the barriers to entry in mobile software are very low because standard hardware already exists.
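
To give a sense of how little code multi-touch takes, here is a minimal sketch of a UIKit view in Swift that enables multi-touch and tracks each finger independently. The class name and the logging are mine, purely for illustration; this is not code from any particular app.

```swift
import UIKit

// Minimal multi-touch tracking view. Names here are illustrative.
class MultiTouchView: UIView {

    override init(frame: CGRect) {
        super.init(frame: frame)
        // UIKit delivers only one touch at a time unless this is enabled.
        isMultipleTouchEnabled = true
    }

    required init?(coder: NSCoder) {
        super.init(coder: coder)
        isMultipleTouchEnabled = true
    }

    // Each simultaneous finger arrives as a distinct UITouch object,
    // so reacting to several touches at once takes only a loop.
    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        for touch in touches {
            print("finger down at \(touch.location(in: self))")
        }
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        for touch in touches {
            print("finger moved to \(touch.location(in: self))")
        }
    }
}
```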

These examples barely begin to fill the space of possible touchscreen applications. My phone is already a feature-rich camera, an “FM” radio, a guitar tuner, an SSH client, and a flashlight. Each of those products was once manufactured as dedicated hardware, but mobile software is also being used to invent completely new technology. Any product that needs only a touchscreen, audio/video input/output, or an internet connection can now be built entirely in software and sold as a mobile application.

As a software engineer, I see this as an obviously good thing. However, the sheer number of new applications that are possible on mobile platforms presents an intimidating problem: what to build next? Customers might be able to describe what they’d pay for today, but they don’t always know what they’ll want to buy in the future. The first generation of application programmers probably experienced a similar feeling. It’s inspiring and terrifying at the same time.

THIMBL Keyboard

I recently developed a prototype music keyboard for the iPad in order to play around with the idea. It’s called the THIMBL Keyboard, and it looks like this:

Each octave-row maps the twelve semitones to six positions on each hand: Thumb, Half (between Thumb and Index), Index, Middle, Big (ring finger), and Little, hence the THIMBL acronym. This keyboard is interesting because it has no diatonic bias like a standard piano keyboard, but it does have a bias toward certain keys, e.g. the Left Index position always plays a C note. The player moves up and down octaves by moving the hands vertically, so chord inversions are very easy to find. However, this layout means that a C Major scale is not especially simple to play without knowing the right sequence of steps or memorizing finger positions.
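
To make the mapping concrete, here is a Swift sketch of how a touch might translate into a MIDI note. The chromatic ordering of the twelve positions is my assumption (the prototype may order them differently); the only detail taken from the description above is that Left Index is anchored to C.

```swift
// Sketch of a THIMBL position-to-note mapping. The left-to-right
// chromatic ordering below is an assumption, not the prototype's
// actual layout; only the Left Index = C anchor comes from the post.
enum Finger: Int {
    case thumb = 0, half, index, middle, big, little
}

enum Hand: Int {
    case left = 0, right
}

/// MIDI note number for a touch at (hand, finger) on a given octave row.
/// Octave rows stack vertically; row 4 puts the Left Index C at middle C.
func midiNote(hand: Hand, finger: Finger, octaveRow: Int) -> Int {
    let position = hand.rawValue * 6 + finger.rawValue   // 0...11
    let anchor = Finger.index.rawValue                   // Left Index
    let semitone = (position - anchor + 12) % 12         // C = 0
    return 12 * (octaveRow + 1) + semitone               // MIDI C4 = 60
}

// Example: Left Index on row 4 sounds middle C (MIDI 60).
print(midiNote(hand: .left, finger: .index, octaveRow: 4))
```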

I’ve been practicing some basic technique with this prototype, and I’ve discovered a few things about how it behaves. It seems unorthodox at first, but after learning the intervals between each pair of finger positions, playing music by ear becomes much easier. There are some expected problems with touchscreen controls (the fingers can’t rest on the key surfaces, and the keys don’t overlap in tiers), but in general this prototype is more durable and easier to maintain than the last version. I’d still like to build a production-quality model, but this works surprisingly well in the meantime. Check it out if you’re interested!

I’ve also put together some vertically oriented notation paper, which helps with transcribing and playing music. Time is measured in rows, and finger positions correspond to columns of cells. You’ll have to find some way to indicate the octave of each note in this grid; I’d recommend color-coding notes to match the octave colors on the keyboard.

Non-Player Characters

Here’s an interesting idea. This article mentions “non-player characters” in the context of a role-playing game, and proposes something rather unsettling:

Many of us approach the other people in our lives as NPCs.

I’ve been thinking along similar lines. People often imagine strangers as incidental scenery, part of the social environment. This is understandable given the limits of human knowledge – there simply isn’t enough time or mental capacity to understand very much about very many people. However, we often forget that this perspective is only a necessary convenience, one that allows us to function as individuals. For example, if you’ve ever been in a rush to get somewhere on public transportation, you’ve probably felt that bit of guilty disappointment while waiting to accommodate a wheelchair-bound passenger. Here comes some person into my environment to take another minute of my time, right? But if you use a wheelchair yourself, this delay happens every time you catch a ride, and that frustration simply does not exist. If anything, I would imagine that disabled passengers feel self-conscious every time they are in a situation where their disability affects other people’s lives, even in an insignificant way.

Has this always been true? Probably to some degree, but the modern media environment seems to especially promote it. Good fiction writing communicates the thoughts and motivations of relevant characters, unless they are complete unknowns. This means that any meaningfully observable character has some kind of hypothesized history and experience informing their participation in the story. Film is different, in that a script can describe an “evening crowd” in two words, while the realization of that idea can involve hundreds of extras, living entire lives and working day jobs that barely relate to their final appearance on the screen. We can assume that their real lives intersected with the production of that scene on that day, but that is the only significance their identities have in context.

With interactive media, the idea of a “non-player character” has appeared in many forms, and academics study how they can design the best (read: most believable) fictional characters for interactive environments. Here the limited reality of these characters is even more pronounced. In video games, non-player characters have lower polygon counts, fewer animations, and generally use less code and data. This is a consequence of the limited resources available for building a virtual environment, but the effect is readily apparent and forced.

Does this mean video games shouldn’t include background characters? Not really. What I’m suggesting is that we should be careful to see this phenomenon for what it is: an information bias in favor of the protagonist, which necessarily happens while producing media. It shouldn’t ever be mistaken for a relevant characteristic of the real world. This holiday season, when you’re waiting an extra minute or two for a disabled stranger, or expecting better service from a tired professional, remember that he or she probably has lived a life as rich and complicated as your own, and try not to react as if he or she is just some kind of annoying scenery. Whoever it is might return the favor, even if you never realize it.

MixBall

Music is a big part of my life. I have a voracious appetite for recorded music, and I’m working on my own humble contribution to the universe of sound. Like many aspiring composers, I’ve dreamed of creating songs that touch many lives. It hasn’t been easy – profound communication through music is an especially difficult task. In today’s world where every musical idea is measured, recorded, licensed, and purchased, that task is harder than it has ever been. I attended school with several people who are now working musicians, struggling for excellence in a craft which has been commoditized to the point of disposability.

Maybe digital distribution and piracy didn’t cause this, but many of us have still forgotten to respect the artistic process. If I were to release an original music demo, it would almost certainly be lost in a sea of other free content, legal or otherwise, and the prospect of eventually making meaningful money this way without staging live events is not great. It feels like a step backwards, if not an unexpected development.

Knowing this, I decided to make a demo which subverts the trend. My first release of original music is now available, but only in an interactive format. MixBall is a special game that mixes the music while you play. I’m not charging money yet, but you’ll have to spend time and energy “beating” each mix, so it might make you think a bit about value. If it achieves that, I will consider it a success. The catchy tunes are a bonus; hopefully you’ll enjoy them too.

You might be wondering why this countercultural experiment is hosted on Apple’s App Store and requires an iOS device. The answer has to do with hardware limitations. I was interested in building for Android as well, but low-latency sound there requires considerable effort, and Apple has the whole portable music pedigree to boot. Hopefully that doesn’t offend anyone.

Get MixBall today in the App Store!

Falsifiability

According to Karl Popper, any scientific theory must be falsifiable. What does this entail?

A falsifiable theory leads to predictions which would be invalidated by some conceivable observation. For example, Newtonian dynamics predicts that in a uniform gravitational field, two objects with different masses will have the same acceleration due to gravity. It implies that if we drop a feather and a ball bearing inside a vacuum chamber, the ball bearing will not fall faster, as long as the theory is valid. This is in fact what happens.
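
The mass independence falls straight out of the equations: the gravitational force on an object is proportional to its mass, and the same mass appears in Newton’s second law, so it cancels.

```latex
F = \frac{GMm}{r^{2}} = ma
\quad\Longrightarrow\quad
a = \frac{GM}{r^{2}}
```

The acceleration depends only on the Earth’s mass M and the distance r, not on the falling object’s mass m, which is why the feather and the ball bearing keep pace in a vacuum.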

Newton’s theory was very successful at describing the celestial motion of the known planets, but in the 19th century it did not correctly predict the orbit of Uranus. This discrepancy could have falsified the theory, or at least greatly limited its precision. However, Urbain Le Verrier realized that the gravitational field of an unknown planet could be pulling Uranus into its observed orbit, and predicted where such a planet would be found and how massive it would have to be. Astronomers pointed their telescopes at the expected location and discovered Neptune. If no planet had existed at that location, this prediction would have been wrong, and the inconsistency with Newtonian dynamics would have stood.

A planet orbits in an ellipse, and the point where it moves closest to the sun is called the perihelion of its orbit. This point gradually precesses around the sun due to gravitational forces exerted by the other planets. The same Le Verrier compared observations of Mercury to the perihelion precession rate derived from Newton’s theory, and found a discrepancy of nearly half an arcsecond per year. He predicted the existence of another planet closer to the sun to explain this result, but no such planet was ever observed, and the problem remained open.

To explain other puzzling observations, Albert Einstein abandoned the Galilean transformations of Newton’s theory for a framework which uses Lorentz transformations in four-dimensional spacetime. General Relativity describes gravitation as an effect of spacetime curvature, and it reduces to Newtonian dynamics in the weak-field, low-velocity limit. According to this theory, the perihelion of Mercury’s orbit should precess by an additional 0.43 arcseconds per year, matching the observed value.
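
For the curious, the relativistic perihelion advance per orbit has a closed form, and plugging in Mercury’s orbital parameters (semi-major axis a, eccentricity e) recovers the quoted figure:

```latex
\Delta\phi = \frac{6\pi G M_\odot}{c^{2}\, a\, (1 - e^{2})}
\approx \frac{6\pi \times 1.327\times10^{20}}
             {(2.998\times10^{8})^{2} \times 5.79\times10^{10} \times (1 - 0.206^{2})}
\approx 5.0\times10^{-7}\ \text{radians per orbit}
```

With roughly 415 Mercury orbits per century, that works out to about 43 arcseconds per century, i.e. the 0.43 arcseconds per year mentioned above.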

Still, the intuitive simplicity of Einstein’s theory did not automatically mean that it was a valid replacement for Newtonian dynamics. During the total solar eclipse of 1919, the measured deflection of starlight passing near the sun agreed with the value derived from General Relativity, a highly publicized result. Even so, decades passed before additional predictions were conclusively validated.

Global Whining

The scientific method is the greatest invention of the modern age. For centuries, its practitioners have transformed civilization using rational systems revealed through careful observation. Theories which succeeded not by virtue of their popularity but because they correctly predicted unknown phenomena are especially awe-inspiring. However, predictions without rational justification, or those vague enough to be confirmed by any number of observations, should not earn the same recognition. I’m a huge fan of Karl Popper’s criterion of falsification: the idea that a scientific theory must predict some observable event which would prove it wrong (if the theory is wrong). This principle eliminates uncertainty about how specific a valid theory must be. Unfortunately, it has been ignored by some academics who claim to be scientists so that people won’t laugh at their ideas. You might have already guessed that today I’m targeting the low-hanging fruit of global warming alarmism. Prepare to be offended.

I won’t waste your attention picking apart the various temperature series, criticizing the IPCC models, or citing evidence of misconduct, not because those arguments have already been made by more qualified individuals, but because they shouldn’t even be necessary. Fundamental problems with any apocalyptic hypothesis make the whole enterprise seem ridiculous. This is what Popper says about scientific theory:

1) It is easy to obtain confirmations, or verifications, for nearly every theory – if we look for confirmations.
2) Confirmations should count only if they are the result of risky predictions; that is to say, if, unenlightened by the theory in question, we should have expected […] an event which would have refuted the theory.
3) Every “good” scientific theory is a prohibition: it forbids certain things to happen. The more a theory forbids, the better it is.
4) A theory which is not refutable by any conceivable event is non-scientific. Irrefutability is not a virtue of a theory (as people often think) but a vice.
5) Every genuine test of a theory is an attempt to falsify it, or to refute it. Testability is falsifiability; but there are degrees of testability: some theories are more testable, more exposed to refutation, than others; they take, as it were, greater risks.

The scenarios published by climate modelers don’t qualify as scientific predictions because there is no way to falsify them – updated temperature measurements will inevitably correlate with some projections better than others. And fitting curves to historical data isn’t a valid method for predicting the future. Will the IPCC declare the CO2-H2O-feedback warming model invalid and disband if the trend in the last decade of HadCRUT3 data continues for another decade or two? How about if the Arctic ice cap survives the summer of 2014? I’m supposed to trust these academics and politicians with billions of public dollars, before their vague predictions can be tested, because the global warming apocalypse they describe sounds more expensive? This riotous laughter isn’t meant to be insulting; we all say stupid things now and then.

Doesn’t the arrival of ScaryStorm Sandy confirm our worst environmental fears? Not if we’re still talking about Karl Popper’s science. Enlightened by the theory of Catastrophic Anthropogenic Global Warming, academics were reluctant to blame mankind for “exceptional events” only two months ago. They probably didn’t expect a hurricane to go extratropical and converge with a cold front as it hit New York Bight on the full moon at high tide less than six weeks later, because that kind of thing doesn’t happen very often. Informed news readers might have been expecting some coy suggestion that global warming “influences” weather systems in the rush to capitalize on this disaster. But in a caricature of sensationalism, Bloomberg splashes “IT’S GLOBAL WARMING, STUPID” across a bright orange magazine cover, and suddenly enormous storm surges threaten our infrastructure again, while the seas keep rising slowly and inevitably, all because of those dirty fossil fuels.

I don’t mean to say that we should actually expect scientific integrity from a stockbrokers’ tabloid, but Mike Bloomberg has really sunk to a new low. He spoke at TechCrunch Disrupt a few years ago and seemed like an average business-friendly mayor, not a shameless propagandist. I guess the soda ban was a bad omen. It’s a bit discouraging to see another newspaper endorse the panic, but then the organizers of our climate crusade have been pushing their statist agenda on broadcasters for a long time.

On Sunday, the New York Times doubled down with this ridiculous melodramatic lament, written by one talented liberal artist. Where’s a prediction about the next exceptional event, folks? Is it going to be a tornado or earthquake, or does it matter? Are there actually any rules for preaching obnoxious hindsight to believers? Can anyone suggest an observation that would falsify the theory?

What will the temperature anomaly or the concentration of carbon dioxide measure in ten years? How about one solid date for the eradication of a low-lying coastal city? If you must predict the apocalypse, it is only somewhat scientific if you can rationally argue for a deadline. And the science is only settled when the world ends (or doesn’t end) on time.

Plus ça change, plus c’est la même chose – the more things change, the more they stay the same. Happy 2012.

Singing

If you like music at all, take some advice from me and sing whenever you have a chance. It’s good for your soul, whatever that means. I’ve played music for many years, but I recently discovered how much singing affects my mental health. Live shows don’t count, because the crowd and the amplifiers drown you out. Sing when you’re alone in a quiet room, and fill it with your voice. Sing whenever you’re driving alone. Sing at work, as long as you aren’t disturbing anyone who can punish you. Just sing – you can probably play your own vocal cords even if you can’t play another instrument. If you sound awful, don’t worry about it. Nobody is going to harass you for singing poorly in private. Many people are too insecure to try it at all.

I’m saying this because something wonderful happens when we hear ourselves sing. It helps the brain. Matching pitch forces us to listen to the sounds that we produce, and quite possibly to produce sounds that we are not entirely comfortable hearing. This is a good thing! It teaches us to be more comfortable whenever we use our voices, and more confident in general. The sooner you get accustomed to hearing yourself make unfamiliar noises, the sooner you will be able to make noise in public with absolute confidence. Sometimes, that is absolutely necessary. Why not be prepared?