Removing The Fail Button: Behavioral Economics and User Interface Design

After listening to talks by a number of field-leading visual designers and data scientists over the past month and being “that one irritating guy in the audience asking questions”, I’ve noticed something strange: mention machine data, big data, IoT or any other trendy tech topic and their eyes will glaze over slightly as they kick in with a familiar spiel you know they’ve had to deliver hundreds of times. But take any of these people — data scientists, UI designers, SEO specialists, developers, system admins — and mention “behavioral economics”, and suddenly people want to talk, but no one quite knows where the conversation goes.

This is odd behavior. In the startup world, admitting that you’re into something but don’t actually know it is kind of a big deal. It doesn’t happen that often. Geeks, or nerds, or whatever you want to call smart people who have discovered something vastly more interesting than sex or sports, are usually either knowledgeable about a subject and willing to discourse on it endlessly, or ignorant of it and quite sensibly shut up and learn. There’s not much of a gray area.

So when I say that I’ve discovered something geeks know is out there, find really cool and want to learn more about, but don’t actually know, it should seem pretty odd. People like that usually just go ahead and learn about a thing when they’re interested… just not behavioral economics.

Despite the interest and trendiness of the topic, I’ve never met a working professional who ever said “Hmm, interesting, behavioral economics, let’s figure this one out” and sat down and learned it. I’ve met people who’ve sat down and decided to learn French, or Chinese, or high-power rifle reloading, or chess, or programming with Cocoa and Objective-C — hell, I just saw a talk by an old man who sat down and learned machine learning and R because he thought it was interesting — but despite its popularity, none of these people are trying to sit down and figure out behavioral economics. It’s a cognitive blank spot — an aporia, as the philosophers like to call it.

The reason for this aporia is that behavioral economics undercuts the entire idea of being a rational geek to begin with; it’s much easier to disregard it than to fully engage with its profound critique of basic cognitive givens. This is a sensationalist claim, and purposefully so — interesting and wrong > boring and right, after all — but I think there’s also some level of psychodynamic truth to it. It’s really hard to take seriously a paradigm of rationality that makes you question your own rationality.

In fact, if you think about it, it’s baffling and a bit mind-blowing for a human being who makes a living working with spreadsheets on computers — the current dominant paradigm for working with math, or for that matter almost anything in an office — to start questioning the fundamental assumptions behind the judgment calls they make and their thought processes of justification and weighing. It undercuts the idea, so crucial to being a geek or for that matter a human being, that we as human beings are essentially rational. It leads to difficult philosophical and cognitive questions that no one is really answering, much less asking in any systematic fashion, questions that destabilize and disrupt how we understand programming and even computers.

I’m in a unique position with regard to this — I have no ‘irons in the fire’ academically or commercially speaking, I just want to be able to stop explaining this over and over again. I’m not even putting ads on this page. Fundamentally, the issue is that I’m a writer, aestheticist, philosopher and historian with some semblance of psychological training, and I don’t understand why this community of smart people I work with hasn’t figured this basic stuff out. In fact, it kind of irks me. Normally, I’ve seen this type of content written as a ‘manifesto’, which this is not. No one wants to read a manifesto. But maybe you’ll be willing to listen to another reasonable person make an argument.

Cognitive change, especially the guerrilla kind that I’m proposing, always happens individual to individual. One of the most effective ways for a minority to create influence is by transforming thought and dialogue through point interventions: single actions with individuals who spread the language you want spread. If I were a state or an institution, I’d attempt to change language through organizational initiatives, simple corporate fiat power or outright lies. But as an individual, I have to change language by empowering other individuals to spread the message. So, on an individual basis, just for you, the person I’ve directed to this page, as a spur for private thought, as a form of minority appeal: here, in basic reference terms, is the argument.

salient findings in the field of behavioral economics and psychology at large
In order to discuss how behavioral economics fundamentally alters how we understand user interface (UI) design and user experience (UX) design, we first need to lay out some of the basic principles most relevant to software users in general, and the findings they’re based on. 

The field of economic research and modeling on behavioral economics proper is rapidly developing and wide enough to merit a book (there are several). A brief blog-post-length summary would be a serious disservice to the field. So instead, at least for this post, I’ll highlight what I think are the most important findings.

Humans suck at evaluating risk. In normal conditions, we weigh losses more heavily than gains, resulting in irrational devaluation of calculated risks with a higher expected value than a certain, lesser gain (“take a 50/50 bet that you’ll make $100, or walk away with a guaranteed $25”); in emergencies or exigent conditions we place irrational value on gains (gambling to “break even” after a loss). Outside of formal debate settings like the upper echelons of policy or legal argument, we have no notation that requires systematic evaluation of all risks, and this leads to uneven discussion and incomplete arguments; fundamentally, no active field today is “covering the spread”, or distributing its arguments evenly across every possible risk factor. We tend to base our risk ratings on irrational factors: the actions of others (herd instinct), our overall position even when it has no direct bearing on risk, or irrationally magnified, centrated-upon factors like a general aversion to losses.
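The arithmetic behind the classic example is worth making concrete. Here is a minimal sketch in Python, using the stakes from the parenthetical above:

```python
# Expected value of the 50/50 bet: half the time you win $100,
# half the time you win nothing.
p_win = 0.5
payout = 100.0
certain_offer = 25.0

ev_gamble = p_win * payout + (1 - p_win) * 0.0  # = $50

# The gamble is worth twice the certain offer in expectation,
# yet loss-averse subjects routinely take the sure $25.
print(f"EV of gamble: ${ev_gamble:.2f} vs. certain: ${certain_offer:.2f}")
```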

Humans do not always choose mathematically determinate ‘best decisions’. The classic example of this is the prisoner’s dilemma, but examples abound in real life: studies of economic data from stock market crashes, or Thaler and Sunstein’s data showing that despite decades of education, consumers remain wary of 401(k) investments and uninformed on basic issues of health insurance decision-making.

Humans think as little as possible — the ‘cognitive miser’ theory — so defaults matter a lot. The prevailing view of human cognition prior to the behavioral economics revolution was to see people as a kind of scientist, hypothesizing about the world and then testing those hypotheses against experience. Experiments began to produce results at odds with this theory, leading to today’s prevailing view: through rules of thumb, stereotyped behaviors and heuristics, humans make decisions about the world based on purposefully limited, incomplete sets of information, often depending on the level of self-perceived cognitive load involved.
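If most users never leave the default path, then the default is the decision. Here is a minimal sketch of what that implies for anyone designing an interface or an API (the function and parameter names are hypothetical, invented for illustration):

```python
# The cognitive miser takes whatever the zero-effort path offers,
# so the zero-effort path must never be the dangerous one.

# Bad: laziness and destruction coincide.
def delete_records_v1(ids, skip_confirmation=True):
    ...

# Better: the default invocation is harmless; danger must be
# spelled out explicitly, twice, at the call site.
def delete_records_v2(ids, *, dry_run=True, confirmed=False):
    if dry_run or not confirmed:
        return f"would delete {len(ids)} records (set dry_run=False, confirmed=True to commit)"
    # ... perform the irreversible deletion here ...
```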

examples of good and bad choice architecture: Windows 3.1’ing and fail buttons
“Choice architecture” is the structuring of the choices presented to a system user; in software, this is done through the command-line prompt or a graphical user interface. From the choice-architectural perspective, the knowledge gap between modern behavioral economics and software user interface design that I alluded to earlier is actually a signature trait of computer design and thinking in general. The growth in relevance and social acceptability of technical disciplines — to the point that a college degree in the liberal arts is now considered a waste — has produced several generations of increasingly ontologically naive software professionals. Nowhere is this clearer than in the true nadirs of user interface design of the past quarter-century, which have occurred in Microsoft product design — although, of course, the Linux world is not without its notable offenders as well.

Consider "del *.* /s /q". Starting with MS-DOS 5.0 (I remember it well; I had it on an 80286 PC that I used to play Wing Commander in 1991), it’s been possible to delete the entire hard drive, including the operating system itself, without confirmation (the /q switch removes the confirmation prompt) on any Windows machine.

Let me explain for a moment how profoundly stupid this feature is. The /s switch deletes all files from all subdirectories, wiping the entire file structure of the disk clean. The /q or “quiet” switch removes the confirmation prompt; this is primarily useful in batch commands. If you wanted to automate a garbage-collection routine that deletes all files in a directory over a certain age, for instance, you wouldn’t want it pestering you with idiot prompts asking ARE YOU SURE YOU WANT TO DO THIS Y/N? every time it did something. Now, that’s fine for non-destructive operations that can be re-wound and undone, but creating a user option to commit irrevocable damage to core machine functions is simply an asinine design decision.
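For contrast, here is a sketch of that same garbage-collection routine written with behaviorally sane defaults: recursion and quiet operation exist, but the zero-effort invocation is a harmless dry run. (The directory path and age threshold are placeholders.)

```python
import time
from pathlib import Path

def cleanup(directory: str, max_age_days: int = 30, dry_run: bool = True):
    """Delete files older than max_age_days. Defaults to a dry run:
    the destructive path requires an explicit dry_run=False."""
    cutoff = time.time() - max_age_days * 86400
    for f in Path(directory).rglob("*"):
        if f.is_file() and f.stat().st_mtime < cutoff:
            if dry_run:
                print(f"would delete {f}")
            else:
                f.unlink()

cleanup("/tmp/scratch")                 # safe by default: reports only
cleanup("/tmp/scratch", dry_run=False)  # destruction is deliberate
```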

"del *.* /s /q" is one of the clearest examples in history of what I term a fail button. It is a user option that, when invoked, results in system failure, the software design equivalent of a automatic weapon without a safety or a car with a button that says “SUDDENLY EXPLODE”.

Throughout the decades of its existence, Microsoft has set the bar for fundamentally stupid user interface design; so many examples, in fact, that I’d like to propose a new verb, to Windows 3.1: to unthinkingly, fundamentally screw up the choices presented to the user through legacy UI design.

Let me explain what I mean. Here’s a screenshot from Windows 3.1 in 1992.

Ahh, the nostalgia.

Here are a few screenshots from the latest generation of advanced, hype-heavy, richly funded, technologically brilliant business intelligence software:

Also the winner of the contest for “corporation name that most sounds like slang for ejaculate”.

Oooh, look, diagrams.

Oooh. Graphs.

Notice any similarities? Let me point them out: in what seems like every windowed UI since Windows 3.1, the window has been divided into three sections: addressing or context on the left; data, tuples or files in the middle and right; an address bar or locative text up top. Even when we are working with material that in no way resembles directories or files, this design pattern persists. It’s as if everyone has quietly decided, in common, that “this is what a program should look like”.

The Windows 3.1 pattern was never a good idea to begin with. The file-system pane on the left is badly designed: it clips off long directory names, wastes white space by indenting subdirectories to the right while pushing directory names closer together, uses tiny, difficult-to-click boxes to expand substructure, and worst of all, spends premium reading space on the page (we start reading on the left, after all) on white space and directory names that are nearly informationally null. Later versions of Windows created a “desktop” area that could only be found by entering a lengthy address string into the File Explorer (something like “C:\Users\Windows 98\sa34892ak\users\my_user_name\desktop”), which engendered an entire decade of confusion as people downloaded email attachments to their My Documents folder or to the desktop and were unable to find them. To understand where your email attachment went, you need to understand folder structure and the relationship of the desktop to the file system — try explaining that to a working adult with limited time, attention and patience in a corporate environment.

Here’s another controversial claim: This is fundamentally lazy, unthinking design. You should all know better. How difficult is it to poke your head up out of the stream of rote machinic coding work and question some simple fundamental bases once in a while? Haven’t you ever wondered “Why does all this crap I’m working on look the same?” This visual comparative examination and exegesis is basic, basic freshman art history stuff. Are you just not hiring art history majors? (Actually, come to think of it, probably not.)

Why are you all wasting the left fifth of your screen on filter chains, addressing and other options that probably aren’t even used? Can’t you use context-sensitive menus instead to clean things up? The user isn’t even clicking on the left side more than a few times per task.

Why are you all presenting an address bar at the top of your window showing the SQL query you’re building? The entire point of the graphical user interface is that you don’t have to resort to text prompts. Do you expect people to ship SQL queries back and forth? Why not just link the data back and forth?

Why is there so much redundant information offered? Do you really think that every user will be a data scientist composing original hypotheses from as much information as possible, like you? Isn’t it more likely that a real, busy, flawed, sometimes-stupid sometimes-brilliant non-data-scientist human being is going to end up using a much more limited set of information — say, only the key performance indicators that define their job?

Why is there no visibility of other people in the organization? Every single business intelligence product, without exception, seems to assume that the user is some super-rational individual data scientist working in an organizational vacuum. Are there no discussions, hypothesis tests, randomized trials — anything “real science” involving a team — going on? Using any of these products, how do you know what your colleagues or even the other scientists on your team are up to?

It should be patently clear to anyone exposed to the endless hype cycles of cloud and “X as a service” that we are not designing for single PC users anymore. But somehow, UI designers have not gotten the memo: they are still designing software that boils down to Windows 3.1 on top of some custom algorithms, then bemoaning user stupidity and lack of decision-maker accountability when people fail to figure out what the outputs mean (like the Target data breach and the now-infamous “malware.binary” alert).

recommendations for smart UI design, or, how not to Microsoft it
Applying behavioral economic principles to user interaction design — through the interface, and also through the pattern of thought and action it engenders, or user experience — is enough subject matter for a book by itself. Within the constraints of a blog post, here are a few principles that I believe designers should follow:

simple, simple, simple!
Information overload is so common that it’s a cliche topic to write about. So many thinkers have jumped on the “complain about too much information” bandwagon that it’s become an unexamined given; with little more than a bachelor’s degree and a few statistics, in fact, it’s even possible to sell a book on it. What information overload really should mean is a variety of different modes of information meaninglessness: complexity requiring simplification, multiple disparate or duplicate measures where a single syncretic indicator would suffice, or overly fine levels of detail where a single summary measure would do. An approach to UI design savvy to behavioral economics should simplify data presentation with the data’s intentionality, or directedness, in mind — not simplicity for its own sake, but simplicity that compresses meanings and allows a decision-maker to consider a wider array of factors, or simplicity that hides irrelevant information and presents only domain-specific key performance indicators for each worker.
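To make “a single syncretic indicator” concrete, here is a small sketch that collapses several overlapping measures into one number a decision-maker can act on. (The metric names and the min rule are invented for illustration; the combination rule is a design decision, not a law.)

```python
# Three overlapping "is the service up?" measures that a dashboard
# might show side by side, forcing the reader to reconcile them.
metrics = {"ping_success": 0.993, "http_success": 0.981, "dns_success": 0.998}

# One syncretic indicator: the weakest link, since a user who
# can't resolve DNS doesn't care that ping works.
availability = min(metrics.values())
print(f"effective availability: {availability:.1%}")  # 98.1%
```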

context > lists
Users should treat programs as prostheses of thinking, offloading cognitively difficult or unsuitable tasks to computers. One thing users really do not need to be exposed to is an exhaustive inventory of the data and of every operation that can be performed on it; most of it won’t be used, and most users won’t care. Instead, use context-sensitive pop-ups, pop-overs, sub-pages and text areas to present only the most relevant menus and options for a given context, all aimed at helping human beings make mindful and wise decisions.
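Here is a minimal sketch of that idea: one lookup keyed on what the user has selected, instead of one exhaustive menu of everything (the context names and actions are hypothetical):

```python
# Map each selection context to the handful of actions that make
# sense there, instead of an exhaustive menu of everything.
CONTEXT_MENUS = {
    "table_cell":    ["edit value", "filter by this value", "copy"],
    "column_header": ["sort", "rename column", "hide column"],
    "chart":         ["change chart type", "export image"],
}

def menu_for(context: str) -> list[str]:
    # Unknown contexts get a minimal fallback, not the kitchen sink.
    return CONTEXT_MENUS.get(context, ["copy"])

print(menu_for("column_header"))  # only what's relevant right here
```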

design for grunts
By “grunts” I mean the most basic, front-line, indispensable people in your organization: the “pointy end” people who actually go out there and do the work of the organization. Maintenance workers, customer service reps, technical support and manufacturing staff are all grunts. And one thing about grunts: they are very good at breaking things, and at making things work with limited resources. The front-line grunt is the most fundamental user of any business IT system, a kind of least common denominator for the entire organization at large. If you can make a user interface useful to grunts, those same dynamics will apply to the entire organization.

Take a look at this screenshot from Knight’s Armament Company’s BulletFlight Level 3.

I don’t care how accurate it makes me, I am not filling out a damn form in the middle of a battle while I’m shooting my rifle.

BulletFlight is the industry gold standard in PDA-based ballistics; it offers an unusually comprehensive library of projectiles, munitions and firearms, including the signature M110 semi-automatic sniper system (SASS) that KAC itself manufactures, as well as multiple forms of ballistic coefficient calculation (G1, G2, G3, and G7), allowances for separate meteorological measurements or the increasingly common composite measure of density-altitude, and the ability to “true” or fine-tune the ballistic algorithms’ predictive model based on field observations. By themselves, none of these are unique features; many competitors like Horus ATRAG or Applied Ballistics offer the same if not deeper customization and detail.

What makes KAC BulletFlight so good, however, is its grunt design. Click on “Calculate Simple” from the basic rifle choice screen, and you get this:

Now… This I can use.

Notice how there are only three controls — distance in rough 25-yard increments, wind speed and wind direction. Will you be able to do something arcane and sophisticated on this screen, like recalculating your holdover and windage for ammunition with a different ballistic coefficient? Probably not. But that’s not the point. By manipulating only three controls, in wide touch-paths suitable for a gloved finger, the user can obtain ballistic solutions out to the effective range of the rifle — maybe not superbly match-precise, but good enough for battle, when you don’t have time to fill out a damn form just to shoot your rifle and shooting “minute of man” at speed trumps shooting minute of angle with precision.
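The design lesson translates directly into code: precompute the hard math, and expose only a few coarse inputs. Here is a toy sketch of the pattern; to be clear, the numbers in the table and the wind constant are invented placeholders, not a real ballistic solution:

```python
import math

# The "grunt" interface: three coarse inputs, one precomputed table.
# Holdover values (mils) per 25-yard step are invented placeholders.
HOLDOVER_MILS = {100: 0.0, 125: 0.2, 150: 0.4, 175: 0.7, 200: 1.0}

def quick_solution(distance_yd: int, wind_mph: float, wind_clock: int):
    # Snap range to the nearest 25-yard increment in the table.
    d = min(HOLDOVER_MILS, key=lambda k: abs(k - distance_yd))
    # Full-value wind at 3 or 9 o'clock, none at 12 or 6: crude but fast.
    wind_factor = abs(math.sin(math.radians(wind_clock * 30)))
    drift_mils = 0.1 * wind_mph * wind_factor  # invented constant
    return HOLDOVER_MILS[d], round(drift_mils, 1)

print(quick_solution(160, 10, 3))  # (0.4, 1.0): hold 0.4 mil, drift 1.0 mil
```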

KAC BulletFlight sets out to be a weapon, a battle tool, and in so doing produces a user experience that I consider the gold standard for smart UI design. Notice how the colors are muted and dull — this reduces the ambient light reflected on the shooter’s face. Notice how there are only three large measurements at the bottom of the screen, with color expressing direction (green is down, red is up for holdovers/hold-unders; yellow is clicks-down, blue is clicks-up on a scope). Notice how the distance measurements are crude and low-granularity — with typical match ammunition (most commonly M118LR, but for that matter any ammunition averaging less than 1 minute of angle over a 3-shot group), the deviation created by a minor range-estimation error is small enough to be well within practical limits (“minute-of-man” shooting).

The colors in BulletFlight’s output are what I term visual syncretism — the combination of multiple streams of information into a single perceptual schema. Rather than wasting more space saying “clicks down” or “clicks up” or “0.2 mils down/up”, BulletFlight embeds that information in the color. Further information can be added through variations in size (larger-font numbers for greater distances, for instance), typeface, kerning or any other typographic variable. Visual syncretism effectively “cheats” the limit of human attention by sneaking more information into the mind’s signal bandwidth — an important mode of simplification for complex systems serving grunts at the front line of an organization.
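In interface code, visual syncretism amounts to encoding a second channel of information into a property the eye already processes. Here is a sketch using the color convention described above; the function and its sign convention (positive means down) are my own assumptions for illustration:

```python
# Encode direction into color so one rendered number carries two
# facts at once: magnitude (the digits) and direction (the hue).
def render_correction(mils: float, kind: str) -> tuple[str, str]:
    """kind is 'hold' (holdover/hold-under) or 'click' (scope turret).
    Colors follow the convention described above: green = hold down,
    red = hold up, yellow = clicks down, blue = clicks up.
    Sign convention (positive = down) is assumed for illustration."""
    if kind == "hold":
        color = "green" if mils >= 0 else "red"
    else:
        color = "yellow" if mils >= 0 else "blue"
    return f"{abs(mils):.1f}", color

print(render_correction(-0.4, "hold"))  # ('0.4', 'red'): hold up 0.4 mil
```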

exploit creative misuse
A telling symptom of mismatched UI design is how customers misuse products. Here, PowerPoint stands as a shining example of being out of touch with customers. Originally intended as a presentation program, PowerPoint is today the most approachable and therefore “default” standard for visual mockups and basic design; for this audience, in fact, it is comparatively poor at making presentations. Take, for instance, the use of PowerPoint for stage design in the shooting sports: not Paint, not Photoshop, not GIMP nor anything complicated and actually good; no, PowerPoint is the gold standard in shooting-sports stage design.

Yup. Powerpoint.

How hard would it be for an experienced application-development team to produce software specifically for this unexploited market? Doesn’t Microsoft in fact collect user data and crash profiles automatically — so shouldn’t they be able to see this happening? The only explanation I can come up with for why Microsoft hasn’t changed how they package PowerPoint is that they still believe business presentation users are its primary market. And so they continue Windows 3.1’ing it as the preferred medium for presentations so hideously ugly and boring that they tax the mind itself (like this gem right here).

OH GOD MY EYES
THE GOGGLES THEY DO NOTHING

traceable decision architecture
Remember “showing your work” in math class? Somehow, once you leave school, that practice stops altogether, and not for a good reason. Even in complex circumstances around high-stakes issues, when a decision is made, the most detailed and informative explanation of it ends up being a decision memo or a presentation… and we all know how much people love to read those. In the long run, this practice produces serious organizational inefficiency. Decisions have to get re-made, again and again, expensively and with fresh research, simply because the previous decision-maker left only a partial record of how they arrived at the decision they did.
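Even short of a full notation system, a decision can be captured as a structured, queryable record rather than prose. Here is a minimal sketch; the schema is my own guess at a useful shape, not an established standard:

```python
from dataclasses import dataclass, field

@dataclass
class DecisionRecord:
    """An auditable 'show your work' record: what was decided, why,
    what was rejected, and what evidence the reasoning rested on."""
    question: str
    decision: str
    rationale: str
    alternatives_rejected: list[str] = field(default_factory=list)
    evidence: list[str] = field(default_factory=list)  # links, data refs

record = DecisionRecord(
    question="Which region gets the new warehouse?",
    decision="Midwest",
    rationale="Lowest weighted shipping cost across current customers",
    alternatives_rejected=["Southeast (labor cost)", "West (lead time)"],
    evidence=["shipping-cost-model-v3.xlsx"],
)
```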

In large part, the problem stems from the systemic linguistic difficulty of argumentation that I alluded to earlier: outside of debate and law, we simply don’t have an argument-notation system that presents auditable, discoverable chains of reasoning leading to a decision. The solution I favor is an open-source standard for debate and argumentation — a format for tokenizing arguments and logical relationships that I call Dialectica:

Dialectica: A hint at the answer.

I’ll explain more on this in a separate article.
