Code, Lies, and Masking Tape
Thu April 19th 2018
The recent scandals involving Facebook and Cambridge Analytica have revealed just how vulnerable we are, as users of the internet, to manipulation and control through clever, targeted marketing. It has felt to me over the last few weeks as though we’ve just realised we ceded control, without our knowledge, a long time ago; as if the mechanisms of the web were many steps ahead of us — hoovering up our private data, analysing our behaviour, and plotting to change our culture, whatever that term means for you. In many ways, this is exactly what has happened.
Am I the problem?
It’s at this point that I ask myself, how much did I have to do with that? After all, I’ve spent the last 17 years designing the landscape in which these things occurred. I create websites, digital strategies, and marketing campaigns whose very aim is to capture attention and change behaviour — for the private interests of the companies I am contracted to. Am I any better than those who do so for political ends?
I think this is a crucial question for us to ask ourselves: how ethical is our approach to digital design and online marketing? It’s a question that, in the face of the ethical capitulation of Facebook during the Cambridge Analytica scandal, I was not immediately able to answer.
Of course, I think that I’m a good person, and I believe wholeheartedly that I act in an ethical way. But how do I know this? More importantly, can I measure my approach against some sort of ethical code?
Attempts to develop some sort of unified code of practice in our industry have generally floundered, caught up in the tangled weeds of a multidisciplinary environment rich with opposing ideas and values. Programmers and marketers are like warring tribes, designers are too busy discussing typography to notice anyone else, and the guy who updates the social media account is only 12.
Because of this, I felt compelled to create my own code — or at least to express the code I hope that I live and work by, in terms that others could understand.
The four horsemen of the apocalypse
I decided to describe my approach in the context of the key areas of my business. If I think back over almost twenty years of design practice, I would say that I can characterise some of the less wholesome behaviour of my industry into four types of ethical dilemma:
1. Deceit

Deceit is widespread in online marketing — from outright lies to disingenuous promises — and there is no better example of this than search engine optimisation (SEO). SEO has long been the snake oil of the web industry, a palliative cure for any number of marketing, design and coding failures, though it is difficult to prove if it ever works, fails, or makes no impression at all. Its methods (to the customer) are obscure, its language arcane, and at its heart it rests on a deep-seated lie: that there is an optimal but secret solution which will enable you to reach everyone if only you pay someone enough to uncover it. Like a religion, the very absence of proof demands faith: a manipulative tool excellent for emptying wallets and ensuring only the priests of this particular religion get rich.
This is not to say that optimisation is wholly imaginary: optimisation as a process is extraordinarily valuable. The meaning of that word, however, has been debased by years of greed and abuse. When it is used in the context of online marketing, it is an excellent exemplar of how we use technical language as a form of deceit.
Ethical rule #1: I will not use technical language, or otherwise obscure the mechanisms of my industry, to deceive either my clients or the end user of the products I create.
2. Design

Design is almost universally acknowledged as a good thing. Designer clothes cost more. An architecturally designed house is better than a…house. Got a problem? Apply some design thinking. That shit is magical.
UX, or User Experience, is a rich and wonderful area of design practice and something I do every day. It’s important to state that UX design is not a moral force — it is a process, and it can be used ethically or diabolically.
Consider the technique of ‘slanty design’ — a method whereby functionality or usability is deliberately removed so as to encourage only certain aspects of user behaviour. A real-world example of this might be a park bench that allows a person to sit, but has handrails placed along it making it impossible to lie down — thereby discouraging vagrants from using it as a bed. The term gets its name from desks in a public library that were deliberately slanted, to keep people from getting comfortable or putting their coffee cups on them.
Most websites and applications slant towards certain outcomes too — those desired by the business running them. Facebook encourages the sharing of more and more personal data, for instance — but why exactly? Because your data is the product it sells to marketing people, like me (and yes, like Cambridge Analytica) for a vast profit.
The question facing us is: how does the design impact both the client and the users? The very power of design compels us to ask.
Ethical rule #2: I will seek not only to design good solutions to the challenges of the brief, but to consider the impact of these design choices on the end user. Where possible I will rewrite the brief to include these consequences.
3. Doubt

Have you ever been scared that your business was missing out on some wonderful opportunity? Have you ever worried that your website wasn’t optimal because someone sent you an email about your keywords? Have you ever started to doubt your choice of smartphone after seeing an ad for a different model? Fear, Uncertainty, and Doubt (FUD) are used as weapons by marketers to undermine your confidence in what you have, and encourage you to replace it with something else.
They’re also used to make clients accept features that don’t work well. They’re used to ramp up costs. They’re used to create delays and extend timetables. FUD is the opposite of transparency and is a form of deceit that masquerades as useful information. FUD is the Fake News of the web industry.
One group of companies that uses these tactics to great effect is cybersecurity vendors: “There’s lots of bad things out there and our widget will protect you from them.” This is seldom true, but it works with remarkable regularity. Even the best security products in the world are useless unless they are deployed as part of an overall strategy in which people, policies, and processes work towards a common goal. But the FUD strategy plays on our fears like a goddamn concert pianist.
Ethical rule #3: I will seek to be transparent about our services and our processes, providing honest information based on data, evidence, and experience — even when it hurts to do so.
4. Duty (and a lack thereof)
Building stuff, like sophisticated social media websites, phone apps, or business services, is hard. It’s quite frankly amazing that any of it works at all. More often than not, the teams behind an application have only barely got it working by the time it is rushed to market. There is usually neither the time nor the budget to fine-tune things and shore up the gaping holes in security that were left open when it came time to release.
We see this time and again with major security breaches on platforms such as LinkedIn, or the Equifax scandal in 2017. Facebook’s own mess this past month was part hubris, part bad engineering, and part greed. In all of these cases the security issues were known before the breach occurred, but the integrity of user data was not considered a high enough priority to warrant immediate action.
Those who create online services need to take far more responsibility for the security of their users than they currently do, and be transparent in their approach to data. Far from being an abstract concern of data activists, recent events have shown everyone just how integral this issue is to all of us.
If I were the designer and engineer of a building, I would have an ethical and moral obligation to ensure, as far as practical, that the building did not kill or otherwise harm people. Why should a ‘soft’ product such as design, code, or an AdWords campaign be any different?
Ethical rule #4: While working in the interests of my client, I will also seek to protect the end user of the products I create from harm, and advocate for their rights to my client where needed.
Like a stuck record
If you’re as savvy a reader as I suspect you to be, you will have noticed some repetition (or at least some overlap) between these ethical rules. Good on you. This is because I like to think that they work in pairs:
- Honesty and Transparency (rules #1 and #3) replace Deceit and Doubt; these represent our responsibilities toward the client.
- Design and Duty (rules #2 and #4) must be co-opted for good and not evil; these represent our responsibilities to the end user or audience.
Yes, these are simple, and deliberately so. There are several qualitative measures of communication, or particular design methods, that I could dig into for hours — but they would do little to improve these basic principles. The goal here is to have an ethical touchstone — something you can glance at and say “Yup. We’re not acting like assholes today” and have reasonable confidence that you’re right.
Not everyone will agree with me, and perhaps I’m missing something big. But these rules are born of conversations with others in our industry, and I believe they hold water. It’s a conversation I’d like to keep going — so feel free to add your voice.