Perversely, a Level Playing Field is the Problem

A level playing field, where no-one is dogmatic about what is “appropriate” but simply chooses what is appropriate from the available tools & methods, sounds like a healthy state of affairs, no? Free and democratic, what’s not to like?

But what if some tools & methods – and worldviews – have advantages that have nothing to do with their appropriateness?

This is the root of my 20-year agenda here, from before tech (ICT – information and communications technology) became as ubiquitous as it has, though the urgency has always been there because the rise of both the tech and the problem was equally predictable. That’s the “I told you so” dealt with. What about the problem?

In that time the problem has escalated from being simply a threat to good order in technical circles, in business cost-benefit analyses and the like, to being a threat to the whole of free democratic society. I used to think I might be exaggerating, until Jamie Bartlett came along ringing the same alarm bell. In fact as recently as my previous post, I tagged Jamie into a longer-running dialogue about moderating the “pace” of social media dialogue. As well as his succinct Medium blog post “The War Between Technology & Democracy”, his most recent book has a similar but slightly longer title: “The People Vs Tech: How the Internet is Killing Democracy (and how We Save It)”. (And I’ve still not read it!)

The specific (part of the) problem today arose from a Twitter exchange. This tweet from Jamie and the thread of dialogue below it:

Including this:

The problem is as old as philosophy itself: effectively the choice between (objective) facts and (subjective) values. The objective side can be cast any number of ways, from logical positivism to greedy objective reductionism – or plain “scientism” in my word of choice. The subjective or qualitative ethical (“what’s best?”) side varies enormously in expression, all taking the focus away from objective measurable outcomes – even those weighted with fuzzy risk factors – towards qualities of people and processes. Deontological, in the sense of being driven not by the existence of objective outcomes but by harder-to-grasp and harder-to-ground qualities, virtues and the like.

As I say, none of this is new – even here I must have hundreds of posts on the topic. The point is why this philosophical issue is so problematic in our times of ubiquitous tech.

It’s very simple – as in my tweeted response above – this choice between the entirely objective and the at least partly subjective comes down to one being much easier than the other. Easier to define, easier to model, easier to … program, easier even when that programming involves communication algorithms, data-gathering and machine learning. The less clearly defined subjective stuff is almost a non-starter for programmable models, unless it is modelled using objective analogues for the real (subjective) thing.

For short:
Objective = easy.
Subjective = hard.

So then the competition for the communication of ideas takes over.

Memetics is a word I’m happy to use for it; many are not, but it’s just a word. Whatever the word, and whatever the content of the ideas, decisions and justifications, it’s the easy stuff – the easy to grasp, the easy to fit, the easy to use, the easy to communicate – that has a distinct advantage. All other things being equal, on a level playing field, it’s only natural that the objective stuff of utilitarian philosophy wins out. Automation by algorithms – without qualitative moderators – simply reinforces that natural process.

The naturalistic fallacy assumes that what is natural is necessarily good. It’s not.

“A level playing field” is part of the problem, reducing a complex field to a single easy-to-visualise variable. “It’s complicated” always loses out to a simple justification.
