Hunting the Algorithm

Algorithms rule your life. Really. Yet I’ll wager that most of us don’t have a clue what an algorithm is, or what it does. Most of us can’t even spell it.

Nowadays, however, thanks to advanced algorithms, computers can learn and reprogram themselves. They can make their own decisions automatically, without human intervention. Visions of The Terminator franchise’s murderous robots could yet come true. Our digital ‘Brave New World’ is frighteningly close – and seriously alarming.

So, how can we address this issue? First, we have to decide what an algorithm really is, which is a bit like hunting the Snark: algorithms are everywhere, and yet they are invisible. The best definition is ‘a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer’. Note that last word: computer.

Algorithms are the mathematical rules that tell your computer what to do and how best to do it. Computer programs comprise bundles of algorithms, recipes for handling information. Algorithms themselves are nothing more than pathways for managing pieces of data automatically: if ‘A’ happens, then do ‘B’; if that doesn’t work, then do ‘C’. It’s pure ‘either/or’ logic. Nothing could be simpler – or maybe not ….
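
To make that concrete, here is a minimal sketch in Python of that ‘if A, then B; otherwise C’ branching. The driving scenario and every name in it are invented purely for illustration.

```python
# A toy rule-based 'algorithm': if 'A' happens, do 'B';
# if 'B' won't work, do 'C'. Purely illustrative.

def respond_to_obstacle(obstacle_detected: bool, braking_works: bool) -> str:
    if obstacle_detected:        # 'A' happens...
        if braking_works:
            return "brake"       # ...so do 'B'
        return "swerve"          # 'B' won't work, so do 'C'
    return "continue"            # no obstacle: carry on as before

print(respond_to_obstacle(True, False))  # -> swerve
```

Every step here is fixed in advance by the programmer; the program never deviates from the recipe it was given.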

Any computer program can therefore be viewed as an elaborate cluster of algorithms, a set of rules for dealing with changing inputs. The problem is that computers increasingly rule our lives, whether we like it or not, and we need to keep a close eye on these machines, because they can be dangerous.

Take a nasty example: one dark night in March 2018, a computer-driven SUV mowed down and killed a female cyclist in Arizona. Sensors fed the state-of-the-art onboard algorithms, which calculated that, given the vehicle’s steady speed of 43 mph, the object ahead must be stationary. However, objects in roads seldom remain stationary. New algorithms kicked in, looking for a split-second resolution. The SUV’s computer first decided it was dealing with another car, before realising it was bearing down on a woman with a bike hung with shopping baskets, who expected the SUV to drive past her. Confused, the computer handed control back to the human in the driver’s seat within milliseconds. It was too late: the cyclist, Elaine Herzberg, was hit and killed. The tech geeks responsible for the SUV then faced difficult questions: ‘Was this algorithmic tragedy inevitable?’, ‘Are we ready for the robots to be in charge?’ and ‘Who was to blame?’

‘In some ways we’ve lost control. When programs pass into code, and code passes into algorithms, and algorithms start to create their own new algorithms, it gets farther and farther away from humans. Software is released into a code universe which no one can fully understand …’ says Ellen Ullman, author of Life in Code: A Personal History of Technology.

The problem is that algorithms now control almost everything. Amazon, Facebook, Google, university places, welfare payments, mortgages, loans and the big banks all rely on the algorithms in their computers to make their decisions. Algorithms are seen as cool and objective, able to weigh a set of conditions with mathematical detachment and an absence of human emotion. ‘Computer says no’, the catchphrase of the Little Britain character Carol Beer, is all too real nowadays, thanks in large part to algorithms.

For now, however, we are dealing with first-generation, ‘dumb’ algorithms, which calculate solely on the basis of their human programmers’ input. The quality of their results depends on the thought and skill of the people who programmed them – people like us.

In the near future, something new and alarming will emerge. Tech pioneers are close to realising their dream of creating human-like ‘artificial general intelligence’ (AGI): computers that, once up and running, no longer need programming. Like Bender in Futurama, these machines possess intelligence: they can learn. A genuinely intelligent machine is able to question the quality of its own calculations, based on its accumulated memory, experience, knowledge and mistakes. Just like us. Critically, it can then modify its own algorithms, all by itself. As an analogy, it can change the recipe and alter the ingredients – without the busy chef realising what is happening.
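
As a toy sketch of that recipe-changing idea, consider a little Python program that rewrites its own decision rule whenever its answers turn out to be wrong. Everything in it (the class, the threshold, the update rule) is invented purely for illustration and describes no real AGI system.

```python
# A toy sketch of a program that 'changes its own recipe': it adjusts
# its own decision rule when its answers prove wrong. The class name,
# threshold and update rule are all invented for illustration.

class SelfTuningClassifier:
    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold  # the 'recipe' the program may alter

    def predict(self, score: float) -> bool:
        return score >= self.threshold

    def learn(self, score: float, truth: bool) -> None:
        # Question the quality of its own calculation...
        if self.predict(score) != truth:
            # ...and, if it was wrong, modify its own rule, by itself.
            self.threshold += 0.1 if not truth else -0.1

clf = SelfTuningClassifier()
for score, truth in [(0.6, False), (0.55, False), (0.9, True)]:
    clf.learn(score, truth)
print(round(clf.threshold, 2))  # the rule has quietly moved from 0.5 to 0.6
```

The chef set the threshold at 0.5; three examples later the recipe says 0.6, and no human changed it.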

Early iterations of AGI have already arrived: predictably, in the dog-eat-dog world of financial market trading. Wherever there’s a fast buck to be made, clever individuals are already training their customised computers to attack and beat the market. The world of high-frequency trading (HFT) relies on central servers hosting nimble, predatory algorithms that have learned to hunt and prey on lumbering institutional ones, tempting them to sell lower and buy higher by fooling them as to the state of the market.

According to Andrew Smith, Chief Technology Officer at ClearBank, a clearing bank in London: ‘In essence, these algorithms are trying to outwit each other; doing invisible battle at the speed of light, placing and cancelling the same order 10,000 times per second or slamming so many trades into the system that the whole market goes berserk – and all beyond the oversight or control of humans.’ (‘Franken-algorithms: the deadly consequences of unpredictable code’, The Guardian, 29 August 2018)

In the same Guardian article, science historian George Dyson points out that HFT firms deliberately encourage their algorithms to learn: they are ‘just letting the black box try different things, with small amounts of money; and, if it works, reinforce those rules.’ The algorithms are making these decisions entirely by themselves. The result is that we now have computers where nobody knows what the rules are, because the algorithms have created their own. We are effectively allowing computers and their algorithms to evolve on their own, just as nature evolves organisms.
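
Dyson’s ‘try things and reinforce what works’ loop can be sketched in a few lines of Python. The strategies, weights and pay-offs below are entirely made up, and real trading systems are vastly more complex; the point is only to show how rules that nobody wrote can come to dominate.

```python
import random

# A minimal sketch of 'let the black box try different things with
# small amounts of money and, if it works, reinforce those rules'.
# All strategies and pay-offs here are invented for illustration.

random.seed(42)
strategies = {"buy_dip": 1.0, "sell_spike": 1.0, "hold": 1.0}  # starting weights

def simulated_profit(strategy: str) -> float:
    # Stand-in for a small live trade; 'buy_dip' is given a slight edge.
    return random.gauss(0.1 if strategy == "buy_dip" else 0.0, 1.0)

for _ in range(1000):
    # Pick a rule in proportion to its current weight...
    pick = random.choices(list(strategies), weights=list(strategies.values()))[0]
    # ...try it with a 'small amount of money'...
    profit = simulated_profit(pick)
    # ...and reinforce the rule whenever it pays off.
    if profit > 0:
        strategies[pick] *= 1.05

print(strategies)  # the profitable rule ends up dominating the others
```

Nobody told the program which strategy to prefer; the preference emerged from the feedback loop, which is exactly what makes the resulting rules so hard to inspect.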

This is potentially dangerous territory. Who is in charge when situations get out of hand?

Eighty years ago the science fiction writer Isaac Asimov foresaw these problems in his ground-breaking Robot series of short stories and novels, of which I, Robot is the most famous. Asimov formulated ‘Three Laws of Robotics,’ which make even more sense today, as we stand on the brink of a future world infused with robots. These Laws are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given to it by a human being except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Asimov’s stories focus on the perils of ‘technology getting out of control’, when robots become problems, either because of conflicts between the Laws or because humans interfere with the Laws, allowing robots to go their own way. Now, Asimov’s fictional concerns are coming true: today we face the challenge he only imagined. The problem remains: what can we do about potentially vicious ‘creatures’ that may escape into the wild?

One truism is that we cannot disinvent things. From the crossbow (which a medieval Pope tried to ban) to the torpedo and the atom bomb, our clever and murderous species has invented dangerous toys, and we have all had to live with their lethal consequences. Computer algorithms are no different. We are stuck with them.

If we don’t find a way of controlling algorithms, we may wake up one day to find that they are controlling us. Algorithms are already telling us what to do, particularly in public services such as law enforcement, welfare payments and child protection. Algorithms have become much more than data sifters; they now act more like gatekeepers and policymakers, deciding who is eligible for access to public resources and assessing risks while sorting us into ‘deserving/undeserving’ and ‘suspicious/unsuspicious’ categories. Helped by their ubiquitous algorithms, computers are now making our decisions for us.

However, we have to recognise that not all governance is data-based. Real life has to deal with the messy complexities of decision-making among conflicting demands. Policymaking is a human enterprise that requires us to deal with people, not numbers. It’s time to look at Asimov’s concerns anew, because soon it may be too late.

Unless you can guarantee unplugging the robot, of course ….
