
The New Face of War?

Like many others, I was surprised by the announcement by Air Chief Marshal Sir Stephen Hillier, Chief of the Air Staff, that his Reaper drone crews will be eligible for the new Operational Service Medal for their contribution to the war in Syria and the defeat of ISIS (also known as Daesh). Traditionally, medals have been awarded on the basis of risk and rigour. It may seem a reasonable assumption that there is not much risk in sitting in a nice warm office up at RAF Waddington in Lincolnshire, where they operate their Unmanned Air Vehicles (UAVs). More like playing computer games, perhaps? Where is the risk and rigour in that?

On digging deeper, however, I have changed my mind. There is no doubt that the RAF’s drone operators have made a major contribution to the defeat of ISIS and deserve official recognition.

An unnamed pilot said the drone operators’ job is very different to his Typhoon force operations. The RAF pilot, with 30 long and dangerous combat missions over Syria during his Akrotiri tour, made the point:

‘In some ways it is identical, in some way it is totally different … I think they have it a lot harder in some ways …

‘What people don’t realise is the emotional investment they end up having in it. They will watch a target for weeks on end and they will understand every part of that target’s life.

‘You can’t not become emotionally involved – we need to give those boys and girls a lot more credit that I think people are giving them.’

The pilot’s comments echo the words of the Defence Secretary, Gavin Williamson, who has said: ‘The campaign against Daesh is one of which our Armed Forces can be extremely proud. I am pleased that today those who have bravely fought against such untold evil will get the recognition they deserve.’

During the campaign to destroy the extremists in Iraq and Syria, drones were used to carry out strikes, gather intelligence and conduct surveillance. While front-line operational aircrew deploy for maybe six months or a year at a time, drone operations staff face different challenges. The Reaper force is on duty 24/7/365, monitoring an enemy that is elusive, dangerous and determined to attack the West in any way it can in pursuit of its twisted, fanatical world view. The personal strain and pressure of watching these individuals’ every move is immense and unrelenting.

Drone crews have been doing that for every working day on Operation Shader (codename for the Syrian campaign) for four years. ACM Hillier pointed out that for the drone pilots, sensor operators and mission intelligence co-ordinators of the Reaper crews, ‘It is not some remote support operations – they are doing operations, engaged in active operations every minute of every day. This often involves weeks of monitoring individuals and then, once a strike has been executed, another vast amount of time is spent ensuring it was successful.’

Of course, sometimes the decision to kill whole groups by remote control is taken shortly before going home to the family for supper and helping to put the kids to bed. Drone pilots face questions like: ‘What did you do today, Daddy?’

As a result the pressure has taken its toll. ACM Hillier confirmed that drone crews are monitored ‘extremely closely for the risk of psychological harm … these people see some quite stressful things. So we have provided the opportunity for counselling, and an environment where we look after each other – a full support network exists. We need to make sure we don’t end up with them [the drone pilots] getting psychologically fatigued.’

This insight into the combat stress of the new warfare reflects how, in the last decade, drones have become a new battlefield in the ‘vertical flank’. As long ago as 2004, the militant group Hezbollah began to use ‘adapted commercially available hobby systems for combat roles’. These modified toys can be bought easily, as the Gatwick debacle in December 2018 demonstrated, and – at prices ranging from US $200 to $700 – they are as cheap as chips to the military.

Also, adapted drones are lethal. For example, in August 2014 well-directed Russian-backed artillery fire was used to devastating effect in Ukraine, leaving three mechanised battalions a smoking ruin. This mission reached its goal because the units and their positions were identified by a mini-drone with a TV camera: the Ukrainian government lost 200 vehicles – and very-short-range air defences weren’t able to detect the deadly eye in the sky.

Armed services worldwide are taking this new threat very seriously indeed – as well as the new opportunities drones offer.  Whilst much attention has been focused on hypersonic weapons and long-range missiles, small UAVs pose new risks and are a serious challenge to air defences on land and sea.

In America, Dan Gettinger (Co-director: Center for the Study of the Drone) warns, ‘The US military – and any other military – have to prepare for an operating environment in which enemy drones are not just occasional, but omnipresent … Whether it’s a small, tactical UAV, mid-size or strategic, drones of any size will be commonplace on the battlefield of the future.’

He recognises the asymmetrical nature of the drone, armed or reconnaissance. Drones are cheap, hard to detect and don’t bring politically embarrassing body bags to the attention of the media or the folks back home. Drone technology has become a cat-and-mouse game, as militaries struggle to deal with the big threat of little drones.

For example, whilst US ‘supercarriers’ – with 80 warplanes and 5000 sailors – can dominate the narrow waters of the Persian Gulf, these 100,000-tonne behemoths are intensely vulnerable to hundreds of tiny Iranian attack drones – or a swarm of radio-controlled, fast-attack craft. The only remedy is lots of close-range defensive small calibre guns – and the chances are that some of the enemy will still get through. Half a dozen US $1000 missiles can easily disable a vessel costing US $50 billion. As the Americans say: ‘You do the math – go figure.’
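Doing the math takes one line. A back-of-envelope sketch in Python, using the article’s own rough figures (illustrative only, not an official costing):

```python
missiles_cost = 6 * 1_000           # half a dozen US $1,000 missiles
carrier_cost = 50_000_000_000       # the article's US $50 billion vessel
ratio = carrier_cost // missiles_cost
print(f"cost exchange ratio: {ratio:,} to 1")   # cost exchange ratio: 8,333,333 to 1
```

An attacker risking $6,000 against a $50 billion target enjoys a cost-exchange ratio of over eight million to one – the essence of asymmetric warfare.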

Inevitably the marketplace has latched onto the commercial possibilities of drones. Driven by a global increase in the use of mini-drones by terrorists and criminals, the anti-drone market is expected to grow to US $1.85 billion by 2024, according to the US business consulting firm Grand View Research.

‘As drones become deadlier, stealthier, faster, smaller and cheaper, the nuisance and threat posed by them is expected to increase, ranging from national security to individual privacy,’ Grand View warns. ‘Keeping the above-mentioned threat in mind, there are significant efforts – both in terms of money and time – being invested in the development and manufacturing of anti-drone technologies.’ The Dutch have even trained eagles to attack drones.

Britain’s drone policy appears to be primarily defensive, as the RAF is well aware that the F-35 Lightning (at GBP £65 million a throw) is unlikely to be available in large numbers. Reaper drones and their UAV successors (at about GBP £14 million a copy) can offer a better bang for the taxpayers’ buck. In a speech at the Royal United Services Institute on 11 February 2019, UK Defence Secretary Williamson announced that the United Kingdom was ready to develop and deploy a swarm of drones before the year was out. ‘I have decided to develop swarm squadrons of network enabled drones capable of confusing and overwhelming enemy air defences,’ he said. ‘We expect to see these ready to be deployed by the end of this year [2019].’

This is interesting: it suggests a ‘weapons mix’, where drones accompany crewed fighters as robotic wing mates. It’s cheap – and the technology already exists in the US: for example, the manoeuvrable target drone developed by Kratos Defense & Security Solutions.

The danger, as ever in UK defence procurement, is that the dead hand of Ministry of Defence jobsworths will – once again – gold plate and change the specification, starve it of funds, double the cost and, finally, draft a rotten contract just in time for the next round of defence cuts.

But that’s another story …


Hunting the Algorithm

Algorithms rule your life. Really. I’ll also wager that most of us don’t have a clue what an algorithm is, or what it does. Most of us can’t even spell it.

Nowadays, however, thanks to advanced algorithms, computers can learn and reprogram themselves. They can make their own decisions automatically, without human intervention. Visions of The Terminator franchise’s murderous robots could come true, which is worrying for all of us. Our digital ‘Brave New World‘ is frighteningly close – and seriously alarming.

So, how can we address this issue? First, we have to decide what an algorithm really is, which is a bit like hunting the Snark. They are everywhere and yet they are invisible. The best definition is, ‘a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer.’ Note that last word: computer.

Algorithms are the mathematical rules that tell your computer what to do and how best to do it. Computer programs comprise bundles of algorithms, recipes for handling information. Algorithms themselves are nothing more than pathways to manage pieces of data automatically. So, if ‘A’ happens, then do ‘B’; if that doesn’t work, then do ‘C’. It’s pure ‘either/or’ logic. Nothing could be simpler; or maybe not ….
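That ‘either/or’ pathway can be written down directly. A toy sketch in Python (the events and rules are invented purely for illustration):

```python
def next_step(event):
    """A minimal algorithm: nothing but 'either/or' rules."""
    if event == "A":        # if 'A' happens...
        return "do B"       # ...then do 'B'
    elif event == "B":      # if that didn't resolve things...
        return "do C"       # ...then do 'C'
    else:
        return "stop"       # no rule matches: halt

print(next_step("A"))   # do B
print(next_step("B"))   # do C
```

That really is all an algorithm is at bottom: a fixed cascade of rules the machine follows without deviation.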

Any computer program can therefore be viewed as an elaborate cluster of algorithms, a set of rules to deal with changing inputs. The problem is that computers increasingly rule our lives, whether we like it or not. We need to keep a close eye on these robotic machines, as they can be dangerous.

Taking a nasty example, one dark night in March 2018 a computer-driven SUV mowed down and killed a female cyclist in Arizona. Fed by its sensors, the state-of-the-art onboard algorithms calculated that, given the robot SUV’s steady speed of 43 mph, the object ahead must be stationary. However, objects in roads seldom remain stationary. New algorithms kicked in, looking for a split-second resolution. The SUV’s computer first decided it was dealing with another car, before realising it was bearing down on a woman pushing a bike hung with shopping baskets, who expected the SUV to drive past her. Confused, the SUV computer handed control back to the human in the driver’s seat within milliseconds. It was too late: the cyclist, Elaine Herzberg, was hit and killed. The tech geeks responsible for the SUV then faced difficult questions like: ‘Was this algorithmic tragedy inevitable?’, ‘Are we ready for the robots to be in charge?’ and ‘Who was to blame?’

‘In some ways we’ve lost control. When programs pass into code and then into algorithms, algorithms start to create their own new algorithms, it gets farther and farther away from humans. Software is released into a code universe which no one can fully understand …’ says Ellen Ullman, author of Life in Code: A Personal History of Technology.

The problem is that algorithms now control almost everything. Amazon, Facebook, Google, university places, welfare payments, mortgages, loans and the big banks all rely on the algorithms in their computers to manage their decisions. Algorithms are seen as cool and objective, offering the ability to weigh a set of conditions with mathematical detachment and an absence of human emotion. ‘Computer says “No”’, the catchphrase of the Little Britain character Carol Beer, is all too real nowadays, thanks in large part to algorithms.

However, currently we are experiencing first-generation, ‘dumb’ algorithms, which calculate solely on the basis of the input of their human programmers. The quality of their results depends on the thoughts and skills of the people who programmed them – people like us.

In the near future, something new and alarming will emerge. Tech pioneers are close to realising their dreams of creating human-like ‘artificial general intelligence’ (AGI): computers that don’t need programming, once they are up and running. Like Bender in Futurama, these machines possess intelligence: they can learn. A genuinely intelligent machine is able to question the quality of its own calculations, based on its memory and accumulation of experience, knowledge and mistakes. Just like us. Critically, it can then modify its own algorithms, all by itself. As an analogy, it can change the recipe and alter the ingredients – without the busy chef realising what is happening.

Early iterations of AGI have already arrived: predictably, in the dog-eat-dog competitive world of financial market trading. Wherever there’s a fast buck to be made, clever individuals are already training their customised computers to attack and beat the market. The world of high-frequency trading (HFT) relies on central servers hosting nimble, predatory algorithms that have learned to hunt and prey on lumbering institutional ones, tempting them to sell lower and buy higher by fooling them as to the state of the market.

According to Andrew Smith, Chief Technology Officer at ClearBank, a clearing bank in London: ‘In essence, these algorithms are trying to outwit each other; doing invisible battle at the speed of light, placing and cancelling the same order 10,000 times per second or slamming so many trades into the system that the whole market goes berserk – and all beyond the oversight or control of humans.’ (‘Franken-algorithms: the deadly consequences of unpredictable code‘, The Guardian, 29 August 2018)

In the same Guardian article, science historian George Dyson points out that HFT firms deliberately encourage the algorithms to learn: they are ‘just letting the black box try different things, with small amounts of money; and, if it works, reinforce those rules.’ These algorithms are making these decisions by themselves. The result is that we now have computers where nobody knows what the rules are because the algorithms have created their own rules. We are effectively allowing computers and their algorithms to evolve on their own, the same way nature evolves organisms.
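Dyson’s description – try different things with small stakes and reinforce whatever works – is, at its core, reinforcement learning. A minimal sketch of the idea in Python (a toy ‘multi-armed bandit’, not any real trading system; the payoffs and parameters are invented for illustration):

```python
import random

def reinforce(payoffs, rounds=10_000, epsilon=0.1, seed=0):
    """Toy 'black box': mostly exploit the best-paying action so far,
    occasionally explore a random one, and reinforce whatever works."""
    rng = random.Random(seed)
    totals = [0.0] * len(payoffs)    # accumulated reward per action
    counts = [0] * len(payoffs)      # times each action was tried
    for _ in range(rounds):
        if rng.random() < epsilon or not any(counts):
            arm = rng.randrange(len(payoffs))                     # explore
        else:
            arm = max(range(len(payoffs)),
                      key=lambda a: totals[a] / counts[a] if counts[a] else 0.0)
        counts[arm] += 1
        totals[arm] += payoffs[arm] + rng.gauss(0, 0.1)           # noisy reward
    return max(range(len(payoffs)), key=counts.__getitem__)       # most-reinforced action

# With true payoffs [0.1, 0.5, 0.9], the box settles on the last (best) action.
print(reinforce([0.1, 0.5, 0.9]))
```

Nobody wrote a rule saying ‘prefer the last action’; the machine discovered and reinforced it by itself, which is exactly the point Dyson is making.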

This is potentially dangerous territory. Who is in charge when situations get out of hand?

Eighty years ago the science fiction writer Isaac Asimov foresaw these problems in his ground-breaking Robot series of short stories and novels, of which I, Robot is the most famous. Asimov formulated ‘Three Laws of Robotics,’ which make even more sense today, as we stand on the brink of a future world infused with robots. These Laws are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given to it by a human being except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
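The Laws form a strict priority cascade, which is easy to express in code. A toy sketch (the boolean facts about a candidate action are hypothetical, invented purely for illustration):

```python
def permitted(action):
    """Screen a candidate robot action against Asimov's Three Laws,
    checked in strict priority order. `action` is a dict of
    (hypothetical) boolean facts about the action's consequences."""
    # First Law: harming a human, by deed or by inaction, is absolutely forbidden.
    if action.get("harms_human") or action.get("inaction_harms_human"):
        return False
    # Second Law: disobeying a human order is forbidden,
    # unless obeying it would violate the First Law.
    if action.get("disobeys_order") and not action.get("order_conflicts_first_law"):
        return False
    # Third Law: self-destruction is forbidden,
    # unless self-sacrifice is demanded by a higher Law.
    if action.get("destroys_self") and not action.get("required_by_higher_law"):
        return False
    return True

print(permitted({"harms_human": True}))                    # False: First Law
print(permitted({"disobeys_order": True,
                 "order_conflicts_first_law": True}))      # True: Second Law yields
```

Of course, the hard part – and the engine of Asimov’s plots – is not the cascade itself but deciding whether those boolean facts are true in the messy real world.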

Asimov’s stories focus on the perils of ‘technology getting out of control’, when robots become problems, either because of conflicts between the Laws or because humans try to interfere with the Laws, allowing robots to go their own way. Now, Asimov’s fictional concerns are coming true: today we face the challenge he only imagined. The problem remains: what can we do about potentially vicious ‘creatures’ that may escape into the wild?

One truism is that we cannot disinvent things. From the crossbow (which a medieval Pope tried to ban) to the torpedo and the atom bomb, our clever and murderous species has invented dangerous toys. We have all had to live with their lethal consequences.  Computer algorithms are no different. We are stuck with them.

If we don’t find a way of controlling algorithms, we may wake up one day to find that they are controlling us. Algorithms are already telling us what to do, particularly in public services such as law enforcement, welfare payments and child protection. Algorithms have become much more than data sifters; they now act more like gatekeepers and policy makers, deciding who is eligible for access to public resources and assessing risks, whilst sorting us into ‘deserving/undeserving’ and ‘suspicious/unsuspicious’ categories. Helped by their ubiquitous algorithms, computers are now making decisions for us.

However, we have to recognise that not all governance is data-based. Real life has to deal with the messy complexities of decision-making among conflicting demands. Policymaking is a human enterprise that requires us to deal with people, not numbers. It’s time to look at Asimov’s concerns anew, because soon it may be too late.

Unless you can guarantee unplugging the robot, of course ….

Who Needs Money Anyway?

Here’s a riddle for you:

Question ‘What loves money, but hates cash?’
Answer ‘Governments and banks.’

Governments love money – your money – because it helps them to bribe electors. Not for nothing has modern democratic politics been dismissed as the art of redistributing taxpayers’ money. There is no such thing as ‘government money’ – just your hard-earned dosh, taken from your pocket by law. As Louis XIV’s Minister of Finance, Jean-Baptiste Colbert, put it so elegantly, ‘The art of taxation is in plucking the goose so as to obtain the largest amount of feathers with the least amount of hissing.’

But modern governments, like banks, hate real money. This deep-seated dislike of cash in people’s pockets exists because governments cannot control it. Money is mobile; money is anonymous. Money is running loose out there in the wild. For the control freaks of every government and their snivel serpents, money is an untidy, reckless, irresponsible commodity, out of government control and, horror of horrors, in the hands of ordinary people. Cash makes illegal economic activities easier – from terrorist financing to paying for drugs – because it cannot be traced; and in a cashless economy, counterfeit notes would be useless. So it’s no surprise that governments around the world want to eliminate cash, forcing all transactions through bank accounts that can be taxed.

Banks in their turn dislike hard cash, albeit for different reasons: all those dirty banknotes and coins need to be counted and added to ledgers. Cash needs branches you can visit to check on your dosh or negotiate a loan with a human being. However, branches require staff – expensive staff. Replacing them with self-service apps allows the senior managers of financial institutions to control customers directly and, of course, such a system cuts costs and boosts profits. For the big banks, what’s not to like?

So ‘Big Finance’, enthusiastically backed by the man in Whitehall, is pushing hard for what they see as the bright new future of a cashless society; a world where IT and digital communications can get the customers to do their work for them without recourse to all that messy money. The most powerful advocate of change is the payments industry, with its credit card companies and banks. Every transaction that is done using cash is really a missed opportunity for Visa and MasterCard to earn another 2.5%. Unsurprisingly it is in their interest to trumpet that cash is redundant, inconvenient and inefficient.
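The card networks’ incentive is simple arithmetic. A back-of-envelope sketch using the article’s ~2.5% figure (illustrative only; real interchange fees vary by card and country):

```python
fee_rate = 0.025                 # the article's ~2.5% per-transaction cut
cash_spend = 100.0               # £100 spent in cash rather than on card
missed_fee = cash_spend * fee_rate
print(f"fees forgone on £{cash_spend:.0f} of cash spending: £{missed_fee:.2f}")
# fees forgone on £100 of cash spending: £2.50
```

Multiply that £2.50 across billions of everyday cash transactions and the payments industry’s enthusiasm for abolishing cash needs no further explanation.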

However, the cashless society is a con – with Big Finance behind it. Already, all over the Western world, banks are trying to shut down branches and ATMs. They are trying to push you into using their own digital banking infrastructure to make digital payments. Financial institutions want to force everyone to put all their money through the banks’ digital systems to be recorded and harvested. Cost cutting and control is the order of the day, and our brave new digital world is making it possible. IT has made a cashless society a reality – and this demands some hard thought, because the cashless society will change all our lives.

For a start, in many cases, money as a traditional exchange of value is losing ground. Money is becoming much more a matter of ‘credit reassignment’ than a transfer of physical material. Everywhere today, people are using credit and debit cards on a regular basis in everyday situations, such as shopping. These are cashless transactions. The questions are, ‘What exactly is this cashless society?’ and ‘What are the real impacts of living in a cashless world?’

Perhaps the best finance model to consider is Sweden, which is fast becoming the world’s first completely cashless society. For example, none of the banks around Stockholm’s main Odenplan Square handle cash any more. If you want a cup of coffee and a kanelbulle in the country’s largest café chain, you can only pay by card or by using your smartphone. No money, please! There is also no chance of using coins or notes if you want to hop on one of the shiny blue buses purring past – contactless payment only. Tack!

The Swedes see nothing unusual about this cashless phenomenon. Most Swedish banks have long stopped allowing customers to withdraw or pay in cash over-the-counter. A quarter of people living in Sweden use cash only once a week. Sweden’s cashless trend has been praised as the way of the future, mainly by banks and the credulous commentariat, and is increasingly being copied in places as far apart as Idaho and India.

However, not everyone is convinced that the Nordic nation’s embrace of living by technology is quite the happy Valhalla claimed. There are growing concerns about the pace of change. For example, many older Swedes are becoming alarmed amid concerns that this cashless society is causing problems for the elderly and other vulnerable groups.

‘As long as it is legal to use cash in Sweden, we think people should have the option to use it and be able to put money in the bank,’ says Ola Nilsson, of the Swedish National Pensioners’ Organisation and its 350,000 members. ‘We’re not against the cashless society, we just want to stop it from going too fast.’

They are right to be wary, because the cashless society hides some very dangerous trends. The debacles of TSB’s disastrous IT migration and Visa’s 2018 outage are cautionary tales for our potential cashless future. The more we rely on technological solutions, the bigger the problem when they fail.

Moreover, there are some clear downsides to a cashless world. First, it will alienate a lot of people and put them on the fringe of society, if not make them outlaws. Not everyone has a smartphone, a bank account or even a permanent address.

Nor do all developed countries wish to go down the Scandinavian route. Switzerland, for example, rejects non-cash transactions as an invasion of privacy. The Swiss were not even keen on cashless train travel cards, worrying that the government could snoop on their travel habits.  Many people wouldn’t want their day-to-day spending habits recorded and monitored.

Then there is the serious matter of censorship. There is already an ugly precedent. In February 2016, Uganda blocked the popular Mobile Money service during its elections. The allegation was that the opposition would use it to buy voters. The real reason was to block donations to the opposition party. In another instance, Bank of America, Visa, MasterCard, PayPal and Western Union blocked donations to Wikileaks, which saw its revenue fall by over 95%.

Finally, cash is an important safeguard against economic disaster. When things go wrong, when confidence in the banking system crashes, citizens like to withdraw their cash in wadges and stuff it under the mattress. A run on the banks is governments’ worst nightmare: remember the Northern Rock debacle? A cashless society makes that impossible – and, paradoxically, it also leaves governments unable to inject money (i.e. ‘print banknotes’) to stimulate the economy.

The conclusion is that, because of its reliance on electricity, electronics, software and IT, modern society is becoming more and more fragile and vulnerable. Cyber-attacks are increasingly common, as is theft by hacking. So real money will remain a bulwark against technological vulnerability. In 2017 the average amount of cash carried by people was £33; by 2018 that had fallen to £21. Yet when technology goes wrong, hard cash still remains the safest way to make basic transactions.

So ignore those ‘nudging’ you towards purely digital services – it’s really all for the banks’ benefit, not yours. Obviously, plastic and helpful IT have their part to play, but we should not get carried away by the vision of a cashless society.

The signals say, ‘proceed with caution.’