There have been a number of recent pieces about the rise of self-learning technology that uses artificial intelligence (AI) to carry out tasks that would previously have been too complex for a machine. From stock trading to automated translations and even playing Frogger, computers will increasingly take on roles that used to rely on people’s skills.
Netflix used an algorithm to analyse the most watched content on its service, and found that it included three key ingredients – Kevin Spacey, director David Fincher and BBC political dramas. So when it commissioned original content, it began with House of Cards, a remake of a BBC drama, starring Spacey and directed by (you’ve guessed it) Fincher.
This rise of artificial intelligence is worrying a lot of people – and not just Luddites. The likes of Stephen Hawking, Bill Gates and Elon Musk have all described it as a threat to the existence of humanity. They worry that we’ll see the development of autonomous machines with brains many thousands of times larger than our own, and whose interests (and logic) may not square with our own. Essentially the concern is that we’re building a future generation of Terminators without realising it.
They are right to be wary, but a few recent stories made me think that human beings actually have several big advantages – we’re not logical, we don’t follow the facts and we don’t give up. Psychologist Daniel Kahneman won a Nobel Prize for his work showing that the human mind is made up of two systems, one intuitive and one rational. The emotional, intuitive brain is the default for decision making – without people realising it. So in many ways AI-powered computers do the things we don’t want to do, leaving us free to be more creative (or lazy, depending on your point of view).
Going back to the advantages that humans have over systems, the first example I’d pick is the UK general election. All the polls predicted a close contest, and an inevitable hung parliament – but voters didn’t behave logically or according to the research and the Tories trounced the opposition. While you might disagree with the result, it shows that you can’t predict the future with the clarity that some expect.
Humans also have an in-built ability to try and game a system and find ways round it, often with unintended consequences. This has been dubbed the Cobra effect after events in colonial India. Alarmed by the number of cobras on the loose, the authorities in Delhi offered a bounty for every dead cobra handed in. People began to play the system, breeding snakes specifically to kill and claim their reward. When the authorities cottoned on and abandoned the programme, the breeders released the now worthless snakes, dramatically increasing the wild cobra population. You can see the same attempt to rig the system in the case of Navinder Singh Sarao, the day trader who is accused of causing the 2010 ‘flash crash’ by spoofing – sending sell orders that he intended to cancel but that tricked trading computers into thinking the market was moving downwards. Despite their intelligence, trading systems cannot spot this sort of behaviour – until it is obviously too late.
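The spoofing described above does leave a statistical footprint: large orders that are placed and then cancelled without ever trading. As a minimal sketch – using hypothetical data and a deliberately crude heuristic, not how real market surveillance actually works – one red flag is a trader whose submitted volume is almost entirely cancelled:

```python
# Toy illustration of the footprint spoofing leaves in an order log:
# big sell orders placed and cancelled without trading, while small
# genuine orders execute. Data and threshold are hypothetical.

def cancel_ratio(orders):
    """Fraction of submitted order volume that was cancelled rather than filled."""
    cancelled = sum(o["size"] for o in orders if o["status"] == "cancelled")
    total = sum(o["size"] for o in orders)
    return cancelled / total if total else 0.0

# Hypothetical order log for a single trader
log = [
    {"side": "sell", "size": 5000, "status": "cancelled"},
    {"side": "sell", "size": 4000, "status": "cancelled"},
    {"side": "buy",  "size": 100,  "status": "filled"},
]

# A persistently high cancel ratio is one crude warning sign
print(round(cancel_ratio(log), 2))  # 0.99
```

In practice exchanges and regulators look at far richer signals (timing, order book depth, repeated patterns), but the basic idea is the same: spoofing only works if the fake orders are cancelled, and that cancellation behaviour is itself detectable after the fact.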
The final example is when humans simply ignore the odds and upset the form book. Take Leicester City. Rock bottom of the English Premiership, the Foxes looked odds-on to be relegated. Yet the players believed otherwise, stayed confident and continued to plug away. The tide now looks as if it has turned, and the team is just a couple of points away from safety. A robot would have long since given up…
So artificial intelligence isn’t everything. Giving computers the ability to learn and process huge amounts of data in fractions of a second does threaten the jobs of workers in the knowledge economy. However, it also frees up humans to do what they do best – be bloody-minded and subversive, think their way around problems, and use their intuition rather than the rational side of their brain. And of course, computers still have an off switch…
Fifty years ago, engineer Gordon Moore wrote an article that has become the bedrock of computing. Moore’s Law, as first described in the article, states that the number of elements that can be fitted onto the same size piece of silicon doubles every year. The period was later revised to every two years, and ‘elements’ to ‘transistors’, but the law has basically held true for five decades. Essentially it means that computing power doubles every two years – and consequently gets considerably cheaper over time.
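The compounding effect is easy to underestimate. A quick back-of-the-envelope calculation (illustrative figures only, not actual transistor counts) shows what doubling every two years implies over the law’s 50-year run:

```python
# Moore's Law as compound doubling: an illustrative calculation.

def growth_multiple(years: int, doubling_period_years: int = 2) -> int:
    """Growth factor after `years` if capacity doubles every `doubling_period_years`."""
    return 2 ** (years // doubling_period_years)

# Over the 50 years since Moore's 1965 article: 25 doublings
print(growth_multiple(50))  # 2**25 = 33554432, a ~33-million-fold increase
print(growth_multiple(10))  # 2**5 = 32
```

Twenty-five doublings turn one unit of computing power into more than 33 million – which is why a phone in your pocket can outclass a room-sized 1960s mainframe.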
What is interesting is to look back over the last 50 years and see how completely different the IT landscape is today. Pretty much all the companies that were active in the market when Moore’s Law was penned have disappeared (with IBM being a notable exception and HP staggering on). Even Intel, the company Moore co-founded, didn’t get started until after he’d written the original article. At the same time, IT has moved from a centralised mainframe world, with users interacting through dumb terminals, to a more distributed model of a powerful PC on every desk. Arguably, it is now heading back to an environment where the Cloud provides the processing power and we use PCs, tablets or phones that, while powerful, cannot come close to the speed of Cloud-based servers. This centralised model works well when you have fast connectivity, but doesn’t function at all when your internet connection is down, leaving you twiddling your thumbs.
Looking around and comparing a 1960s mainframe with today’s smartphone you can see Moore’s Law in action, but how long will it continue to hold? The law’s demise has been predicted for some time, and as chips become ever smaller, the processes and fabs needed to make them become more complex and therefore more expensive. This means that the costs have to be passed on somehow – at the moment high-end smartphone users are happy to pay a premium for the latest, fastest model, but it is difficult to see this lasting forever, particularly as the whizzier the processor, the quicker batteries drain. The Internet of Things (IoT) will require chips with everything, but size and power constraints, and the fact that the majority of IoT sensors will not need huge processing power, mean that Moore’s Law isn’t necessary to build the smart environments of the future.
Desktop and laptop PCs used to be the biggest users of chips, and the largest beneficiaries of Moore’s Law, becoming increasingly powerful without the form factor having to be changed. But sales are slowing, as people turn to a combination of tablets/phones and the processing power of the Cloud. Devices such as Google Chromebooks can use lower-spec chips as they use the Cloud for the heavy lifting, making them cheaper. At the same time, the servers within the datacentres that are running these Cloud services aren’t as space constrained, so miniaturisation is less of a priority.
Taken together these factors probably mean that while Moore’s Law could theoretically carry on for a long time, the economics of a changing IT landscape could finish it off within the next 10 years. However, its death has been predicted many times before, so it would take a brave person to write its epitaph just yet.
It is easy to write off 3D printing as a niche technology, best left to hobbyists or for businesses producing extremely specialised, one-off components. But having seen some of the latest products made with the technology, I think it is moving very quickly towards the mainstream. You can now produce incredibly intricate pieces on a home 3D printer, albeit a high-spec one, and industrial 3D printers provide even more power, speed and performance. There are now more and more community spaces with 3D printers (like Makespace in Cambridge), and even some local copy shops have one, delivering another way of bringing the technology to the mass market.
So, why do I think 3D printing is going mainstream? Because it taps into three key trends:
1 Desire for personalisation
In our mass-produced, brand-led world, we have an increasing desire for personalisation. Many people want to show their individuality, and are willing to pay for it. So whether it is jewellery customised to fit your own body shape or a sculpture you’ve designed yourself, there is a market for 3D printed objects.
2 Need for precision
The boundaries of the possible are being pushed back. Medical science can do things that were previously thought impossible, while miniaturisation is shrinking the size of everyday objects around us, even as they become much more complex. 3D printing enables the creation of precisely made replacement bones for medical use, as well as significant parts of intricate jet engines. All of these are high-value objects, but the same methods can be used for more mundane applications. Take spare parts for consumer goods – normally if something small breaks (such as the shelf bracket in your fridge), you need to buy a replacement from the manufacturer at an exorbitant price. And that’s if you can even track down the part. Now, it is technically possible to 3D print the replacement, and while this obviously infringes copyright, it will be difficult for the original manufacturer to detect, let alone prosecute.
3 Infrastructure in the Cloud
The combination of the internet, the Cloud and smartphones provides a complete, cost-effective infrastructure to support 3D printing. You can take high resolution photos with your phone, upload them and have them turned into product plans by using the immense processing power available on the Cloud. Short of inspiration? You can find and download plans for just about anything to make yourself (unfortunately including guns) through a quick search.
So what markets will it disrupt? Recent announcements point to two that have real potential. As mentioned before, parts for jet engines are being made experimentally using 3D printing by both Rolls-Royce and academic researchers in Australia. As well as the ability to work to extremely fine tolerances, 3D printing also has the benefit of producing much less waste, as objects are built up, layer by layer, rather than carved out from a larger block of expensive material.
Secondly, and more in the consumer space, Argos has announced that it will run a trial that allows people to customise jewellery, both by adding messages and also changing the item’s dimensions. Previously the likes of the Royal Mail, Amazon and Asda have run 3D printing trials. Moving to more of a “make to order” model will help Argos keep stock costs down – and also help differentiate it from other retailers on the high street through exclusive products. Given that the likes of Argos have been hard hit by the rise of online shopping, it is a smart move that could well be expanded to other products.
Like many technologies, 3D printing will not only change existing markets, but also spawn completely new ones that have not yet been thought up. What is definite is that it provides brands and companies with a challenge – will the ability for complex customisation be a threat or an opportunity to their business?
If you needed evidence of the growth of the smartphone market and its move into every part of our lives, then this week’s Mobile World Congress (MWC) provides it. It wasn’t that long ago that the event was dominated by network infrastructure companies, but now it is essentially a consumer electronics show in all but name. And one that looks far beyond the handset itself. Ford launched an electric bike, Ikea announced furniture that charges your smartphone and a crowdfunded startup showed a suitcase that knows where it is and how much it weighs.
Five years ago none of these companies would have even thought of attending MWC – and it is all down to the rise of the smartphone. It is difficult to comprehend that the first iPhone was only launched in 2007, at a time when Apple was a niche technology player. It is now worth more than any other company in the world and 2 billion people globally have an internet-connected smartphone. By 2020 analysts predict that 80% of the world’s adults will own a smartphone.
As any honest iPhone owner will freely admit, they may be sleek, but they are actually rubbish for making and receiving calls. What they do provide is two things – a truly personal computer that fits in your pocket, and access to a global network of cloud-based apps. It is the mixture of the personal and the industrial that makes smartphones central to our lives. We can monitor our own vital signs and the environment around us through fitness and health trackers and mapping apps, and at the same time access any piece of information in the world and monitor and control devices hundreds or thousands of miles away. Provided you have a signal…
So, based on what is on show at MWC, what are the next steps for the smartphone? So far it seems to split into two strands – virtual reality and the Internet of Things. HTC launched a new virtual reality headset, joining the likes of Sony, Microsoft, Samsung and Oculus Rift, promising a more immersive experience. Sensors to measure (and control) everything from bikes and cars to tennis racquets are also on show. The common denominator is that they all rely on a smartphone and its connectivity to get information in and out quickly.
It is easy to look at some of the more outlandish predictions for connected technology and write them off as unlikely to make it into the mainstream. But then, back in 2007, when Steve Jobs unveiled the first iPhone, there were plenty of people who thought it would never take off. The smartphone revolution will continue to take over our lives – though I’m not looking forward to navigating streets full of people wearing virtual reality headsets who think they are on the beach, rather than on their way to work…
This week the election campaign has been focusing on education, with the Conservative Education Secretary, Nicky Morgan, promising that every child leaving primary school must know their times tables up to 12 and be able to use correct punctuation, spelling and grammar. It follows her predecessor, Michael Gove, revamping the history curriculum to ensure that pupils know about key dates in British history – a move that some saw as a return to Victorian rote learning of facts.
Morgan complains that Britain has slumped in international education league tables, and has vowed to move the country up in rankings for maths and English. But ignoring the fact that children are already tested on times tables, I think she’s missing the point about modern education and the skills it teaches. Of course, children should know their times tables, and be able to read and write. These are basic skills that everyone should have.
But we are in an era of enormous change, and the skills that the workforce of tomorrow requires will be very different to those of today. Increased globalisation, the advent of the knowledge economy and advances in technology are affecting all jobs. Previously safe, middle-income management occupations will be broken into smaller chunks and either computerised or outsourced, hollowing out the workforce so that what remains are high-end, knowledge-based roles or more menial tasks.
What we need to do is prepare our children for this brave new world by helping them develop the skills it will require. A large proportion of today’s pupils will end up working in jobs that don’t currently exist, so we need to focus on three areas:
1. Learning to learn
Rather than simply teaching facts and tables, you need to instil in children the skills they need to keep learning. These range from problem solving, resilience and working as a team, to ensuring they have inquiring minds and are always pushing themselves.
2. Lifelong learning
Alongside learning to learn, everyone needs to understand that education doesn’t stop when you leave school or university. Whatever field you are in, you’ll need new skills as your career evolves, so it has to be seen as natural to keep learning. The days of working for the same company for ever are long gone, and the days of working in the same role throughout your career are going the same way. So, people will have to make radical moves into new industries and careers, and that will require ongoing investment in learning new skills.
3. Coding
The UK government has re-introduced coding to the school curriculum, which is a major step forward in ensuring that everyone has the basic skills needed to understand and work with technology. While most jobs have required IT skills for a while, the spread of software into every corner of our lives means that those who understand and program computers will have a big advantage over those who just use them to type emails or surf the net. I’d like to see more government investment in coding for all, both in schools and beyond, so that everyone learns the skills they need.
Don’t get me wrong, it is a laudable aim that every child should leave primary school knowing that 12×12 is 144 and how to use an apostrophe. But we need to be teaching our children a lot more than that if we want to nurture a workforce of self-starting, motivated and problem-solving adults who can drive innovation and wealth for the country and wider society.
Countries and cities across the world are busily trying to build tech clusters. Partly this is due to the sexiness of tech (expect the UK election to feature plenty of photo opportunities of candidates with startups), partly down to the fact that it seems easy to do, and a lot to do with the benefits it delivers to a local economy. In an era where technology is radically changing how we work, play and live, high value tech companies are always going to be prized.
But how do you build a tech cluster? It may seem easy to do on the outside – set up some co-working spaces, provide some money and sit back and wait for the ideas to flourish, but it is actually incredibly difficult. This is demonstrated by the diverging fortunes of the locations of England’s oldest universities – Oxford and Cambridge. As a recent piece in The Economist explains, over the last few years Cambridge has added more well-paid jobs, highly educated residents and workers in general than its rival. This prompted a visit last October to the city from an Oxford delegation, with the leader of Oxford City Council admitting that “Cambridge is at least 20 years ahead of us.”
Given the longstanding competition between the two cities, it is easy for people in Cambridge to sit back smugly, pat each other on the back and congratulate themselves on a job well done. However, a better course of action is to take a look at what is behind Cambridge’s success, and see what can be done to improve things. After all, there are startup and tech clusters around the world – competition is global – so there’s nothing to stop entrepreneurs setting up in Silicon Valley, Munich, Paris or London rather than Cambridge.
I see five factors underpinning the success of any tech cluster:
1. Ideas and skills
The first thing you need to build any business is obviously a good idea. Universities, particularly those involved in scientific research such as Oxford and Cambridge, have plenty of these. But you need a specific type of person to be involved with the research – with a mindset that goes beyond academia and understands how a breakthrough idea can be turned into a viable business. You then need to be able to access the right skills to develop the idea technically, whether through commercial research or programming.
2. Support infrastructure
This is where Cambridge scores highly in being able to commercialise discoveries, through a long-established support infrastructure. The Cambridge Science Park opened in the 1970s, while the University has put in place teams to help researchers turn their ideas into businesses. Research-led consultancies, such as Cambridge Consultants, provide another outlet to develop ideas, as well as helping to keep bright graduates in the city. There is also a full range of experienced lawyers, PR people, accountants and other key support businesses to help companies form and grow.
3. Funding
Obviously without money, no idea is going to make it off the drawing board. Cambridge has attracted investment from local and international venture capital, and has a thriving group of angel investors, who can share their experience as well as their funding. Because Silicon Fen has been operating for so long, investment has been recycled, with successful exits fuelling new startups that then have the opportunity to grow.
4. Space to expand
Cambridge is a small city, and the combination of its green belt, lack of post-industrial brownfield sites and an historic centre owned by colleges puts huge pressure on housing stock. As anyone who lives in Cambridge knows, house prices are not far shy of London’s – but spare a thought for Oxford residents. In 2014 an Oxford home cost 11.3 times average local earnings, nearly double the British norm of 5.8 times. Additionally, as The Economist points out, there is space outside the Cambridge green belt for people to build on, with South Cambridgeshire Council, which surrounds the city, understanding the importance of helping the local economy. In contrast, Oxford has four different district councils, and a powerful lobby of wealthy residents who want to keep their countryside pristine, hampering housing development. That’s not to say that Cambridge is perfect, far from it: more can be done to improve transport links, reduce commuting times and spread the benefits of Cambridge’s economic success.
5. Success stories
Ultimately tech clusters are judged by the success of the companies they produce. And Cambridge, partly due to the longevity of the cluster, has created multiple billion-dollar businesses, from ARM to Cambridge Silicon Radio. This not only puts the area on the map for investors, but attracts entrepreneurs who want to tap into its talent, and spawns new businesses as staff move on and set up on their own. You therefore see sub-clusters develop in particular areas of tech as specialists use their knowledge to solve different problems. This further strengthens the ecosystem.
Tech clusters are slow to build and can’t be simply willed into existence by governments opening their wallets. They need patience, a full range of skills and co-operation across the ecosystem if they are to grow and flourish – as the relative fortunes of Cambridge and Oxford show.