Home automation is the next battleground for technology. Following on the heels of Amazon’s launch of its Echo and Echo Dot devices, which feature its voice-controlled personal assistant Alexa, Google has unveiled its plans for a range of hardware to control the smart home. The Google Home speaker features a virtual assistant, excitingly called Google Assistant, that lets you give commands and then either provides information or controls your smart devices. For example, you can stream music, control the temperature and turn the lights up/down/off, as with the Echo. And Amazon and Google are not alone, with Apple announcing its HomeKit standard which will allow users to control devices through their iPhone via either apps or Siri.
When it comes to mass adoption, it is early days in the home automation market, and each one of the major players will need to overcome four big obstacles:
1 Do we need it?
Smart home kit has yet to really take off, with many consumers not willing to pay extra for internet-enabled light bulbs or thermostats. While Google Assistant and Amazon’s Alexa can do more than control your home, with the ability to find information, check the weather/traffic, book an Uber taxi etc., you don’t really need a separate device for this. You have one – your smartphone. So what each player has to do is find ways of encouraging people to adopt its assistant, developers to create apps that use its functions, and manufacturers to incorporate it into their own hardware. Given that we’re talking about white goods such as fridges, which are replaced infrequently and are normally price-sensitive purchases, this last point is going to take some time. As an early adopter I’m going to give Alexa a go, but I can’t see a compelling reason for mainstream consumers to buy an Echo or Home until the ecosystems around them are more mature.
2 Is it clever enough?
As an existing Siri user I know that for a smart assistant it can be pretty dumb. It doesn’t really know enough about me to provide helpful answers and most attempts at ‘conversation’ end with switching it off and trying a Google search instead. Amazon and Google promise that their assistants will be much cleverer and will learn about you in order to provide a personalised experience that understands your context, location and previous behaviour. The jury is still out on whether they can be intelligent enough to replace human interaction for basic tasks.
3 Is it private?
The self-learning promise of Assistant and Alexa also has a darker side. Essentially, you are putting an internet-enabled microphone in the heart of your home, where it can listen and learn about you, before sharing that information with Google and Amazon. While both have privacy safeguards, the less you let it share, the less useful it will be. Many people will be concerned about where their data is going, and how it will be used – particularly given the amount of information Google and Amazon already possess about us all.
4 Are we going to be trapped in silos?
For me the main issue behind each of these platforms is that essentially they are silos. You can’t play any music stored on iTunes on either of them, for example, but have to rely on Amazon Music, Google Play Music or Spotify instead. Even in an age of technology giants, very few of us rely on just one platform – we tend to use bits of each and value the fact that we can pick and choose where we get email, buy products or listen to music. By their very nature, rivals are not going to push their competitors’ services, and no-one wants to have to buy multiple devices to cover all their bases. What is needed is some form of interchange between all the platforms, a kind of one ring to rule them all – but I can’t see that happening soon.
As with any innovation there’s a lot of hype around virtual assistants, and the hardware that they control. What is needed is some equally smart marketing that overcomes the objections listed above and really focuses on the benefits – otherwise mainstream consumers are likely to simply keep their dumb homes as they are.
For anyone like myself who was around during the dotcom boom, it is hard not to feel that you are suffering from déjà vu. Many of the exotic ideas and concepts that spectacularly flopped at the time have been reborn and are now thriving. Take ecommerce. Clothes retailer Boo.com was one of the biggest disasters of the period, burning through $135 million of venture capital in just 18 months, while online currency beenz aimed to provide a way of collecting virtual money that could be spent at participating merchants.
Offline, we were continuously promised/threatened with smart bins that would scan the barcodes of product packaging as we threw it away, and automatically order more of the same. And goods might arrive from a virtual supermarket, run as a separate business from your local Tesco or Sainsbury’s. You could pay for low value goods and services with a Mondex card instead of cash (though initially only if you lived in the trial town of Swindon). The first Personal Digital Assistants (PDAs) were launched, providing computing power in the palm of your hand. And we’d already laughed the ridiculous concept of electric cars, as typified by the Sinclair C5, out of court.
Fast forward to now, and versions of all of these failed ventures are thriving. There are any number of highly graphical, video-based clothes retailers, while you can take your pick of online currencies from Bitcoin to Ethereum. We’re still threatened with smart appliances that can re-order groceries (fridges being the latest culprit), but Amazon’s Dash buttons are a neater and simpler way of getting more washing powder delivered that puts the consumer in control. And Dash bypasses the supermarket itself, with goods dispatched direct from Amazon. I can pay for small items by tapping my debit card on a card reader – even in my local village shop. More and more cars are hybrids, if not fully electric, while handheld computing power comes from our smartphones.
What has driven this change? First off, the dotcom boom was over 15 years ago, so there’s been a lot of progress in tech. We have faster internet speeds (one of the reasons for Boo’s demise was its graphics were too large for most dial-up modems to download), better battery life for digital devices and vehicles (iPhones excepted), hardware and sensors are much smaller and more powerful, and network technologies such as Bluetooth and ZigBee are omnipresent.
However, at the same time, the real change has been in the general public. Using technology has become part of everyone’s daily lives, and those who are not online are the exception, rather than the rule. It is a classic example of the move from early adopters to the majority, as set out in Geoffrey Moore’s Crossing the Chasm. And it has happened bit by bit, with false starts and cul de sacs on the way.
So what does this mean for marketers? It really brings home the importance of knowing your audience and targeting your product accordingly. Don’t expect raw tech to be instantly adopted by the majority, but build up to it, gain consumer trust (perhaps by embedding your new tech in something that already exists), and prepare to fail first time round. And the other lesson is to look at today’s big failures, and be prepared to resurrect them when the market has changed in the future…
Like a lot of people I was initially shocked by the recent £24 billion takeover of ARM by Softbank of Japan. Not only was it the biggest acquisition ever of a European IT company, but it was also widely seen as the jewel in the crown of the Cambridge/UK tech scene.
A few years ago Cambridge had three stock market listed companies worth over a billion pounds each – ARM, Autonomy and Cambridge Silicon Radio (CSR). All have now been acquired, with varying degrees of success – HP, Autonomy’s purchaser, is still suing the previous management over the alleged overstating of accounts.
At the same time a large number of the next tier of Cambridge companies, such as Jagex, cr360 and Domino Printing Sciences, have also been bought, leaving many people wondering where the next tech superstar will come from. This is particularly true as an increasing number of earlier stage businesses in exciting markets have been acquired by tech giants – Internet of Things startup Neul was bought by Huawei, Evi by Amazon and Phonetic Arts by Google. And that’s just the acquisitions that were announced. I’m sure that in many cases promising technology has been snapped up without making it into the press, as the deal size has been relatively small.
So, as someone involved in the Cambridge tech scene, should I be worried? Is Silicon Fen going to turn into an offshoot of Silicon Valley – a bit like the tech towns around Heathrow, but with a bit more IP? Thinking about it more rationally, there are two main reasons for the flurry of acquisitions, particularly of smaller businesses.
1 Cambridge’s reputation
All of these acquisitions are actually recognition of the strength of the Cambridge tech sector. Big companies are attracted to the area because of the talent and innovation on show, and are increasingly willing to take a punt on earlier stage businesses to get in first and lay their hands on new technology and IP. They’ve realised that not every acquisition will work, but that the wins should outweigh the losses. So, Cambridge’s PR has worked in attracting the largest tech companies to the area.
2 Changing mix of companies
Traditionally, a lot of Cambridge startups were built on biotech, science and engineering, either from the University or the innovative consultancies that differentiate the city from many other clusters. As Cambridge grows, a greater number of companies are software-based, which means they can develop their technology faster than those trying to commercialise a product from an interesting piece of lab research. Therefore, they are likely to have a steeper growth curve, and potentially a shorter lifespan as they reach maturity (and acquisition) quicker.
A further reason for optimism is given by the new Cambridge Cluster Map, which lists the nearly 22,000 businesses based within 20 miles of the city centre. With a turnover of £33 billion, the map demonstrates the range of companies and the strength of the local economy. A third of this turnover is made up of knowledge-intensive businesses, employing nearly 60,000 people. That’s a lot of innovation, whoever ultimately owns the companies concerned.
Looking back, I think commentators will see that the ARM acquisition is part of a change in Cambridge as it matures and becomes a recognised part of the global tech sector. The economy will continue to grow, but more of the capital will come from outside the city. While this means we will have fewer ARMs and CSRs, and more outposts of Amazon, Apple and Google, it won’t stop growth and innovation, which means the Cambridge Phenomenon is likely to go from strength to strength.
Despite all the talk of innovation, there are plenty of things that people continue to do, even though they are no longer the optimal way to achieve something.
Take typing, for example. The QWERTY keyboard dates back to the first manual typewriters, where hitting a key mechanically pushed the inked letter onto a sheet of paper. The problem with the first typewriter designs was that people could hit the keys faster than the machine could cope with, leading to jams as multiple type bars became intertwined. Hence the adoption of what was essentially a sub-optimal layout in terms of speed, in order to make typewriters more efficient overall. Now, in the digital age, jamming is no longer a problem, yet everyone still uses a QWERTY keyboard, as that is the de facto standard – irrespective of the fact that it can give you carpal tunnel and repetitive strain injuries.
Driving is another area where tradition dictates what we do. The reason that in England we drive on the left dates back to the days when people rode horses – as the majority of the population was right-handed, you could hold your reins with your left hand, leaving the other free for your sword. As part of the French Revolution this was reversed in France, and then imposed by Napoleon on the countries he conquered. This means that the majority of countries in the world now drive on the right, despite the fact that accident rates are lower in countries that drive on the left, perhaps due to right eye dominance.
These two examples demonstrate two things:
- The most logical, sensible solution can’t necessarily overcome the status quo, particularly if it means people have to completely relearn how they operate.
- People continue to choose a particular course of action, even if the reasons for it are lost in the mists of time. Tradition rules.
Why is this important? I meet a lot of technology startups, and many of them enthusiastically talk about how their invention will completely change a market or sector. Build it and they will come seems to be the mantra. All it takes is for people to see how outmoded and inefficient the current technology is, and switch to their new, unproven, but potentially much better solution. And normally relearn how they operate. And pay a bit more. Often, they then wonder why they fail to get market traction or growth.
Essentially people weren’t sufficiently convinced of the advantages to change what they did. They preferred to be inefficient rather than invest the time to solve a problem. We’ve all done this, spending an extra minute or so doing something on our PC because that’s how we were taught 20 years ago, rather than spending 15 minutes reading the manual and upgrading our knowledge.
This isn’t to say that innovation can’t happen. Look at the Dyson vacuum cleaner – the advantages of changing (no bag, better performance), outweighed the higher cost and learning how it worked. But in that case the benefits were extremely clear, and, most importantly, marketed very well.
So, the lessons for every business, whether a startup or not, are clear. The vast majority of the population generally doesn’t like change, and therefore the benefits of something new have to dramatically outweigh the disadvantages of how things have always been done. Innovation has to be clearly marketed if it is going to take root with the majority, as opposed to early adopters – it won’t just sell itself. It has to fit inside the ecosystem of what people are comfortable with, and provide them with the best overall experience. That’s why VHS beat the technologically superior Betamax – it had the content from Hollywood studios and was easier to operate. Often it can be easier to sell a better mousetrap than a completely new method of rodent control. Therefore talk to your audience, understand their pain points and make sure you provide a simple, powerful solution – otherwise you are likely to join the ranks of technically superior, but unused products, and all your innovation will be wasted.
Mankind has always had a fascination for mythical beasts, and none more so than the unicorn. Despite allegedly dying out in the flood after failing to board Noah’s Ark in time, they are still all around us in popular culture, from Harry Potter to children’s toys. I even found an exhibit in a Vienna museum labelled, matter-of-factly, as a “unicorn horn” – it was actually from a narwhal.
The horned horses are back in the news, in the world of tech at least, with any startup valued at over $1 billion by venture capitalists now dubbed a unicorn. However, with more than 100 companies now achieving unicorn status, there’s a growing worry that startups are trading longer term success for short term valuations. True, unicorn status helps attract skilled staff, but down the line it requires either a trade buyer that is willing to pay big money or an IPO to translate mythical (paper) valuations into hard cash. There have also been a raft of stories on how investors have structured their unicorn funding in ways that protect their cash (rather than the shares of others, such as founding teams) if the company should lose its value.
A focus on unicorns also favours certain sectors and types of company. A browse through Fortune’s latest unicorn list reveals a large number of consumer electronics (Xiaomi, Jawbone), retail (FlipKart, Snapdeal) and sharing economy (Uber, Airbnb) companies. In many ways this is what you expect – company valuations are based on what the addressable market is, so the biggest investment goes into those startups that can make most money.
However, it does potentially limit where investors put their money. There are lots of startups that will never be a Facebook or an Uber, but have the potential to be extremely successful niche players that could well grow into billion dollar valued companies. Look at ARM – when it began as a spin-off from Acorn Computers with a completely new business model, very few would have predicted its current success.
There’s also a definite geographic bias where unicorn investors are putting their money – Silicon Valley, China and India. Out of the latest Fortune list just three are in Europe, one in Australia and one in Israel. This doesn’t reflect the energy, ideas and potential in any of these places, particularly in emerging sectors. The danger is that if investors spend their time chasing unicorns they’ll miss out on the startups that could do with their help to build long term businesses that can make a difference to many markets.
So I think we need to add another category alongside unicorns. Keeping the mythical theme I’d go for centaurs. Sturdier than a unicorn, probably better in a fight and with a bit more intelligence (and opposable thumbs). They may not have the beauty or the (frankly over the top) horn of their flashier cousins but they are built for the long term, rather than mythical valuations that don’t necessarily deliver. Given the potential returns they can produce, it is time for investors to move away from the fascination with unicorns to more realistic startups that may be uglier, but have just as much potential.
There have been a number of recent pieces about the rise of self-learning technology that uses artificial intelligence (AI) to carry out tasks that would previously have been too complex for a machine. From stock trading to automated translations and even playing Frogger, computers will increasingly take on roles that used to rely on people’s skills.
Netflix used an algorithm to analyse the most watched content on its service, and found that it included three key ingredients – Kevin Spacey, director David Fincher and BBC political dramas. So when it commissioned original content, it began with House of Cards, a remake of a BBC drama, starring Spacey and directed by (you’ve guessed it) Fincher.
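As a toy illustration of the kind of analysis described above, the sketch below counts which "ingredients" recur most often across a set of heavily watched titles. The titles, tags and scoring here are entirely hypothetical – Netflix’s actual data and algorithm are not public.

```python
from collections import Counter

# Hypothetical viewing data: each heavily watched title tagged with
# "ingredients" (actors, directors, genres). Illustrative only.
top_watched = [
    {"title": "The Social Network", "tags": ["David Fincher", "drama"]},
    {"title": "House of Cards (BBC)", "tags": ["BBC political drama"]},
    {"title": "Se7en", "tags": ["David Fincher", "Kevin Spacey", "thriller"]},
    {"title": "The Usual Suspects", "tags": ["Kevin Spacey", "thriller"]},
]

def common_ingredients(titles, top_n=3):
    """Count how often each tag appears across the most-watched titles."""
    counts = Counter()
    for t in titles:
        counts.update(t["tags"])
    return counts.most_common(top_n)

print(common_ingredients(top_watched))
```

On this made-up data the top ingredients are, unsurprisingly, Fincher and Spacey – the same kind of signal the commissioning decision is said to have rested on.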
This rise of artificial intelligence is worrying a lot of people – and not just Luddites. The likes of Stephen Hawking, Bill Gates and Elon Musk have all described it as a threat to the existence of humanity. They worry that we’ll see the development of autonomous machines with brains many thousands of times larger than our own, and whose interests (and logic) may not square with our own. Essentially the concern is that we’re building a future generation of Terminators without realising it.
They are right to be wary, but a couple of recent stories made me think that human beings actually have several big advantages – we’re not logical, we don’t follow the facts and we don’t give up. Psychologist Daniel Kahneman won a Nobel Prize for uncovering the fact that the human mind is made up of two systems, one intuitive and one rational. The emotional, intuitive brain is the default for decision making – without people realising it. So in many ways AI-powered computers do the things we don’t want to do, leaving us free to be more creative (or lazy, depending on your point of view).
Going back to the advantages that humans have over systems, the first example I’d pick is the UK general election. All the polls predicted a close contest, and an inevitable hung parliament – but voters didn’t behave logically or according to the research and the Tories trounced the opposition. While you might disagree with the result, it shows that you can’t predict the future with the clarity that some expect.
Humans also have an in-built ability to try and game a system and find ways round it, often with unintended consequences. This has been dubbed the Cobra effect after events in colonial India. Alarmed by the number of cobras on the loose, the authorities in Delhi offered a bounty for every dead cobra handed in. People began to play the system, breeding snakes specifically to kill and claim their reward. When the authorities cottoned on and abandoned the programme, the breeders released the now worthless snakes, dramatically increasing the wild cobra population. You can see the same attempt to rig the system in the case of Navinder Singh Sarao, the day trader who is accused of causing the 2010 ‘flash crash’ by spoofing – sending sell orders that he intended to cancel but that tricked trading computers into thinking the market was moving downwards. Despite their intelligence, trading systems cannot spot this sort of behaviour – until it is obviously too late.
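One crude way to see why this kind of gaming only shows up in hindsight is a cancel-ratio screen: flag traders who place plenty of orders but cancel nearly all of them. The order log, field names, 90% threshold and minimum order count below are all illustrative assumptions, not how any real exchange surveillance system works.

```python
# Toy order log: trader "S" cancels 9 of 10 sell orders (spoofing-like),
# trader "B" cancels 1 of 6. All data and thresholds are hypothetical.
orders = (
    [{"trader": "S", "side": "sell", "cancelled": True} for _ in range(9)]
    + [{"trader": "S", "side": "sell", "cancelled": False}]
    + [{"trader": "B", "side": "buy", "cancelled": (i == 0)} for i in range(6)]
)

def cancel_ratio(orders, trader):
    """Fraction of a trader's orders that were cancelled before execution."""
    placed = [o for o in orders if o["trader"] == trader]
    if not placed:
        return 0.0
    return sum(o["cancelled"] for o in placed) / len(placed)

def flag_spoofers(orders, threshold=0.9, min_orders=5):
    """Flag traders with enough orders and a suspiciously high cancel rate."""
    traders = {o["trader"] for o in orders}
    return sorted(
        t for t in traders
        if sum(o["trader"] == t for o in orders) >= min_orders
        and cancel_ratio(orders, t) >= threshold
    )

print(flag_spoofers(orders))  # → ['S']
```

The catch, of course, is that a screen like this needs a history of orders to work on – which is exactly why such behaviour tends to be spotted only after the damage is done.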
The final example is when humans simply ignore the odds and upset the form book. Take Leicester City. Rock bottom of the English Premiership, the Foxes looked odds-on to be relegated. Yet the players believed otherwise, stayed confident and continued to plug away. The tide now looks as if it has turned, and the team is just a couple of points away from safety. A robot would have long since given up…
So artificial intelligence isn’t everything. Giving computers the ability to learn and process huge amounts of data in fractions of a second does threaten the jobs of workers in the knowledge economy. However it also frees up humans to do what they do best – be bloody minded and subversive, think their way around problems, and use their intuition rather than the rational side of their brain. And of course, computers still do have an off switch…