Home automation is the next battleground for technology. Following on the heels of Amazon’s launch of its Echo and Echo Dot devices, which feature its voice-controlled personal assistant Alexa, Google has unveiled its plans for a range of hardware to control the smart home. The Google Home speaker features a virtual assistant, excitingly called Google Assistant, that lets you give commands and then either provides information or controls your smart devices. For example, you can stream music, control the temperature and turn the lights up/down/off, as with the Echo. And Amazon and Google are not alone, with Apple announcing its HomeKit standard which will allow users to control devices through their iPhone via either apps or Siri.
When it comes to mass adoption, it is early days in the home automation market, and each one of the major players will need to overcome four big obstacles:
1 Do we need it?
Smart home kit has yet to really take off, with many consumers unwilling to pay extra for internet-enabled light bulbs or thermostats. While Google Assistant and Amazon’s Alexa can do more than control your home, with the ability to find information, check the weather/traffic, book an Uber taxi etc., you don’t really need a separate device for this. You have one – your smartphone. So what each player has to do is find ways of encouraging people to adopt its assistant, developers to create apps that use its functions, and manufacturers to incorporate it into their own hardware. Given that we’re talking about white goods such as fridges, which are replaced infrequently and are normally price-sensitive purchases, this last point is going to take some time. As an early adopter I’m going to give Alexa a go, but I can’t see a compelling reason for mainstream consumers to buy an Echo or Home until the ecosystems around them are more mature.
2 Is it clever enough?
As an existing Siri user I know that a smart assistant can be pretty dumb. It doesn’t really know enough about me to provide helpful answers, and most attempts at ‘conversation’ end with me switching it off and trying a Google search instead. Amazon and Google promise that their assistants will be much cleverer and will learn about you in order to provide a personalised experience that understands your context, location and previous behaviour. The jury is still out on whether they can be intelligent enough to replace human interaction for basic tasks.
3 Is it private?
The self-learning promise of Assistant and Alexa also has a darker side. Essentially, you are putting an internet-enabled microphone in the heart of your home, where it can listen and learn about you, before sharing that information with Google and Amazon. While both have privacy safeguards, the less you let it share, the less useful it will be. Many people will be concerned about where their data is going, and how it will be used – particularly given the amount of information Google and Amazon already possess about us all.
4 Are we going to be trapped in silos?
For me the main issue with each of these platforms is that essentially they are silos. You can’t play music stored in iTunes on either of them, for example, but have to rely on Amazon Music, Google Play Music or Spotify instead. Even in an age of technology giants, very few of us rely on just one platform – we tend to use bits of each and value the fact that we can pick and choose where we get email, buy products or listen to music. By their very nature, rivals are not going to push their competitors’ services, and no-one wants to buy multiple devices to cover all their bases. What is needed is some form of interchange between the platforms – a kind of one ring to rule them all – but I can’t see that happening soon.
As with any innovation there’s a lot of hype around virtual assistants, and the hardware that they control. What is needed is some equally smart marketing that overcomes the objections listed above and really focuses on the benefits – otherwise mainstream consumers are likely to simply keep their dumb homes as they are.
For anyone like myself who was around during the dotcom boom, it is hard not to feel that you are suffering from déjà vu. Many of the exotic ideas and concepts that spectacularly flopped at the time have been reborn and are now thriving. Take ecommerce. Clothes retailer Boo.com was one of the biggest disasters of the period, burning through $135 million of venture capital in just 18 months, while online currency beenz aimed to provide a way of collecting virtual money that could be spent at participating merchants.
Offline, we were continuously promised/threatened with smart bins that would scan the barcodes of product packaging as we threw it away, and automatically order more of the same. And goods might arrive from a virtual supermarket, run as a separate business from your local Tesco or Sainsbury’s. You could pay for low-value goods and services with a Mondex card instead of cash (though initially only if you lived in the trial town of Swindon). The first Personal Digital Assistants (PDAs) were launched, providing computing power in the palm of your hand. And we’d already laughed out of court the ridiculous concept of electric cars, as typified by the Sinclair C5.
Fast forward to now, and versions of all of these failed ventures are thriving. There are any number of highly graphical, video-based clothes retailers, while you can take your pick of online currencies from Bitcoin to Ethereum. We’re still threatened with smart appliances that can re-order groceries (fridges being the latest culprit), but Amazon’s Dash buttons are a neater, simpler way of getting more washing powder delivered that puts the consumer in control. And Dash bypasses the supermarket itself, with goods dispatched direct from Amazon. I can pay for small items by tapping my debit card on a card reader – even in my local village shop. More and more cars are hybrids, if not fully electric, while handheld computing power comes from our smartphones.
What has driven this change? First off, the dotcom boom was over 15 years ago, so there has been a lot of progress in tech. We have faster internet speeds (one of the reasons for Boo’s demise was that its graphics were too large for most dial-up modems to download) and better battery life for digital devices and vehicles (iPhones excepted); hardware and sensors are much smaller and more powerful; and network technologies such as Bluetooth and ZigBee are omnipresent.
However, at the same time, the real change has been in the general public. Using technology has become part of everyone’s daily lives, and those who are not online are the exception, rather than the rule. It is a classic example of the move from early adopters to the majority, as set out in Geoffrey Moore’s Crossing the Chasm. And it has happened bit by bit, with false starts and cul-de-sacs along the way.
So what does this mean for marketers? It really brings home the importance of knowing your audience and targeting your product accordingly. Don’t expect raw tech to be instantly adopted by the majority, but build up to it, gain consumer trust (perhaps by embedding your new tech in something that already exists), and prepare to fail first time round. And the other lesson is to look at today’s big failures, and be prepared to resurrect them when the market has changed.
Despite all the talk of innovation, there are plenty of things that people continue to do, even though they are no longer the optimal way to achieve something.
Take typing, for example. The QWERTY keyboard dates back to the first manual typewriters, where hitting a key mechanically pushed an inked letter onto the sheet of paper. The problem with the earliest designs was that people could hit the keys faster than the machine could cope with, leading to jams as multiple keys became intertwined. Hence the adoption of what was essentially a sub-optimal layout in terms of typing speed, in order to make typewriters more efficient overall. Now, in the digital age, jamming is no longer a problem, yet everyone still uses a QWERTY keyboard because it is the de facto standard – despite the fact that it can contribute to carpal tunnel syndrome and repetitive strain injuries.
Driving is another area where tradition dictates what we do. The reason that in England we drive on the left dates back to the days when people rode horses – as the majority of the population was right-handed, you could hold your reins in your left hand, leaving the right free for your sword. This was reversed in France as part of the French Revolution, and then imposed by Napoleon on the countries he conquered. As a result, the majority of countries in the world now drive on the right, despite the fact that accident rates are lower in countries that drive on the left, perhaps due to right-eye dominance.
These two examples demonstrate two things:
- The most logical, sensible solution can’t necessarily overcome the status quo, particularly if it means people have to completely relearn how they operate.
- People continue to choose a particular course of action, even if the reasons for it are lost in the mists of time. Tradition rules.
Why is this important? I meet a lot of technology startups, and many of them enthusiastically talk about how their invention will completely change a market or sector. ‘Build it and they will come’ seems to be the mantra. All it takes, they believe, is for people to see how outmoded and inefficient the current technology is, and switch to their new, unproven, but potentially much better solution. And normally relearn how they operate. And pay a bit more. Then they wonder why they fail to get market traction or growth.
Essentially people weren’t sufficiently convinced of the advantages to change what they did. They preferred to be inefficient rather than invest the time to solve a problem. We’ve all done this, spending an extra minute or so doing something on our PC because that’s how we were taught 20 years ago, rather than spending 15 minutes reading the manual and upgrading our knowledge.
This isn’t to say that innovation can’t happen. Look at the Dyson vacuum cleaner – the advantages of changing (no bag, better performance), outweighed the higher cost and learning how it worked. But in that case the benefits were extremely clear, and, most importantly, marketed very well.
So the lessons for every business, startup or not, are clear. The vast majority of the population doesn’t like change, and therefore the benefits of something new have to dramatically outweigh the comfort of how things have always been done. Innovation has to be clearly marketed if it is going to take root with the majority, as opposed to early adopters – it won’t just sell itself. It has to fit inside the ecosystem people are comfortable with, and provide them with the best overall experience. That’s why VHS beat the technologically superior Betamax – it had the content from Hollywood studios and was easier to operate. Often it is easier to sell a better mousetrap than a completely new method of killing rodents. So talk to your audience, understand their pain points and make sure you provide a simple, powerful solution – otherwise you are likely to join the ranks of technically superior but unused products, and all your innovation will be wasted.
The news that Google is canning its Wave product has the online world in a bit of a tizzy. In my opinion it suffered because people weren’t really sure what it did (email? instant messaging? document collaboration? all of the above?) and because it was launched by invitation only – hardly the way to build a mass audience quickly.
So, if the likes of Google can’t convince us to use new tools, have we reached the end of the road for social media innovation?
I think not, but as social media moves further into the mainstream, new services need to convince people (not just early adopters) to invest time and effort in trying something new.
Let’s go back to marketing science. Social media has crossed Geoffrey Moore’s famous chasm, so to gain interest, new services have to appeal to the early majority, rather than just visionaries and early adopters. It doesn’t matter that social media tools tend to be free – what costs (and puts people off) is the amount of time users need to invest in learning them versus the potential payback.
The early majority aren’t interested in tech for tech’s sake; they want something that will solve a problem or replace a tried and tested solution. And it has to be easy to use and not take up too much of their time to set up. Wave failed on pretty much all of these points, meaning it was always destined to be a niche product. So rather than a big-bang launch, Google would have done better to define what Wave did, build a community of early adopters and only then go mainstream.