Home automation is the next battleground for technology. Following on the heels of Amazon’s launch of its Echo and Echo Dot devices, which feature its voice-controlled personal assistant Alexa, Google has unveiled its plans for a range of hardware to control the smart home. The Google Home speaker features a virtual assistant, excitingly called Google Assistant, that lets you give commands and then either provides information or controls your smart devices. For example, you can stream music, control the temperature and turn the lights up, down or off, as with the Echo. And Amazon and Google are not alone: Apple has announced its HomeKit standard, which will allow users to control devices through their iPhone via either apps or Siri.
When it comes to mass adoption, it is early days in the home automation market, and each one of the major players will need to overcome four big obstacles:
1 Do we need it?
Smart home kit has yet to really take off, with many consumers not willing to pay extra for internet-enabled light bulbs or thermostats. While Google Assistant and Amazon’s Alexa can do more than control your home, with the ability to find information, check the weather or traffic, book an Uber taxi and so on, you don’t really need a separate device for this. You already have one – your smartphone. So what each player has to do is find ways of encouraging people to adopt its assistant, developers to create apps that use its functions, and manufacturers to incorporate it into their own hardware. Given that we’re talking about white goods such as fridges, which are replaced infrequently and are normally price-sensitive purchases, this last point is going to take some time. As an early adopter I’m going to give Alexa a go, but I can’t see a compelling reason for mainstream consumers to buy an Echo or Home until the ecosystems around them are more mature.
2 Is it clever enough?
As an existing Siri user I know that, for a smart assistant, it can be pretty dumb. It doesn’t really know enough about me to provide helpful answers, and most attempts at ‘conversation’ end with me switching it off and trying a Google search instead. Amazon and Google promise that their assistants will be much cleverer and will learn about you in order to provide a personalised experience that understands your context, location and previous behaviour. The jury is still out on whether they can be intelligent enough to replace human interaction for basic tasks.
3 Is it private?
The self-learning promise of Assistant and Alexa also has a darker side. Essentially, you are putting an internet-enabled microphone in the heart of your home, where it can listen and learn about you, before sharing that information with Google and Amazon. While both have privacy safeguards, the less you let it share, the less useful it will be. Many people will be concerned about where their data is going, and how it will be used – particularly given the amount of information Google and Amazon already possess about us all.
4 Are we going to be trapped in silos?
For me the main issue behind each of these platforms is that essentially they are silos. You can’t play any music stored on iTunes on either of them, for example, but have to rely on Amazon Music, Google Play Music or Spotify. Even in an age of technology giants, very few of us rely on just one platform – we tend to use bits of each and value the fact that we can pick and choose where we get email, buy products or listen to music. By their very nature, rivals are not going to push their competitors’ services, and no-one wants to have to buy multiple devices to cover all their bases. What is needed is some form of interchange between all the platforms, a kind of one ring to rule them all – but I can’t see that happening soon.
As with any innovation there’s a lot of hype around virtual assistants, and the hardware that they control. What is needed is some equally smart marketing that overcomes the objections listed above and really focuses on the benefits – otherwise mainstream consumers are likely to simply keep their dumb homes as they are.
Fifty years ago, engineer Gordon Moore wrote an article that has become the bedrock of computing. Moore’s Law, as first described in the article, states that the number of elements that can be fitted onto the same size piece of silicon doubles every year. The period was later revised to every two years, and ‘elements’ became transistors, but the law has basically held true for five decades. Essentially it means that computing power doubles every two years – and consequently gets considerably cheaper over time.
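To put that doubling into concrete numbers, here is a minimal back-of-the-envelope sketch. It assumes a strict two-year doubling period (which reality only ever approximated) and uses the 1971 Intel 4004’s roughly 2,300 transistors purely as an illustrative baseline:

```python
# Back-of-the-envelope illustration of Moore's Law.
# Assumes a strict two-year doubling period, which reality only approximated.

def projected_transistors(years, start_count=2300, doubling_period=2):
    """Project a transistor count forward from an illustrative baseline.

    start_count=2300 is roughly the 1971 Intel 4004, used here purely as a
    convenient starting point for the arithmetic.
    """
    doublings = years / doubling_period
    return start_count * 2 ** doublings

# 50 years of doubling every two years is 2**25, i.e. a ~33-million-fold increase.
print(f"Growth factor over 50 years: {2 ** (50 / 2):,.0f}x")
print(f"Projected transistor count: {projected_transistors(50):,.0f}")
```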
What is interesting is to look back over the last 50 years and see how completely different the IT landscape is today. Pretty much all the companies that were active in the market when Moore’s Law was penned have disappeared (with IBM being a notable exception and HP staggering on). Even Intel, the company Moore co-founded, didn’t get started until after he’d written the original article. At the same time IT has moved from a centralised mainframe world, with users interacting through dumb terminals, to a more distributed model of a powerful PC on every desk. Arguably, it is now heading back to an environment where the Cloud provides the processing power and we use PCs, tablets or phones that, while powerful, cannot come close to the speed of Cloud-based servers. This centralised model works well when you have fast connectivity but doesn’t function at all when your internet connection is down, leaving you twiddling your thumbs.
Looking around and comparing a 1960s mainframe with today’s smartphone you can see Moore’s Law in action, but how long will it continue to hold? The law’s demise has been predicted for some time, and as chips become ever smaller the processes and fabs needed to make them become more complex and therefore more expensive. This means that the costs have to be passed on somehow – at the moment high-end smartphone users are happy to pay a premium for the latest, fastest model, but it is difficult to see this lasting for ever, particularly as the whizzier the processor, the quicker batteries drain. The Internet of Things (IoT) will require chips with everything, but size and power constraints, and the fact that the majority of IoT sensors will not need huge processing power, mean that Moore’s Law isn’t necessary to build the smart environments of the future.
Desktop and laptop PCs used to be the biggest users of chips, and the largest beneficiaries of Moore’s Law, becoming increasingly powerful without the form factor having to be changed. But sales are slowing as people turn to a combination of tablets/phones and the processing power of the Cloud. Devices such as Google Chromebooks can use lower-spec chips because they use the Cloud for the heavy lifting, making them cheaper. At the same time, the servers within the datacentres that are running these Cloud services aren’t as space-constrained, so miniaturisation is less of a priority.
Taken together these factors probably mean that while Moore’s Law could theoretically carry on for a long time, the economics of a changing IT landscape could finish it off within the next 10 years. However, its death has been predicted many times before, so it would take a brave person to write its epitaph just yet.
Cambridge is rightly highlighted as one of Europe’s biggest innovation hubs, particularly when it comes to commercialising ideas that began in the research lab. This has spawned a huge biotech sector, and helped create a series of billion dollar tech companies that lead their industries, such as ARM and Cambridge Silicon Radio (CSR).
The Internet of Things (IoT) has been identified by many commentators as a key emerging market – and one where Cambridge has the ecosystem, experience and ideas to play a major role. So the news that IoT pioneer Neul has been sold to Chinese telecoms equipment behemoth Huawei depressed me. Not for nationalistic reasons, but simply due to the low reported purchase price ($25m) and the fact that the company has cashed out so early in the growth process. While there was a fair amount of PR spin around Neul’s progress to date, I genuinely believed it could join the billion dollar Cambridge club by developing its technology and building alliances and routes to market.
At the same time, Cambridge Silicon Radio is mulling a multi-billion pound sale to US firm Microchip Technology, reducing the number of major, independent, quoted Cambridge companies. Obviously investors and founders do look to realise their profits at some point, but it is important to balance this by looking longer term. While those that put money into Neul no doubt got a decent return, think how much more they’d have received if the company had been allowed to grow and exploit its market position.
I’m not alone in taking this stance. Cambridge Innovation Capital (CIC), the University of Cambridge-backed VC fund, recently warned its portfolio companies against selling out too early and promised to provide long-term, founder-friendly capital to help grow the next ARMs and CSRs.
So what we need is the support, both financial and in terms of time, that gives companies the ability to achieve their potential. Not all of them will make it, and many will be niche players that logically fit better within bigger companies – but at least they’ll have had the chance to aim for the stars before finding their real place in the world. Otherwise Cambridge (and other parts of the UK tech scene) will simply act as incubators that turn bright ideas into viable businesses that can be snapped up and digested by tech giants looking for the newest innovation. It is much better for both the local and national economy that some of these startups make it to the stock market as fully fledged businesses, creating ecosystems that generate new sectors and jobs. This requires longer-term thinking from everyone involved – otherwise the number of billion dollar Cambridge companies will shrink even further.
In many ways the news that Google has bought smart home company Nest Labs shouldn’t be a surprise. It has been talking to the company for some time and apparently lots of Google employees had installed the company’s sensor-based thermostat in their own homes.
More to the point I think it fits in with Google’s overall objectives. As analysts have pointed out, Google isn’t a search engine company (and hasn’t been for some time), but is about data – collecting it (analysing search results, Google Glass, StreetView) and then using it to either sell you things (through adverts) or make your life better in some way.
With billions of sensors embedded in previously dumb objects that will be communicating in real-time, the Internet of Things promises to create a tidal wave of data. Each piece will be tiny, but if you can bring it together and analyse it you can get an even deeper view of the world around us, and the people in it. Nest’s products are much more than thermostats, and provide Google with the sensor/Internet of Things expertise it needs to add to its product portfolio. It already has Android-based smartphones/tablets to act as controllers, the mapping technology to show where sensors are located and the technology to analyse billions of events in real-time. And with Google Fiber rolling out in several US cities, it has a network to send the data through as well.
A simple example – your Nest thermostat notifies you via your smartphone that your boiler has gone wrong while you are at work, and suggests a registered tradesman who can fix it by trawling the web and any recommendations in your Google+ circles. Or alternatively it gives you the address of the nearest clothing shop, so you can stock up on thick jumpers.
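Loosely speaking, the plumbing behind that scenario is not exotic. The sketch below is purely hypothetical – none of the helper functions correspond to a real Nest or Google API, they are simply stand-ins for the pieces that would need to exist:

```python
# Hypothetical sketch of the boiler-alert scenario above.
# All of these functions are made-up stand-ins, not a real Nest or Google API.

def send_push_notification(phone_id, message):
    """Stand-in for a push-notification service."""
    print(f"[push -> {phone_id}] {message}")

def find_recommended_tradesmen(trade, near, circles):
    """Stand-in for a search blending web results with social recommendations."""
    return [{"name": "A. N. Engineer", "recommended_by": circles[0] if circles else None}]

def handle_thermostat_event(event, user):
    """React to a fault event reported by the home's thermostat."""
    if event["type"] != "boiler_fault":
        return
    # 1. Tell the user, wherever they are, via their phone.
    send_push_notification(user["phone"], f"Boiler fault reported (code {event['code']}).")
    # 2. Suggest someone nearby who can fix it.
    engineers = find_recommended_tradesmen("heating engineer", user["home_address"], user["circles"])
    send_push_notification(user["phone"], f"Recommended engineer: {engineers[0]['name']}")

# Example: a fault arriving while the user is at work.
handle_thermostat_event(
    {"type": "boiler_fault", "code": "E110"},
    {"phone": "my-phone", "home_address": "CB1 1AA", "circles": ["a trusted friend"]},
)
```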
Many people (myself included) would find this a bit creepy, but it is potentially possible if you can knit all the technology together. What I think is interesting is how utilities will respond to the future entry of Google into the market. After all, as publishers and others have found, Googlification can squeeze out incumbents through sheer scale and by engaging more closely with customers. Utilities have to decide whether they want to partner with the likes of Google – risking losing the customer relationship and becoming commodity suppliers of gas and electricity – or take a stand and build stronger engagement with customers. In current circumstances that’ll be difficult – people are at best ambivalent about their utility supplier, and in an era of rising prices and poor customer service many actively dislike them.
So there’s a big opportunity here – and something that Cambridge’s cluster of smart home/green tech companies could exploit. For example, AlertMe already has a partnership with British Gas, while Sentec is working with metering companies to make their products smarter. If energy companies don’t want to work with Google then they have two choices – do it themselves (teaming up with smaller tech companies), or partner with larger industrial tech companies, such as Siemens or Bosch. And these industrial giants will need the specialist expertise that smart home companies can provide.
The utility market doesn’t move fast, so don’t expect to see Google running your home in the next year, but the Nest acquisition should actually spur the whole sector on, attracting both interest and investment. The world just got more interested in smart homes, which is good news for relevant startups in Cambridge and beyond.
Like a lot of people I’ve given up on wearing a watch during the working day, replacing it with glancing at my phone, tablet or computer. So all the current noise about mooted smart watches from Apple (immediately dubbed the iWatch), Google, Samsung and now Microsoft puzzled me. Why would anyone try and replicate the features of a smart phone on a tiny screen on their wrist – particularly when they were probably carrying their phone in their pocket?
Take the Pebble watch. It essentially syncs with your smartphone and reminds you about your latest tweets, emails and phone calls – a cute accessory but hardly game changing for most people.
But a bit more thinking unlocks why the tech titans think there’s a market out there. The only time I actually wear a watch (except on the few occasions I want to appear smart) is when I go for a run and I use GPS to measure where I’ve gone and exactly how slowly. Essentially I’ve got a wearable sensor around my wrist, rather than a timekeeping device.
That’s where the interest will be: not as a smaller second screen for your iPhone, but as a way of measuring where you are, what you are doing and your vital signs. After all, a watch has the benefit of being intimately connected to your person – few people are going to hold their phone to their wrist to measure their pulse. With an ageing population and an increasing desire to manage our own health, this is where the mass market will be. Add in the Internet of Things and you can see a connected web of wearable sensors managing our lives.
Thinking about the smart watch, I’ve come up with five applications where it could be used – from the basic to the far-fetched.
- Patient monitoring – both in hospitals and more importantly at home, the watch can send back vital statistics to doctors and monitoring services, raising the alarm if issues occur
- A smart wallet – why get your wallet or Oyster card out when you need to buy something? The watch automatically debits your account as you pass through ticket barriers or pick up that latte.
- Obesity control – measuring calories burned is standard on sports watches, so combine this with a camera and an electric shock buzzer. Not burnt enough calories and reaching for a doughnut? Cue a mild electric shock to remind the wearer of their diet
- Getting your dinner on the table. The watch senses when you’re half an hour from home and sends a signal to your oven to switch it on. Get stuck in traffic and it adjusts the heat so your dinner isn’t burnt to a crisp (see the sketch after this list)
- Surveillance. Very 1984, but just imagine if every smart watch could be tracked by governments – allowing them to see not only where you are, but also your state of health and everyday activities. Obviously the most far-fetched application of all (we all hope)…
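Taking the dinner example, a crude sketch of the logic might look something like the following. Everything here is hypothetical – the travel-time estimate and the oven commands are made up purely for illustration:

```python
# Crude sketch of the "dinner on the table" idea: the watch's location and the
# current traffic decide what the oven should do. All names are hypothetical.

def estimated_minutes_from_home(location, home, traffic_factor=1.0):
    """Very rough travel-time estimate: straight-line distance scaled by traffic."""
    dx, dy = location[0] - home[0], location[1] - home[1]
    distance_km = (dx ** 2 + dy ** 2) ** 0.5 * 111  # ~111 km per degree, near enough
    return distance_km / 50 * 60 * traffic_factor   # assume ~50 km/h average speed

def oven_command(eta_minutes):
    """Decide what the oven should do based on the wearer's ETA."""
    if eta_minutes > 45:
        return "stay off or turn the heat right down"  # too far away, or stuck in a jam
    if eta_minutes > 15:
        return "cook at 180C"                          # roughly half an hour out: switch on
    return "drop to 80C and keep warm"                 # nearly home

# Example: the same spot in light traffic, then in a jam.
eta_clear = estimated_minutes_from_home((52.30, 0.05), (52.20, 0.12))
eta_jam = estimated_minutes_from_home((52.30, 0.05), (52.20, 0.12), traffic_factor=3.0)
print(f"Clear roads: {oven_command(eta_clear)}")
print(f"Traffic jam: {oven_command(eta_jam)}")
```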