50 years ago, engineer Gordon Moore wrote an article that has become the bedrock of computing. Moore’s Law, as first described in the article, states that the number of elements that can be fitted onto the same size piece of silicon doubles every year. It was later revised to every two years, and elements became transistors, but it has basically held true for five decades. Essentially it means that computing power doubles every two years – and consequently gets considerably cheaper over time.
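To see how quickly that doubling compounds, here is a minimal Python sketch. The 1971 Intel 4004 baseline of roughly 2,300 transistors is an illustrative assumption for the starting point, not a figure from Moore’s article:

```python
# Illustrative sketch of Moore's Law: transistor counts doubling every
# two years. The ~2,300-transistor Intel 4004 (1971) is used here purely
# as an assumed baseline for illustration.

def transistors(year, base_year=1971, base_count=2_300, period=2):
    """Project transistor count assuming one doubling every `period` years."""
    doublings = (year - base_year) // period
    return base_count * 2 ** doublings

for y in (1971, 1991, 2011):
    print(y, f"{transistors(y):,}")
```

Twenty years means ten doublings – a thousandfold increase – which is why the economics, not the physics, tend to dominate the debate.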
What is interesting is to look back over the last 50 years and see how completely different the IT landscape is today. Pretty much all the companies that were active in the market when Moore’s Law was penned have disappeared (with IBM being a notable exception and HP staggering on). Even Intel, the company Moore co-founded, didn’t get started until after he’d written the original article. At the same time IT has moved from a centralised mainframe world, with users interacting through dumb terminals, to a more distributed model of a powerful PC on every desk. Arguably, it is now heading back to an environment where the Cloud provides the processing power and we use PCs, tablets or phones that, while powerful, cannot come close to the speed of Cloud-based servers. This centralised model works well when you have fast connectivity but doesn’t function at all when your internet connection is down, leaving you twiddling your thumbs.
Looking around and comparing a 1960s mainframe with today’s smartphone you can see Moore’s Law in action, but how long will it continue to hold? The law’s demise has been predicted for some time, and as chips become ever smaller the processes and fabs needed to make them become more complex and therefore more expensive. This means that the costs have to be passed on somehow – at the moment high end smartphone users are happy to pay a premium for the latest, fastest model, but it is difficult to see this lasting for ever, particularly as the whizzier the processor the quicker batteries drain. The Internet of Things (IoT) will require chips with everything, but size and power constraints, along with the fact that the majority of IoT sensors will not need huge processing power, mean that Moore’s Law isn’t necessary to build the smart environments of the future.
Desktop and laptop PCs used to be the biggest users of chips, and the largest beneficiaries of Moore’s Law, becoming increasingly powerful without the form factor having to change. But sales are slowing, as people turn to a combination of tablets/phones and the processing power of the Cloud. Devices such as Google Chromebooks can use lower spec chips as they rely on the Cloud for the heavy lifting, making them cheaper. At the same time, the servers within the datacentres that are running these Cloud services aren’t as space constrained, so miniaturisation is less of a priority.
Taken together these factors probably mean that while Moore’s Law could theoretically carry on for a long time, the economics of a changing IT landscape could finish it off within the next 10 years. However, its death has been predicted many times before, so it would take a brave person to write its epitaph just yet.
If you needed evidence of the growth of the smartphone market and its move into every part of our lives, then this week’s Mobile World Congress (MWC) provides it. It wasn’t that long ago that the event was dominated by network infrastructure companies, but now it is essentially a consumer electronics show in all but name. And one that looks far beyond the handset itself. Ford launched an electric bike, Ikea announced furniture that charges your smartphone and a crowdfunded startup showed a suitcase that knows where it is and how much it weighs.
Five years ago none of these companies would have even thought of attending MWC – and it is all down to the rise of the smartphone. It is difficult to comprehend that the first iPhone was only launched in 2007, at a time when Apple was a niche technology player. It is now worth more than any other company in the world and 2 billion people globally have an internet-connected smartphone. By 2020 analysts predict that 80% of the world’s adults will own a smartphone.
As any honest iPhone owner will freely admit, they may be sleek, but they are actually rubbish for making and receiving calls. What they do provide is two things – a truly personal computer that fits in your pocket, and access to a global network of cloud-based apps. It is the mixture of the personal and the industrial that makes smartphones central to our lives. We can monitor our own vital signs, and the environment around us, through fitness and health trackers and mapping apps, and at the same time access any piece of information in the world and monitor and control devices hundreds or thousands of miles away. Provided you have a signal…
So, based on what is on show at MWC, what are the next steps for the smartphone? So far it seems to split into two strands – virtual reality and the Internet of Things. HTC launched a new virtual reality headset, joining the likes of Sony, Microsoft, Samsung and Oculus Rift, promising a more immersive experience. Sensors to measure (and control) everything from bikes and cars to tennis racquets are also on show. The sole common denominator is that they rely on a smartphone and its connectivity to get information in and out quickly.
It is easy to look at some of the more outlandish predictions for connected technology and write them off as unlikely to make it into the mainstream. But then, back in 2007, when Steve Jobs unveiled the first iPhone, there were plenty of people who thought it would never take off. The smartphone revolution will continue to take over our lives – though I’m not looking forward to navigating streets full of people wearing virtual reality headsets who think they are on the beach, rather than on their way to work…
The cover story in last week’s Economist looked at the growing global dominance of internet giants such as Google and Facebook. This was partly driven by the fact that the European Parliament recently passed a resolution to more tightly regulate internet search and potentially break up Google, as well as by ongoing worries about competition and online privacy.
So are effective online monopolies (Google has 90% of the European search market for example) a good or bad thing?
Obviously in the real world monopolies are viewed with suspicion, particularly when a dominant position is then used to raise prices, unfairly squeeze competitors and generally provide a poor deal to customers. But a monopoly on its own is not enough for regulators to step in. In many niche markets (say chemicals) the investment needed to compete with a dominant incumbent would put off any new entrants, so it becomes a monopoly by default. If it doesn’t abuse its position regulators tend to just monitor the situation without taking action.
So, no-one would argue against the fact that monopolies need to be watched closely. But what is interesting is the difference between the online and offline worlds, in four key ways. Firstly, the cost of entering an internet market is relatively small – you don’t need to build an expensive factory, but can rely on scalable, inexpensive cloud-based servers and storage to host your business. This makes expansion easy, particularly given the widespread adoption of the internet and mobile phones across the globe, providing a proven way of connecting with customers.
The second factor that causes internet businesses to grow exponentially is the network effect. Essentially the more users on a service, such as Facebook, the better it is for everyone involved as there are more people to interact with. In turn this attracts more people in a virtuous circle. It can work the other way though – as the fate of early social networks such as MySpace show.
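One common (if rough) way to quantify this network effect is Metcalfe’s law, which values a network by its possible pairwise connections rather than its raw user count. The sketch below is illustrative only, not a measure any of these companies actually publishes:

```python
# Metcalfe's law (illustrative): a network's potential value scales with
# the number of possible pairwise connections, n*(n-1)/2, rather than
# linearly with the number of users n.

def pairwise_connections(n):
    """Number of distinct user-to-user links in a network of n users."""
    return n * (n - 1) // 2

# Doubling the user base roughly quadruples the possible connections:
for n in (10, 20, 40):
    print(n, pairwise_connections(n))
```

This super-linear growth is why an early lead compounds so quickly – and why losing users, as MySpace did, unravels just as fast.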
Thirdly, the majority of the internet services being discussed are free to consumers. So they don’t directly see any negative impact from the monopoly (such as a rise in costs). What isn’t immediately obvious to users is the price of free. Essentially their personal data is used to power advertising, direct mail and other marketing campaigns, with many consumers having a hazy understanding of what their information is being used for, or how to increase privacy settings. In fact, it is advertisers that feel the impact of higher prices, given the internet giants’ control of online advertising.
The final difference, and one that The Economist makes much of, is the speed of change in the technology space, and how this makes today’s monopolies tomorrow’s has-beens. Companies find it hard to jump from leading one wave of innovation to competing in a new space. IBM dominated the mainframe market, but has had to reinvent itself in order to survive, while the replacement of the personal computer with tablets and smartphones has dealt a major blow to Microsoft.
However, these are still multi-billion dollar companies and have hardly withered away. Therefore in my view, technology innovation alone is not enough to regulate the internet giants. What is needed isn’t heavy-handed rules, but a more measured approach that balances the needs of consumers with the speed of innovation and the potential competitive impact of monopoly positions. It is an incredibly difficult balancing act – and will require give and take from both sides if it is to succeed. Get it right and new breakthrough services will be allowed to grow without trampling on other businesses. Get it wrong and innovation is stifled, potentially harming consumers and businesses who want to access the latest technology and services.
Commentators are full of predictions that software will eat the world, with jobs, industries and traditional ways of doing things swept away by the rise of technology. From automated journalism to connected cars, the claim is that we’re undergoing a transformation in how we work, live and play.
Software is revolutionising the world around us, but I’d contend that there’s a much more disruptive factor impacting our lives – the smartphone. It essentially provides an always-on, easy to use, ubiquitous interface with all of the software around us. Without it we wouldn’t be able to access the power of technology. So, rather than software eating the world, I’d pinpoint 9 ways that smartphones are making a meal of it:
1. Health

Smartphones have the ability to monitor our vital signs and transmit information to doctors and medical staff in real-time. Whether using in-built sensors or external, Bluetooth-equipped ones, smartphones will disrupt the health industry. Apple’s new focus on building a health ecosystem is just part of this trend, which can either be seen as a force for good or as allowing intrusive snooping on our most private moments. On the plus side patients can be monitored remotely, allowing them to remain at home rather than going into hospital for certain conditions, but confidentiality of data remains a worry. What if your insurance company could access your health data and amend your premiums accordingly?
2. Taxis and transport
Companies such as Uber and Lyft are radically changing the taxi market by removing the overhead (justified or otherwise) of traditional operators. Anyone can become a taxi driver – all they need is a car and a smartphone (which can also serve as your GPS, so you don’t need the Knowledge to direct you to the right place). This does raise potential issues about safety, vetting and insurance, hence the bitter battles being fought between traditional cab drivers and the new upstarts.
3. Marketing and data

At no point in human history has so much data been available about individuals. The combination of ‘free’ services such as Google and Facebook that hoover up our personal information and preferences, with the geolocation data from a smartphone mean that companies have the ability to understand more about their consumers than ever before. The challenge for marketers is twofold – they need to ensure that they have real, informed consent from consumers when handling their private data, but at the same time have to evolve the skills to sift through this big data to deliver personalised marketing that drives engagement. The traditional model of campaigns that take months to plan and implement is rapidly going out of the window – if marketers can’t adapt they risk being sidelined by ever cleverer algorithms.
4. Money and payments

There is something impressive about a pile of cash – even if it is just a pile of one pence pieces. But carrying it around is another story. Replacing pounds and pence with the ability to tap to pay even the smallest amount with your phone promises to turn us into a cashless society. And it also removes the need for a wallet full of credit, debit or loyalty cards. All you’ll need to do is select how you want to pay on your phone and the software will handle the transfer. Could we see traditional banks and financial services companies replaced by Apple Money – or even currencies swept aside by electronic dosh? It is certainly possible, hence Apple’s move into the sector with the iPhone 6.
5. Phone calls

It may be difficult to remember, but when they began, mobile phones were for making phone calls or sending text messages (and playing Snake if you had a Nokia). Now the number of calls made and received is a fraction of what it was, as people move to messaging, email and free voice over IP services such as Skype. Many of us already pay more for our smartphone data plans than for calls and texts – meaning that mobile phone (and landline) operators will need to evolve new services if they are to be part of the smartphone future.
6. Toys and games

Growing up in an analogue world, toys and games were very straightforward. Now traditional toys are evolving to embrace both full on mobile gaming (think Angry Birds) and halfway houses where the physical meets the virtual. Games such as Skylanders combine playing pieces containing electronic chips with fully fledged software to give a radically new experience. And this is just the beginning. As immersive technologies such as Google Glass and Oculus Rift gain traction we’ll find it difficult to tell reality and gaming apart. How long before people embed chips in themselves to become part of the latest smartphone game?
7. Energy

Buying power is a necessary evil – and the battery life of smartphones does mean we’ll always need electricity to recharge them. Mobile devices, combined with sensors and the Internet of Things, provide the ability to monitor and adjust how we use power. From turning smart thermostats up or down, to only switching on lights when the smartphone user is in the vicinity, they can change energy use. Taken a step further, consumers could cut out the energy company and use their smartphone to buy power directly from smaller producers, adding flexibility and potentially bringing down prices.
8. Insurance

The problem with insurance premiums is that they are based on averages, rather than knowledge of your individual circumstances. The data within a smartphone, either directly monitoring your movements, or linked to a sensor in your car, provides a deeper context around your behaviour and habits. Used properly this can help better judge the risks of insuring individuals – but again used incorrectly it will cause a privacy backlash.
9. Pub quizzes
As a Trivial Pursuit expert (and part of the reigning village quiz team champions) there’s nothing I like better than the chance to show off my knowledge. But how can pub quizzes survive in an era when Wikipedia can be accessed from your smartphone in milliseconds? Short of holding quizzes in exam conditions, with no toilet breaks where people can sneak off to check answers on the internet, cheating is going to become rife, making my carefully assembled general knowledge useless.
Research shows that the majority of us access the internet more through mobile devices than traditional PCs. And 20 per cent of young American adults admit to using their smartphones during sex. We look at our phones constantly, panic if they are out of sight for a minute and feel bereaved if they are lost or stolen. If it is true that software is eating the world, the smartphone is the knife, fork and plate responsible for the repast.
Everyone understands that the bigger a company gets, the more difficult it is to create and nurture ideas. There are a number of reasons. The sheer size of the organisation militates against change – it is incredibly difficult to get everyone to understand a game-changing idea and align themselves behind it. You get a fragmented approach and the whole thing can get mired in bureaucracy and finger-pointing.
Large organisations are inherently conservative, with people not wanting to rock the boat, while there is fierce rivalry between different divisions/departments which can lead to ideas being squashed if they seem to tread on someone else’s turf. There’s also a fine line between a strong company culture and having too inward looking a focus. Even successful companies such as Facebook have been accused of a lack of perspective – because they solely use (and love) their own products they assume everyone else believes they are equally awesome. Step outside the organisation and your obsession is just a minor part of the lives of your customers.
The good news is that the majority of organisations do understand the need for a stream of fresh ideas. After all, the world today is dominated by companies such as Google, Facebook and Amazon that either didn’t exist twenty years ago, or were considerably smaller. Competition in every market is increasing and no-one wants to go the way of Nokia or Woolworths.
So how do you align your company to create the best environment for generating ongoing ideas? I’m no management consultant, but I’ve seen a few attempts over the last twenty years and it boils down to three broad types:
1 Innovation silos
In many industries (such as pharmaceuticals), where innovation relies on expensive capital equipment, it makes sense to create separate, concentrated research labs. These have the intellectual muscle and resources but can suffer from their sheer size and distance from the business. They can then hit the same problems as any other big organisation, with divisional rivalry and static corporate culture. Alternatively businesses have focused innovation in standalone business units – either skunkworks operations that are locked away from the rest of the organisation, incubators that support promising ideas at arm’s length or even smaller companies that have been bought and are run as ideas factories. All of these can work, provided management stay true to their word not to meddle or demand fast results, but there’s still no connection with the wider business and its needs.
2 The campus
You break up your monolithic organisation into a campus style environment, with different divisions occupying their own buildings, but close together. Splitting into smaller teams is good for creativity, and you get the economies of scale of having everyone on a single, but large, site. However the ability to cross-pollinate between groups can be limited – unless you happen to bump into someone over lunch you might be completely in the dark about what other sections of the company are working on.
3 The college
What I think is really interesting about the campus model is that it deliberately mimics the university campus structure. While this makes for a good working environment, it doesn’t help spread ideas. So I think companies need to look at a more collegiate model, similar to that of universities like Cambridge. You have two allegiances/bases – your division (essentially your college) and your actual project (your faculty). So you get the chance to mix with people from other divisions and collaborate on joint projects. Some people may find it disorienting, but if projects are scheduled to last 2-3 years the goal is never that far away.
Innovation is vital in every industry, and the size and structure depends on the sector and the market each company operates in. But I think it is time for more organisations to look at the college structure if they want to nurture and develop a stream of ideas that take their business forward over the long term.
I’ve talked before about the new ways marketers are trying to engage with consumers. This ranges from QR codes to augmented reality and relies on using the one device we always have with us – the smartphone. Being able to pinpoint exactly where someone is, for example the specific aisle of a shop, means they can serve up relevant marketing material that could turn a browser into a buyer. It is no wonder that the likes of Apple and Google are investing in technology that can help make indoor mapping more granular and detailed.
The latest technology to be touted to drive engagement is the beacon. Essentially a small, low cost, Bluetooth-enabled box that can be quickly fitted inside a building, it enables companies to send messages to suitably equipped smartphones in the near vicinity. As beacon technology is built into the latest Apple products, there are already over 200 million iOS devices out there that can act as both receivers and transmitters.
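For the curious, a beacon’s proximity is typically inferred from the received signal strength (RSSI) on the phone, using a standard log-distance path-loss model. The calibration values below are illustrative assumptions, not figures from any particular beacon vendor:

```python
# Rough sketch of beacon proximity estimation from RSSI, using a
# log-distance path-loss model. The calibration constants are
# illustrative assumptions, not vendor-specified values.

def estimate_distance(rssi, measured_power=-59, path_loss_exponent=2.0):
    """Approximate distance in metres from a received RSSI value (dBm).

    measured_power: the RSSI expected at 1 metre (a per-beacon calibration).
    path_loss_exponent: ~2.0 in free space; higher indoors due to walls.
    """
    return 10 ** ((measured_power - rssi) / (10 * path_loss_exponent))

print(estimate_distance(-59))  # at the calibration point: 1 metre
print(estimate_distance(-79))  # a much weaker signal: ~10 metres
```

In practice the signal fluctuates so much that apps bucket the result into coarse bands (immediate, near, far) rather than trusting an exact figure – which is enough for serving an offer when you pass a shelf.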
The possibilities are getting marketers, particularly in the US, extremely excited. Companies can automatically send relevant offers if you are in particular areas of a shop, such as in front of their products (or, if you’re being sneaky, in front of your rivals’ products). Airports or train stations could send automatic updates on delays or gate or platform changes. Beacons can be used to measure dwell time in specific areas and provide offers of help. William Hill is planning to use beacons to send in-app betting messages at the forthcoming Cheltenham Festival, while outdoor advertising companies are looking at how the technology can drive engagement with adverts. Mobile phone networks EE, O2 and Vodafone have invested to create a joint venture, Weve, to target the space, with Eat trialling its technology. The reason for the interest is that beacons essentially promise the same digital tracking possibilities as online, but in the physical world.
However there are still a couple of elephants in the room when it comes to mass market adoption. Consumers need to switch on Bluetooth, download an app, enable location services for the app and opt-in to receive notifications. So, even though iPhones now come with Bluetooth on as standard you still need to jump through a lot of hoops to be beacon ready.
And then there’s privacy. Perhaps you don’t want marketers to know whereabouts in the shop you were loitering or what you are buying at a detailed level. As the success of social media and loyalty cards has shown, people are willing to give up some of their privacy in return for a better experience and targeted offers, but none of these are as instant and real-world as beacons zapping a message straight onto your screen in real-time. At the moment all the advantages seem to be skewed towards retailers, with very little concrete benefit for consumers that will make them want to go through the rigmarole of making their phones ‘beaconable’.
At a time when consumers are just about getting their heads round paying for things by swiping cards rather than laboriously typing their PIN, I think beacons have a big job ahead to accelerate consumer adoption. The whole process needs to be made seamless and simple, with a focus on the benefits, rather than looking like another way to invade privacy and sell you more stuff. Only then will beacons deliver the insight that marketers and businesses are looking for.