Fifty years ago, engineer Gordon Moore wrote an article that has become the bedrock of computing. Moore’s Law, as first described in that article, states that the number of elements that can be fitted onto a given size of silicon doubles every year. The prediction was later revised to a doubling every two years, and ‘elements’ became transistors, but it has broadly held true for five decades. In essence, computing power doubles every two years – and consequently gets considerably cheaper over time.
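The compounding behind that doubling is easy to underestimate. A rough sketch of the arithmetic – assuming a clean two-year doubling, which real fabrication schedules only ever approximate:

```python
# Back-of-the-envelope Moore's Law arithmetic: a doubling every
# two years, compounded over the five decades since the 1965 article.
years = 50
doubling_period = 2              # years per doubling (the revised law)
doublings = years // doubling_period
growth_factor = 2 ** doublings

print(doublings)        # 25 doublings in 50 years
print(growth_factor)    # a roughly 33.5-million-fold increase
```

Twenty-five doublings turn a handful of transistors into tens of millions, which is why a phone in your pocket can outclass a room-sized 1960s mainframe.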
What is interesting is to look back over the last 50 years and see how completely different the IT landscape is today. Pretty much all the companies that were active in the market when Moore’s Law was penned have disappeared (IBM being a notable exception, with HP staggering on). Even Intel, the company Moore co-founded, didn’t get started until after he’d written the original article. At the same time IT has moved from a centralised mainframe world, with users interacting through dumb terminals, to a more distributed model of a powerful PC on every desk. Arguably, it is now heading back to an environment where the Cloud provides the processing power and we use PCs, tablets or phones that, while powerful, cannot come close to the speed of Cloud-based servers. This centralised model works well when you have fast connectivity, but doesn’t function at all when your internet connection is down, leaving you twiddling your thumbs.
Looking around and comparing a 1960s mainframe with today’s smartphone you can see Moore’s Law in action, but how long will it continue to hold? The law’s demise has been predicted for some time, and as chips become ever smaller the processes and fabs needed to make them become more complex, and therefore more expensive. These costs have to be passed on somehow – at the moment high-end smartphone users are happy to pay a premium for the latest, fastest model, but it is difficult to see this lasting forever, particularly as the whizzier the processor, the quicker batteries drain. The Internet of Things (IoT) will require chips with everything, but size and power constraints – and the fact that the majority of IoT sensors will not need huge processing power – mean that Moore’s Law isn’t necessary to build the smart environments of the future.
Desktop and laptop PCs used to be the biggest users of chips, and the largest beneficiaries of Moore’s Law, becoming increasingly powerful without the form factor having to change. But sales are slowing, as people turn to a combination of tablets/phones and the processing power of the Cloud. Devices such as Google Chromebooks can use lower-spec chips as they rely on the Cloud for the heavy lifting, making them cheaper. At the same time, the servers within the datacentres that are running these Cloud services aren’t as space constrained, so miniaturisation is less of a priority.
Taken together these factors probably mean that while Moore’s Law could theoretically carry on for a long time, the economics of a changing IT landscape could finish it off within the next 10 years. However, its death has been predicted many times before, so it would take a brave person to write its epitaph just yet.
If you needed evidence of the growth of the smartphone market and its move into every part of our lives, then this week’s Mobile World Congress (MWC) provides it. It wasn’t that long ago that the event was dominated by network infrastructure companies, but now it is essentially a consumer electronics show in all but name. And one that looks far beyond the handset itself. Ford launched an electric bike, Ikea announced furniture that charges your smartphone and a crowdfunded startup showed a suitcase that knows where it is and how much it weighs.
Five years ago none of these companies would have even thought of attending MWC – and it is all down to the rise of the smartphone. It is difficult to comprehend that the first iPhone was only launched in 2007, at a time when Apple was a niche technology player. It is now worth more than any other company in the world and 2 billion people globally have an internet-connected smartphone. By 2020 analysts predict that 80% of the world’s adults will own a smartphone.
As any honest iPhone owner will freely admit, they may be sleek, but they are actually rubbish for making and receiving calls. What they do provide are two things – a truly personal computer that fits in your pocket, and access to a global network of cloud-based apps. It is this mixture of the personal and the industrial that makes smartphones central to our lives. We can monitor our own vital signs and the environment around us through fitness and health trackers and mapping apps, and at the same time access any piece of information in the world and monitor and control devices hundreds or thousands of miles away. Provided you have a signal…
So, based on what is on show at MWC, what are the next steps for the smartphone? So far it seems to split into two strands – virtual reality and the Internet of Things. HTC launched a new virtual reality headset, joining the likes of Sony, Microsoft, Samsung and Oculus, promising a more immersive experience. Sensors to measure (and control) everything from bikes and cars to tennis racquets are also on show. The common denominator is that they all rely on a smartphone and its connectivity to get information in and out quickly.
It is easy to look at some of the more outlandish predictions for connected technology and write them off as unlikely to make it into the mainstream. But then, back in 2007, when Steve Jobs unveiled the first iPhone, there were plenty of people who thought it would never take off. The smartphone revolution will continue to take over our lives – though I’m not looking forward to navigating streets full of people wearing virtual reality headsets who think they are on the beach, rather than on their way to work…
The cover story in last week’s Economist looked at the growing global dominance of internet giants such as Google and Facebook. This was partly driven by the fact that the European Parliament recently passed a resolution to more tightly regulate internet search and potentially break up Google, as well as by ongoing worries about competition and online privacy.
So are effective online monopolies (Google has 90% of the European search market for example) a good or bad thing?
Obviously in the real world monopolies are viewed with suspicion, particularly when a dominant position is then used to raise prices, unfairly squeeze competitors and generally provide a poor deal to customers. But a monopoly on its own is not enough for regulators to step in. In many niche markets (say chemicals) the investment needed to compete with a dominant incumbent would put off any new entrants, so it becomes a monopoly by default. If it doesn’t abuse its position, regulators tend to just monitor the situation without taking action.
So, no-one would argue against the fact that monopolies need to be watched closely. But what is interesting is the difference between the online and offline worlds, in four key ways. Firstly, the cost of entering an internet market is relatively small – you don’t need to build an expensive factory, but can rely on scalable, inexpensive cloud-based servers and storage to host your business. This makes expansion easy, particularly given the widespread adoption of the internet and mobile phones across the globe, providing a proven way of connecting with customers.
The second factor that causes internet businesses to grow exponentially is the network effect. Essentially the more users on a service, such as Facebook, the better it is for everyone involved, as there are more people to interact with. In turn this attracts more people in a virtuous circle. It can work the other way though – as the fate of early social networks such as MySpace shows.
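One common (if much-debated) way to formalise the network effect is Metcalfe’s law: a service’s value scales roughly with the number of possible connections between its users, which grows with the square of the user count. A minimal sketch:

```python
# Metcalfe's law intuition: with n users there are n*(n-1)/2
# possible pairwise connections, so value grows roughly
# quadratically while the user base grows linearly.
def pairwise_connections(n: int) -> int:
    return n * (n - 1) // 2

# Doubling the user base roughly quadruples the connections on offer.
print(pairwise_connections(1_000))   # 499,500
print(pairwise_connections(2_000))   # 1,999,000
```

It also explains the downside: as users leave, the value to those who remain falls away faster than the headcount does, which is the MySpace story in miniature.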
Thirdly, the majority of the internet services being discussed are free to consumers. So they don’t directly see any negative impact from the monopoly (such as a rise in costs). What isn’t immediately obvious to users is the price of free. Essentially their personal data is used to power advertising, direct mail and other marketing campaigns, with many consumers having only a hazy understanding of what their information is being used for, or how to increase privacy settings. In fact, it is advertisers that feel the impact of higher prices, given the internet giants’ control of online advertising.
The final difference, and one that The Economist makes much of, is the speed of change in the technology space, and how this makes today’s monopolies tomorrow’s has-beens. Companies find it hard to jump from leading one wave of innovation to competing in a new space. IBM dominated the mainframe market, but has had to reinvent itself in order to survive, while the replacement of the personal computer with tablets and smartphones has dealt a major blow to Microsoft.
However, these are still multi-billion-dollar companies and have hardly withered away. Therefore, in my view, technology innovation alone is not enough to keep the internet giants in check. What is needed aren’t heavy-handed rules, but a more measured approach that balances the needs of consumers with the speed of innovation and the potential competitive impact of monopoly positions. It is an incredibly difficult balancing act – and will require give and take from both sides if it is to succeed. Done right, new breakthrough services will be allowed to grow without trampling on other businesses. Get it wrong, and innovation is stifled, potentially harming consumers and businesses who want to access the latest technology and services.
Commentators are full of predictions that software will eat the world, with jobs, industries and traditional ways of doing things swept away by the rise of technology. From automated journalism to connected cars, the claim is that we’re undergoing a transformation in how we work, live and play.
Software is revolutionising the world around us, but I’d contend that there’s a much more disruptive factor impacting our lives – the smartphone. It essentially provides an always-on, easy to use, ubiquitous interface with all of the software around us. Without it we wouldn’t be able to access the power of technology. So, rather than software eating the world, I’d pinpoint 9 ways that smartphones are making a meal of it:
1. Health
Smartphones have the ability to monitor our vital signs and transmit the information to doctors and medical staff in real time. Whether using built-in sensors or external, Bluetooth-equipped ones, smartphones will disrupt the health industry. Apple’s new focus on building a health ecosystem is just part of this trend, which can either be seen as a force for good or as allowing intrusive snooping on our most private moments. On the plus side patients can be monitored remotely, allowing them to remain at home rather than going into hospital for certain conditions, but confidentiality of data remains a worry. What if your insurance company could access your health data and amend your premiums accordingly?
2. Taxis and transport
Companies such as Uber and Lyft are radically changing the taxi market by removing the overhead (justified or otherwise) of traditional operators. Anyone can become a taxi driver – all they need is a car and a smartphone (which can also serve as your GPS, so you don’t need the Knowledge to direct you to the right place). This does raise potential issues about safety, vetting and insurance, hence the bitter battles being fought between traditional cab drivers and the new upstarts.
3. Marketing
At no point in human history has so much data been available about individuals. The combination of ‘free’ services such as Google and Facebook, which hoover up our personal information and preferences, with the geolocation data from a smartphone means that companies can understand more about their consumers than ever before. The challenge for marketers is twofold – they need to ensure that they have real, informed consent from consumers when handling their private data, but at the same time have to evolve the skills to sift through this big data to deliver personalised marketing that drives engagement. The traditional model of campaigns that take months to plan and implement is rapidly going out of the window – if marketers can’t adapt they risk being sidelined by ever cleverer algorithms.
4. Money
There is something impressive about a pile of cash – even if it is just 1p coins. But carrying it around is another story. Replacing pounds and pence with the ability to tap to pay even the smallest amount with your phone promises to turn us into a cashless society. And it also removes the need for a wallet full of credit, debit or loyalty cards. All you’ll need to do is select how you want to pay on your phone and the software will handle the transfer. Could we see traditional banks and financial services companies replaced by Apple Money – or even currencies swept aside by electronic dosh? It is certainly possible, hence Apple’s move into the sector with the iPhone 6.
5. Phone calls
It may be difficult to remember, but when they began, mobile phones were for making phone calls or sending text messages (and playing Snake if you had a Nokia). Now the number of calls made and received is a fraction of what it used to be, as people move to messaging, email and free voice over IP services such as Skype. Many of us already pay more for our smartphone data plans than for calls and texts – meaning that mobile phone (and landline) operators will need to evolve new services if they are to be part of the smartphone future.
6. Toys and games
Growing up in an analogue world, toys and games were very straightforward. Now traditional toys are evolving to embrace both full-on mobile gaming (think Angry Birds) and halfway houses where the physical meets the virtual. Games such as Skylanders combine playing pieces containing electronic chips with fully fledged games to give a radically new experience. And this is just the beginning. As immersive technologies such as Google Glass and Oculus Rift gain traction we’ll find it difficult to tell reality and gaming apart. How long before people embed chips in themselves to become part of the latest smartphone game?
7. Energy
Buying power is a necessary evil – and the battery life of smartphones means we’ll always need electricity to recharge them. Mobile devices, combined with sensors and the Internet of Things, provide the ability to monitor and adjust how we use power. From turning smart thermostats up or down, to only switching on lights when the smartphone user is in the vicinity, they can change energy use. Taken a step further, consumers could cut out the energy company and use their smartphone to buy power directly from smaller producers, adding flexibility and potentially bringing down prices.
8. Insurance
The problem with insurance premiums is that they are based on averages, rather than knowledge of your individual circumstances. The data within a smartphone, either directly monitoring your movements, or linked to a sensor in your car, provides a deeper context around your behaviour and habits. Used properly this can help better judge the risks of insuring individuals – but used incorrectly it will cause a privacy backlash.
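The shift from averages to individual data can be sketched in a few lines. The inputs and weightings below are purely illustrative assumptions, not any real insurer’s model – the point is simply that a flat premium becomes a function of observed behaviour:

```python
# Hypothetical usage-based pricing sketch: adjust a flat base premium
# using data a smartphone or in-car sensor might report. The field
# names and weightings are illustrative assumptions, not a real model.
def personalised_premium(base: float, miles_per_year: float,
                         hard_brakes_per_100_miles: float) -> float:
    mileage_factor = min(miles_per_year / 10_000, 2.0)  # capped at 2x
    braking_factor = 1.0 + 0.05 * hard_brakes_per_100_miles
    return round(base * mileage_factor * braking_factor, 2)

# A low-mileage, smooth driver pays less than the flat average premium.
print(personalised_premium(500.0, miles_per_year=6_000,
                           hard_brakes_per_100_miles=1.0))   # 315.0
```

The privacy question is visible right in the function signature: to price this way, the insurer has to collect exactly the behavioural data that makes people uneasy.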
9. Pub quizzes
As a Trivial Pursuit expert (and part of the reigning village quiz team champions) there’s nothing I like better than the chance to show off my knowledge. But how can pub quizzes survive in an era when Wikipedia can be accessed from your smartphone in milliseconds? Short of holding quizzes in exam conditions, with no toilet breaks where people can sneak off to check answers on the internet, cheating is going to become rife, making my carefully assembled general knowledge useless.
Research shows that the majority of us access the internet more through mobile devices than traditional PCs. And 20 per cent of young American adults admit to using their smartphones during sex. We look at our phones constantly, panic if they are out of sight for a minute and feel bereaved if they are lost or stolen. If it is true that software is eating the world, the smartphone is the knife, fork and plate responsible for the repast.