Fifty years ago, engineer Gordon Moore wrote an article that has become the bedrock of computing. Moore's Law, as first described in the article, states that the number of elements that can be fitted onto the same-sized piece of silicon doubles every year. It was later revised to every two years, with elements changed to transistors, but it has broadly held true for five decades. Essentially it means that computing power doubles every two years, and consequently gets considerably cheaper over time.
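The arithmetic behind this is simple compound doubling. As a rough sketch (the function name and starting count are illustrative, not from Moore's article):

```python
def moores_law(start_count, years, doubling_period=2):
    """Project a component count forward, doubling every `doubling_period` years."""
    return start_count * 2 ** (years / doubling_period)

# Over 50 years at a two-year doubling period the count grows by a factor
# of 2**25 -- roughly 33.5 million times.
print(moores_law(1, 50))
```

The same exponent also explains the falling cost: each doubling roughly halves the cost per transistor.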
What is interesting is to look back over the last 50 years and see how completely different the IT landscape is today. Pretty much all the companies that were active in the market when Moore's Law was penned have disappeared (with IBM being a notable exception and HP staggering on). Even Intel, the company Moore co-founded, didn't get started until after he'd written the original article. At the same time IT has moved from a centralised mainframe world, with users interacting through dumb terminals, to a more distributed model of a powerful PC on every desk. Arguably, it is now heading back to an environment where the Cloud provides the processing power and we use PCs, tablets or phones that, while powerful, cannot come close to the speed of Cloud-based servers. This centralised model works well when you have fast connectivity, but doesn't function at all when your internet connection is down, leaving you twiddling your thumbs.
Comparing a 1960s mainframe with today's smartphone, you can see Moore's Law in action, but how long will it continue to hold? The law's demise has been predicted for some time, and as chips become ever smaller, the processes and fabs needed to make them become more complex and therefore more expensive. This means the costs have to be passed on somehow. At the moment high-end smartphone users are happy to pay a premium for the latest, fastest model, but it is difficult to see this lasting forever, particularly as the whizzier the processor, the quicker batteries drain. The Internet of Things (IoT) will put chips in everything, but size and power constraints, and the fact that the majority of IoT sensors will not need huge processing power, mean that Moore's Law isn't necessary to build the smart environments of the future.
Desktop and laptop PCs used to be the biggest users of chips, and the largest beneficiaries of Moore's Law, becoming increasingly powerful without the form factor having to be changed. But sales are slowing, as people turn to a combination of tablets/phones and the processing power of the Cloud. Devices such as Google Chromebooks can use lower-spec chips as they use the Cloud for the heavy lifting, making them cheaper. At the same time, the servers within the datacentres that are running these Cloud services aren't as space constrained, so miniaturisation is less of a priority.
Taken together these factors probably mean that while Moore’s Law could theoretically carry on for a long time, the economics of a changing IT landscape could finish it off within the next 10 years. However, its death has been predicted many times before, so it would take a brave person to write its epitaph just yet.
It is easy to write off 3D printing as a niche technology, best left to hobbyists or for businesses producing extremely specialised, one-off components. But having seen some of the latest products made with the technology, I think it is moving very quickly towards the mainstream. You can now produce incredibly intricate pieces on a home 3D printer, albeit a high spec one, and industrial 3D printers provide even more power, speed and performance. There are now more and more community spaces with 3D printers (like Makespace in Cambridge), and even some local copy shops have one, delivering another way of bringing the technology to the mass market.
So, why do I think 3D printing is going mainstream? Because it taps into three key trends:
1 Desire for personalisation
In our mass-produced, brand-led world, we have an increasing desire for personalisation. Many people want to show their individuality, and are willing to pay for it. So whether it is jewellery customised to fit your own body shape or a sculpture you've designed yourself, there is a market for 3D printed objects.
2 Need for precision
The boundaries of the possible are being pushed back. Medical science can do things that were previously thought impossible, while miniaturisation is shrinking the size of everyday objects around us and at the same time making them much more complex. 3D printing enables the creation of precisely made replacement bones for medical use, as well as significant parts of intricate jet engines. All of these are high value objects, but the same methods can be used on more mundane applications. Take spare parts for consumer goods – normally if something small breaks (such as the shelf bracket of your fridge), you need to buy a replacement from the manufacturer at an exorbitant price. And that's if you can even track down the part. Now, it is technically possible to 3D print the replacement, and while this may well infringe the manufacturer's intellectual property rights, it will be difficult for them to find out, let alone prosecute you.
3 Infrastructure in the Cloud
The combination of the internet, the Cloud and smartphones provides a complete, cost-effective infrastructure to support 3D printing. You can take high resolution photos with your phone, upload them and have them turned into product plans by using the immense processing power available on the Cloud. Short of inspiration? You can find and download plans for just about anything to make yourself (unfortunately including guns) through a quick search.
So what markets will it disrupt? Recent announcements point to two that have real potential. As mentioned before, parts for jet engines are being made experimentally using 3D printing by both Rolls-Royce and academic researchers in Australia. As well as the ability to work to extremely fine tolerances, 3D printing also has the benefit of producing much less waste, as objects are built up, layer by layer, rather than carved out from a larger block of expensive material.
Secondly, and more in the consumer space, Argos has announced that it will run a trial that allows people to customise jewellery, both by adding messages and also changing the item’s dimensions. Previously the likes of the Royal Mail, Amazon and Asda have run 3D printing trials. Moving to more of a “make to order” model will help Argos in keeping stock costs down – and also help differentiate it against other retailers on the high street through exclusive products. Given that the likes of Argos have been hard hit by the rise of online shopping, it is a smart move that could well be expanded to other products.
Like many technologies, 3D printing will not only change existing markets, but also spawn completely new ones that have not yet been thought up. What is definite is that it provides brands and companies with a challenge – will the ability for complex customisation be a threat or an opportunity to their business?
After announcements last year, this week saw the launch of the first Apple Watches, although they won’t go on sale until 24 April. The cutely named Spring Forward event saw the tech giant reveal all 38 models, which will range in price from £299 (for the sport model) to £8,000+, depending on screen size, design and whether you want it in 18 carat gold.
More importantly, Apple showed a selection of the apps that it expects to drive demand for the device. You can make touchless payments, receive phone calls, open a compatible hotel room door (rather than using a keycard), and remotely open an internet-connected garage door (no, I don't have one of those either). However, for a large number of functions, such as messaging, GPS tracking and making phone calls, you'll need an iPhone 5 to run alongside your new watch.
Apple is not a stupid company and has grown to be the biggest quoted business in the world by market capitalisation through reinventing the music and smartphone markets. It hired former Burberry chief executive Angela Ahrendts to head up its online and physical stores, partly to help its move from technology into fashion with watches. I remember loudly proclaiming that the iPad would never catch on due to its innate pointlessness, and now I rely on it every day. But I still see some serious challenges to the Apple Watch attaining critical mass. Here are four of them:
1. Cost
The cost of the Sport model begins at £299, with prices for the mid-tier Watch version starting at £479. To me, this is a lot of money to spend on a watch, even one that looks as sleek as the Apple device. And for £900+ you can buy a low-end TAG Heuer that you know will last for a long time without needing to be upgraded as software advances. Yes, millions of people have iPhones, but the vast majority got them on subsidised deals that meant they didn't have to fork out close to the real sales price. A better comparison is the similarly priced iPad, which has seen sales slow as the market has become saturated over time. Therefore predictions of sales of 60 million seem excessive, with the market much more limited than that.
2. Does it do anything different?
Anyone of a certain age who saw or read Dick Tracy loves the idea of using their watch to make a call, even if it is to the office rather than for police back up. But Dick Tracy didn’t have a smartphone, which can do pretty much everything a watch can do – and more besides. And as Apple has said, you’ll need to retain your iPhone to provide many of the functions that can’t be squeezed into the watch. Admittedly the iPhone is getting bigger, making it more difficult to use for things such as contactless payments, but equally the watch could be seen as too small for many other activities.
3. A whole new market
Apple has always been known for its design excellence, and the Watch appears to be equally stunning, admittedly with a bulkier face than a traditional wristwatch. Hiring Ahrendts also points to a desire to bring in luxury marketing nous to help it move into a different sector, where factors outside technology excellence and cool apps could be more important. Can it become the fashion accessory that everyone wants? In the ultra-competitive watch market it will be difficult, though expect Apple to try to jump the chasm from geek to cool.
4. Battery life
Watch batteries traditionally last for years. In contrast iPhones provide just hours of charge, depending on how much Candy Crush you are actually playing. So the news that the Apple Watch will keep going for 18 hours is disappointing to say the least (although the company says that it will continue to show the time for up to 72 hours after that). Essentially consumers will need to charge the watch every night, plugging it in alongside their iPhone ready for the morning. It just reinforces that this is a technology product, rather than something you wear, and is bound to put some people off.
I could be as wrong about the Apple Watch as I was about the iPad, but to me, despite the hype, it won't move beyond being a niche product for fanboys and girls who want to pair it with their latest iPhones. For me, if I had the spare cash I'd buy a TAG instead and leave technology to my phone…
If you needed evidence of the growth of the smartphone market and its move into every part of our lives, then this week’s Mobile World Congress (MWC) provides it. It wasn’t that long ago that the event was dominated by network infrastructure companies, but now it is essentially a consumer electronics show in all but name. And one that looks far beyond the handset itself. Ford launched an electric bike, Ikea announced furniture that charged your smartphone and a crowdfunded startup showed a suitcase that knows where it is and how much it weighs.
Five years ago none of these companies would have even thought of attending MWC – and it is all down to the rise of the smartphone. It is difficult to comprehend that the first iPhone was only launched in 2007, at a time when Apple was a niche technology player. Apple is now worth more than any other company in the world, and 2 billion people globally have an internet-connected smartphone. By 2020, analysts predict, 80% of the world's adults will own a smartphone.
As any honest iPhone owner will freely admit, they may be sleek, but they are actually rubbish for making and receiving calls. What they do provide are two things: a truly personal computer that fits in your pocket, and access to a global network of cloud-based apps. It is the mixture of the personal and the industrial that makes smartphones central to our lives. We can monitor our own vital signs and the environment around us through fitness and health trackers and mapping apps, and at the same time access any piece of information in the world and monitor and control devices hundreds or thousands of miles away. Provided you have a signal…
So, based on what is on show at MWC, what are the next steps for the smartphone? So far it seems to split into two strands – virtual reality and the Internet of Things. HTC launched a new virtual reality headset, joining the likes of Sony, Microsoft, Samsung and Oculus Rift, promising a more immersive experience. Sensors to measure (and control) everything from bikes and cars to tennis racquets are also on show. The sole common denominator is that they rely on a smartphone and its connectivity to get information in and out quickly.
It is easy to look at some of the more outlandish predictions for connected technology and write them off as unlikely to make it into the mainstream. But then, back in 2007, when Steve Jobs unveiled the first iPhone, there were plenty of people who thought it would never take off. The smartphone revolution will continue to take over our lives – though I'm not looking forward to navigating streets full of people wearing virtual reality headsets who think they are on the beach, rather than on their way to work…
According to a new report, more and more of us are working in digital technology companies. Research led by Tech Nation has found that 1.46 million people (or 7% of the workforce) are employed by more than 47,000 digital companies across the UK – and of these just 250,000 are working in inner London. 74% of digital companies are located outside London.
To put that in perspective, according to other government figures, agriculture employs 535,000 workers, construction 2.2 million and manufacturing 2.6 million. So nearly three times as many people tend computers instead of animals. Heartening stuff, and a welcome antidote to some of the more extreme London-oriented digital stories seen in the media.
The highest density clusters in the report are Brighton & Hove, Inner London, Berkshire (including Reading), Edinburgh and Cambridge, while the highest rates of digital employment are in London, Bristol and Bath, Greater Manchester, Berkshire and Leeds.
It is easy to be cynical about the timing of the government-backed report, with an election coming up fast. I'd also query the definition of 'digital' – my PR business makes it in, which seems to show a wide classification range (not that I'm complaining). The headline finding that certain clusters have more digital companies than the national average (Brighton 3.3x, Cambridge 1.5x, for example) is interesting, but needs to be put into context. Brighton employs 7,458 people in digital, out of a population of 155,000 – under 5%, and other clusters may well have a greater proportion of digital workers.
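As a quick sanity check on that Brighton figure, the density calculation is straightforward (the numbers are those quoted from the report, not independently verified):

```python
# Figures as quoted above: 7,458 digital workers in Brighton & Hove,
# out of a population of roughly 155,000.
digital_workers = 7_458
population = 155_000

density = digital_workers / population
print(f"{density:.1%}")  # just under 5%
```

The same ratio, rather than the raw multiple of the national average, is what matters when comparing clusters of very different sizes.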
But what is more interesting is how the research reinforces the importance of clusters. Statistics include:
- 77% of respondents have a network of entrepreneurs with whom they share experiences and ideas. This rises to 90% in Cambridge.
- 54% believe their clusters help attract talent (65% in Cambridge).
- 40% believe their cluster gets them access to affordable property (such as science parks or co-working spaces).
- 33% believe their cluster helps attract inward investment.
- For Cambridge, access to advice and mentorship was seen as twice as important to growth as it was nationally (scoring +100%), and the positive perception of the Cambridge brand (+62%) was also a key driver for expansion.
- Issues highlighted in Cambridge include poor transport infrastructure (scoring -111% compared to the UK average) and lack of available property (-31%).
This clearly demonstrates that to succeed and grow, tech businesses need to be part of an ecosystem that provides support and the right conditions to start (and grow) – and that more and more of these ecosystems are springing up across the UK. Nurturing a cluster takes time, so everyone involved, from local government to academia and investors, has to think long term if they want to develop a tech ecosystem in their area.
What I’d like to see is companies and regions use this report as a starting point to build closer ties. Firstly, any businesses that feel they’ve missed out need to get on board and be given the chance to be added to the report. This is vital to keep it as a living, interactive document that maps changes over time.
Secondly, local government and organisations need to take a look and make sure that they are reaching the companies in their area, and providing them with the conditions for growth. At the very least local networks (or in their absence, local councils) should be making digital companies aware of their existence, and what they can do to help them. That way more sub clusters will form and grow, strengthening the overall picture.
I don’t think we’re yet the full Tech Nation that the report and research promises, but we’re definitely on the way – we therefore need continued focus and investment if we’re going to move forward, across the country.