The Raspberry Pi is a quintessentially British invention. It was originally created because the University of Cambridge Computing Department felt that new students didn't have a high enough level of programming experience when they began their studies. So a cheap, accessible machine was designed, using off-the-shelf components and plugging into available devices such as USB keyboards, SD cards and TVs. Like the webcam, another Computing Department invention (it was trained on the filter coffee machine at the other end of the building to avoid wasted journeys if the jug was empty), it combines technology with quirkiness and the British love of tinkering.
From these humble beginnings over 3 million have now been sold. To put this in context, that is double the number of sales of the BBC Micro, the original government-backed home computer of the 1980s, and not far off the 5 million Sinclair ZX Spectrum machines that spawned a generation of programmers back then. It has even been shown to the Queen at Buckingham Palace, with founder Eben Upton ticked off by the Duke of Edinburgh for not wearing a tie.
However, the impact of the Pi has gone far beyond sales figures. It has created an ecosystem that spans everything from desktop arcade machines to funky cases. It is also being used within a whole range of other projects, from weather balloons to pirate radio stations. You can even run Spectrum games on it, linking back to the 1980s. And all of this from a non-profit company that is now manufacturing in the UK.
And I’d argue that it has actually had a major hand in putting programming back at the heart of UK education. From September all primary school pupils will be taught programming, as opposed to how to use word processing applications. This will introduce a whole new generation to writing their own programs.
Even if just 5% go on to forge a career in technology, it will deliver a vast new workforce to the sector in the UK – as well as giving the other 95% some basic skills that will help them thrive in a world run by software. The availability of the Pi means it will be central to delivering these lessons, and the community has already created a huge volume of materials for teachers.
Once lessons start I’d expect many more parents to invest in a Pi (either driven by pester power or because they want to help their children succeed) – and at 20 quid for the most basic version it is within the majority of families’ budgets, at less than the price of a new PlayStation or Xbox game.
So I’d argue that the Pi’s rise to prominence hasn’t even really started yet. The combination of its community support, simplicity and the growth of programming means it will go from strength to strength. If you’ll excuse the pun, the Pi really is the limit…
Governments across Europe are always obsessing about creating their own Silicon Valleys, rivals to California that will catapult their country/city to international tech prominence, create jobs and make them cool by association. As I’ve said before, this is partly because such talk is cheap – bung a few million pounds/euros into some accelerators, set up a co-working space near a university and you can make some tub-thumping speeches about investing in innovation.
Obviously there’s a lot more to creating a new Silicon Valley than that. So I was interested to read a recent EU survey of European ICT Hubs, which ranks activity across the region. It doesn’t just analyse start-up activity, but also factors such as university strength, external links and business growth. While Munich, East London and Paris top the table (with Cambridge at the top of tier 2), what is interesting is the sheer number of hubs and their relative strengths, despite many being quite close to each other.
There is a European obsession with a single hub to take on Silicon Valley, but as Paul Stasse points out in this piece on Tech.EU, if you zoom out and centre your ‘hub’ on Brussels, a 400km radius will bring in the majority of the EU’s ICT hubs. Consequently you need to go beyond individual cities or regions to a larger-scale view. After all, Silicon Valley itself is not a single place, but a collection of cities and towns that spreads from San Francisco through the Santa Clara Valley. So, while the Santa Clara Valley is geographically 30 miles long and 15 miles wide, the actual area of ‘Silicon Valley’ itself is much bigger.
In that case, why can’t Europe create its own Silicon Valley encompassing multiple hubs? Or even Valleys within countries – it is around 60 miles from London to Cambridge, so it wouldn’t be a stretch to build the M11 Valley (though with a catchier name).
The trouble is, California has some pretty big advantages that have helped Silicon Valley grow. While entrepreneurs and programmers flock there from all around the world there’s one business language (English), one legal system and one predominant culture. Being part of the US gives immediate access to over 300 million people in a single market. Europe’s diversity is both a strength and a weakness – you can’t simply up sticks and move your company from, say, France to Belgium, with the same ease as from San Jose to Palo Alto.
In my opinion, three things are needed:
1 Be more open
I’m as guilty as the next person, but individual hubs need to look outward more, rather than believing that success ends at the ring road. Only by encouraging conversation between hubs and idea sharing will innovation flourish.
2 Make movement easier
You are never going to change cultures, but the EU has a role to play in levelling the playing field when it comes to creating companies, harmonising legal systems and generally helping create a single market. That way entrepreneurs and companies can move and collaborate more easily, without having to duplicate bureaucracy or wade through red tape.
3 Celebrate what we have
It is time to end the obsession with creating the new Silicon Valley. It isn’t going to happen. Instead, celebrate the ability Europe has to build multiple, interlinked hubs that play to our strengths, rather than bemoan our inability to spawn the next Facebook.
Silicon Valley, Europe may not happen but by supporting existing, successful clusters and hubs we can build a technology industry that can drive innovation, growth and jobs.
Everyone understands that the bigger a company gets, the more difficult it is to create and nurture ideas. There are a number of reasons. The sheer size of the organisation militates against change – it is incredibly difficult to get everyone to understand a game-changing idea and align themselves behind it. You get a fragmented approach and the whole thing can get mired in bureaucracy and finger-pointing.
Large organisations are inherently conservative, with people not wanting to rock the boat, while fierce rivalry between different divisions/departments can lead to ideas being squashed if they seem to tread on someone else’s turf. There’s also a fine line between a strong company culture and having too inward-looking a focus. Even successful companies such as Facebook have been accused of a lack of perspective – because they solely use (and love) their own products, they assume everyone else believes they are equally awesome. Step outside the organisation and your obsession is just a minor part of the lives of your customers.
The good news is that the majority of organisations do understand the need for a stream of fresh ideas. After all, the world today is dominated by companies such as Google, Facebook and Amazon that either didn’t exist twenty years ago, or were considerably smaller. Competition in every market is increasing and no-one wants to go the way of Nokia or Woolworths.
So how do you align your company to create the best forum to create ongoing ideas? I’m no management consultant, but I’ve seen a few attempts over the last twenty years and it boils down to three broad types:
1 Innovation silos
In many industries (such as pharmaceuticals), where innovation relies on expensive capital equipment, it makes sense to create separate, concentrated research labs. These have the intellectual muscle and resources but can suffer from their sheer size and distance from the business. They can then hit the same problems as any other big organisation, with divisional rivalry and static corporate culture. Alternatively, businesses have focused innovation in standalone business units – either skunkworks operations that are locked away from the rest of the organisation, incubators that support promising ideas at arm's length, or even smaller companies that have been bought and are run as ideas factories. All of these can work, provided management stay true to their word not to meddle or demand fast results, but there’s still no connection with the wider business and its needs.
2 The campus
You break up your monolithic organisation into a campus-style environment, with different divisions occupying their own buildings, but close together. Splitting into smaller teams is good for creativity, and you get the economies of scale of having everyone on a single, but large, site. However the ability to cross-pollinate between groups can be limited – unless you happen to bump into someone over lunch you might be completely in the dark about what other sections of the company are working on.
3 The college
What I think is really interesting about the campus model is that it deliberately mimics the university campus structure. While this makes for a good working environment, it doesn’t help spread ideas. So I think companies need to look at a more collegiate model, similar to that of universities like Cambridge. You have two allegiances/bases – your division (essentially your college) and your actual project (your faculty). So you get the chance to mix with people from other divisions and collaborate on joint projects. Some people may find it disorienting, but if projects are scheduled to last 2-3 years the goal is never that far away.
Innovation is vital in every industry, and the size and structure depends on the sector and the market each company operates in. But I think it is time for more organisations to look at the college structure if they want to nurture and develop a stream of ideas that take their business forward over the long term.
There’s nothing as embarrassing as a politician trying to explain complex technology and completely missing the point. And as IT is increasingly seen as ‘cool’ and therefore something they want to be associated with, you can see a growing number of our elected leaders showing their ignorance in public. Following George Osborne’s cringeworthy appearance in the Year of Code video, and David Cameron’s attempted hijacking of Silicon Roundabout, the PM is at it again.
Speaking at CeBIT in Germany this week Cameron started off well, lauding the potential of the Internet of Things, praising the innovation of UK companies such as ARM and Neul, talking about Anglo-German co-operation and doubling funding for the area. But then what example did he trot out to show what it means to the general public? That our fridges can talk to each other, and order a pint of milk when we’re running low. Hardly a New Industrial Revolution.
The internet-enabled fridge has been around as long as I’ve been in PR (nearly 20 years) and despite regular press appearances (and some actual products), it has failed to take off. Primarily because it is a stupid idea. Most of us (apart from one of my old housemates) can see when we are running low on food or milk and visit the shops accordingly. If not, are supermarkets expected to drop everything and rush you a single pint of semi-skimmed because your fridge told them to? Hardly economically viable. And what happens if you bought something and didn’t like it – will your fridge keep ordering more until your house is full of Dairylea cheese triangles? How will privacy be managed? Will the device send your eating habits direct to FMCG companies, like a giant ClubCard? What about security?
Poking fun at ill-advised politicians is easy, but the danger with the fridge fixation is it masks the real benefits of the Internet of Things and paints the wrong picture to the general population. We are talking about the ability to monitor our health, reduce the need for hospital stays as patients can be treated in their own homes, better manage our energy use, save us money and create smart cities that share information to make our travel and lives easier and more fulfilling. It is probably true to say that the real innovations of the Internet of Things haven’t even been thought of yet – but will develop on the platform once it becomes prevalent.
Due to the combination of the UK’s existing strengths in low power chip design, Bluetooth, availability of radio spectrum and the efforts of pioneers in the smart home space, there is a real chance that the country (and Cambridge) can become a major player in the Internet of Things. And given that the sector is expected to be worth £8.7 trillion globally when it hits maturity (according to Cisco), it is definitely the right market to target.
To succeed, UK companies need support, a well-defined technology plan that maximises investment in the right areas and a long term vision, rather than lazy examples of machines that no-one wants that will put people off the entire concept, raise potential privacy concerns and stifle acceptance. Install an internet-enabled fridge in Number 10 by all means, but do a better job of explaining the bigger benefits to the wider population if you want the Internet of Things to really take off.
I’m a passionate believer in getting more people to learn to code. Like a lot of those my age I grew up with a ZX Spectrum and learned basic programming on a BBC Computer at school. Not only did it reap benefits then (my horse racing game was a triumph, albeit not a financial one), but it gave me an idea of how computers worked that removed any fear of them when I went into the workplace.
And, as my career in PR has progressed, more and more of what I do has a technical element to it – whether that is getting a WordPress site up and running or stitching together data from different tools to measure the impact of campaigns. Not understanding technology or being unable to use it would significantly impact my productivity and my overall job prospects.
When I look back, comparing my childhood to now, the world has changed dramatically. On the plus side we’re now in an era where geekiness is cool and entrepreneurs are celebrated for their ideas. But the opportunities we have to code have diminished – rather than ZX Spectrums we have gaming consoles that cannot be programmed, except by studios with multi-million pound budgets. Yes, we have the iOS and Android ecosystems where anyone can create an app, but the majority of us are consumers, not programmers.
Clearly there’s a need for change, and initiatives such as the Raspberry Pi and the inclusion of coding in the National Curriculum from September are helping accelerate this. However the fiasco that is the government-backed Year of Code project is an unwelcome bump in the road to the future. For those that haven’t heard of it, the Year of Code is supposed to be an umbrella organisation to encourage everyone to learn to code in 2014.
Unfortunately so far it appears to be a PR-led initiative to muscle in on the work that is already being done. Backed by venture capitalists and the TechCity community, its main claim to fame is the ill-fated appearance of its executive director Lottie Dexter on Newsnight, where she earned the ire of Jeremy Paxman by admitting that she didn’t actually know how to code. More importantly it appears to have alienated many people who have been working in the space for years by simply not recognising what has already been done.
And, judging by its website, apart from a promotional film (warning – contains footage of George Osborne) and a commitment to “banging the drum for all the fantastic coding initiatives taking place over the course of year and helping many more people engage with technology and access important training opportunities,” it isn’t actually going to do much that is concrete. Essentially it is PR spin on a serious subject, trying to take the lead in the same way as the government has decreed that TechCity is the only viable tech cluster in the UK. It is jumping on a bandwagon and trying to take the reins from those that know what they are doing.
Coding is essential to our competitiveness and the future of our children – it is simply too important to be left to a slick marketing machine that is imposed from the top down. Time for the Year of Code to be switched off and then on again to remove the bugs from the system.
The world of work has changed immeasurably over the last ten years, not just in the UK but across all developed countries. Repetitive, process driven jobs have been automated, with technology replacing paper-based workflows. In many cases this has led to a hollowing out of sectors and companies, with the remaining workforce split between menial roles and higher level management.
And these changes are accelerating. A report in The Economist points to new technological disruption in the workplace, driven by computers getting cleverer and becoming self-learning. Lightweight sensors, more powerful cameras, cloud computing, the Internet of Things, big data and advances in processing power are all contributing to helping computers do brain work. Innovations such as driverless cars and household robots don’t require human intervention to operate, and can do more than traditional machines.
Research from Oxford University suggests that 47% of today’s jobs could be automated within the next 20 years. Many of these roles are in previously ‘safe’ middle class professions such as accountancy, the law and even journalism.
So, this begs two questions. What skills do people need if they are going to thrive in this new world – and are we teaching them to children quickly enough?
The employees of the future will require skills that complement machine intelligence, rather than mirror it. Empathy, the ability to motivate, and being able to think outside the box will all be needed. Essentially soft skills, backed up by specialist knowledge that is based on experience that cannot be replicated by machines. Professionals such as therapists, dentists, personal trainers and the clergy are all seen as being relatively safe from replacement by robots. Interestingly, entrepreneurs often possess these talents, so expect them to thrive as they use technology such as the cloud to bring their innovations to market quickly.
As a knock-on effect, there will be a change in the size of companies people work for. Before the Industrial Revolution most people worked either for themselves or in small organisations (the village carpenter and his apprentice, for example). Industrialisation required scale, so vast mega-companies grew up. These won’t disappear, but the number of people working for them will shrink dramatically as intelligent machines take over. We’ll move to a larger proportion of the population being self-employed, providing their services on a personal basis.
Looking at education, schools will also need to change. Pupils need to understand the world around them, so they have to be taught a certain number of facts and dates, but rote learning of what made the British Empire great is going to be useless for a large proportion of people’s careers. What is needed is to teach skills for learning and adapting, thinking for yourself and how to motivate and show empathy to others. Essentially, children starting school today will be going into careers that may not even exist yet – so lifelong learning and flexibility are critical.
The predictions of the havoc that technology will cause to the world of work may be overstated – just because something is technically possible, it doesn’t mean it will quickly become mass market. And governments, worried about massive social change, are likely to step in to mitigate the worst impact through legislation. But changes are coming, and we need to think more like entrepreneurs and less like machines if we’re going to thrive.
In many ways the news that Google has bought smart home company Nest Labs shouldn’t be a surprise. It has been talking to the company for some time and apparently lots of Google employees had installed the company’s sensor based thermostat in their own homes.
More to the point I think it fits in with Google’s overall objectives. As analysts have pointed out, Google isn’t a search engine company (and hasn’t been for some time), but is about data – collecting it (analysing search results, Google Glass, StreetView) and then using it to either sell you things (through adverts) or make your life better in some way.
With billions of sensors embedded in previously dumb objects that will be communicating in real-time, the Internet of Things promises to create a tidal wave of data. Each piece will be tiny, but if you can bring it together and analyse it you can get an even deeper view of the world around us, and the people in it. Nest’s products are much more than thermostats, and provide Google with the sensor/Internet of Things expertise it needs to add to its product portfolio. It already has Android-based smartphones/tablets to act as controllers, the mapping technology to show where sensors are located and the technology to analyse billions of events in real-time. And with Google Fiber rolling out in several US cities, it has a network to send the data through as well.
A simple example – your Nest thermostat notifies you that your boiler has gone wrong via your smartphone while you are at work. And suggests a registered tradesman that can fix it by trawling the web and any recommendations in your Google+ circles. Or alternatively gives you the address of the nearest clothing shop, so you can stock up on thick jumpers.
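The fault-alert scenario above can be sketched in a few lines of code. This is purely illustrative, with entirely hypothetical class and function names – no real Nest or Google API works this way – but it shows the basic logic: spot a boiler that runs without warming the room, then attach a recommendation to the alert.

```python
# Hypothetical sketch of the thermostat fault-alert flow described above.
# All names (Reading, detect_fault, build_alert) are invented for
# illustration; this is not any vendor's actual API.

from dataclasses import dataclass


@dataclass
class Reading:
    """One temperature sample from the thermostat (degrees Celsius)."""
    room_temp: float
    target_temp: float
    boiler_on: bool


def detect_fault(readings, tolerance=5.0):
    """Flag a likely boiler fault: the boiler has been on for every
    sample, yet the room stays well below the target temperature."""
    if not readings:
        return False
    return all(
        r.boiler_on and (r.target_temp - r.room_temp) > tolerance
        for r in readings
    )


def build_alert(readings, recommendations):
    """Turn a detected fault into a push-style message, attaching the
    top recommended engineer (here simply the first in a supplied list,
    standing in for a search of the web or your social circles)."""
    if not detect_fault(readings):
        return None
    suggestion = recommendations[0] if recommendations else "no engineer found"
    return f"Boiler fault suspected. Suggested engineer: {suggestion}"


# Usage: three samples where the boiler runs but the room stays cold.
samples = [
    Reading(12.0, 21.0, True),
    Reading(12.5, 21.0, True),
    Reading(12.1, 21.0, True),
]
print(build_alert(samples, ["A. Plumber (recommended by 3 contacts)"]))
```

The interesting design question is where each step runs: the fault detection could sit on the device, while the recommendation lookup is exactly the kind of data-driven service a company like Google would layer on top.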
Many people (myself included) would find this a bit creepy, but it is potentially possible if you can knit all the technology together. What I think is interesting is how utilities will respond to the future entry of Google into the market. After all, as publishers and others have found, Googlification can squeeze out incumbents through sheer scale and by engaging more closely with customers. Utilities have to decide whether they want to partner with the likes of Google, risk losing the customer relationship and become commodity suppliers of gas and electricity or take a stand and build stronger engagement with customers. In current circumstances that’ll be difficult – people are at best ambivalent about their utility supplier, and in an era of rising prices and poor customer service many actively dislike them.
So there’s a big opportunity here – and something that Cambridge’s cluster of smart home/green tech companies could exploit. For example, AlertMe already has a partnership with British Gas, while Sentec is working with metering companies to make their products smarter. If energy companies don’t want to work with Google then they have two choices – do it themselves (teaming up with smaller tech companies), or partner with larger industrial tech companies, such as Siemens or Bosch. And these industrial giants will need the specialist expertise that smart home companies can provide.
The utility market doesn’t move fast, so don’t expect to see Google running your home in the next year, but the Nest acquisition should actually spur the whole sector on, attracting both interest and investment. The world just got more interested in smart homes, which is good news for relevant startups in Cambridge and beyond.