How should we define innovation, how has it been studied in the recent past, and what does future innovation hold for the human race?
Sometimes the word innovation gets misused. Like when people use the word “technology” to mean recent gadgets and gizmos, instead of acknowledging that the term encompasses the first wheel. “Innovation” is another tricky one. Our understanding of recent thinking on innovation – as well as its contemporary partner, “disruption” – was thrown into question in June when Jill Lepore penned an article in The New Yorker that put our ideas about innovation, and specifically Clayton Christensen’s ideas about it, in a new light. Christensen, heir apparent to fellow Harvard Business School bod Michael Porter (author of the simple, elegant and classic The Five Competitive Forces that Shape Strategy), wrote The Innovator’s Dilemma in 1997. His work on disruptive innovation, claiming that successful businesses focused too much on what they were doing well and so missed what, in Lepore’s words, “an entirely untapped customer wanted”, created a cottage industry of conferences, companies and counsels committed to dealing with disruption (not least this blog, which lists disruption as one of its topics of interest).

Lepore’s article describes how, as Western society’s retelling of the past became less dominated by religion and more by science and historicism, the future became less about the fall of Man and more about the idea of progress. This thought took hold particularly during the Enlightenment. In the wake of two World Wars, though, our endless advance toward greater things seemed less obvious:
“Replacing ‘progress’ with ‘innovation’ skirts the question of whether a novelty is an improvement: the world may not be getting better and better but our devices are getting newer and newer”
The article goes on to look at the handpicked case studies Christensen used in his book. When Christensen describes one of his areas of focus, the disk-drive industry, as being unlike any other in the history of business, Lepore rightly points out that its sui generis nature “makes it a very odd choice for an investigation designed to create a model for understanding other industries”. She goes on for much of the article to utterly debunk several of the author’s case studies, showcasing inaccuracies and even criminal behaviour on the part of those businesses he heralded as disruptive innovators. She also deftly points out, much in line with the thinking in Taleb’s The Black Swan, that failures are often forgotten, while those that succeed are grouped and promoted as formulae for success. Such is the case with Christensen’s apparently cherry-picked case studies. Writing about one company, Pathfinder, which tried to branch out into online journalism, seemingly too soon, Lepore comments,
“Had [it] been successful, it would have been greeted, retrospectively, as evidence of disruptive innovation. Instead, as one of its producers put it, ‘it’s like it never existed’… Faith in disruption is the best illustration, and the worst case, of a larger historical transformation having to do with secularization, and what happens when the invisible hand replaces the hand of God as explanation and justification.”
Such were the ramifications of the piece that, when questioned on it recently in Harvard Business Review, Christensen confessed “the choice of the word ‘disruption’ was a mistake I made twenty years ago”. The warning to businesses is that just because something is seen as ‘disruptive’ does not guarantee success, or, more fundamentally, that it belongs in any long-term strategy. Developing expertise in a disparate area takes time and investment, in terms of people, infrastructure and cash. And for some, the very act of resisting disruption is what has made them thrive. Another recent piece in HBR makes the point that most successful strategies involve not just a single act of deus ex machina thinking-outside-the-boxness, but rather sustained disruption. Though Kodak, Sony and others may have rued the days, months and years they neglected to innovate beyond their core area, the graveyard of dead businesses is also surely littered with companies that innovated too soon, in the wrong way, or at too great a cost, leaving them open to things other than what Schumpeter termed creative destruction.
Outside of cultural and philosophical analysis of the nature and definition of innovation, some may consider of more pressing concern the news that we are soon to be looked after by, and subsequently outmaneuvered in every way by, machines. The largest and most forward-looking (and therefore not necessarily most likely) of these concerns was recently put forward by Nick Bostrom in his new book Superintelligence: Paths, Dangers, Strategies. According to a review in The Economist, the book posits that once you accept there is nothing inherently magical about the human brain, it follows that an intelligent machine can, in principle, be built. Bostrom worries, though, that “Once intelligence is sufficiently well understood for a clever machine to be built, that machine may prove able to design a better version of itself”, and so on, ad infinitum. “The thought processes of such a machine, he argues, would be as alien to humans as human thought processes are to cockroaches. It is far from obvious that such a machine would have humanity’s best interests at heart—or, indeed, that it would care about humans at all”.
Beyond the admittedly far-off prognostications of the removal of the human race at the hands of the very things it created, machines and digital technology in general pose great risks in the near term, too. For a succinct and alarming introduction to this, watch the enlightening video at the beginning of this post. Since the McKinsey Global Institute published a paper in May soberly titled Disruptive technologies: Advances that will transform life, business, and the global economy, much editorial ink and celluloid (were either medium to still be in much use) has been spilled and spooled detailing how machines will slowly replace humans in the workplace. This transformation – itself a prime example of creative destruction – is already underway in the blue-collar world, where machines have replaced workers in automotive factories. The Wall Street Journal reports that Chinese electronics makers are facing pressure to automate as labour costs rise, but are challenged by the low margins, precise work and short product life of the phones and other gadgets the country produces. Travel agents and bank clerks have also been rendered all but obsolete, thanks to that omnipresent machine, the Internet. Writes The Economist, “[T]eachers, researchers and writers are next. The question is whether the creation will be worth the destruction”. The McKinsey report, according to The Economist, “worries that modern technologies will widen inequality, increase social exclusion and provoke a backlash. It also speculates that public-sector institutions will be too clumsy to prepare people for this brave new world”.
Such thinking gels with an essay in the July/August edition of Foreign Affairs, by Erik Brynjolfsson, Andrew McAfee and Michael Spence, titled New World Order. The authors rightly posit that in a free market the biggest premiums are reserved for the products that are scarcest. When even niche, specialist work, such as in the arts (see the video at the start of this article), can be replicated by machines and performed at economies of scale, then labourers and the owners of capital alike are at great risk. The essay makes the good point that while a simple economic model suggests technology increases overall productivity for everyone, the truth is that its impact is far more uneven. The authors astutely point out,
“Today, it is possible to take many important goods, services, and processes and codify them. Once codified, they can be digitized [sic], and once digitized, they can be replicated. Digital copies can be made at virtually zero cost and transmitted anywhere in the world almost instantaneously.”
Though this sounds utopian and democratic, what it actually does, the essay argues, is propel certain products to superstardom. Network effects create a winner-take-all market. Similarly, they create disproportionately successful individuals. Although there are many factors at play here, as the authors readily concede, they also maintain the importance of another, distressing theory:
“[A] portion of the growth is linked to the greater use of information technology… When income is distributed according to a power law, most people will be below the average… Globalization and technological change may increase the wealth and economic efficiency of nations and the world at large, but they will not work to everybody’s advantage, at least in the short to medium term. Ordinary workers, in particular, will continue to bear the brunt of the changes, benefiting as consumers but not necessarily as producers. This means that without further intervention, economic inequality is likely to continue to increase, posing a variety of problems. Unequal incomes can lead to unequal opportunities, depriving nations of access to talent and undermining the social contract. Political power, meanwhile, often follows economic power, in this case undermining democracy.”
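The statistical claim in that quotation – that when income follows a power law, most people earn less than the average – is easy to see with a quick simulation. Below is a minimal, illustrative sketch in Python; the Pareto shape and scale parameters are arbitrary assumptions chosen for demonstration, not figures from the essay.

```python
import random

# Minimal illustrative sketch: draw incomes from a Pareto distribution
# (a simple power-law model) and see what share of earners fall below
# the mean. The shape and scale values are arbitrary assumptions for
# demonstration, not figures from the Foreign Affairs essay.
random.seed(42)

alpha = 1.5        # tail heaviness; smaller alpha means a heavier tail
scale = 20_000     # notional minimum income
incomes = [scale * random.paretovariate(alpha) for _ in range(100_000)]

mean_income = sum(incomes) / len(incomes)
median_income = sorted(incomes)[len(incomes) // 2]
share_below_mean = sum(x < mean_income for x in incomes) / len(incomes)

print(f"mean income:   {mean_income:,.0f}")
print(f"median income: {median_income:,.0f}")
print(f"share of earners below the mean: {share_below_mean:.0%}")
# The heavy right tail pulls the mean well above the median, so the
# large majority of simulated earners end up 'below average'.
```

With these assumed parameters roughly four in five simulated earners sit below the mean, which is the essence of the authors’ point about power-law economies.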
There are those who say such fears – of a rise in inequality and of the destruction through automation of whole swathes of the job market – are unfounded; that many occupations require a certain intuition that cannot be replicated. Time will tell whether this intuition, like an audio recording, a health assessment or the ability to drive a car, will be similarly codified and disrupted (yes, we’ll continue using the word disrupt, for now).
It would be impossible to capture the disruptive influence the latest digital technologies are currently having on the world in a single blog post. But what Zeitgeist has collated here are some thoughts and happenings showing the different ways technology is changing our lives – from the way we do business to the way we interact with others.
Last night saw a highly enjoyable occurrence. No, not the Academy Awards in general, which as ever moved at a glacial pace as the ceremony ticked off a list of predicted favourites. Rather, it was a specific moment in the ceremony itself, when host Ellen DeGeneres took a (seemingly) impromptu picture of herself with a cornucopia of stars and tweeted it instantly. The host declared she wanted the picture (above) to become the most retweeted post ever. The previous record holder was none other than the President of the United States, Barack Obama, whose re-election message saw over 500k retweets. It took Zeitgeist but a few minutes to realise that Ellen’s post would skyrocket past this. Right now it has been retweeted 2.7m times. Corporate tactic on the part of Samsung though it may have been, Zeitgeist felt much closer to the action – being able to see on his phone a photo the host had taken moments ago several thousand miles away – and the incident helped inject a brief air of spontaneity into the show’s proceedings. Super fun, and in this case it was easy to get definitive results on how many people were really engaging with the content. But can we quantify how much Samsung and Twitter really benefited from the move, beyond fuzzy marketing metrics? Talking heads on CNBC saw room for improvement (see below).
The big news of late in tech circles, of course, has been Facebook’s $19bn acquisition of messaging application WhatsApp. Many, many lines of editorial have been written on this deal already. In the mainstream media, many commentators have found the price of the deal staggering. So it’s worth reading more considered views such as Benedict Evans’, whose post on the deal Zeitgeist highly encourages you to read. Despite the seemingly large sum the company has been acquired for – especially considering Facebook’s purchase of Instagram for a ‘mere’ $1bn – Evans sagely points out that, per user, the deal is roughly in line with the valuation Google made when it purchased YouTube. So perhaps not that crazy after all. The other key point Evans makes concerns Facebook’s dedicated pursuit of being the ‘next’ Facebook, or conversely of stopping anyone else from becoming the next Facebook. With a meteoric rise in users (see image below, as it outstrips growth by both Facebook and Twitter), WhatsApp was certainly looking a little threatening.
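To make the per-user point concrete, here is a back-of-the-envelope sketch. The headline price comes from the deal itself; the user count is an assumption based on the roughly 450 million monthly active users widely reported at the time of the acquisition, not a figure taken from Evans’ post.

```python
# Back-of-the-envelope price-per-user calculation for the WhatsApp deal.
# deal_price is the headline figure; monthly_active_users is an assumed
# value based on widely reported numbers at the time (~450m MAU).
deal_price = 19_000_000_000        # $19bn
monthly_active_users = 450_000_000

price_per_user = deal_price / monthly_active_users
print(f"Implied price per WhatsApp user: ${price_per_user:,.2f}")  # about $42
```

Viewed this way, the question becomes whether roughly $42 of lifetime value per user is plausible, rather than whether $19bn is an intuitively ‘sane’ number.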
The worry for investors is how Facebook will monetise this platform, when the founders have professed an aversion to advertising. Is merely ensuring that Facebook is the ‘next’ Facebook a good enough reason for such acquisitions? Barriers to entry and sustainable advantages will be few and far between down this route. The Financial Times, in its analysis of the acquisition, points out that innovation is quickly nipping at the heels of WhatsApp. CalPal, for example, is a mobile application that lets users message each other from within an app. In the markets, the response to the purchase has been relatively sanguine, but only because of broader trends. As the FT points out,
“External forces have also helped to push the headline prices of deals such as WhatsApp into the stratosphere. A global excess of cheap money, along with a scarcity of alternatives for growth-hungry investors, has boosted the stock prices of companies such as Facebook and Google.”
One of the most visibly exciting developments in technology in recent years is the explosion of the wearable tech sector. But it is Google’s flagship product, Glass, that has met with much ire and distress. An excellent piece of analysis in MIT Technology Review last month hit the nail on the head when it identified why Glass was having trouble winning people over. The article rightly identifies the significant shift in external appearance inherent in switching from a device that needs to be taken out of a pocket and makes it clear when it is being interacted with (you need to cover half your face with the product to talk to someone, for example) to one that is worn on the face at all times. The article also details the savvy approach Google have taken to the distribution of their product. It’s always sensible to try and mobilise the part of your base likely to be evangelists anyway, so as to build advance buzz before a full-blown release. But to get them to pay for the privilege, as Google are doing with their excitable fans, dubbed Explorers, is a stroke of genius. However, the key issue, and what the article states is an “insurmountable problem”, is that “Google’s challenge in making the device a successful consumer product will be convincing the people around you to ignore it”. It is this fundamental aspect of social interaction that is worrying many, and now Google is worried too. As detailed in the FT, the company has acknowledged that the product can look “pretty weird”. Recognising it has a “long journey” to mainstream adoption, it published a list of Dos and Don’ts. Highlights include,
“Ask for permission. Standing alone in the corner of a room staring at people while recording them through Glass is not going to win you any friends… If you find yourself staring off into the prism for long periods of time you’re probably looking pretty weird to the people around you.”
It indicates that Google may have a significant ‘Glasshole‘ problem it needs to attend to. The case may be overstated, though. One of the problems may simply be that potential customers have yet to see any practical uses for it. This is beginning to change. Last week, Virgin Atlantic announced a six-week trial of both Glass and Sony smartwatches. The idea is for check-in attendants to use the devices to scan limousine number plates so that passengers can be greeted by name and instantly updated on their flight status.
In the arts, digital technology has inspired much innovative work, as well as helped broaden its audience. David Hockney, one of England’s greatest living artists, recently exhibited a series of works produced entirely on his iPad at London’s Royal Academy of Arts. He is far from alone. Last week’s anniversary issue of The New Yorker featured work by Jorge Colombo on its front cover, again produced entirely on an iPad. Such digital innovation allows for increased productivity as well as new aesthetics. When done well, art can also involve the viewer, encouraging interaction. Digital technology helps with this too. Earlier in the year The New York Times covered how the New York City Ballet redesigned part of its theatre floor in a scheme to attract new visitors to the ballet. The result, roughly life-size pictures of dancers arranged on the floor, has seen great success, and an explosion of content on social media platforms like Instagram, where users have taken to posing on the floor as if interacting with the images (see above). It’s a simple tactic that now reaches a far greater audience thanks to new digital technologies.
A recently published book, ‘Now I Know Who My Comrades Are: Voices from the Internet Underground’, by Emily Parker, seeks to demonstrate the ways in which digital technology has helped to coalesce and support important activism in regions such as China and Latin America. But, as The Economist points out in its review, the disappointing situation in Egypt puts paid to some of the author’s claims; there are limits to how productive and transformative technology can be. In business, these hurdles are plain to see. A poll by McKinsey published last month shows that “45% of companies admit they have limited to no understanding on how their customers interact with them digitally“. This is staggering. For all executives’ talk of the power of Big Data, such technology is useless without the proper structures in place to analyse it successfully. We also perhaps need to think more about the repercussions of technological advances and how they influence our social interactions. In the recently opened film Her (starring Joaquin Phoenix, pictured below), set in the very near future, a new operating system is so pervasive and seamless that it leads to fraught, thought-provoking questions on the nature and productivity of relationships. When does conversation – and more – with a simulacrum detract from interactions with the physical world? These considerations may seem lofty but, as we illustrated earlier, the germination of such thoughts is being echoed in discussions over Google Glass.
So technology in 2014 heralds some promise for the future. Wearable tech as a trend is merely the initial stage of a journey towards our interaction with computing systems becoming seamless. Along this journey, though, we need to make sure that businesses are making the most of every opportunity to streamline costs and enhance customer service, and that individual early adopters do not leave the rest of us behind to deal with a bewildering and alarming new way of living. One of our favourite quotations, from the author William Gibson, is an apt one to end on: “The future’s already here, it’s just not very evenly distributed“.