How to define innovation, how has it been studied in the recent past, and what does future innovation hold for the human race?
Sometimes the word innovation gets misused, like when people use the word “technology” to mean recent gadgets and gizmos, instead of acknowledging that the term encompasses the first wheel. “Innovation” is another tricky one. Our understanding of recent thoughts on innovation – as well as its contemporary partner, “disruption” – was thrown into question in June, when Jill Lepore penned an article in The New Yorker that put our ideas about innovation, and specifically Clayton Christensen’s ideas about innovation, in a new light. Christensen, heir apparent to fellow Harvard Business School bod Michael Porter (author of the simple, elegant and classic The Five Competitive Forces that Shape Strategy), wrote The Innovator’s Dilemma in 1997. His work on disruptive innovation – claiming that successful businesses focused too much on what they were doing well, missing what, in Lepore’s words, “an entirely untapped customer wanted” – created a cottage industry of conferences, companies and counsels committed to dealing with disruption (not least this blog, which lists disruption as one of its topics of interest). Lepore’s article describes how, as Western society’s retelling of the past became less dominated by religion and more by science and historicism, the future became less about the fall of Man and more about the idea of progress. This thought took hold particularly during the Enlightenment. In the wake of two World Wars, though, our endless advance toward greater things seemed less obvious:
“Replacing ‘progress’ with ‘innovation’ skirts the question of whether a novelty is an improvement: the world may not be getting better and better but our devices are getting newer and newer”
The article goes on to look at Christensen’s handpicked case studies that he used in his book. When Christensen describes one of his areas of focus, the disk-drive industry, as being unlike any other in the history of business, Lepore rightly points out the sui generis nature of it “makes it a very odd choice for an investigation designed to create a model for understanding other industries”. She goes on for much of the article to utterly debunk several of the author’s case studies, showcasing inaccuracies and even criminal behaviour on the part of those businesses he heralded as disruptive innovators. She also deftly points out, much in the line of thinking in Taleb’s Black Swan, that failures are often forgotten about, and those that succeed are grouped and promoted as formulae for success. Such is the case with Christensen’s apparently cherry-picked case studies. Writing about one company, Pathfinder, that tried to branch out into online journalism, seemingly too soon, Lepore comments,
“Had [it] been successful, it would have been greeted, retrospectively, as evidence of disruptive innovation. Instead, as one of its producers put it, ‘it’s like it never existed’… Faith in disruption is the best illustration, and the worst case, of a larger historical transformation having to do with secularization, and what happens when the invisible hand replaces the hand of God as explanation and justification.”
Such were the ramifications of the piece that, when questioned on it recently in Harvard Business Review, Christensen confessed “the choice of the word ‘disruption’ was a mistake I made twenty years ago”. The warning to businesses is that just because something is seen as ‘disruptive’ does not guarantee success, nor that it fundamentally belongs in any long-term strategy. Developing expertise in a disparate area takes time and investment, in terms of people, infrastructure and cash. And for some, the very act of resisting disruption is what has made them thrive. Another recent piece in HBR makes the point that most successful strategies involve not just a single act of deus ex machina thinking-outside-the-boxness, but rather sustained disruption. Though Kodak, Sony and others may have rued the days, months and years they neglected to innovate beyond their core area, the graveyard of dead businesses is also surely littered with companies who innovated too soon, in the wrong way, or in too costly a process that left them open to things other than what Schumpeter termed creative destruction.
Outside of cultural and philosophical analysis of the nature and definition of innovation, some may consider of more pressing concern the news that we are soon to be looked after by, and subsequently outmanoeuvred in every way by, machines. The largest and most forward-thinking (and therefore not necessarily likely) of these concerns was recently put forward by Nick Bostrom in his new book Superintelligence: Paths, Dangers, Strategies. According to a review in The Economist, the book posits that once you assume there is nothing inherently magic about the human brain, it follows that an intelligent machine can in principle be built. Bostrom worries, though, that “Once intelligence is sufficiently well understood for a clever machine to be built, that machine may prove able to design a better version of itself” and so on, ad infinitum. “The thought processes of such a machine, he argues, would be as alien to humans as human thought processes are to cockroaches. It is far from obvious that such a machine would have humanity’s best interests at heart—or, indeed, that it would care about humans at all”.
Beyond the admittedly far-off prognostications of the removal of the human race at the hands of the very things it created, machines and digital technology in general pose great risks in the near term, too. For a succinct and alarming introduction to this, watch the enlightening video at the beginning of this post. Since the McKinsey Global Institute published a paper in May soberly titled Disruptive technologies: Advances that will transform life, business, and the global economy, much editorial ink and celluloid (were either medium still in much use) have been spilled and spooled detailing how machines will slowly replace humans in the workplace. This transformation – itself a prime example of creative destruction – is already underway in the blue-collar world, where machines have replaced workers in automotive factories. The Wall Street Journal reports Chinese electronics makers are facing pressure to automate as labor costs rise, but are challenged by the low margins, precise work and short product life of the phones and other gadgets that the country produces. Travel agents and bank clerks have also been rendered largely obsolete, thanks to that omnipresent machine, the Internet. Writes The Economist, “[T]eachers, researchers and writers are next. The question is whether the creation will be worth the destruction”. The McKinsey report, according to The Economist, “worries that modern technologies will widen inequality, increase social exclusion and provoke a backlash. It also speculates that public-sector institutions will be too clumsy to prepare people for this brave new world”.
Such thinking gels with an essay in the July/August edition of Foreign Affairs, by Erik Brynjolfsson, Andrew McAfee and Michael Spence, titled New World Order. The authors rightly posit that in a free market the biggest premiums are reserved for the products with the most scarcity. When even niche, specialist employment, though, such as in the arts (see video at start of article), can be replicated and performed at economies of scale by machines, then labourers and the owners of capital are at great risk. The essay makes good points on how, while a simple economic model suggests that technology’s impact increases overall productivity for everyone, the truth is that the impact is more uneven. The authors astutely point out,
“Today, it is possible to take many important goods, services, and processes and codify them. Once codified, they can be digitized [sic], and once digitized, they can be replicated. Digital copies can be made at virtually zero cost and transmitted anywhere in the world almost instantaneously.”
Though this sounds utopian and democratic, what it actually does, the essay argues, is propel certain products to super-stardom. Network effects create this winner-take-all market. Similarly, they create disproportionately successful individuals. Although there are many factors at play here, as the authors readily concede, they also maintain the importance of another important and distressing theory:
“[A] portion of the growth is linked to the greater use of information technology… When income is distributed according to a power law, most people will be below the average… Globalization and technological change may increase the wealth and economic efficiency of nations and the world at large, but they will not work to everybody’s advantage, at least in the short to medium term. Ordinary workers, in particular, will continue to bear the brunt of the changes, benefiting as consumers but not necessarily as producers. This means that without further intervention, economic inequality is likely to continue to increase, posing a variety of problems. Unequal incomes can lead to unequal opportunities, depriving nations of access to talent and undermining the social contract. Political power, meanwhile, often follows economic power, in this case undermining democracy.”
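The essay’s claim that a power-law income distribution leaves most people below the average can be checked with a quick simulation. This is a purely illustrative sketch – the Pareto shape parameter below is invented for the example, not drawn from the essay:

```python
import random

random.seed(0)
alpha = 1.16  # invented shape parameter, roughly the classic "80/20" Pareto
incomes = [random.paretovariate(alpha) for _ in range(100_000)]

# In a heavy-tailed distribution the mean sits well above the median,
# so the bulk of earners fall below the average.
mean = sum(incomes) / len(incomes)
below_average = sum(1 for x in incomes if x < mean) / len(incomes)
print(f"share of people earning below the average: {below_average:.0%}")
```

With a shape parameter this heavy-tailed, the overwhelming majority of simulated earners sit below the mean, which is exactly the dynamic the authors describe.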
There are those who say such fears of rising inequality and the wholesale destruction, through automation, of whole swathes of the job sector are unfounded; that many occupations require a certain intuition that cannot be replicated. Time will tell whether this intuition, like an audio recording, a health assessment or the ability to drive a car, will be similarly codified and disrupted (yes, we’ll continue using the word disrupt, for now).
It’s fair to say that in the past ten years, technology has evolved at an ever-increasing pace. The way in which devices have changed, and with it our use of them, was humorously summed up in the above cartoon from The New Yorker. Digital trends have affected the way we communicate, the way we consume media, and indeed the way we consume goods and services, i.e. shop.
So it is a little surprising to many – your humble correspondent included – that we still have to put up with a film being released in one country one day, and in another months later. That we still have to wait a certain number of months for a film to amble its way from the cinema screens to our home, whether on Blu-ray / DVD or on VOD. It’s interesting to note that vertical integration isn’t a key issue; Disney recently launched the second subscription video on demand (SVOD) service in Europe, with a library of constantly refreshed titles that can be viewed on platforms ranging from TVs to Xbox to iPads. Indeed, Disney’s CEO Bob Iger announced way back in 2005 in an interview with The Wall Street Journal that he foresaw a day of collapsed release windows, when a film came out the same day at the cinema as it was available to watch in the home:
We’d be better off as a company and an industry if we compressed that window. We could spend less money pushing the box office and get to the next window sooner where a movie has more perceived value to the consumer because it’s more fresh.
So there is money to be saved in such an exercise. Yet seven years later, such a situation is still mostly a fantasy for major films. Studios have undoubtedly dipped their toes in the water, and some moderate success has been seen on the indie scene, specifically with recent films like Margin Call, Melancholia and Arbitrage. The first of these, Margin Call, was released simultaneously in the cinema and on VOD (seemingly only in the US, however), eventually recording strong results months after its initial release at the Sundance Film Festival. Again, what is the justification for such a change in platform release timings? Not meeting consumer desires or addressing piracy, but simple cost savings. Variety reports:
“We’re a star-driven culture, and on a crowded (VOD) menu, what are you going to be drawn to?” posits WME Global head Graham Taylor, who adds that with marketing budgets skyrocketing, the ability to use a single campaign across closely spaced bows on multiple platforms is an important cost savings.
The whole situation is quite frustrating for any fan of film or television. It is a frustration shared by Frederic Filloux, co-author of the excellent blog Monday Note, which Zeitgeist strongly recommends to anyone with an interest in insightful thoughts and reasoning on media industry goings-on.
Their most recent post also happened to detail the author’s frustrations with such seemingly arbitrary release windows. One of the most pertinent charts displays the achingly slow rate of change in platform release windows, which is so at odds with the pace of change in other media (above). The post makes rational recommendations, which at first glance seem eminently appropriate and overdue for implementation. Some of the recommendations, though, fail to account for the fact that the film industry and its machinations are often governed by winds of irrationality.
To summarise, Filloux recommends a global day-and-date release, plus a shorter, more flexible window of time between cinema and home release. There are a number of obstacles to these ideas, though. Firstly, exhibitors must be placated. They hold such sway over studios that they cannot easily be ignored. Bob Iger, in the interview mentioned earlier, cites exhibitors as a key obstacle. Think about it: why on earth would a cinema want the film it is showing to be available in the comfort of its audience’s home any sooner than it already is? It wants to enforce scarcity, so that when the film’s marketing machine is at its height, the cinema is the only place you can see it. As already mentioned, indie films have had some success with multi-platform releases, but even these have met with consternation from exhibitors, as a recent example in Canada shows. The consternation becomes outright war for larger films. Zeitgeist reported when, in 2010, many exhibitors refused to show Tim Burton’s Alice in Wonderland because the studio, Disney, flirted with releasing the film to home video less than four months after its theatrical debut. After much back and forth, exhibitors eventually relented, and the film went on to gross over a billion dollars at the global box office. Exhibitors are not going to be convinced about flat release windows anytime soon. They are perhaps the largest roadblock to such a move, and the strongest argument for a return to the vertical integration of production, distribution and exhibition that existed until the Paramount Decree of 1948.
Moreover, while the argument for flexible, shifting release windows depending upon a film’s success is logical, it does not acknowledge the existence of sleeper hits: films which do not open to huge returns but gradually accrue them over months of release (as illustrated by Margin Call, mentioned earlier). It would also be hard to define when a movie “succeeds” or “bombs”. You could use box office as the measure, but would this be taken without context, as a ratio of the film’s budget, or against its current peers? Using box office also fails to take awards coverage – principally the Oscars – into consideration, which invariably adds its own box-office bump to a movie when it is nominated or wins.
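To see concretely why defining “success” matters, here is a purely hypothetical sketch of what a flexible-window rule might look like. The function, its thresholds and the awards adjustment are all invented for illustration; nothing here comes from Filloux’s post or any studio practice:

```python
def release_window_weeks(gross: float, budget: float,
                         awards_nominee: bool = False,
                         base_weeks: int = 16) -> int:
    """Hypothetical rule: shorten the theatrical window for flops,
    extend it for clear hits and awards contenders. All thresholds invented."""
    ratio = gross / budget
    weeks = base_weeks
    if ratio < 1:        # a "bomb": move it to the home market sooner
        weeks = 8
    elif ratio > 3:      # a clear hit: protect the theatrical run
        weeks = 24
    if awards_nominee:   # awards coverage adds its own box-office bump
        weeks += 8
    return weeks

print(release_window_weeks(gross=30, budget=50))                        # 8
print(release_window_weeks(gross=200, budget=50, awards_nominee=True))  # 32
```

Even this toy version shows the problem: a sleeper hit judged at week four looks like a bomb and would be yanked from cinemas just as word of mouth builds.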
The recommendation for simultaneous worldwide release is also a valid point. Zeitgeist has written before on the ridiculous prices pirated films go for in markets that have no access to the official product. To their credit, studios are moving further toward a “day and date” system. However, doing so exclusively would be dangerous. Releasing some films market by market allows the studio to gauge audience reaction, and if necessary tinker with the marketing or the film itself. Staggering release dates is also necessary for cultural events, such as the World Cup, which may be more relevant to some countries than others.
It is the last point made in the article, that of making TV shows “universally available from the day when they are aired on TV”, that Zeitgeist could not agree more with. Apart from easing audience frustration – and recent technological developments such as the DVR show how opportunity can shape viewer habits – such a move would also surely divert people from resorting to illegal downloading.
To conclude, while there are caveats and significant road bumps to be addressed, and some progress has been made over the years, the film industry has a long way to go in a short time if it wants to catch up with consumer habits. Flat release windows should be an inevitability, and a priority. Moreover, they should not be seen purely as a cost-saving measure, but as an important way of keeping an increasingly technologically and globally savvy customer base happy.
While the Mobile World Congress cools down – TechCrunch has some interesting thoughts – we wanted to touch on another tech issue, that of M2M.
Machine-to-machine communication is nothing especially new, but it is expected to see an explosion in use in the next 5-10 years. It is often referred to as ‘The Internet of Things’. Consultancy firm Analysys Mason recently held an interesting webinar on the subject, focussing on the B2B applications. The graph above is taken from their webinar, and illustrates the expected rise in M2M device connections worldwide through 2020, by device type. Notably, the auto industry will see some expansion (think cars talking to each other to avoid colliding, staying in the right lane, basically driving themselves, a burgeoning trend recently picked up in The Economist).
Significant take-up will come from the home, with your dishwasher telling you when it’s time to put it on and your fridge telling you you’re out of milk and taking the trouble to order some more from Ocado without you lifting a finger. Zeitgeist asked one of the speakers, Steve Hilton, about how such devices could be promoted in the B2C world. One of the first things Mr. Hilton said needed to be done was to stop calling it M2M, instead communicating in a way that “isn’t all tech-y speech”. It would require focussing on the “fun”, “great” things you can do. Entertainment and security products using M2M will be of particular interest.
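As a toy illustration of what that fridge-to-grocer exchange might look like under the hood – every device name, event name and handler here is invented for the example – consumer M2M boils down to devices emitting events that are dispatched to automated actions:

```python
from dataclasses import dataclass


@dataclass
class Event:
    """A reading reported by a connected device (names invented)."""
    device: str
    reading: str


def order_milk(event: Event) -> str:
    # Hypothetical automated action triggered without the owner lifting a finger.
    return f"order placed with grocer after {event.device} reported '{event.reading}'"


# Registry mapping (device, reading) pairs to actions.
HANDLERS = {("fridge", "milk_low"): order_milk}


def dispatch(event: Event) -> str:
    handler = HANDLERS.get((event.device, event.reading))
    return handler(event) if handler else "no action"


print(dispatch(Event("fridge", "milk_low")))
```

The marketing challenge Mr. Hilton describes is precisely that none of this plumbing should ever be what the consumer hears about; they should only see the milk arriving.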
In the consumer sector, though, this remains a little-known technological movement, and marketers will need to think carefully about how to communicate it to their consumers without making them worry about Skynet.
UPDATE (15/3/12): Not one to allay fears of any Skynet-like worries, CIA director David Petraeus last week commented on the rise of M2M devices and how much easier it will be to snoop on unsuspecting citizens, saying it would “change our notions of secrecy”. Wired elaborated,
“All those new online devices are a treasure trove of data if you’re a ‘person of interest’ to the spy community. Once upon a time, spies had to place a bug in your chandelier to hear your conversation. With the rise of the ‘smart home’, you’d be sending tagged, geolocated data that a spy agency can intercept in real time.”
The magazine gave the article the level-headed headline ‘We’ll spy on you through your dishwasher’.
Cisco is an interesting brand that Zeitgeist has briefly had the pleasure of working on regarding its IPTV offering. What’s that you say, Cisco are planning an IPTV offering? Well, yes, they were about 18 months ago; who knows now? One of the interesting things about the behemoth is that most lay consumers have probably heard of the company but couldn’t tell you what on earth they do, other than that they might be in the technology sector. The Economist bluntly assessed recently that Cisco had “vastly overdiversified”. This opacity has led to declining, if stabilising, stocks.
That doesn’t stop it from putting interesting infographics together, however. Spotted on integrated agency MBA’s blog, this lovely picture nicely illustrates the importance and ramifications of the devices we use every day being increasingly connected to the Internet, and hence to each other, offering seamless integration and communication. Nice one, Cisco. Of course this is the kind of thing that Bill Gates was writing about when he published The Road Ahead, when Zeitgeist was but a weedy 12-year-old. We do finally seem closer to those imaginings now.
Movers and shakers and substantial tremors as social networks jostle for dominance…
Google+, which launched recently, is the latest volley from the behemoth in its efforts to battle its similar-sized foe, Facebook. Time will tell whether it will encounter the same fate as the much-ballyhooed Buzz and Wave. Google is entering murky waters as it comes under scrutiny from the Federal Trade Commission in the US, as well as the European Commission, for any anti-competitive activity. It is, increasingly, spreading its wings to areas previously considered far outside its remit. In some cases, such news is welcome, as when The Economist recently reported on the Summit Against Violent Extremism, “arranged by Google Ideas”. Importantly, the network effects of Googling are nothing compared to the network effects of Facebook, at least for now.
Meanwhile, Facebook announced “something awesome” this past week, which turned out to be the somewhat underwhelming news of group chat and video chat functionality, the latter a product of a collaboration with Skype, soon to be owned by Microsoft. It’s interesting to consider whether the audience for both platforms overlaps enough for it to be too much of a good thing; by allowing video chat on Facebook, you might make Skype a much less crowded place, very quickly. The 750m users of Facebook are both a boon and a potential source of trouble for Skype. One interesting moment came when the camera cut to further back in the press conference to reveal the journalists recording the event. They were not, as you might think if you had watched too many West Wing episodes, all diligently leaning forward, facing the person speaking. Rather, as the picture above demonstrates, they were arranged entirely perpendicular to Mr. Zuckerberg, furiously typing away on their laptops. They weren’t reporting for tomorrow’s newspapers – or yesterday’s – they were reporting live, a constant stream of data for the data-hungry populace to instantly discuss and further disseminate.
Mr. Zuckerberg spoke confidently on Moore’s Law, applying it to the continuing growth in use of applications and tools by users on Facebook. Zeitgeist is in no position to question Zuckerberg’s thinking, yet Moore’s Law properly describes the acceleration of technological capability; here, Zuckerberg is trying to apply it to sociological developments, rooted though they are in a technological sphere. However, since Zeitgeist’s blog has not yet quite reached 750m users, we’ll defer to Zuckerberg’s opinions on the subject.
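For reference, Moore’s Law in its popular form holds that capacity doubles roughly every two years, and the compounding is easy to sketch (the two-year doubling period is the conventional figure, not something claimed in the press conference):

```python
def moores_law_growth(years: float, doubling_period: float = 2.0) -> float:
    """Factor by which capacity grows under a fixed doubling period."""
    return 2 ** (years / doubling_period)


# Over a decade, a two-year doubling period compounds to a 32x increase.
print(moores_law_growth(10))  # 32.0
```

It is exactly this relentless compounding that makes the analogy seductive: applied to sharing activity rather than transistors, it implies users a decade hence sharing thirty-two times what they do today.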
Who will win this showdown for social hegemony depends rather upon who you ask, but also upon what metric you’re looking at. Zuckerberg, rather dismissively, said it wasn’t about the number of users, but about how much they engaged with content. He may change his mind if news of a Facebook exodus in mature markets continues, and if Google has anything to say about it.
A quick thought while Zeitgeist takes a well-deserved break in the hinterlands of the Côte d’Azur, one that centres on the continued desire for content and immediate access to it, versus a dilapidated infrastructure for providing that content. A recent front-page article from film industry trade paper Variety expressed concerns over who will be able to fill the shoes of the new head of the Motion Picture Association of America, a post held for decades by the much-loved Jack Valenti, and latterly by the effective Dan Glickman. The post requires juggling many balls and keeping disparate parties happy, from the cultural binaries of Washington and Los Angeles, to the contrasting desires of consumer and corporation (the issue of Net Neutrality being a particularly important example).
One principal concern for whomever takes hold of the reins will be the continuing threat of piracy, and the fear of ending up like the moribund music industry. One significant move Glickman was able to implement was ensuring the creation of a post of “copyright czar” at the White House. Worries continue though as, according to the article, “technology advances make Internet speeds ever faster”. While this is true in a normative sense, in practice things are not as simple. For while improvements in technology may make computers ever more capable of handling more data at faster speeds, the delivery systems that support the transfer of this data are not being kept up to date, specifically in the US and UK. Telco networks AT&T and O2 have both recently pulled their unlimited data plans for mobile use. What is the impact on services like Facebook, Twitter and Foursquare? Unfortunately it can only be a negative one, as users may begin to worry about updating their status if it will push them over their data limit for that month.
All these moves – including other industry machinations, such as the decision by Hulu, a free, legal website, to begin charging – will serve only to further consumer confusion and distance these brands from their audience.
Foursquare is to the zeitgeist what Chatroulette was all those days ago. Location-based targeting has been gathering steam for some time, and the potential blossomed with the release of the iPhone 3GS last year. The service allows a user to ‘check in’ to a certain place, alerting those who follow them. If said user checks in to a certain place often enough, they become ‘mayor’ of that location. Moreover, with time a map builds up showing definitively where the user tends to go. It is this last point that is of particular interest to advertisers, who, in order to better know the consumer they are targeting, are always desperate for more facts and figures (not least to make it appear that the industry they work in is one of cold, hard, calculable facts, with no irrational outliers).
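Both the ‘mayor’ mechanic and that advertiser-friendly map of where a user tends to go reduce to a simple frequency count over check-in history. A minimal sketch, with venue names invented for the example (Foursquare’s actual mayorship rules are more involved):

```python
from collections import Counter

# A user's check-in history (invented venues).
check_ins = ["cafe", "office", "cafe", "gym", "cafe", "office"]

# The full tally doubles as the "map" of where the user tends to go;
# the most-visited venue is where they'd hold the mayorship.
counts = Counter(check_ins)
mayor_of, visits = counts.most_common(1)[0]
print(mayor_of, visits)  # cafe 3
```

That the entire prize mechanic falls out of a one-line tally is rather the point: the valuable asset is the underlying history, not the crown.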
An exhibition detailing the evolution of maps is currently on show at London’s British Library; today we seem to rely on maps ever more as they become – with GPS functionality – an important feature on most mobile devices. It was reported earlier today that the Foursquare service has now exceeded forty million check-ins. Not one to miss out on anything that involves the decay of personal privacy, Facebook shortly intends to release its own version where users can check-in through their site, with McDonald’s already on board.
eConsultancy has a list of ten select marketing examples using geo-location; however, Zeitgeist is going to focus on two specifically. The first is that of the Financial Times and its walled garden. Borrowing a page from other brands by getting a user while they’re young, the FT may soon begin providing free access to those who check in in certain areas. Those areas being “select coffee shops located by major financial centers and near business schools including Columbia, Harvard, the London School of Economics, London Business School and London’s Cass Business School”; in other words, superior centres of academia that Zeitgeist may or may not call an alma mater. According to FT.com, “Only the ‘mayors’ will be granted a free pass, and only for a limited time”. It’s a nice incentive and it will be interesting to see how competitive the race for free content becomes among ostensibly cash-strapped students.
The other example Zeitgeist likes is that of the luxury shoemaker Jimmy Choo, who have decided to organise a shoe hunt. As one blog describes it, “The idea is pretty simple, a pair of Jimmy Choo’s new trainers will check into some of the most exclusive and fashionable places in London, if you can track them down and catch them while still checked in at a venue, then they are yours.” Sounds like a very fun idea and a fantastic excuse to run around town going to lots of great places. Let the games begin.