Archive
Mischief, managed – digital disruptors in need of legacy structures
“Move fast and break things”. That is the motto of Facebook and, unofficially, of many of its contemporaries. While much of the most visible impact of new digital organisations has been on how they respond to, engage with and influence user behaviour, just as significant has been the extent to which these organisations have eschewed traditional business models, ways of working and other internal practices. This includes traditional measures of success (hence the above cartoon from The New Yorker), but also of transparency and leadership. These issues are the focus of this piece, which compares the old with the new and asks where the opportunities and challenges lie.
What makes digital-first organisations different
It’s important to acknowledge the utterly transformative way that digital-first companies do business and create revenue, and how different this is from the way companies operated for the past century. Much of this change can be summed up in the phrase “disruptive innovation”, coined by the great Clayton Christensen back in 1995. I got to hear from and speak to Clay at a Harvard Business Review event at the end of last year; a clear-thinking, inspiring man. There are few things from the mid-90s that organisations would still find useful today, and yet this theory, paradoxically, holds. The market would certainly seem to bear the concept out. Writing for the Financial Times in April, John Authers noted,
Tech stocks… are leading the market. All the Fang stocks — Facebook, Amazon, Netflix and Google — hit new records this week. Add Apple and Microsoft, and just six tech companies account for 29 per cent of the rise in the S&P 500 since Mr Trump was inaugurated.
The FANG cohort are entirely data-driven organisations that rely on user information (specifically user-volunteered information) to make their money. The more accurately they can design experiences, services and content around their users, the more likely they are to retain them. The greater the retention, the greater the power of network effects and lock-in. (Importantly, their revenues also make any new entrants easily acquirable prey, inhibiting competition.) These are Marketing 101 ambitions, but they are being deployed at a level of sophistication never seen before. Because of this, they are different businesses from those operating in legacy areas. Those incumbents are encumbered by many things, including heavily codified regulation. Regulatory bodies have not yet woken up to the way these new companies do business, but it is only a matter of time. Until then, the common consensus has been that working in a different way, without the threat of regulation, means traditional business structures can easily be discarded for the sake of efficiency, dismissed entirely as an analogue throwback.
The dangers of difference
One of the conceits of digital-first organisations is that they tend to be set up to democratise the sharing of services or data: disruption through the liberalising of a product so that everyone can enjoy something previously limited by enforced scarcity (e.g. cheap travel, cheap accommodation). At the same time, they usually have a highly personality-driven structure, where the original founder is treated with almost Messianic reverence. This despite high-profile revelations of the Emperor having no clothes, such as with Twitter’s Jack Dorsey, as well as Marissa Mayer, formerly of Google, then Yahoo, now who-knows-where. She left Yahoo with a $23m severance package as reward for doing absolutely zero to save the organisation. Worse, she may have obstructed justice by waiting years to disclose details of cyberattacks. This was particularly galling for Yahoo’s suitor, Verizon, as the information came to light in the middle of its proposed purchase of the company (it resulted in a $350m cut to the acquisition price tag). The SEC is investigating. The silence on this matter is staggering, and points to a cultural lack of transparency that is not uncommon in the Valley. A recent Lex column effectively summarised this leader worship as a “most hallowed and dangerous absurdity”.
Uber’s embodiment of the founder-driven fallacy
Ben Horowitz, co-founder of the venture capital group Andreessen Horowitz, once argued that good founders have “a burning, irrepressible desire to build something great” and are more likely than career CEOs to combine moral authority with “total commitment to the long term”. It works in some cases, including at Google and Facebook, but has failed dismally at Uber.
– Financial Times, June 2017
This culture that focuses on the founder has led to a little whitewashing (few would be able to name all of Facebook’s founders beyond the Zuck) and a lot of eggs in one basket. Snap’s recent IPO is a great example of the overriding faith and trust placed in founders, given that it indicated – as the FT calls it – a “21st century governance vacuum”. Governance appears to have been lacking at Uber as well. The company endured months of salacious rumours and accusations, including candid film of the founder, Travis Kalanick, berating an employee. This all rumbled on without consequence for quite some time. Travis was Travis, and lip service was paid while the search for some profit – Uber is valued more highly than 80% of the companies on the Fortune 500, yet in the first half of last year alone made more than $1bn in losses – continued.
Uber’s cultural problems eventually reached such levels (from myriad allegations of sexual harassment, to a lawsuit with Google over self-driving technology, to revelations about ‘Greyball’, software it used to mislead regulators) that Kalanick was initially forced to take a leave of absence. But as mentioned earlier, these organisations are personality-driven; the rot was not confined to one person. This became apparent when David Bonderman had to resign from Uber’s board after making a ludicrously sexist comment directed at none other than his colleague Arianna Huffington, a comment that illustrated the company’s startlingly old-school, regressive outlook. This at a meeting where the company’s culture was being reviewed and the message to be delivered was one of turning a corner.
A report commissioned by the company on a turnaround recommended reducing Kalanick’s responsibilities and hiring a COO. The company has been without one since March. It is also without a CMO, CFO, head of engineering, general counsel and now, a CEO. Many issues arise as a start-up grows from a small organisation into a large one. So it is with Uber – one engineer described it as “an organisation in complete, unrelenting chaos” – as it will be with other firms to come. There is only a belated recognition that structures have to be put in place: the same types of structures that the organisations being disrupted already have. The FT writes,
“Lack of oversight and poor governance was a key theme running through the findings of the report… Their 47 recommendations reveal gaping holes in Uber’s governance structures and human resources practices.”
These types of institutional practices are difficult to enforce in the Valley, precisely because their connotations are of the monolithic corporate mega-firms that employees and founders of these companies are often consciously fighting against. Much of their raison d’être springs from an idealistic desire to change the world, and, methodologically, to do so by running roughshod over traditional work practices. This has its significant benefits (if only in terms of revenue), but from an employee-experience perspective it looks an increasingly questionable approach. Hadi Partovi, an Uber investor and tech entrepreneur, told the FT, “This is a company where there has been no line that you wouldn’t cross if it got in the way of success”. Much of this planned oversight would have been anathema to Kalanick, which ultimately is why the decision for him to leave was unavoidable. Uber now plans to refresh its values, install an independent board chairman, conduct senior management performance reviews and adopt a zero-tolerance policy toward harassment.
Legacy lessons from an incumbent conglomerate
Many of the recommendations in the report issued to Uber would be recognised by anyone working in a more traditional setting (as a former management consultant, I certainly recognise them). While the philosophical objection to such things has already been noted, it must also be recognised that the notion of a framework to police behaviour will be alien to almost anyone working in the Valley. Vivek Wadhwa, a fellow at the Rock Center for Corporate Governance, clarified, “The spoiled brats of Silicon Valley don’t know the basics. It is a revelation for Silicon Valley: ‘duh, you have to have HR people, you can’t sleep with each other… you have to be respectful’.”
Meanwhile, another CEO recently stepped down in more forgiving circumstances, though ones that still prompted unfavourable comparisons: Jeff Immelt of General Electric. As detailed in a stimulating piece last month in The New York Times, Immelt has had a difficult time of it. First, he succeeded a man generally regarded as a visionary CEO: Jack Welch, whom Fortune magazine in 1999 described as the best manager of the 20th century. So no pressure for Immelt there, then. Second, Immelt became Chairman and CEO four days before the 9/11 attacks, and his tenure also spanned the 2008 financial crisis. Lastly, since he took over, the nature of companies, as this article has attempted to make clear, has changed radically. Powerful conglomerates no longer rule the waves.
Immelt has, perhaps belatedly, been committed to downsizing GE’s sprawling offering in order to make it more specialised. Moreover, the humility of Immelt is a million miles from the audacity, braggadocio and egotism of Kalanick, acknowledging, “This is not a game of perfection, it’s a game of progress.”
So while the FANGs of the world are undoubtedly changing the landscape of business (not to mention human interaction and behaviour), they also need to recognise that not all legacy structures and processes should be consigned to the dustbin of management history simply because they come from legacy industry sectors. Indeed, more responsibility diverted from the founder, greater accountability and transparency, and a more structured employee experience might lead to greater returns and higher employee retention rates, and perhaps even mitigate regulatory scrutiny down the line. The opportunity is there for those sensible enough to grasp it.
Too much content, too many channels, too little time?
We all seem to have less time to ourselves these days, but there seems to be more to watch – on more platforms – than ever before. What trends have led to this, and what is the result? Much editorial ink has been spilled over the years about how our lives seem to be getting busier, with less free time to ourselves. This is a somewhat painful irony, given that many of our more intellectual ancestors thought our evolution as a species would quickly lead to a civilisation mostly consumed by thoughts of how to fill the days of leisure. In last week’s New Yorker, Harvard professor Thales Teixeira noted that there are three major “fungible” resources we have as people – money, time and attention. The third, according to Teixeira, is the “least explored”. Interestingly, Teixeira calculated the inherent price of attention and how it fluctuates, by correlating it with rising ad rates for the Super Bowl. Last year, the price of attention jumped more than 20%. The article elaborates,
“The jump had obvious implications: attention—at least, the kind worth selling—is becoming increasingly scarce, as people spend their free time distracted by a growing array of devices. And, just as the increasing scarcity of oil has led to more exotic methods of recovery, the scarcity of attention, combined with a growing economy built around its exchange, has prompted R. & D. in the [retaining of attention].”
It’s such thinking that has persuaded executives to invest in increasingly multi-platform, creative advertising during the Super Bowl, and media production companies to take their wares to the likes of YouTube and Netflix. But it’s all circular, as demonstrated last week when Amazon announced it would be producing films for cinema release. The plurality of such content over different channels carries important connotations for pricing strategies. At its most fundamental: what is a product worth when it is intangible and potentially only available in digital form? It chimes with an article written earlier this month in The Economist on the customer benefits of e-commerce. Though most knee-jerk reactions would assume price is the biggest benefit to customers, recent research illustrates this is not always the case. Researchers at MIT showed that on average people paid an extra 50% for books online versus in-store. This isn’t because the latest David Baldacci is sold for more on Amazon, but rather because of the long tail, which means more products are able to find the right owner, for a price, where in-store they would comparatively go unsold. More channels have meant more availability for content, which should benefit consumers in that more content destined to be a hit now finds a home, where once it might have been lost if turned down by the major TV or radio networks. The Economist elaborates,
“Seasoned publishers have only a vague idea what book, film or song will be a hit. A major record label can sign only a fraction of the artists available, knowing full well it will unwittingly reject a future superstar. Thanks to cheap digital recording technology, file sharing, YouTube, streaming music and social media, however, barriers to entry have been dismantled. Artists can now record and distribute a song without signing to a major label. Independent labels have proliferated, and they are taking on the artists passed over by major labels. Hit songs are still a lottery, but the public gets three times as many lottery tickets.”
So while we may have less time to consume it, more content over more channels will allow for greater chances for breakout hits, particularly with avid niche audiences. Amazon Prime video content was until recently confined to a niche audience, and the show Transparent dealt with niche subject matter. But the show has broken out into the zeitgeist and won two awards at the recent Golden Globes ceremony. (Full disclosure, we know a producer on the show and were lucky enough to visit the set on the Paramount lot in Los Angeles last summer). It is likely such a great show – recently made available free for 24 hours as a way to upsell customers to Prime – would not have found a home on traditional TV networks, and thus in people’s homes, were it not for this plurality.
On past and future innovation – Disruption, inequality and robots
How do we define innovation, how has it been studied in the recent past, and what does future innovation hold for the human race?
Sometimes the word innovation gets misused. Like when people use the word “technology” to mean recent gadgets and gizmos, instead of acknowledging that the term encompasses the first wheel. “Innovation” is another tricky one. Our understanding of recent thinking on innovation – as well as its contemporary partner, “disruption” – was thrown into question in June, when Jill Lepore penned an article in The New Yorker that put our ideas about innovation, and specifically Clayton Christensen’s ideas about innovation, in a new light. Christensen, heir apparent to fellow Harvard Business School bod Michael Porter (author of the simple, elegant and classic The Five Competitive Forces that Shape Strategy), wrote The Innovator’s Dilemma in 1997. His work on disruptive innovation, claiming that successful businesses focused too much on what they were doing well, missing what, in Lepore’s words, “an entirely untapped customer wanted”, created a cottage industry of conferences, companies and counsels committed to dealing with disruption (not least this blog, which lists disruption as one of its topics of interest). Lepore’s article describes how, as Western society’s retelling of the past became less dominated by religion and more by science and historicism, the future became less about the fall of Man and more about the idea of progress. This thought took hold particularly during the Enlightenment. In the wake of two World Wars, though, our endless advance toward greater things seemed less obvious;
“Replacing ‘progress’ with ‘innovation’ skirts the question of whether a novelty is an improvement: the world may not be getting better and better but our devices are getting newer and newer”
The article goes on to look at the handpicked case studies Christensen used in his book. When Christensen describes one of his areas of focus, the disk-drive industry, as being unlike any other in the history of business, Lepore rightly points out that its sui generis nature “makes it a very odd choice for an investigation designed to create a model for understanding other industries”. She goes on for much of the article to utterly debunk several of the author’s case studies, showcasing inaccuracies and even criminal behaviour on the part of those businesses he heralded as disruptive innovators. She also deftly points out, much in the line of thinking of Taleb’s Black Swan, that failures are often forgotten about, while those that succeed are grouped and promoted as formulae for success. Such is the case with Christensen’s apparently cherry-picked case studies. Writing about one company, Pathfinder, that tried to branch out into online journalism, seemingly too soon, Lepore comments,
“Had [it] been successful, it would have been greeted, retrospectively, as evidence of disruptive innovation. Instead, as one of its producers put it, ‘it’s like it never existed’… Faith in disruption is the best illustration, and the worst case, of a larger historical transformation having to do with secularization, and what happens when the invisible hand replaces the hand of God as explanation and justification.”
Such were the ramifications of the piece that when questioned on it recently in Harvard Business Review, Christensen confessed “the choice of the word ‘disruption’ was a mistake I made twenty years ago”. The warning to businesses is that being seen as ‘disruptive’ does not guarantee success, nor does it fundamentally mean the move belongs in any long-term strategy. Developing expertise in a disparate area takes time and investment, in terms of people, infrastructure and cash. And for some, the very act of resisting disruption is what has made them thrive. Another recent piece in HBR makes the point that most successful strategies involve not a single act of deus ex machina thinking-outside-the-boxness, but rather sustained disruption. Though Kodak, Sony and others may have rued the days, months and years they neglected to innovate beyond their core area, the graveyard of dead businesses is also surely littered with companies that innovated too soon, in the wrong way, or at such cost that it left them open to things other than what Schumpeter termed creative destruction.
Outside of cultural and philosophical analysis of the nature and definition of innovation, some may consider of more pressing concern the news that we are soon to be looked after by, and subsequently outmanoeuvred in every way by, machines. The largest and most forward-thinking (and therefore not necessarily most likely) of these concerns was recently put forward by Nick Bostrom in his new book Superintelligence: Paths, Dangers, Strategies. According to a review in The Economist, the book posits that once you assume there is nothing inherently magical about the human brain, the brain itself is evidence that an intelligent machine can be built. Bostrom worries, though, that “Once intelligence is sufficiently well understood for a clever machine to be built, that machine may prove able to design a better version of itself” and so on, ad infinitum. “The thought processes of such a machine, he argues, would be as alien to humans as human thought processes are to cockroaches. It is far from obvious that such a machine would have humanity’s best interests at heart—or, indeed, that it would care about humans at all”.
Beyond the admittedly far-off prognostications of the removal of the human race at the hands of the very things it created, machines and digital technology in general pose great risks in the near term, too. For a succinct and alarming introduction to this, watch the enlightening video at the beginning of this post. Since the McKinsey Global Institute published a paper in May soberly titled Disruptive technologies: Advances that will transform life, business, and the global economy, much editorial ink and celluloid (were either medium still in much use) has been spilled and spooled detailing how machines will slowly replace humans in the workplace. This transformation – itself a prime example of creative destruction – is already underway in the blue-collar world, where machines have replaced workers in automotive factories. The Wall Street Journal reports that Chinese electronics makers are facing pressure to automate as labour costs rise, but are challenged by the low margins, precise work and short product life of the phones and other gadgets the country produces. Travel agents and bank clerks have also been rendered largely redundant, thanks to that omnipresent machine, the Internet. Writes The Economist, “[T]eachers, researchers and writers are next. The question is whether the creation will be worth the destruction”. The McKinsey report, according to The Economist, “worries that modern technologies will widen inequality, increase social exclusion and provoke a backlash. It also speculates that public-sector institutions will be too clumsy to prepare people for this brave new world”.
Such thinking gels with an essay in the July/August edition of Foreign Affairs, by Erik Brynjolfsson, Andrew McAfee and Michael Spence, titled New World Order. The authors rightly posit that in a free market the biggest premiums are reserved for the products with the most scarcity. When even niche, specialist employment, such as in the arts (see the video at the start of this article), can be replicated and performed at economies of scale by machines, then labourers and the owners of capital are at great risk. The essay makes good points on how, while a simple economic model suggests that technology increases overall productivity for everyone, the truth is that its impact is more uneven. The authors astutely point out,
“Today, it is possible to take many important goods, services, and processes and codify them. Once codified, they can be digitized [sic], and once digitized, they can be replicated. Digital copies can be made at virtually zero cost and transmitted anywhere in the world almost instantaneously.”
Though this sounds utopian and democratic, what it actually does, the essay argues, is propel certain products to super-stardom. Network effects create this winner-take-all market. Similarly, it creates disproportionately successful individuals. Although there are many factors at play here, as the authors readily concede, they also maintain the importance of another, more distressing theory;
“[A] portion of the growth is linked to the greater use of information technology… When income is distributed according to a power law, most people will be below the average… Globalization and technological change may increase the wealth and economic efficiency of nations and the world at large, but they will not work to everybody’s advantage, at least in the short to medium term. Ordinary workers, in particular, will continue to bear the brunt of the changes, benefiting as consumers but not necessarily as producers. This means that without further intervention, economic inequality is likely to continue to increase, posing a variety of problems. Unequal incomes can lead to unequal opportunities, depriving nations of access to talent and undermining the social contract. Political power, meanwhile, often follows economic power, in this case undermining democracy.”
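To make the quoted point concrete, here is a minimal, purely illustrative sketch (not taken from the essay) of why a power-law income distribution leaves most people below the “average”: the long right tail drags the mean well above the median. The shape and scale parameters below are assumptions chosen only for illustration.

```python
import numpy as np

# Illustrative only: sample incomes from a Pareto (power-law) distribution
# and measure how many people fall below the mean income.
rng = np.random.default_rng(0)

alpha = 2.0                                             # assumed tail exponent
incomes = (rng.pareto(alpha, 1_000_000) + 1) * 30_000   # assumed minimum income as scale

mean_income = incomes.mean()
median_income = np.median(incomes)
share_below_mean = (incomes < mean_income).mean()

print(f"mean income:   {mean_income:,.0f}")
print(f"median income: {median_income:,.0f}")
print(f"share of people earning below the mean: {share_below_mean:.1%}")
# With these parameters roughly three-quarters of the population earns less
# than the 'average', because a few very large incomes pull the mean upward.
```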
There are those who say such fears of a rise in inequality, and of the destruction through automation of whole swathes of the job sector, are unfounded; that many occupations require a certain intuition that cannot be replicated. Time will tell whether this intuition, like an audio recording, a health assessment or the ability to drive a car, will be similarly codified and disrupted (yes, we’ll continue using the word disrupt, for now).
On movie release windows – I love the sound of breaking glass
It’s fair to say that in the past ten years technology has evolved at an ever-increasing rate. The way in which devices have changed, and with it our use of them, was humorously summed up in the above cartoon from The New Yorker. Digital trends have affected the way we communicate, the way we consume media, and indeed the way we consume goods and services, i.e. shop.
So it is a little surprising to many – your humble correspondent included – that we still have to put up with a film being released in one country one day, and in another months later. That we still have to wait a certain number of months for a film to amble its way from cinema screens to our homes, whether on Blu-ray / DVD or on VOD. It’s interesting to note that vertical integration isn’t a key issue; Disney recently launched the second subscription video on demand (SVOD) service in Europe, with a library of constantly refreshed titles that can be viewed on platforms ranging from TVs to Xbox to iPads. Indeed, Disney’s CEO Bob Iger announced way back in 2005, in an interview with The Wall Street Journal, that he foresaw a day of collapsed release windows, when a film would come out in the cinema the same day it was available to watch in the home:
We’d be better off as a company and an industry if we compressed that window. We could spend less money pushing the box office and get to the next window sooner where a movie has more perceived value to the consumer because it’s more fresh.
So there is money to be saved in such an exercise. Yet seven years later, such a situation is still mostly a fantasy for major films. Studios have undoubtedly dipped their toe in the water, and some moderate success has been seen on the indie scene, specifically with recent films like Margin Call, Melancholia and Arbitrage. The first of these was released simultaneously in the cinema and on VOD (seemingly only in the US, however), eventually recording strong results, months after its initial release at the Sundance Film Festival. Again, what is the justification for such a change in platform release timings? Not meeting consumer desires or addressing piracy, but simple cost savings. Variety reports:
“We’re a star-driven culture, and on a crowded (VOD) menu, what are you going to be drawn to?” posits WME Global head Graham Taylor, who adds that with marketing budgets skyrocketing, the ability to use a single campaign across closely spaced bows on multiple platforms is an important cost savings.
The whole situation is quite frustrating for any fan of film or television. It is a frustration shared by Frederic Filloux, co-author of the excellent blog Monday Note, which Zeitgeist strongly recommends to anyone with an interest in insightful thoughts and reasoning on media industry goings-on.
The blog’s most recent post happens to detail Filloux’s frustrations with such seemingly arbitrary release windows. One of the most pertinent charts displays the achingly slow rate of change in platform release windows, which is so at odds with the pace of change in other media (above). The post makes rational recommendations, which at first glance seem eminently appropriate and overdue for implementation. Some of the recommendations, though, fail to account for the fact that the film industry and its machinations are often governed by winds of irrationality.
To summarise, Filloux recommends a global day-and-date release and a shorter, more flexible window between cinema and home release. There are a number of obstacles to these ideas, though. Firstly, exhibitors must be placated. They hold such sway over studios that they cannot easily be ignored; Bob Iger, in the interview mentioned earlier, cites exhibitors as a key obstacle. Think about it: why on earth would a cinema want a film to be available in the comfort of the audience’s home any sooner than it already is? It wants to enforce scarcity, so that when the film’s marketing machine is at its height, the cinema is the only place you can see it. As already mentioned, indie films have had some success with multi-platform releases, but even these have met with consternation from exhibitors, as a recent example in Canada shows. The consternation becomes outright war for larger films. Zeitgeist reported on how, in 2010, many exhibitors refused to show Tim Burton’s Alice in Wonderland when the studio, Disney, flirted with releasing the film for home viewing less than four months after its theatrical debut. After much back and forth, exhibitors eventually relented, and the film went on to gross over a billion dollars at the global box office. Exhibitors are not going to be convinced about flat release windows anytime soon. They are perhaps the largest roadblock to such a move, and the strongest argument for a return to the vertical integration of production, distribution and exhibition that existed until the Paramount Decree in 1948.
Moreover, while the argument for flexible, shifting release windows depending upon a film’s success is logical, it does not acknowledge the existence of sleeper hits: films which do not open to huge returns but gradually accrue them over months of release (as illustrated by Margin Call, mentioned earlier). It would also be hard to define when a movie “succeeds” or “bombs”. You could use box office as a figure, but would this be without context, as a ratio of the film’s budget, or against its current peers? Using box office also fails to take awards coverage – principally the Oscars – into consideration, which invariably adds its own box office bump to a movie when it is nominated or wins.
The recommendation for simultaneous worldwide release is also a valid point. Zeitgeist has written before on the ridiculous prices pirated films go for in markets that have no access to the official product. To their credit, studios are moving further toward a “day and date” system. However, doing so exclusively would be dangerous. Releasing some films market by market allows the studio to gauge audience reaction, and if necessary tinker with the marketing or the film itself. Staggering release dates is also necessary for cultural events, such as the World Cup, which may be more relevant to some countries than others.
It is the last point made in the article, that of making TV shows “universally available from the day when they are aired on TV”, that Zeitgeist could not agree more with. Apart from easing audience frustration – and recent technological developments such as the DVR show how opportunity can shape viewer habits – such a move would also surely divert people from resorting to illegal downloading.
To conclude, while there are caveats and significant road bumps to be addressed, and some progress has been made over the years, the film industry has a long way to go in a short time if it wants to catch up with consumer habits. Flat release windows should be an inevitability, and a priority. Moreover, they should not be seen purely as a cost-saving measure, but as an important way of keeping an increasingly technologically and globally savvy customer base happy.
Marketing M2M Services
While the Mobile World Congress cools down – TechCrunch has some interesting thoughts – we wanted to touch on another tech issue, that of M2M.
Machine-to-machine communication is nothing especially new, but it is expected to see an explosion in use in the next 5-10 years. It is often referred to as ‘The Internet of Things’. Consultancy firm Analysys Mason recently held an interesting webinar on the subject, focussing on B2B applications. The graph above is taken from that webinar, and illustrates the expected rise in M2M device connections worldwide through 2020, by device type. Notably, the auto industry will see some expansion (think cars talking to each other to avoid colliding, staying in the right lane, basically driving themselves, a burgeoning trend recently picked up in The Economist).
Significant take-up will come from the home, with your dishwasher telling you when it’s time to put it on and your fridge telling you you’re out of milk and taking the trouble to order some more from Ocado without you lifting a finger. Zeitgeist asked one of the speakers, Steve Hilton, about how such devices could be promoted in the B2C world. One of the first things Mr. Hilton said needed to be done was to stop calling it M2M, instead communicating in a way that “isn’t all tech-y speech”. It would require focussing on the “fun”, “great” things you can do. Entertainment and security products using M2M will be of particular interest.
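For readers curious what the plumbing behind such a scenario might look like, here is a minimal, purely illustrative sketch using MQTT, a lightweight publish/subscribe protocol commonly used for M2M messaging. The broker address, topic name and payload fields are assumptions made for the example, not details from the webinar.

```python
import json
import paho.mqtt.client as mqtt

# Illustrative only (paho-mqtt 1.x style API): a 'smart fridge' publishing a
# low-stock reading to a local MQTT broker, where an ordering service or
# another device could be subscribed. All names below are hypothetical.
BROKER = "broker.example.local"        # hypothetical broker address
TOPIC = "home/kitchen/fridge/stock"    # hypothetical topic

client = mqtt.Client()
client.connect(BROKER, port=1883, keepalive=60)

reading = {"item": "milk", "litres_remaining": 0.2, "reorder": True}
client.publish(TOPIC, json.dumps(reading), qos=1)

client.disconnect()
```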
For now, though, this remains a little-known technological movement in the consumer sector, and marketers will need to think carefully about how to communicate it to their consumers without making them worry about Skynet.
UPDATE (15/3/12): Not one to allay any Skynet-like worries, CIA director David Petraeus last week commented on the rise of M2M devices and how much easier it will be to snoop on unsuspecting citizens, saying it would “change our notions of secrecy”. Wired elaborated,
“All those new online devices are a treasure trove of data if you’re a ‘person of interest’ to the spy community. Once upon a time, spies had to place a bug in your chandelier to hear your conversation. With the rise of the ‘smart home’, you’d be sending tagged, geolocated data that a spy agency can intercept in real time.”
The magazine gave the article the level-headed headline ‘We’ll spy on you through your dishwasher’.
Cisco’s Internet of Things
Cisco is an interesting brand that Zeitgeist has briefly had the pleasure of working on regarding its IPTV offering. What’s that you say, Cisco are planning an IPTV offering? Well, yes, they were about 18 months ago; who knows now? One of the interesting things about the behemoth is that most lay consumers have probably heard of the company but couldn’t tell you what on earth they do, other than that they might be in the technology sector. The Economist bluntly assessed recently that Cisco had “vastly overdiversified”. This opacity has led to declining, if stabilising, stocks.
That doesn’t stop it from putting interesting infographics together, however. Spotted on integrated agency MBA’s blog, this lovely picture nicely illustrates the importance and ramifications of the devices we use every day becoming increasingly connected to the Internet, and hence to each other, allowing seamless integration and communication. Nice one, Cisco. Of course this is the kind of thing that Bill Gates was writing about when he published The Road Ahead, when Zeitgeist was but a weedy 12-year-old. We do finally seem closer to those imaginings now.
Social Struggles & Facebook Fiefdoms
Movers and shakers and substantial tremors as social networks jostle for dominance…
Google+, which launched recently, is the latest volley from the behemoth in its efforts to battle its similar-sized foe, Facebook. Time will tell whether it will meet the same fate as the much-ballyhooed Buzz and Wave. Google is entering murky waters as it comes under scrutiny from the Federal Trade Commission in the US, as well as the European Commission, for any anti-competitive activity. It is, increasingly, spreading its wings to areas previously considered far outside its remit. In some cases, such news is welcome, as when The Economist recently reported on the Summit Against Violent Extremism, “arranged by Google Ideas”. Importantly, the network effects of Googling are nothing compared to the network effects of Facebook, at least for now.
Meanwhile, Facebook announced “something awesome” this past week, which turned out to be the somewhat underwhelming news of group chat and video chat functionality, the latter the product of a collaboration with the soon-to-be-Microsoft-owned Skype. It’s interesting to consider whether the audiences for the two platforms overlap enough for this to be too much of a good thing; by allowing video chat on Facebook you might make Skype a much less crowded place, very quickly. The 750m users of Facebook are both a boon and a potential source of trouble for Skype. One interesting moment came when the camera cut further back in the press conference to reveal the journalists recording the event. They were not, as you might think if you had watched too many West Wing episodes, all diligently leaning forward, facing the person speaking. Rather, as the picture above demonstrates, they were arranged facing perpendicular to Mr. Zuckerberg, furiously typing away on their laptops. They weren’t reporting for tomorrow’s newspapers – or yesterday’s – they were reporting live, a constant stream of data for the data-hungry populace to instantly discuss and further disseminate.
Mr. Zuckerberg spoke confidently on Moore’s Law, applying it to the continuing growth in the use of applications and tools by users on Facebook. Zeitgeist is in no position to question Zuckerberg’s thinking, yet it would seem that Moore’s Law describes the acceleration of technological capability. Here, Zuckerberg is trying to apply it to sociological developments, rooted as they are in a technological sphere. However, since Zeitgeist’s blog has not yet quite reached 750m users, we’ll defer to Zuckerberg’s opinions on the subject.
Who will win this showdown for social hegemony depends rather upon who you ask, but also upon what metric you’re looking at. Zuckerberg, rather dismissively, said it wasn’t about the number of users, but about how much they engaged with content. He may change his mind if news of a Facebook exodus in mature markets continues, and if Google has anything to say about it.
The Consumption Conundrum
A quick thought while Zeitgeist takes a well-deserved break in the hinterlands of the Côte d’Azur, one that centres on the continued desire for content and immediate access, versus a dilapidated infrastructure for providing that content. A recent front-page article from film industry trade paper Variety expressed concerns over who will be able to fill the shoes of the next head of the Motion Picture Association of America, a post previously held by the much-loved Jack Valenti and latterly by the effective Dan Glickman. The post requires juggling many balls and keeping disparate parties happy, from the cultural binaries of Washington and Los Angeles to the contrasting desires of consumer and corporation (the issue of Net Neutrality being a particularly important example).
One principal concern for whoever takes hold of the reins will be the continuing threat of piracy, and the fear of ending up like the moribund music industry. One significant move that Glickman was able to implement was ensuring the creation of a “copyright czar” post at the White House. Worries continue, though, as, according to the article, “technology advances make Internet speeds ever faster”. While this is true in a normative sense, in practice things are not so simple. For while improvements in technology may make computers ever more capable of handling more data at faster speeds, the delivery systems that support the transfer of this data are not being kept up to date, specifically in the US and UK. Telco networks AT&T and O2 have both recently pulled their unlimited data plans for mobile use. What is the impact on services like Facebook, Twitter and Foursquare? Unfortunately it can only be negative, as users may begin to worry about updating their status if it will push them over their data limit for that month.
All these moves – including other industry machinations such as the decision by Hulu, a free, legal website, to begin charging – will serve only to further consumer confusion and distance brands from their audiences.