Not that we like to dwell on “I told you so” situations, but Zeitgeist has been rambling on about the missed opportunities of WhatsApp – relative to its Asian counterparts Line and WeChat – for at least a year now. The Facebook-owned platform has long had a real opportunity to borrow a page from its Eastern peers, particularly with regard to B2C services. It was hugely gratifying, therefore, when it was announced last week that WhatsApp will allow businesses to send messages to users of the platform.
Whatsappening in business
The Financial Times suggests example messages along the lines of “fraud alerts from banks and updates from airlines on delayed flights”. It’s about random companies sending you somewhat-tailored messages. Snore. The potential here is so much more monumental. Think of the potential for a fast-food service, or a news publisher (we said think; we’re not going to do all your work for you). What the platform won’t do is start serving banner ads in the app: firstly because Facebook surely acknowledges what a horrendous impact this would have on UX; secondly because WhatsApp strongly pushes its end-to-end (e2e) encryption.
Interestingly, the way this will work is that Facebook will get access to your phone number (if you haven’t succumbed to their pleas asking for it already). It will formalise the link between your old-school Facebook account and your not-so-old-school-but-not-quite-Snapchat-either WhatsApp account, as suggested by New York magazine. Apparently Facebook will also be able to offer you friend suggestions. Whew, yeah, because that’s a tool I’m really concerned about and wish were more useful and efficient.
The potential we referred to earlier (we’re still not going to do all your work for you) is around chatbots. Chatbots and this new era for WhatsApp surely make sense. And people are clamouring for them. According to eMarketer’s data from May, nearly 50% of UK internet users say they would use a chatbot to obtain quick emergency answers if the option were available. About 4 in 10 also said they would use a chatbot to forward a question or request to an appropriate human.
Whatsappening in the rest of the world
But to say WhatsApp has been missing the boat in terms of additional data insight or revenue streams outside Western markets is a touch unfair. As the FT detailed at the beginning of the month,
“Whether you are in the market for a nicely fattened goat from the United Arab Emirates or freshly caught fish in the port of Mangalore in India, you can place your order on WhatsApp”
Indeed, it seems that outside Western markets the app is used in an entirely different way. Even within Europe there are differences. In Spain it is extremely common to make and receive calls over WhatsApp; in the UK, many a caller has been befuddled by my attempts to reach them via the platform. The likes of WhatsApp are particularly crucial in emerging markets like India, where many citizens have never registered for an email address and may never now do so. If this sounds ludicrous, it means you’re old. It’s why Facebook makes the aforementioned pleas for your phone number, why Twitter occasionally takes over the screen when you open its app to ask for it, and why, in a recent project engagement I managed, we recommended to a major international film and TV broadcaster that it do the same for its own login feature. The data below for emerging markets shows the astounding reach WhatsApp has managed (and the foresight in Zuck’s purchase of it):
While Benedict Evans of Andreessen Horowitz says the platform has struggled to acquire new customers for businesses versus Facebook and Instagram, it has undoubtedly been successful in strengthening relationships with existing customers. This is fine in Zeitgeist’s eyes. Retention is cheaper than acquisition; if you create a good CX, you have far less need to worry about winning new customers. The emphasis should be on engendering loyalty, not on scrambling to reach the newbies all the time.
WeChat’s inimitable template
At the start of the piece we mentioned China’s WeChat (or Weixin) messaging platform, of which Zeitgeist is a big fan. Others are too, which is why by some estimates it’s worth $80bn. One of the advantages inherent in both WeChat and WhatsApp is that users have gravitated naturally to these applications, without needing to be incentivised or “walled garden”-ed into such interaction. And if you think such engagement doesn’t start before you’re old enough even to lift a mobile device: again, you’re too old. As The Economist detailed in a piece earlier this month,
“[Four year-old Yu Hui] uses a Mon Mon, an internet-connected device that links through the cloud to the WeChat app. The cuddly critter’s rotund belly disguises a microphone, which Yu Hui uses to send rambling updates and songs to her parents; it lights up when she gets an incoming message back”
For the child’s mother, WeChat has replaced such antiquated features as a voice plan, as well as email. The application also integrates features for business use that mimic those of Slack in the US. According to the article, she even scans business associates’ profiles via QR codes more often than she exchanges business cards. QR arrived a little late in Western markets and, despite the intentions of agencies like Ogilvy in the 2010s, has failed to take off. WeChat’s owner, Tencent, has used its powerful brand and robust authentication to convince millions to part with their credit card details; the likes of Snapchat and WhatsApp have yet to make a convincing case for this. It is this crucial element that allows the father of said family to use the app for eCommerce, contactless payments in store, utility bills, splitting the bill at restaurants, paying for taxis, food delivery, theatre tickets and hospital appointments, all within the WeChat ecosystem. It is then no surprise that a typical user interacts with the app at least ten times a day.
Although we said no incentivisation has been necessary, a state-backed campaign last Chinese New Year offered millions of dollars in prizes in return for people vigorously shaking their handset during a TV show – a way both to have the app interact with a TV programme and for users to make new friends who are also users. The Economist reported that “punters did so 11 billion times during the show, with 810m shakes a minute recorded at one point”.
McKinsey reported last year that 15% of WeChat users had made a purchase through the platform; data from the same consulting firm this year shows that figure has more than doubled, to 31%. Can such figures be replicated in the West? Time and culture have led to WeChat’s pervasive effectiveness and dominance. Just as QR codes never took off in the West, SMS and email never took off in China, so there was no incumbent messaging platform to wean people off. What some people had used was Tencent’s messaging platform QQ, whose successor became WeChat, and QQ contacts were easily transferable. Gift-giving idiosyncrasies, leveraged and promoted with a big marketing push, and online games (from which over half of revenues derive) remain, for Western consumers and platforms respectively, nascent behaviours and territories.
It’s fascinating, of course, that none of these apps for a moment considers charging for voice calls; that would be anachronistic and simply bizarre. With its latest announcement, WhatsApp takes a step in the right direction, opening up additional revenue streams while also trying to develop a more cohesive ecosystem for its user base. Whether users in Western markets will be comfortable with a consolidation of features on one platform – owned by a company that some view as already having consolidated too much data on them – is an open question, and surely the first hurdle to tackle.
On the face of it, organisations around the world seem – to borrow a phrase from last year’s Bond film Spectre – like “a kite dancing in a hurricane” as they try to counter the creative destruction that is being wreaked on them by new customer trends, sales channels and competing entrants, facilitated by digital.
In February, McKinsey published a podcast entitled Achieving a Digital State of Mind, saying that digital profoundly impacted “business models, customer journeys, and organizational agility”. That same month, Boston Consulting Group, another consultancy, upped the ante. For those lost at sea in a world of hashtags and start-ups, BCG offered Navigating a World of Digital Disruption. In it, the firm continues the nautical analogy, warning of an impending third – and most destructive – wave of digital disruption, with “profound implications not only for strategy but also for the structures of companies and industries”.
So what to make of news in The Economist this week that indirectly shows the rather pathetic impact – not to mention particularly calm seas – of all this disruption? While stories of Uber disrupting Luddite taxi firms around the world are commonplace, The Economist reports that things are only getting better for the successful legacy companies at the top: “A very profitable American firm has an 80% chance of being that way ten years later. In the 1990s the odds were only about 50%”. How to account for increased chances of long-term, consistent success in a world where your USP and customer base are stolen from right under your nose by a newer, nimbler, digital doppelganger, supposedly the moment you turn your back? The article continues:
Unfortunately the signs are that incumbent firms are becoming more entrenched, not less. Microsoft is making double the profits it did when antitrust regulators targeted the software firm in 2000.
The Economist reasons that increasingly concentrated ownership, coupled with an onerous regulatory environment, is to blame. It is sad to see that while digital reshapes work cultures, shapes strategy and provides new opportunities, it cannot compete with themes as old as business itself: monopolies and red tape.
We’ve written several times over the years about the deployment of Big Data. One of the key challenges with such tools is the seductive risk of treating the data as a catch-all answer to a question never asked. Zeitgeist in the past worked with a large public-sector client that understood this pitfall and studiously avoided it by knowing beforehand what Big Data meant to it, and how the data could be used to improve its strategy and operations.
Without such forethought, applications of Big Data can be ineffectual, if not outright harmful, as the president of eBay Marketplaces said last year in an interview with McKinsey. Governments around the world – particularly in the West – have been using Big Data for some time now to help identify extremists. The jury is still out for some as to how harmful government digital surveillance can be. The deliberate weakening of virtual systems has its roots in the fact that the US government originally classified once-arcane cryptography as a munition, which, when licensed abroad, was watered down. “The idea of deliberately weakening cryptography in the name of national security has not gone away”, writes The Economist. An article published in The New Yorker earlier this year investigated the NSA’s uses of Big Data – specifically mass surveillance of individuals in the US and beyond via cellphone metadata, social media and the like – and found it wanting. This appears to be partly because there is no predetermined strategy for what the agency wants the data to do, other than to figuratively chuck it onto the pile with the rest of the data it holds, which at some point might be used. The efficacy of such a practice, according to the article, has been minimal; in all of its surveillance, the article claims, there was but a single case:
“Patrick Skinner, a former C.I.A. case officer who works with the Soufan Group, a security company, told me… ‘We knew about these networks,’ he said, speaking of the Charlie Hebdo attacks. Mass surveillance, he continued, ‘gives a false sense of security. It sounds great when you say you’re monitoring every phone call in the United States. You can put that in a PowerPoint. But, actually, you have no idea what’s going on.’
By flooding the system with false positives, big-data approaches to counterterrorism might actually make it harder to identify real terrorists before they act. Two years before the Boston Marathon bombing, Tamerlan Tsarnaev, the older of the two brothers alleged to have committed the attack, was assessed by the city’s Joint Terrorism Task Force. They determined that he was not a threat. This was one of about a thousand assessments that the Boston J.T.T.F. conducted that year, a number that had nearly doubled in the previous two years, according to the Boston F.B.I. As of 2013, the Justice Department has trained nearly three hundred thousand law-enforcement officers in how to file ‘suspicious-activity reports.’ In 2010, a central database held about three thousand of these reports; by 2012 it had grown to almost twenty-eight thousand. ‘The bigger haystack makes it harder to find the needle,’ Sensenbrenner told me. Thomas Drake, a former N.S.A. executive and whistle-blower who has become one of the agency’s most vocal critics, told me, ‘If you target everything, there’s no target.’”
This last quotation applies to strategy in general. Without anything specific to focus on as a strategic achievement or direction, one shouldn’t expect any improvement in that area.
Back in July of this year, while schoolchildren dreamt of holidays and commuters sweated their way to work, management consultancy McKinsey sat down with Devin Wenig, president of eBay Marketplaces. The interview is above; we pick out some highlights below, as Wenig pontificates on the future of bricks-and-mortar stores, the change needed in marketing, the fallacy of big data and what will make for good competitive advantage over other retailers in the months and years to come. Often with talking heads the output can be generic and anodyne; Wenig, though, offers some insightful thoughts.
The future of the store: “I think stores are going to become as much distribution and fulfillment centers as they are full-fledged shopping experiences… They’ll become technology enabled so that you can go to a store and see enough inventory, but you may shop “shoppable windows.” We’re building those right now for retailers around the world. You may end up hollowing out the real estate, where the showroom is a much smaller part of the footprint, and the inventory and the distribution center become more of that footprint.”
How marketing needs to change: “There are still many instances that I see where it is old-school marketing. It’s still about major TV campaigns, get people into the stores. That’s still important, and that’s not going to go away. But understanding how to engage in a world of exploding social networks, how to use search, how to use catalog, how to optimize, and how to engage—very different skills.”
Competitive advantage: “I think the answer is data… While from the merchant standpoint incredible selection may seem great, from the consumer standpoint it can be overwhelming. I actually don’t want to shop in a store with a billion items for sale, I’m just looking for this. Data is the way to connect a long-tail advantage with consumers that oftentimes want simplicity.”
Executing on strategy: “Great data is both art and science. There’s a lot of press about the science; there’s not as much about the art. But the truth is that judgment matters a lot… we bring quantitative analysis to that to say, “The right way to look at our customers is this, not this,” even though there are infinite ways we could.”
The fallacy of big data: “It’s not about big data, it’s about small data. Big data is useless… it’s about me connecting with you, my business connecting with you. You don’t want to be part of a big data set; you’re just looking to buy a shirt. And that’s about small data. That’s about understanding insights that I can glean about you that don’t feel intrusive, don’t feel creepy, and don’t feel artificial—but feel natural. That, to me, is the future. There are glimmers of success there. I wouldn’t say the industry has arrived. For all the rhetoric about data, it’s a work in progress, but a critically important work in progress.”
Merging experiences: “E-commerce [fulfills] a utilitarian function… Stores have an important element of serendipity… The future of digital commerce is trying to get the best of both… we’re trying to spur inspiration.”
Late last month, Zeitgeist went with friends to his local theatre to see “Teh [sic] Internet is a Serious Business”. The play tells the story of the founding of the hacktivist group Anonymous, whose emergence marked the most well-publicised dawn of cyberattacks on businesses and governments. The organisation, at its best, set its sights on groups that promoted the marginalisation of others, whether the Church of Scientology in the US or those trying to dampen the Arab Spring in Tunisia. This collective, run by people some of whom were still in school, showed the world how vulnerable institutions were to being targeted online. We wrote about cybersecurity as recently as this summer, summarising the key points of a recent report from The Economist on what is needed to mitigate future attacks and to reduce the damage such attacks inflict. The issue is not going away (and in fact is likely to become worse before it gets better).
It was back in January that management consultancy McKinsey produced a report, ‘Risk and responsibility in a hyperconnected world: Implications for enterprises’, where they estimated the total aggregate impact of cyberattacks at $3 trillion. There is much to be done to avert such losses, but the current picture is far from rosy. Most tech executives gave their institutions “low scores in making the required changes”, the report states; nearly 80% of them said they cannot keep up with attackers’ – be they nation-states or individuals – increasing sophistication. Moreover, though more money is being directed at this area, “larger expenditures have not translated into an increased maturity” yet. And while the attacks themselves carry potentially devastating economic impact on a company, their prevention comes at a price too for the business, beyond the financial. McKinsey reports that security concerns are delaying mobile functionality in enterprises by an average of six months. If attacks continue, the consultancy posits this could result in “a world where a ‘cyberbacklash’ decelerates digitization [sic]”. Revelations about pervasive cyberspying by Western governments on their own citizens could well be a catalyst to this. Seven points are made in the report for enterprises to manage disruptions better:
- Prioritise the greatest business risks to defend against and invest in.
- Provide a differentiated approach to defending assets, based on their importance.
- Move from “simply bolting on security to training their entire staff to incorporate it from day one into technology projects”.
- Be proactive; develop capabilities “to aggregate relevant information” to attune defence systems.
- Test. Test. Test again.
- Enlist CxOs to help them understand the value in protection.
- Integrate the risk of attack with other corporate risk analysis.
Given the number of business and social issues that involve digital processes – “IP, regulatory compliance, privacy, customer experience, product development, business continuity, legal jurisdiction” – there is huge disagreement about how far the state should be involved in mandating the steps enterprises must take to protect themselves. This is an important point of discussion, though, and one we touched on when we wrote about cyberattacks previously.
But that report was way back in January; things must have solved themselves since then, right? Last week, PwC reported that corporate cyber-security budgets are being slashed, even as cyberattacks become far more frequent. The FT reported that global security budgets fell 4% year on year in 2014, while the number of reported security incidents increased 48% – and bear in mind these are only reported incidents. This is potentially no bad thing, if we are to go by McKinsey’s diagnosis of too much money being thrown at the problem in the first place. At the same time, it’s not exactly comforting.
Only a few days after PwC’s figures were published, JP Morgan revealed that personal data for 76 million households – about two-thirds of total US households – had been “compromised” by a cyberattack that had happened earlier in the year. Information stolen included names, phone numbers and email addresses of customers. It was also revealed that other financial institutions were probed too. Worryingly, the WSJ reports that investigators disagree on what exactly the hackers did. It was also unclear who was to blame; nation state or individual. Such disagreements over the ramifications of the attack, the identity of the attackers as well as the delayed revelation of the attack itself, illustrate just how necessary transparency is, if such attacks are to be better protected against and managed in the future.
For those in London at the end of the month, The Economist is hosting an event on October 21, open to those who apply, examining “how businesses can and should respond to a data breach, whether it stem from a malicious insider, an external threat or simple carelessness”. Hope to see you there.
How to define innovation, how has it been studied in the recent past, and what does future innovation hold for the human race?
Sometimes the word innovation gets misused, like when people use the word “technology” to mean recent gadgets and gizmos, instead of acknowledging that the term encompasses the first wheel. “Innovation” is another tricky one. Our understanding of recent thinking on innovation – as well as its contemporary partner, “disruption” – was thrown into question in June, when Jill Lepore penned an article in The New Yorker that put our ideas about innovation, and specifically Clayton Christensen’s, in a new light. Christensen, heir apparent to fellow Harvard Business School bod Michael Porter (author of the simple, elegant and classic The Five Competitive Forces that Shape Strategy), wrote The Innovator’s Dilemma in 1997. His work on disruptive innovation – claiming that successful businesses focused too much on what they were doing well, missing what, in Lepore’s words, “an entirely untapped customer wanted” – created a cottage industry of conferences, companies and counsels committed to dealing with disruption (not least this blog, which lists disruption as one of its topics of interest). Lepore’s article describes how, as Western society’s retelling of the past became less dominated by religion and more by science and historicism, the future became less about the fall of Man and more about the idea of progress. This thought took hold particularly during the Enlightenment. In the wake of two World Wars, though, our endless advance toward greater things seemed less obvious;
“Replacing ‘progress’ with ‘innovation’ skirts the question of whether a novelty is an improvement: the world may not be getting better and better but our devices are getting newer and newer”
The article goes on to look at the handpicked case studies Christensen used in his book. When Christensen describes one of his areas of focus, the disk-drive industry, as being unlike any other in the history of business, Lepore rightly points out that its sui generis nature “makes it a very odd choice for an investigation designed to create a model for understanding other industries”. She goes on for much of the article to utterly debunk several of the author’s case studies, showcasing inaccuracies and even criminal behaviour on the part of those businesses he heralded as disruptive innovators. She also deftly points out, much in line with the thinking of Taleb’s Black Swan, that failures are often forgotten, while those that succeed are grouped and promoted as formulae for success. Such is the case with Christensen’s apparently cherry-picked case studies. Writing about one company, Pathfinder, that tried to branch out into online journalism, seemingly too soon, Lepore comments,
“Had [it] been successful, it would have been greeted, retrospectively, as evidence of disruptive innovation. Instead, as one of its producers put it, ‘it’s like it never existed’… Faith in disruption is the best illustration, and the worst case, of a larger historical transformation having to do with secularization, and what happens when the invisible hand replaces the hand of God as explanation and justification.”
Such were the ramifications of the piece that, when questioned on it recently in Harvard Business Review, Christensen confessed “the choice of the word ‘disruption’ was a mistake I made twenty years ago”. The warning to businesses is that just because something is seen as ‘disruptive’ does not guarantee success, or fundamentally that it belongs in any long-term strategy. Developing expertise in a disparate area takes time and investment, in terms of people, infrastructure and cash. And for some, the very act of resisting disruption is what has made them thrive. Another recent piece in HBR makes the point that most successful strategies involve not a single act of deus-ex-machina thinking-outside-the-boxness, but rather sustained disruption. Though Kodak, Sony and others may have rued the days, months and years they neglected to innovate beyond their core areas, the graveyard of dead businesses is surely also littered with companies that innovated too soon, in the wrong way, or at too great a cost, leaving them open to things other than what Schumpeter termed creative destruction.
Outside of cultural and philosophical analysis of the nature and definition of innovation, some may consider of more pressing concern the news that we are soon to be looked after by, and subsequently outmanoeuvred in every way by, machines. The largest and most forward-looking (and therefore not necessarily most likely) of these concerns was recently put forward by Nick Bostrom in his new book Superintelligence: Paths, Dangers, Strategies. According to a review in The Economist, the book posits that once you assume there is nothing inherently magical about the human brain, it is evident that an intelligent machine can be built. Bostrom worries, though, that “Once intelligence is sufficiently well understood for a clever machine to be built, that machine may prove able to design a better version of itself”, and so on, ad infinitum. “The thought processes of such a machine, he argues, would be as alien to humans as human thought processes are to cockroaches. It is far from obvious that such a machine would have humanity’s best interests at heart—or, indeed, that it would care about humans at all”.
Beyond the admittedly far-off prognostications of the removal of the human race at the hands of the very things it created, machines and digital technology in general pose great risks in the near term, too. For a succinct and alarming introduction, watch the enlightening video at the beginning of this post. Since the McKinsey Global Institute published a paper in May soberly titled Disruptive technologies: Advances that will transform life, business, and the global economy, much editorial ink and celluloid (were either medium still in much use) have been spilled and spooled detailing how machines will slowly replace humans in the workplace. This transformation – itself a prime example of creative destruction – is already underway in the blue-collar world, where machines have replaced workers in automotive factories. The Wall Street Journal reports that Chinese electronics makers face pressure to automate as labour costs rise, but are challenged by the low margins, precise work and short product life of the phones and other gadgets the country produces. Travel agents and bank clerks have also been rendered all but obsolete, thanks to that omnipresent machine, the Internet. Writes The Economist, “[T]eachers, researchers and writers are next. The question is whether the creation will be worth the destruction”. The McKinsey report, according to The Economist, “worries that modern technologies will widen inequality, increase social exclusion and provoke a backlash. It also speculates that public-sector institutions will be too clumsy to prepare people for this brave new world”.
Such thinking gels with an essay in the July/August edition of Foreign Affairs, by Erik Brynjolfsson, Andrew McAfee and Michael Spence, titled New World Order. The authors rightly posit that in a free market the biggest premiums are reserved for products with the most scarcity. When even niche, specialist work, such as in the arts (see the video at the start of this article), can be replicated and performed at scale by machines, then labourers and the owners of capital are at great risk. The essay makes good points on how, while a simple economic model suggests that technology increases overall productivity for everyone, the truth is that its impact is more uneven. The authors astutely point out,
“Today, it is possible to take many important goods, services, and processes and codify them. Once codified, they can be digitized [sic], and once digitized, they can be replicated. Digital copies can be made at virtually zero cost and transmitted anywhere in the world almost instantaneously.”
Though this sounds utopian and democratic, what it actually does, the essay argues, is propel certain products to superstardom. Network effects create a winner-take-all market. Similarly, it creates disproportionately successful individuals. Although there are many factors at play here, as the authors readily concede, they also maintain the importance of another, distressing theory;
“[A] portion of the growth is linked to the greater use of information technology… When income is distributed according to a power law, most people will be below the average… Globalization and technological change may increase the wealth and economic efficiency of nations and the world at large, but they will not work to everybody’s advantage, at least in the short to medium term. Ordinary workers, in particular, will continue to bear the brunt of the changes, benefiting as consumers but not necessarily as producers. This means that without further intervention, economic inequality is likely to continue to increase, posing a variety of problems. Unequal incomes can lead to unequal opportunities, depriving nations of access to talent and undermining the social contract. Political power, meanwhile, often follows economic power, in this case undermining democracy.”
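The quoted claim that “when income is distributed according to a power law, most people will be below the average” can be checked with a quick simulation. This is our own sketch, not from the essay, and it assumes a Pareto distribution with shape parameter 3 as a stand-in for a power-law income distribution:

```python
import random

# Sketch (assumed parameters): for a Pareto distribution with shape
# alpha = 3 and scale 1, the theoretical mean is alpha/(alpha-1) = 1.5,
# yet roughly 70% of draws fall below it, because the long right tail
# drags the average well above the typical value.
random.seed(0)
alpha = 3.0
n = 100_000
draws = [random.paretovariate(alpha) for _ in range(n)]
mean = sum(draws) / n
below = sum(d < mean for d in draws) / n
print(f"sample mean: {mean:.2f}; share of draws below the mean: {below:.1%}")
```

The heavier the tail (the smaller alpha), the larger the share of people below the average, which is the intuition behind the essay's point about inequality.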
There are those who say such fears of rising inequality, and of the destruction through automation of whole swathes of the job sector, are unfounded; that many occupations require a certain intuition that cannot be replicated. Time will tell whether this intuition, like an audio recording, a health assessment or the ability to drive a car, will be similarly codified and disrupted (yes, we’ll keep using the word ‘disrupt’, for now).
A recent McKinsey report declared that, for businesses, “The age of experimentation with digital is over“. That may be true for most B2B and B2C private-sector companies, but not for the luxury goods industry. Bemoaning the woeful development of, and investment in, strategic digital initiatives for luxury brands is something this blog has done once or twice before. There are understandable reasons why the industry has been reluctant to commit to online retail, based on customer insight (the assumption that HNWIs don’t like to shop for something without being able to see and touch it for themselves) and conflicting priorities (physical store expansion into China and more experiential events have been the name of the game in recent years). But with a China slowdown mooted, particularly in the area of luxury gifting, and no concrete research showing that HNWIs aren’t just as digitally savvy as their less liquid counterparts, there is less and less justification for what are, across the industry, woeful examples of digital strategy and innovation.
It can’t be easy for profitable businesses like LVMH, with an eye on quarterly earnings, to make drastic investments in the online space. Luxury’s brand equity often comes from provenance and tradition; a company’s roots are in its founding stores, the connotations of Milan, Florence, Paris, etc. They also worry about their neighbours; a flash-sale site or, worse, one full of counterfeit knock-offs, is always just a click away. From a logistical point of view, there is also the issue of back-end infrastructure to contend with. For several years, PPR (now Kering) ran much of its e-commerce business through Yoox, as we’ve talked about before. It would be wrong to single out those in luxury. L2 Thinktank recently tweeted with much excitement about Bacardi’s “cocktail discovery site” that worked seamlessly across web, mobile and tablet. Well, forgive us if we don’t leap for joy in an ecstasy of delirium, but this is 2014; that should be the minimum deliverable. Still, luxury is a sector in blatant need of redirection.
Burberry is lauded by many as an outlier in this world of luxury goods, a company that has truly embraced digital. For all the talk of such innovation though, the website itself is utterly dominated by a rote e-commerce site, as are its social networks such as Google+. It is in the physical stores where technological innovation has been injected. And this is supposedly the company pushing the rest of its peers forward. It comes as little surprise then that eConsultancy published a superb piece at the end of April excoriating the sector, leaving no brand unscathed. Headlines included “painfully slow load times”, “awful UX” and “not making much effort”. But the worst and most perplexing atrocity had to be the above screengrab, showing the purposeful hiding away of an e-commerce platform, one that was presumably quite expensive to source and implement in the first place. We can’t overstate the necessity of a clear user journey through to purchase, just as it would be difficult to overstate the number of luxury goods companies guilty of the sin for which Dolce & Gabbana has been singled out here.
On this note, Gucci’s recently relaunched mobile site – replacing, among other things, a tablet site that had been left to wither since 2010 – was welcome news to us, as it seemed to be (logically) to those wishing to actually part with their money on Gucci wares. L2 in May reported the news, saying that the new site now accounts for 27% of all traffic, a 150% YoY increase. Sounds good, except that also means traffic through the mobile site a year earlier was a mere 10.8% of the total, right? Terrible.
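A quick sanity check on those figures (a sketch; the 27% and 150% numbers are L2’s, and we’re assuming the 150% growth applies to mobile’s share of total traffic):

```python
# If mobile now accounts for 27% of traffic after a 150% year-on-year
# increase, the share a year earlier was 27% / (1 + 1.5).
current_share = 27.0    # percent, per L2's May report
yoy_increase = 1.5      # a 150% increase means 2.5x the previous figure
previous_share = current_share / (1 + yoy_increase)
print(round(previous_share, 1))  # → 10.8
```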
There are signs of hope. Gucci’s move to invest in a new mobile site, though monumentally belated, is a welcome one. As more brands cotton on to the importance of online, the Financial Times recently reported on the moves many are making to secure ‘.luxury’ suffixes, in the wake of ICANN’s expansion of generic top-level domains, if only to avoid the complications of cybersquatting. And Michael Kors, which seems only to be going from strength to strength every quarter, has praised its own social media presence for “driving international sales”. We’ve almost entirely focused on fashion brands here, but other companies within the luxury sector are getting the message loud and clear. Take the auction house Christie’s, a legacy company if ever there was one, having been founded in 1766. Not only have they dedicated time and energy to investing in major online auctions, they have also recently created a new sector vertical of ‘luxury’ within the house itself. New thinking might well take new talent; it will also take C-suite buy-in, as well as an acceptance that digital commerce is an integral part of business now, no matter how exclusive your product is.
It would be impossible to capture the disruptive influence the latest digital technologies are currently having on the world in a single blog post. But what Zeitgeist has collated here are some thoughts and happenings showing the different ways technology is changing our lives – from the way we do business to the way we interact with others.
Last night saw a highly enjoyable occurrence. No, not the Academy Awards in general, which as ever moved at a glacial pace as it ticked off a list of predicted favourites. Rather, it was a specific moment in the ceremony itself, when host Ellen DeGeneres took a (seemingly) impromptu picture of herself with a cornucopia of stars, tweeting it instantly. The host declared she wanted the picture (above) to be the most retweeted post ever. The previous holder was none other than the President of the United States, Barack Obama, whose re-election message saw over 500k retweets. It took Zeitgeist but a few minutes to realise that Ellen’s post would skyrocket past this. Right now it has been retweeted 2.7m times. Corporate tactic on the part of Samsung though it may have been, Zeitgeist found himself feeling much closer to the action – being able to see on his phone a photo the host had taken moments ago several thousand miles away – and the incident helped inject a brief air of spontaneity into the show’s proceedings. Super fun, and easy to get definitive results in this case on how many people were really engaging with the content. But can we quantify how much Samsung and Twitter really benefited from the move, beyond fuzzy marketing metrics? Talking heads on CNBC saw room for improvement (see below).
The big news of late in tech circles of course has been Facebook’s $19bn acquisition of messaging application WhatsApp. Many, many lines of editorial have been spilled on this deal already. In the mainstream media, many commentators have found the price of the deal staggering. So it’s worth reading more considered views such as Benedict Evans’, whose post on the deal Zeitgeist highly encourages you to read. Despite the seemingly large amount of money the company has been acquired for – especially considering Facebook’s purchase of Instagram for a ‘mere’ $1bn – Evans sagely points out that on a per-user basis the deal values WhatsApp at about the same level as Google’s valuation of YouTube when it purchased it. So perhaps not that crazy after all. The other key point that Evans makes is on Facebook’s dedicated pursuit to be the ‘next’ Facebook, or conversely to stop anyone else from becoming the next Facebook. With a meteoric rise in members (see image below, as it outstrips growth by both Facebook and Twitter), WhatsApp was certainly looking a little threatening.
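For a rough sense of the per-user arithmetic (a back-of-envelope sketch, using the roughly 450m monthly active users widely reported at the time of the deal):

```python
# Back-of-envelope price per WhatsApp user at acquisition.
deal_price = 19e9              # $19bn headline price
monthly_active_users = 450e6   # approximate figure reported at the time
price_per_user = deal_price / monthly_active_users
print(round(price_per_user, 2))  # → 42.22, i.e. roughly $42 per user
```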
The worry for investors is how Facebook will monetise this platform, when the founders have professed an aversion to advertising. Is merely ensuring that Facebook is the ‘next’ Facebook a good enough reason for such acquisitions? Barriers to entry and sustainable advantages will be few and far between going down this route. The Financial Times, in its analysis of the acquisition, points out that innovation is quickly nipping at the heels of WhatsApp. CalPal, for example, is a mobile application that lets users message each other from within an app. In the markets, there has been a relatively sanguine response to the purchase, but only because of broader trends. As the FT points out,
“External forces have also helped to push the headline prices of deals such as WhatsApp into the stratosphere. A global excess of cheap money, along with a scarcity of alternatives for growth-hungry investors, has boosted the stock prices of companies such as Facebook and Google.”
One of the most visibly exciting developments in technology in recent years is the explosion of the wearable tech sector. But it is Google’s flagship product, Glass, that has met with much ire and distress. An excellent piece of analysis appearing in MIT Technology Review last month hit the nail on the head when it identified why Glass was having trouble winning people over. The article rightly identifies the significant shift in external appearance inherent in switching from a device that needs to be taken out of a pocket – and so makes it clear when it is being interacted with (you need to cover half your face with the product to talk to someone, for example) – to one that sits permanently on your face. The article also details the savvy approach Google have taken to the distribution of their product. It’s always sensible to try and mobilise the part of your base likely to be evangelists anyway, so as to build advance buzz before a full-blown release. But to get them to pay for the privilege, as Google are doing with their excitable fans, dubbed Explorers, is a stroke of genius for them. However, the key issue, and what the article states is an “insurmountable problem”, is that “Google’s challenge in making the device a successful consumer product will be convincing the people around you to ignore it”. It is this fundamental aspect of social interaction that is worrying many, and now Google is worried too. As detailed in the FT, the company has acknowledged that the product can look “pretty weird”. Recognising it has a “long journey” to mainstream adoption, it published a list of Dos and Don’ts. Highlights include,
“Ask for permission. Standing alone in the corner of a room staring at people while recording them through Glass is not going to win you any friends… If you find yourself staring off into the prism for long periods of time you’re probably looking pretty weird to the people around you.”
It indicates that Google may have a significant ‘Glasshole’ problem it needs to attend to. The case may be overstated though. One of the problems may just be that potential customers have yet to see any practical uses for it. This is beginning to change. Last week, Virgin Atlantic announced a six-week trial of both Glass and Sony smartwatches. The idea will be for check-in attendants to use the devices to scan limousine number plates so that passengers can be greeted by name and be instantly updated on their flight status.
In the arts, digital technology has inspired much innovative work, as well as helped broaden its audience. David Hockney, one of England’s greatest living artists, recently exhibited a series of works produced entirely on his iPad at London’s Royal Academy of Arts. He is far from alone. Last week’s anniversary issue of The New Yorker featured work from Jorge Colombo on its front cover, again produced entirely on an iPad. Such digital innovation allows for increased productivity as well as new aesthetics. When done well, art can also involve the viewer, encouraging interaction. Digital technology helps with this too. Earlier in the year The New York Times covered how the New York City Ballet redesigned part of their floor in a new scheme to attract new visitors to the ballet. The result, roughly life-size pictures of dancers arranged on the floor, has seen great success, and an explosion of content on social media platforms like Instagram, where users have taken to posing on the floor as if interacting with the images (see above). It’s a simple tactic that now reaches a far greater audience thanks to new digital technologies.
A recently published book, ‘Now I Know Who My Comrades Are: Voices from the Internet Underground’, by Emily Parker, seeks to demonstrate the ways in which digital technology has helped to coalesce and support important activism in regions such as China and Latin America. But, as The Economist points out in its review, the disappointing situation in Egypt puts paid to some of the author’s claims; there are limits to how productive and transformative technology can be. In business, these hurdles are plain to see. A poll taken by McKinsey published last month shows that “45% of companies admit they have limited to no understanding on how their customers interact with them digitally”. This is staggering. For all executives’ talk of the power of Big Data, such technology is useless without the proper structures in place to successfully analyse it. We also perhaps need to think more about the repercussions of increased technological advances and how they influence our social interactions. In the recently opened film Her (starring Joaquin Phoenix, pictured below), set in the very near future, a new operating system is so pervasive and seamless that it leads to fraught, thought-provoking questions about the nature and productivity of relationships. When does conversation – and more – with a simulacrum detract from interactions with the physical world? These considerations may seem lofty, but as we illustrated earlier, the germination of such thoughts is being echoed in discussions over Google Glass.
So technology in 2014 heralds some promise for the future. Wearable tech as a trend is merely the initial stage of a journey where our interaction with computing systems becomes seamless. It is on this journey though that we need to make sure that businesses are making the most of every opportunity to streamline costs and enhance customer service, and that individual early adopters do not leave the rest of us behind to deal with a bewildering and alarming new way of living. One of our favourite quotations, from the author William Gibson, is apt to end on: “The future’s already here, it’s just not very evenly distributed”.
The videogame industry, like many of the protagonists in the games it creates, is under attack. The competition is fierce. Not only is there healthy competition amongst legacy companies – including Nintendo, Sony and Microsoft – but new devices are increasingly distracting consumers, and digital disruption elsewhere is changing the way these companies do business.
Part of the problem is cyclical; the market has gone longer than usual without a major new console launch from either Sony or Microsoft, which in turn makes game manufacturers hesitate to commit to new products. But the industry needs to be wary that its audience has changed, in multiple ways. Sony are now starting to talk about their PS4 (due to be released in around six months’ time), beginning with a dire two-hour presentation recently that failed to reveal price, release date, or an image of the console. And the word over at TechCrunch? “A tired strategy… [O]verall the message was clear: Sony’s PS4 is an evolution, not an about-face, or a realization that being a game console might not mean what it used to mean.”
We’ve written before on creative destruction in other industries, and talked before about shifting parameters for companies like Nintendo. The inventor of the NES and Game Boy is currently struggling with poor sales of its new console, while at the same time the chief executive of Nintendo America recently stated that digital downloads of videogames were becoming a “notable contributor” to their bottom line. Companies like Apple are surrounded by perpetual rumours of developing their own videogame platform. New entrants, such as the Kickstarter-funded company behind the $99 Ouya, are among several players shaking up the industry. The upshot of such turmoil – a “burning platform” as the Electronic Arts CEO described the situation in 2007, referring to the dilemma of holding onto the burning oil rig and drowning in the process, or risking a jump off into who-knows-what – is a loss of market share. Accenture in January published a report predicting the demise of single-use devices such as cameras and music players, whose revenues would be eaten into as more and more consumers flocked to tablets and other multi-purpose gadgets. Videogame console purchase intent was not researched, but it is not hard to make the analogy.
It was enlightening and reassuring then to read McKinsey Quarterly’s recent interview with Bryan Neider, COO of Electronic Arts. Some interesting takeaways follow. First, in 2007, the company recognised there was a problem: “game-quality scores were down and our costs were rising”. The company wanted to shift from having a relationship with retailers to having one with gamers. This meant a focus on digital delivery; this fiscal year, digital is forecast to represent 40% of the overall business. Neider recognises that this closeness to the consumer makes the company even more susceptible to gamers’ whims and preferences, so it is relying far more on data-backed analysis than ever before, including a system with profiles of over 200m customers. This data is used for everything from QA to predicting game usage. Neider elaborates,
“Key metrics answer the following questions: where in the game are consumers dropping out? What is the network effect of getting new players into the game? How many people finish a game? Did we make it too difficult or too long? Did we overdevelop a product or underdevelop it? Did people finish too fast? Those sorts of things are going to be critical… However, the challenge is that parts of the gaming audience are pretty vocal—they either really like a game or they really don’t like it. The trick is to find ways to get feedback from the lion’s share of the audience that is generally silent and make sure we’re giving these people what they want.”
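As a toy illustration of the first of those metrics – where players drop out, and how many finish – here is a minimal sketch (hypothetical data and function names; EA’s actual telemetry is obviously far richer):

```python
from collections import Counter

def funnel_metrics(furthest_levels, total_levels):
    """Per-level survival rates plus the overall completion rate,
    given the furthest level each player reached."""
    counts = Counter(furthest_levels)
    players = len(furthest_levels)
    survival = []
    still_playing = players
    for level in range(1, total_levels + 1):
        survival.append(still_playing / players)  # share who reached this level
        still_playing -= counts[level]            # these players stopped here
    completion_rate = counts[total_levels] / players
    return survival, completion_rate

# Example: six players in a three-level game; each number is the
# furthest level that player reached before stopping.
furthest = [1, 2, 3, 3, 2, 1]
survival, completion = funnel_metrics(furthest, 3)
# survival falls from 1.0 at level 1 to a third at level 3;
# completion shows that 2 of the 6 players finished the game.
```

A sharp drop in the survival curve between two levels is exactly the “where in the game are consumers dropping out?” signal Neider describes, and an unusually fast or slow completion rate feeds the “too difficult or too long?” question.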
Interestingly, the company’s structure was changed to reflect individual fiefdoms organised by franchise – be it FIFA or Need for Speed – each managed as its own line of business. Each vertical competes with the others to deliver the highest rates of return, while also being able to draw on central resources (marketing, for example). Electronic Arts, as a developer of software for other manufacturers’ devices, will to some extent always be at the mercy of which devices are in vogue and the cycle of obsolescence. It is impressive though to see that the company has recognised the need to change the way it does business. The operational and technological sides of the business don’t seem to have distracted Neider from the key insight of the industry: “Ultimately, we’re in the people business”.