Be More Human

What is it that makes us human? Many of the characteristics that appear unique to our species also exist elsewhere in nature. Higher-order animals can demonstrate self-awareness, tool use is widespread, from crows to beavers, and creativity is often on display, from birdsong to the pufferfish that carve highly decorative patterns in the sand. The true definition of humanness is something scientists and philosophers have struggled with for eons. Cambridge psychology professor Simon Baron-Cohen believes he has the answer: it is our ability to invent that sets us apart from the rest of the animal world.

In a new book, The Pattern Seekers: A New Theory of Human Invention, Baron-Cohen suggests that genetic development led to a distinct cognitive leap around 100,000 years ago, producing a 'systemising mechanism' that made human invention possible. A bird may use a rock as a tool to access food, but that is achieved through simple cause and effect and lacks the foresight of true invention. Humans have a unique capacity for causal reasoning that sets us apart from other creatures. It allows us to think beyond immediate consequences and create new inventions, whether computer algorithms, smartphones or DNA sequencing.

I have long believed that good ideas can come from anyone, thanks to our capacity for causal reasoning. The challenge, however, is the limitations put in place by work or education. There's a perception that a scientist is not a creative thinker and an artist is not an engineer. The current emphasis on STEM (science, technology, engineering and maths) does not help either, suggesting that there is greater value in these areas and less in creative subjects. Yet the process of invention is a creative endeavour. New ideas result from making different and unusual connections, which is exactly what creativity is about. Not only that, but innovation is largely a collaborative process that brings together a range of skills and ways of thinking.

Not everyone is bound by the limited definitions of scientist and artist, which is why we are still able to invent. In education this interdisciplinary approach is being formalised through trends such as STEAM – STEM with the addition of the Arts. A creative approach is how we will solve many of our current and future challenges. We may not all become the next Ada Lovelace, Hedy Lamarr or Rosalind Franklin, but there's no question that the ability to invent or innovate exists within all of us.

Generation Emoji

When I saw the news that Apple would be releasing 217 new emojis into the world, I did what I always do: I asked my undergraduates what it meant to them. "We barely use them any more," they scoffed. Apparently, emojis are now only used by 'middle-aged people' like their parents. "And they use them all wrong anyway," my cohort from generation Z added earnestly.

My work focuses on how people use technology, and I’ve been following the rise of the emoji for a decade. With 3,353 characters available and 5 billion sent each day, emojis are now a significant language system. When the emoji database is updated, it usually reflects the needs of the time. This latest update, for instance, features a new vaccine syringe and more same-sex couples.

But if my undergraduates are anything to go by, emojis are also a generational battleground. Like skinny jeans and side partings, the ‘laughing crying emoji‘, better known as 😂, fell into disrepute among the young in 2020 – just five years after being picked as the Oxford Dictionaries’ 2015 Word of the Year. For gen Z TikTok users, it’s millennials who are responsible for rendering many emojis utterly unusable – to the point that some in gen Z barely use emojis at all.

Research can help explain these spats over emojis. Because their meaning is interpreted by users, not dictated from above, emojis have a rich history of creative use and coded messaging. Apple’s 217 new emojis will be subjected to the same process of creative interpretation: accepted, rejected or repurposed by different generations based on pop culture currents and digital trends.

Face the facts

When emojis were first designed by Shigetaka Kurita in 1999, they were intended specifically for the Japanese market. But just over a decade later, the Unicode Consortium, sometimes described as “the UN for tech”, unveiled these icons to the whole world.

In 2011, Instagram tracked the uptake of emojis through user messages, watching how the emoji 🙂 eclipsed the old text emoticon :-) in just a few years. The Unicode Consortium now meets each year to consider new types of emoji, including emojis that support inclusivity. In 2015, a range of skin tones was added to existing emojis. In 2021, the Apple operating system update will include mixed-race and same-sex couples, as well as men and women with beards.
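
Under the hood, many of these inclusive emojis are not single characters but Unicode sequences composed from existing code points. As a rough Python illustration (whether a given device shows these sequences as a single glyph depends on its fonts and OS version):

```python
# Rough illustration of how emoji are composed from Unicode code points.
thumbs_up = "\U0001F44D"          # 👍  base emoji
medium_skin = "\U0001F3FD"        # Fitzpatrick type-4 skin tone modifier
print(thumbs_up + medium_skin)    # 👍🏽  base + modifier render as one glyph

# Couples are Zero Width Joiner (ZWJ) sequences: several emoji glued together.
zwj = "\u200D"
heart = "\u2764\uFE0F"            # ❤️  heart + variation selector-16
woman = "\U0001F469"
couple = woman + zwj + heart + zwj + woman
print(couple)                     # 👩‍❤️‍👩  "couple with heart: woman, woman"
print(len(couple))                # 6 – one visible emoji, six code points
```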

The End of English?

Not everyone has been thrilled by the rise of the emoji. In 2018, a Daily Mail headline lamented that “Emojis are ruining the English language“, citing research by Google in which 94% of those surveyed felt that English was deteriorating, in part because of emoji use.

But such criticisms, which are sometimes levelled by older generations, tend to misinterpret emojis, which are after all informal and conversational rather than formal and oratorical. Studies have found no evidence that emojis have reduced overall literacy.

On the contrary, it appears that emojis actually enhance our communicative capabilities, including in language acquisition. Studies have shown how emojis are an effective substitute for gestures in non-verbal communication, bringing a new dimension to text. A 2013 study, meanwhile, suggested that emojis connect to the area of the brain associated with recognising facial expressions, making a 😀 as nourishing as a human smile. Given these findings, it’s likely that those who reject emojis actually impoverish their language capabilities.

Questions about the impact of emerging forms of media communication are not new. When SMS became popular amongst teenagers, there was a suggestion that highly abbreviated words, or 'text speak', might be harmful to literacy. Research did not find any significant negative impact; in children, the use of abbreviated or phonetic spelling generally supported literacy and language development. In particular, the research found that children who were coining new forms of text speak demonstrated a strong grasp of the structure of language. As with jazz, it seems you have to know the rules to break them.

Although current research has focused on the emotional and sociological effects, we can see from everyday usage that emojis also bring a level of creativity. There are stories, novels, celebrity biographies and even The Bible written in emoji. They have also been adopted by artists, both as a medium of expression and as a tool to bring new meaning to existing visual art.

Creative criticism

It's not only about which emojis are used; specific emojis also carry different, sometimes confusing meanings for different generations. Although the Unicode Consortium has a definition for each icon, including the 217 Apple is due to release, out in the wild they often take on new meanings. Many emojis have more than one meaning: a literal one and a suggested one, for instance. Subversive, rebellious meanings are often created by the young: today's gen Z.

The aubergine 🍆 is a classic example of how an innocent vegetable has had its meaning creatively repurposed by young people. The brain 🧠 is an emerging example of the innocent-turned-dirty emoji canon, which already boasts a large corpus.

And it doesn’t stop there. With gen Z now at the forefront of emerging digital culture, the emoji encyclopaedia is developing new ironic and sarcastic double meanings. It’s no wonder that older generations can’t keep up, and keep provoking outrage from younger people who consider themselves to be highly emoji-literate.

Emojis remain a powerful means of emotional and creative expression, even if some in gen Z claim they've been made redundant by misuse. This new batch of 217 emojis will be adopted across generations and communities, with each staking their claim to different meanings and combinations. The stage is set for a new round of intergenerational mockery.

A version of this article first appeared in The Conversation on 25.2.21.

What’s New for 2021?

What happened last year might be best summed up by a quote attributed to Lenin: 'There are decades when nothing happens and there are weeks when decades happen.' In the initial few months of the pandemic, McKinsey found that video calling for work more than doubled, in what they described as a five-year technology leap. Zoom, the clear winner among video platforms, reported a jump from 10m daily meeting participants in December 2019 to 200m by March 2020. Teams and Google Meet also saw large user increases, and Slack's paid customer base doubled in 2020.

More Tools for Digital Work and Personal Lives

With the significant increase in online working, there has been much discussion as to whether this shift represents a permanent change. Previously, companies treated office attendance as a form of compliance, with working from home often regarded as the less productive option. The experience of 2020 demonstrated that the compliance argument is largely false: you don't need to be sitting at an office desk to send emails or join meetings. Observers suggest that post-pandemic, up to 20% of face-to-face work will permanently move online, and many other jobs will be done through a hybrid online/office model. Even with some movement back into physical offices, 2021 will continue to see growth in online work-related tools. One trend is the use of plug-ins that enhance the presenter experience for video conferences and meetings. A good example is mmhmm (chosen, presumably, because you can say the brand name with your mouth full). The software replaces the camera feed in Zoom or Google Meet with a single, integrated screen in which the presenter, slides and videos are managed together. It avoids the 'can anyone see my screen?' moment and offers a much slicker presenter experience.

Changes in 2020 weren't just about work. Ofcom data revealed that the proportion of UK users making personal video calls jumped from 34% to over 70%, with the majority made on Facebook's platforms, WhatsApp and Messenger. The app downloads reported by Apple and Google are also telling: in 2020, Zoom, TikTok and Disney+ were the most downloaded apps, highlighting our need to stay connected and entertained. Amongst the stores' most recommended apps were Endel (stress reduction) and Loona (sleep management), indicating an unsurprising trend towards digitally-based wellbeing.

User-driven Innovation

Most of us faced a rapid learning curve with online working, both getting to grips with the technology and finding the most effective ways to use it. That was often a process of individual discovery, supported by a greater sharing of life-hack solutions. It meant that in 2020 we all became innovators. Accenture's design and innovation consultancy, Fjord, identified this trend as Do It Yourself Innovation, highlighting bicycle repair pop-ups and online workout platforms as examples. The same trend has also led to burgeoning hyperlocal businesses, exemplified by artisanal food production such as micro-bakeries – an area likely to buck the downward trend in physical retail. Although initially driven by necessity, DIY innovation offers considerable potential in the coming year. Digital platforms are providing opportunities to monetise innovative or creative endeavours. One example is TikTok's partnership with Shopify, a significant development now that the platform has matured beyond lip-sync and dance challenges.

Changing Cities

More working from home has led to a shift in commuting patterns, alongside a desire to avoid the infection risks of public transport (not to mention the urgent need to reduce carbon footprints). The drop in physical retail has also raised significant questions about the role of city centres. In many places there was an explosion in cycling, not just for commuting but also for pleasure, along with increased sales of electric bikes and scooters. Even in the UK, where electric scooters are largely not yet street legal, the retailer Halfords reported a three-fold increase in sales, and the country trialled electric scooter hire schemes in some cities. The evidence from both public transport use and house purchases shows a move away from busy city centres towards more localised living. Somewhat prophetically, in early 2020 the mayor of Paris proposed a concept called the 'ville du quart d'heure' – the quarter-hour city – in which all the main amenities are available within a 15-minute walk or cycle ride. Many of these behaviour shifts appear permanent, so expect to see a rise in sales of electric personal vehicles alongside more localised, specialised retail in 2021.

High Anxiety

Inevitably, the high level of video calling in 2020 brought its own specific problems. Broadly referred to as Zoom anxiety, there are considerable negative consequences to staring at a screen for hours: with fewer non-verbal signals, there is much greater anxiety. Studies also found high levels of stress associated with the technology challenges that most of us experienced. 2020 also highlighted a digital divide. Moving meetings or education online reveals a host of inequalities in connectivity, technology and even working and living spaces. Ofcom, for example, reported that over 50% of those aged 75+ do not use the internet. That is a worrying sign in an ageing population, where isolation leads to poor mental health and increased mortality rates. In 2021, governments, technology providers, businesses and educators will need to take significant steps if they want to prevent the digital divide from deepening these inequalities.

AI – The Good, The Bad and The Ugly

2020 also saw the inevitable march of artificial intelligence. Although we are currently at the machine learning, or weak AI, stage, the last year produced many new examples of the technology's potential. DeepMind's AI found the solution to a long-standing conundrum in protein folding. Whilst that is a specialist application, a good demonstration of the possibilities for broader use was Adobe Photoshop's Neural Filters. Although face swapping and ageing have been available in social media platforms for a while, Photoshop's filters applied them with greater sophistication to high-resolution images. Inevitably, AI also had its fair share of blame. Its use to predict A level results led to a debacle in which the UK Prime Minister blamed a 'rogue algorithm'. And herein lies the challenge for AI: there was nothing rogue about the algorithm; it functioned exactly as it was programmed to do, based on the parameters and data it was given. Technology will often be blamed for human problems, but as we move forward into the next year, there needs to be a broader understanding of the bias that is built into all algorithms and data*. Further challenges of AI were highlighted by Channel 4 with their alternative Christmas message for 2020. They created a deepfake version of The Queen delivering a manipulated Christmas speech, demonstrating some of the dangers associated with the spread of these technologies.
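
To illustrate that point, here is a deliberately simplified, hypothetical sketch – not the actual Ofqual model, and with invented school data – of a grading rule that caps individual results by a school's historical performance. The code does exactly what it is told, yet a strong student at a historically low-attaining school can never receive a top grade:

```python
# Hypothetical illustration of bias baked into data, not the real exam algorithm.
# Each student's teacher-assessed grade is capped by the best grade their
# school achieved historically - the rule is deterministic, not "rogue".

school_history = {
    "Leafy Grammar": ["A*", "A", "A", "B"],   # historically high-attaining
    "Inner City High": ["C", "C", "D", "E"],  # historically low-attaining
}
GRADE_ORDER = ["A*", "A", "B", "C", "D", "E", "U"]  # best to worst

def capped_grade(teacher_grade: str, school: str) -> str:
    """Return the teacher grade, unless it beats the school's historical best."""
    best_historical = min(school_history[school], key=GRADE_ORDER.index)
    return max(teacher_grade, best_historical, key=GRADE_ORDER.index)

# Two students with identical teacher assessments, different schools:
print(capped_grade("A", "Leafy Grammar"))    # A - allowed by history
print(capped_grade("A", "Inner City High"))  # C - capped by history
```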

The Dopamine Effect

Whilst the challenge of digital device addiction has been recognised for some years, the Netflix documentary The Social Dilemma brought it to the fore. That was especially pertinent in a year when we spent more time online than ever before. There are many facets to the digital addiction challenge, but it can be summed up by the Like. These social affirmations generate small hits of dopamine that build addiction in much the same way as recreational drugs. They put users in a constant state of low-level anxiety in which they continually seek more dopamine hits – more likes. This addictive behaviour is monetised by the social media platforms, who package it for advertisers to the point at which the user becomes the product. Understandably concerned, Facebook wrote a rebuttal to some of the points made in the documentary, accusing the programme of lacking nuance and scapegoating social media for wider societal problems. Arguably, if Facebook were that concerned about these issues, they might simply remove or limit the Like button. Author and academic Shoshana Zuboff was interviewed in The Social Dilemma; her book The Age of Surveillance Capitalism offers a more detailed argument on the challenges that social media create. 2021 will undoubtedly see the debate continue on the responsibility of the social media platforms, the need for greater moderation and continued calls to break up Facebook's properties. It looks like it will be a year in which our relationship with technology moves forward and is questioned in equal measure.

* I would thoroughly recommend reading Hannah Fry's excellent book Hello World, which gives a good understanding of how algorithms are made. And if you want to know more about data bias, have a look at Caroline Criado Perez's Invisible Women and Safiya Umoja Noble's Algorithms of Oppression.

The Next Thing After Mobile? It Won’t be VR or The IoT, It’ll Be Cyborgs

The always-there, always-on smartphone has become our core computing device. Benedict Evans of Andreessen Horowitz described it succinctly: "Smartphones are the sun and everything else now orbits around it." I would identify the tipping point as June 2010 and the launch of the iPhone 4. The power of that phone, and of the devices that followed, was greater than a Cray-2 supercomputer from the 1980s – a machine that cost the equivalent of $32 million and filled a room. Fast forward a few decades, and when users have a supercomputer in their pocket, everything changes. Before 2010, the most powerful consumer device was generally the one on the office desk. With the advent of high-power mobile computing, it was people, not businesses, who took control. Now we can search the web, download apps, connect on social media, or take pictures of our dinner anywhere and at any time we like.

Where do we go after mobile? I would suggest that the next big thing is … mobile, still. There's been plenty of debate about how new devices might replace smartphones. It's been suggested that the Internet of Things (IoT), such as wearables and connected devices, might be the next thing after mobile. I'm less convinced. Consider how we use the IoT: whether it's a smartwatch, a Fitbit or a Nest, these devices still need a mobile device to drive them. Smartphones typically provide the input and output, in audio or visual formats, so these emerging technologies are essentially satellites of a core mobile computer. IoT devices enhance the experience, but I would suggest our phones will remain largely the same for now. Sure, phones are going to get faster, the screens will be brighter and maybe bigger, and the cameras will get better (though battery life will still be poor). Fundamentally, though, mobile devices will not change significantly.

A development that I find interesting is voice-controlled intelligent assistants. Amazon's Echo has caught people's imagination, Google Home launched with much interest and there's been talk that Apple will have a similar offering soon. These devices are essentially speakers with ever-listening microphones (scary) that use cloud-based artificial intelligence. There's even an attempt in Japan to turn them into a virtual girlfriend. Potential love interest aside, are these the next big thing after mobile? Probably not. Whilst they are proving popular right now, I would argue they are little more than a fancy egg timer (one report suggested this was the most used function of the Echo). These speakers are stop-gap technologies, waiting for the likes of Siri on mobile devices to catch up. With better speakers on phones, the Echo will be redundant.

Some commentators are hoping that mixed realities, such as virtual or augmented reality, are the logical next step for devices. Will our phone be replaced by a pair of glasses? Benedict Evans, in a recent article, raised a number of challenges that augmented-reality glasses will need to address before they become mass market. AR and VR are interesting technologies, but as I've previously blogged, I believe they have specific uses that make them niche devices.

Right now, there's nothing that replaces the computing power and audio-visual interface of a mobile phone. That leads me to one conclusion: mobile devices will only be replaced when we no longer need that interface, and computing is embedded in people. Yup, cyborgs. Elon Musk has stated that if we want to beat the robots we need to become part of them. He spoke about 'neuroprosthetics' that would tap into neural activity to communicate complex ideas telepathically. Once you can do that, the mobile interface becomes less necessary. The SpaceX/Tesla boss is not the only person thinking about embedded computing as a future device. In Sweden, a company is offering employees the option of a rice-grain-sized implant instead of an ID card. More ambitiously, Cyborg Nest has developed an implant called North Sense that acts as a compass and direction finder.

It is conceivable that technologically enhanced bodies will become the core computing devices, replacing mobile phones. That will move us into a world in which the distinction between on- and offline all but disappears. A few technology outliers, such as Neil Harbisson, are already embracing the idea of cyborgs. Understandably, most of us are worried or even repelled by the idea. Yet Harbisson uses his implant to address his colour blindness, and what if computer implants could improve the lives of people recovering from a stroke or living with dementia? The cyborg question raises many ethical and philosophical issues that society hasn't yet addressed. Maybe the concept of cyborgs isn't that far-fetched after all. We are already attached to our smartphones: they are right next to us all of the time, and we are utterly reliant on them for communication and information. Maybe we have already become cyborgs by proxy?

The Trouble With VR

I was watching yet another virtual reality (VR) experience at an ad agency the other day. Before it began, our demo guy said to the assembled group, 'Stand well back, as last week I punched someone.' It demonstrates both the benefits and the challenges of VR. It's a highly immersive experience, something many brands strive for, but it is also a closed experience that removes the user from the real world.

VR isn't that new. In the early 1960s, Morton Heilig came up with the Sensorama. It had vision, it had sound, it had movement. It even had smell … now why didn't Oculus think of that? The next iterations came in the late 90s. It was a time of optimism in technology, and companies such as Atari were creating expensive, futuristic-looking headsets. The problem was not just the price; the technology also struggled to deliver a convincing virtual experience. In the last couple of years we've seen the launch of numerous new headsets – Oculus, HTC Vive, Samsung Gear and even Google's Cardboard. The technology has caught up with the concept. We have retina screens, we have gyroscopes and, above all, we have sufficient computing power to reduce the lag that made the 90s versions somewhat vomit-inducing.

With the new generation of devices, we’re beginning to discover applications for VR. Naturally there is strong appeal in gaming. The immersive nature makes it ideal for driving, shooting or fantasy formats. Outside of the gamer world, useful VR applications are being developed in medicine, engineering and architecture. It’s also becoming a useful research tool in retail. Virtual stores can be easily built and different configurations tested on shopper focus groups.

Not to be outdone, advertising agencies are also jumping on the VR bandwagon, aiming to deliver that elusive, immersive brand experience. And therein lies the problem: they're replicating existing experiences without creating true audience engagement. So far, brands have built some well-produced, mildly interesting, yet obvious applications. From Audi to Volvo, a number of brands have created virtual driving experiences. Fine, but really, I'd just like to test-drive the car. Getting a feel for the vehicle on real roads is far more important than looking out on a simulated mountain pass. Occasionally there are some nice executions, such as Merrell's VR trek, but I would argue that all of these are just stunts.

I see two main problems with VR in the brand context. Firstly, consumer adoption rates are low. Though much cheaper than the 90s versions, current devices are still pricey, and while people were happy to splash out similar sums on smartphones and iPads, the limited application of VR makes purchasing less appealing. Brands therefore need to deliver high-end VR experiences in situ, for example in a store. Unlike other in-store experiences, such as digital billboards, VR offers fewer possibilities for the consumer to interact with their own mobile device or connect to social media. For brands, interaction is often key to creating scale through sharing with a wider audience. VR doesn't easily lend itself to scale.

The second, greater challenge is that brands are struggling to understand VR as a medium. The problem comes from the immersive nature of these new devices: they make a connection in the first person. That naturally works in gaming, but brands think of their engagement in the third person, delivering a message to a remote viewer. Digital, and particularly social media, is making brands more two-way, more interactive. However, the 'share your selfie/like this hashtag' approach is still very much third-person thinking – an interaction that aims to create scale through sharing, to be viewed in the third person. Because VR is immersive and first person, it requires a rethink of how brands approach their audience. For now, though, VR is a tactic for brands – useful for delivering stunts with a short hit of interest.

Digital Strategy: from here to the future

This blog takes a deeper look at digital planning and strategy in brand marketing and advertising. It accompanies a useful overview of these strategic tools from my Birmingham City University colleague, Mike Villiers-Stuart in his blog, Overview: Working The Methodology. This article considers how we can deliver strategic approaches that bring value in a constantly changing landscape of digital channels and platforms. In short, how can we develop an effective digital (and future-proof) methodology?

Platforms, Channels and Formats

“There was a time when people felt the internet was another world, but now people realise it’s a tool that we use in this world.”  Tim Berners-Lee

Since the explosion of the internet in the mid-1990s we have seen the continued growth of digital marketing channels (Meeker, M. 2015). Within the broad definition of ‘online’ there are an increasing number of platforms that offer opportunities for brand advertising; desktop/laptops, smartphones, tablets and, potentially, wearable devices. Eventually we might even include the connected fridge as a platform. These technologies have driven the emergence of new media channels, specifically web, mobile or social media. These channels have led to a plethora of formats from display ads to paid search to native content or video ads. As a result of this combination of platforms, channels and formats, marketing and advertising has become an increasingly complex landscape.

The other side of this challenge is that consumer behaviour in the brand context has become similarly complex. Arguably, our basic human needs, as defined by Maslow remain largely the same. The difference is that these needs are being played out in many different places. Way, way back, before the internet, advertising and marketing had limited touch points. The aim of advertising was to create something memorable, such as a jingle, a catch phrase or a concept, in the hope that consumers would remember them later at the point of purchase. In the UK the 1980s was a heyday for creative advertising. People such as John Hegarty (Levi’s 501), John Webster (Cadbury Smash, John Smiths, Courage Best) or Tony Kaye (Real Fires, British Rail, Dunlop) were writing adverts that were memorable, often repeated, and memed in playgrounds.

In today's digital landscape the problem is not simply the number of channel options, but also that consumers adopt new ones faster than brand advertising does. Larry Downes identifies an increasing gap between consumers and businesses (Downes, L. 2009) due to the exponential growth of the technology that drives digital media. It means that brands in digital are constantly playing catch-up with their audience.


The Role of Planning and Strategy

‘If you have all the research, all the ground rules, all the directives, all the data — it doesn’t mean the ad is written. Then you’ve got to close the door and write something — that is the moment of truth which we all try to postpone as long as possible.’ David Ogilvy

The primary purpose of advertising and marketing is to create brand value, sell products and services, and through that, to create a return on investment (ROI). Simply placing adverts without any strategy in a complex digital media landscape is unlikely to deliver a return. Although some brands have tried this approach, advertising needs something more than just a ‘spray and pray’ strategy.

As early as the 1960s, even with fewer channel choices than now, it became apparent to some UK advertising execs that account managers could not rely on intuition or guesswork to develop their campaigns. Two people are credited with the development of the discipline known as planning: Stephen King at JWT and Stanley Pollitt at BMP (Morison, M A et al 2012). They believed that they could produce much better results for clients if advertising professionals looked beyond pure marketing research and interpreted data in a more meaningful way. The discipline has since matured, or to use an advertising term, 'rebranded' into a range of roles. The Account Planning Group[1] lists a dozen key positions, and creative recruitment sites include terms such as account planner, strategic planner, strategist, creative strategist, and an old favourite, 'brand anthropologist' (Morison M A et al, 2012, Pg 7). The precise definition of who does what, and especially who sits where in the pecking order, seems to be cause for considerable debate. Tracy Fellows, Chair of the Association of Account Planning, said there is "a culture emerging in our industry that isn't clear on what strategy is or what it does" (Tiltman, D, 2011). In spite of this confusion, there are some broad principles that underpin the discipline and go beyond a single job definition. Some would even argue that job definitions are, in fact, irrelevant.[2] Ultimately, any strategy aims to take a brand from pain to gain – identifying a problem and finding a solution. That boils down to understanding research, developing some insight and, through that, offering the brand a creatively led solution or 'moment of truth'. Regardless of what we title it, strategy is a meeting of analysis and creativity.

The Rise of Digital

Even in the 60s, the amount of market research was too much for an account manager to interpret (Morison M A et al, 2012). Now, with an explosion of digital channels, we have many more measures and much more research data. Alongside the market research offered by companies such as Nielsen, Kantar, IPSOS and Comscore, planners – or strategists, for that matter – can also access the more exotic analytics data generated by digital channels. Typically this covers usage information, such as web or app analytics[3], which works in conjunction with primary research conducted by a brand or agency.

Cartoon: Tom Fishburne, Marketoonist.com
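
To give a flavour of what that usage data looks like in practice, here is a minimal sketch, using pandas and an entirely invented page-view log, of the kind of aggregation a planner might run before layering primary research on top:

```python
# Minimal sketch: turning a raw page-view log into planner-friendly metrics.
# The log and column names are invented for illustration; real analytics
# tools (Google Analytics and the like) expose similar aggregates directly.
import pandas as pd

pageviews = pd.DataFrame({
    "user_id": ["u1", "u1", "u2", "u3", "u3", "u3"],
    "channel": ["search", "display", "social", "search", "social", "social"],
    "page":    ["/home", "/product", "/home", "/product", "/product", "/checkout"],
})

# Unique visitors and page views per acquisition channel
summary = pageviews.groupby("channel").agg(
    visitors=("user_id", "nunique"),
    page_views=("page", "count"),
)
summary["views_per_visitor"] = summary["page_views"] / summary["visitors"]
print(summary)
```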

At the other end of the process is the deployment of marketing and advertising: what tactics will the marketer use in order to deliver the strategy? As new platforms develop, tactics will inevitably change. The shift, and the resulting opportunities for advertisers on digital platforms, has been tracked closely for some years by Mary Meeker's annual Internet Trends report. Other indicators of the shift include Google's ad revenue for search and display, which nearly doubled to $20bn in a decade (reported in Ad Week, July 2016), and social media advertising, where Facebook has seen revenues go from zero to $3.3bn in less than 10 years (Seetharaman, D. 2016, WSJ).

Such rapid growth creates a confusing landscape in which to deploy brand engagement. One example is the Internet Advertising Bureau's Ad Unit Portfolio[4], which shows hundreds of options for just one of many advertising channels: display advertising. Another indicator is the growth in technology providers in the digital landscape. Scott Brinker, in the Chief Marketing Technologist blog, found that the number of players in the market almost doubled between 2014 and 2015[5].

The challenge for advertising agencies that stems from a changing landscape is that of risk. How do we prove engagement in new, emerging channels? Brands, by their very nature, tend to be risk-averse and shy away from what they see as untried channels. Therefore the role of strategy becomes more important in helping brands to understand the new landscape and manage that risk (and, of course, to spend the money).

Technology as a Tool

One of the biggest challenges of the modern digital landscape is letting a particular platform or technology drive a campaign. It's not unusual to hear a client say 'put a QR[6] code on it', 'we want an app', 'let's use iBeacons[7]' or 'we want an augmented reality campaign'. In some ways it is understandable: an emerging platform such as the smartphone is very feature-driven, and so the technology sits at the forefront. The challenge is that tech-driven thinking is not strategic; all it does is describe the tools used to deliver a campaign. Imagine commissioning a portrait but only telling the artist which brushes to use. It's no different in advertising. Describing a technological feature fails to show an understanding of the audience or the context in which they might be engaged. It's important to keep in mind that the technology itself is agnostic – there is nothing wrong with it – but without a strategy it is often used poorly.

Furthermore, a lack of strategy in emerging platforms creates a 'me too' approach – using a channel simply because everyone else seems to be doing it. When some brands are early into a channel and see a small measure of success, others jump on the bandwagon. User-generated content in social media is a good example of this. There's a Tumblr dedicated to it, tellusyourstoryblog.tumblr.com, which documents the numerous brands that have asked users to share their story. The most surprising of these is Preparation H, the pile cream. A strategic approach might reveal that another 'send us your selfie' campaign is not the best way to engage an audience in this context.

From Pain to Gain: Strategic Methodologies

Today's planning discipline in advertising agencies is essentially an applied practice rather than a theoretical concept. Although there are supporting theories, strategic methodologies are rooted in the practical delivery of a campaign to an audience – the straightforward pain-to-gain process. An effective strategic tool can be understood in three stages: research, insight and creative. Within each stage there are a number of smaller processes that contribute to the development of the strategy. Although it is often described in a linear way, the reality is that development doesn't always take such a simple path. For example, creative might be considered at an early stage in the process and then reviewed or changed following further strategic insights.

In a complex digital world, this process can understandably become confusing. In order to formulate an effective strategy it is therefore necessary to use a framework. Agencies often like to invent their own, but three published examples are DPDDD (Villiers-Stuart, M. 2016), SOSTAC (Chaffey, D. 2012) and Storyscaping (Legorburu, G. & McColl, D. 2014). These are not solutions that provide an answer; rather, they are tools that describe a flow of development through which to deliver an effective, meaningful strategy.

Strategic Realists

Realists or cynics? It all depends on your perspective. With the rise of digital channels, strategic methodologies have become more complex, and inevitably some argue that this creates an unnecessary mystification of the process. Chief amongst them is Byron Sharp. He proposes that advertising is not sufficiently underpinned by empirical research and suggests that the objectives of many strategic methodologies are false. Strategists may focus on creating brand value through storytelling that will deliver brand fans or advocates. Advertisers will cite passion brands such as Harley-Davidson or Apple, whose brand stories create advocates who purchase more frequently and encourage others to do so; the most referenced example of such brand fans are those with the Harley-Davidson tattoo. Sharp's research identifies the concept of the fan as largely a myth, finding that purchase and repurchase rates are similar for most brands. A little more vociferous in his argument is Bob Hoffman, the Ad Contrarian (2007). A long-standing ad man, Hoffman believes that brand strategists are mistaken in identifying consumers as 'fans'; he suggests that what is described as 'brand advocacy' is nothing more than convenience and habit. He would go so far as to argue that brands are misled by their agencies, who are creating something of an 'emperor's new clothes' around what is fundamentally a simple process. He suggests that this is why brands deliver inappropriate campaigns such as social media 'tell us your story' pushes, smartphone apps that are hardly used, and banner ads that are rarely clicked on. The Strategic Realists believe that it is really not that complicated – arguably it's in the interests of agencies (and their fees) to make it seem so. They are typified by an irreverent, almost punk spirit that seeks to break down much of the pompousness of current advertising strategy. An Open Letter to All of Marketing and Advertising is a good example of this approach (Anon, 2010), which ends with the plea:

‘If you’d like to tell me what’s good about your product, fine. I may buy it. I may not …But, not to put too-fine-a-point on it, please, please, PLEASE, if you wouldn’t mind, awfully. Leave me alone. Thanks.’

Tools for The Job

All of this presents our core question: which strategic methodology, if any, is the most appropriate? Perhaps the solution is to take a strategic approach to strategy itself. Just as technology is a tool, a strategic framework is simply there to do a job – the job of good advertising. The choice of tool should consider the values of the agency, the needs of the client and the campaign objectives. Thus, the DPDDD methodology is an effective tool for clients who are results-driven and looking for an end-to-end solution. SOSTAC, on the other hand, is designed to meet the needs of a marketing-led approach. Storyscaping is a more in-depth tool that can help build the brand idea, especially in an omni-channel landscape. It lends itself well to engagement for passion brands, but it may overcomplicate the process where consumers are uninvolved with the brand, as with the classic FMCG product, a packet of soap powder.

These are just examples of how the tools might be applied and are far from exhaustive. Ultimately the choice of methodologies should be underpinned by the basic principles of strategy – identify some relevant research, critically assess it to create some insight and develop creative that meets the needs of the brand.

Bibliography

Legorburu, G. & McColl, D. Storyscaping: Stop Creating Ads, Start Creating Worlds, 2014

Villiers-Stuart, M. Overview: Working the Methodology, 2016,
http://blogs.bcu.ac.uk/futuremedia/2016/10/07/overview-working-the-methodology/

Meeker, M. Internet Trends 2016, 2015, KPCB

Morison, MA et al Using Qualitative Research in Advertising, 2012 (2nd Ed) Sage Publications

Downes, L. The Laws of Disruption: Harnessing the New Forces That Govern Life and Business in the Digital Age, 2009, Basic Books

Tiltman, D. The death of the big idea and the future of strategy, 2014, Brand Report
http://www.brandreportblog.com/the-death-of-the-big-idea-and-the-future-of-strategy-david-tiltman-warc/

Seetharaman, D. Facebook Revenue Soars on Ad Growth, 2016, WSJ
http://www.wsj.com/articles/facebook-revenue-soars-on-ad-growth-1461787856

Sharp, B. How Brands Grow, What Marketers Don’t Know, 2010, OUP

Hoffman, B. The Ad Contrarian, Getting beyond the fleeting trends, false goals, and dreadful jargon of contemporary Advertising, 2007

Anon, An Open Letter to All of Advertising and Marketing, 2010, PSFK
http://www.psfk.com/2010/08/an-open-letter-to-all-of-advertising-and-marketing.html

[1] There is a list of key planning roles to be found at http://www.apg.org.uk/#!apg-planningjobguide/c1lkf

[2] This might be controversial to some who believe the difference is significant, such as Jinal Shah who argued the importance of the planner vs strategist definition in his Constant Beta blog: http://jinalshah.com/2012/05/29/lets-fuckin-set-the-record-straight-account-planners-and-digital-strategists-are-not-the-same/

[3] These kinds of analytics are typically drawn from individual web-site based logs into formats such as Google Analytics. In addition there are aggregations of this data through tools such as GS Stats Counter or Google Trends to highlight just two.

[4] The IAB Ad Unit Portfolio [http://www.iab.net/adunitportfolio] looks at 'display ads' such as web banners, mobile and video advertising formats.

[5] A rise from 947 in 2014 to 1,876 in 2015 http://chiefmartec.com/2015/01/marketing-technology-landscape-supergraphic-2015/

[6] QR codes originated as a means of tracking parts in the vehicle industry. They were adopted in Japan as a means of delivering a URL to a mobile device due to the complexities of translating Japanese into Roman-type URLs. However, there are few examples of successful campaigns in Europe and the US. I discussed this problem with Graham Charlton at Econsultancy (2013) in the following blog post: https://econsultancy.com/blog/62397-qr-codes-the-good-the-bad-and-the-ugly/

[7] I looked at the challenge for Beacons in this blog post: https://brandsandinnovation.com/2014/10/28/beacons-the-saviour-of-retail-probably-not/

Does Length Matter? Twitter’s 10K Character Dilemma

Twitter recently suggested that it might increase the length of Tweets to 10,000 characters. Unsurprisingly, it created something of a Twitter storm. Social media users are passionate about their networks and rarely like change – the same happened when the 'faves' star was changed to hearts. But what would longer Tweets mean? Some commentators suggested it would make the social media site no different from any other blogging platform. That points to Twitter's underlying identity crisis: it doesn't know what it is any longer. Celebrities and their audiences have mostly left Twitter for Instagram. Perhaps they are simply driven by narcissism, but it's very telling that four of the top ten Instagram accounts are from the Kardashian clan. Twitter, though, seems to have become the place for politicians' indiscretions, journalists Tweeting their own articles and the middle class moaning at brands over service failures.

That is Twitter's broad problem. Over the last year its growth has slowed considerably, with just over 300m active users in 2015 – well below expectations. Compare that to WhatsApp: the messaging platform is rapidly approaching 1 billion users. Since its IPO, Twitter has seen a fall in its share price, so it needs to raise revenue (and investor confidence). For the micro-blogging site, that means bringing in more advertising, but it has not managed to deliver the expected revenues. Although it has grown, its advertising remains a bit-part player next to Facebook's highly successful offering. In part that's because it lacks the reach of its competitor, but the key to Facebook's ad success has been to create a walled garden and keep users within the site. Twitter is trying a number of formats to address this issue; it recently launched a Conversational Ad format with call-to-action options. In a similar vein, longer Tweets mean that users should (in theory) spend more time in the channel. And that's good for advertising.

But what about the users? The complaints about the changes are, in part, a reflection that the audience cares about Twitter. Ultimately, though, social media sites must evolve, and Twitter has regularly added new features – from the (user-driven) hashtag to its recent Moments. However, I think the problem with longer Tweets is that they go against the prevailing trend: we are moving to shorter, message-based content.

Snapchat is a good example of where social media is going. The ten-second life of pictures and videos has caught the imagination of 200m+ users. The FT reported in September 2015 that the app had 6 billion video views per day – a three-fold increase in seven months and rapidly approaching Facebook's figure of 8 billion views per day. The fact is that from content to our attention spans, everything is getting shorter (as a Microsoft study found). Certainly Twitter has to evolve, but the answer probably doesn't lie with longer Tweets.

Why the Internet of Things Needs More Personality

It seems as though everything is becoming connected. It's not just smartwatches from the likes of Apple or Samsung; it's also cars, homes, health, industry and agriculture. We have connected babies (well, onesies), WiFi-sniffing cats and even a (slightly pointless) connected yoga mat. That's all very well, but the mere existence of technology does not equal adoption. CueCat is my favourite example of a large technology investment with no user take-up. When it comes to the IoT, Michael Humphrey, writing in Forbes, summed it up well: we have an 'enthusiasm gap'.

Clearly, going from innovation to adoption is not easy. Bill Buxton talks about The Long Nose of Innovation: development happens over many decades until we create a truly usable product; the computer mouse and smartphone touch screens are two examples. How could we apply the long nose to the IoT? Some people suggest it will reach true adoption when it becomes invisible and we don't know it's there. That might be true in part, but I think there is a flip side – we need to create more enthusiasm by making the IoT more visible and giving objects a personality.

Things That Tweet
The micro-blogging channel has been put to good use, not just by people but also by Tweeting objects. We have Mars Curiosity (@marscuriosity), the tunnel-boring machine Big Bertha (@BerthaDigsCR99) and Tom Coates' Tweeting house (@houseofcoates). Fun? Yes. But it seems to go deeper than that. @houseofcoates has 1,400 followers (slightly more than I do), and some of them get into conversations with the house (and very occasionally, it replies).

Enchanted Objects
MIT Media Lab scientist David Rose harks back to the days of beautifully crafted artefacts that fulfilled specific tasks. He worries that the future of most objects will be little more than a black slab of glass, without any enchantment (and without personality). He is on a mission to create and promote enthusiasm through enchanted objects. Often these objects have fewer functions, but they perform them beautifully. He gives the examples of an umbrella whose handle glows when it is going to rain, and a medicine bottle that chirps to remind you to take a pill. Simplicity and delight are the key to the engagement.

Simple, Fun Experiences
Taking a cue from David Rose, if we are to engage with the IoT then we need to focus on simplicity and fun. The Smart Crossing, a recent Cannes Lions winner for Smart Cars, did just that. To discourage pedestrians from crossing in front of the traffic, they created a signal in which the red 'stop' figure danced. Not only that, but the moves were created by real people in a booth nearby. Of course, everyone waited at the lights, entertained for a few minutes by a dancing figure.

More Personality
Brad The Toaster is a more anthropomorphic incarnation. Though an artistic concept rather than a real product, it brings a personality to the problem of over-consumption. Brad is one of many connected toasters that can't be owned (he's more like a cat in that respect). You can look after Brad and use him, but if he is neglected he will simply give himself to someone else. This idea could be applied to other products, such as self-driving cars. Given that the vehicles we own spend most of their time parked up, it makes little sense to own a car; however, we have a strong emotional relationship with them. Even in a self-driving world where the car just appears when you want it, giving them up won't be so simple. Perhaps, though, if cars have a personality more like Brad The Toaster, we'll be more willing to switch to a simple rental model.

‘Clothes have Feelings Too’
Taking the Brad concept further, I've been developing an idea called The Internet of Clothes. In developed nations we buy too many clothes and wear very few of them. One solution is for your clothes to ask to be worn: they will Tweet you based on the weather, frequency of wear or occasion. And if you ignore them? They will contact a charity to arrange their own recycling.
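
As a minimal, hypothetical sketch of that logic (the garment data, thresholds and the post_tweet stand-in are all invented for illustration; a real build would connect tagged garments to a weather feed and a Twitter client):

```python
# Hypothetical sketch of the Internet of Clothes logic - names, thresholds
# and the post_tweet() stand-in are invented for illustration only.
from dataclasses import dataclass
from datetime import date

DONATE_AFTER_DAYS = 90  # ignored for three months -> offer to a charity

@dataclass
class Garment:
    name: str
    warmth: str          # "warm" or "light"
    last_worn: date

def post_tweet(message: str) -> None:
    """Stand-in for a real Twitter/notification client."""
    print(message)

def nag_or_donate(garment: Garment, todays_weather: str, today: date) -> None:
    days_idle = (today - garment.last_worn).days
    if days_idle > DONATE_AFTER_DAYS:
        post_tweet(f"{garment.name}: you've ignored me for {days_idle} days, "
                   "so I'm contacting a charity to rehome myself.")
    elif garment.warmth == "warm" and todays_weather == "cold":
        post_tweet(f"{garment.name}: it's cold out - wear me today?")

# Example run over a small wardrobe
wardrobe = [
    Garment("Green wool jumper", "warm", date(2016, 1, 10)),
    Garment("Linen shirt", "light", date(2015, 9, 1)),
]
for item in wardrobe:
    nag_or_donate(item, todays_weather="cold", today=date(2016, 3, 1))
```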

I hope that giving clothes a sense of personality can help people make better use of the resource. There's no reason why we can't do the same for other IoT objects. At the simplest level it makes us feel more engaged, but at a deeper level it's also about building an anthropomorphic relationship, which for us humans makes the whole IoT easier to comprehend.

Facebook’s Dislike Button. What’s Not To Like?

Speaking at a recent event in California, Mark Zuckerberg suggested that the social network would be introducing a new button. He said, "We have an idea that we're going to be ready to test soon, and depending on how that does, we'll roll it out more broadly." Although the Facebook CEO didn't name it as such, it has been branded the 'Dislike' button.


If it is implemented, this will be an interesting new step for Facebook. The current Like button, famously the result of a 2007 hackathon, was originally proposed as an 'Awesome' button. Realising that many posts of cats and people's children were less than awesome, it evolved into the Like we know today. The success of the button lies in both its binary simplicity and the fact that it is a positive acknowledgement of a post. Even when a post is serious or tragic, the act of Liking is widely understood to be positive and supportive.

For Facebook, there is a need to move forwards. At a time when many young users are switching to Instagram and WhatsApp (both owned by Facebook), it needs to innovate to encourage retention. The challenge of a Dislike button, though, comes from its very nature: it's a negative action. In a Wired article, Brian Barrett suggested that it will create a negative atmosphere that simply puts people off posting. Given the personal nature of these networks, it's easy to understand why users would be discouraged if disapproval were as simple as clicking a button.

The negativity of a Dislike button could potentially run even deeper, though. Unlike on Reddit, posts on Facebook are not ranked – one of the platform's benefits. Once there are two options, Like and Dislike, an inevitable sense of competitiveness will creep into posts, discouraging yet more users.

I’m sure Facebook are aware of the challenges, but they will need to tread carefully. Posts and shares are the lifeblood of Facebook and that in turn is what drives their advertisers. So in the end, the success of a Dislike button will probably come down to money.

It’s All in The Wrist Action … Apple Pay and The Apple Watch

I was excited by the thought of Apple Pay on my Watch. There's a (childish) appeal to being able to pay for stuff using just the device on my wrist. And it looks as if I'm not the only one. In June, mobile analyst Benedict Evans (@benedictevans) Tweeted: 'Apple Pay with a phone is still just taking something out of your pocket. Not transformative. With a watch it's amazing. End of friction'. A report released in August by Wristly found that 80% of Watch users have paid with the system and 78% do so at least once a week. With such a high uptake, does that make Apple Pay a rip-roaring success? The answer is, probably not.

I am one of the 80% who have used Apple Pay on the Watch, and it has been far from life-changing. It is good enough, but far from the great experience that Apple has delivered elsewhere. Double-clicking to 'prime' the card is fairly easy, although it's effectively a two-handed operation. Tapping in to pay can be tricky at times; the biggest challenge is getting the angle right on the reader. Readers are generally set up for the right-hand side, which is especially a problem on London's transport network: if you wear your watch on your left wrist, tapping in can be somewhat hit and miss. That's not great on TfL, where a nanosecond's pause will cause havoc and loud tutting from other commuters. Another challenge is availability in retailers. My UK experience is that very few outlets advertise Apple Pay, so for many shops it's a case of tapping to see if it works. Even on the Watch, then, there is still some friction.

In spite of the Wristly study, it's difficult to know the true uptake of the payment system – we don't know how many Apple Watches have been sold, and there have only been a couple of broader studies in the US. One survey from InfoScout, covering all Apple devices, pointed towards a drop in payment adoption rates, from 15% in March 2015 to 13% in June. The second study was a Gallup poll, which found that 65% of iPhone 6 users were aware of the payment system but only 21% had used it. Neither shows a comparison with the take-up of contactless cards, so there's no baseline against which to gauge success.

The Wristly study was a self-selecting sample of Watch users. It's reasonable to assume that these are early adopters of the device, who are likely to try out Apple Pay regardless of the experience. When it comes to a broader audience, an experience that's 'good enough' is probably not good enough to drive mass adoption. At the end of the day, Apple Pay is a good attempt at mobile payment, but it's hard to see how it will achieve real scale. That said, I'm going to keep using Apple Pay on my Watch. Not because it's any easier, but just because I can.