Be More Human

What is it that makes us human? Many of the characteristics that appear unique to our species also exist in nature. Higher-order animals can demonstrate self-awareness, tools are widely used by creatures from crows to beavers, and creativity is often on display, from birdsong to the pufferfish that make highly decorative patterns in the sand. The true definition of humanness is something that scientists and philosophers have struggled with for eons. Cambridge psychology professor Simon Baron-Cohen believes he has the answer – it is our ability to invent that sets us apart from the rest of the animal world.

In a new book, The Pattern Seekers: A New Theory of Human Invention, Baron-Cohen suggests that a genetic development around 100,000 years ago led to a distinct cognitive leap – the emergence of a ‘systemising mechanism’ – that made human invention possible. Whilst a bird may use a rock as a tool to access food, this is achieved through simple cause and effect and lacks the foresight of true invention. Humans have a unique capacity for causal reasoning that sets us apart from other creatures. It allows us to think beyond immediate consequences and create inventions, whether they be computer algorithms, smartphones or DNA sequencing.

I have long been a strong believer that good ideas can come from anyone, thanks to our causal reasoning. The challenge, however, lies in the limitations put in place by work or education. There’s a perception that a scientist is not a creative thinker and an artist is not an engineer. The current emphasis on STEM (science, technology, engineering and maths) is not helpful either: it suggests that there is greater value in these areas and less in creative subjects. Yet the process of invention is a creative endeavour. New ideas result from making different and unusual connections, which is exactly what creativity is about. Not only that, but innovation is largely a collaborative process that draws on a range of skills and ways of thinking.

Not everyone is bound by the limited definitions of scientist and artist, which is why we are still able to invent. In education, this interdisciplinary approach is being formalised through trends such as STEAM – STEM with the addition of Arts. A creative approach is the way in which we will solve many of our current and future challenges. We may not all become the next Ada Lovelace, Hedy Lamarr or Rosalind Franklin, but there’s no question that the ability to invent or innovate exists within all of us.

Generation Emoji

When I saw the news that Apple would be releasing 217 new emojis into the world, I did what I always do: I asked my undergraduates what it meant to them. “We barely use them any more,” they scoffed. Apparently, emojis are now only used by ‘middle-aged people’ like their parents. “And they use them all wrong anyway,” my cohort from generation Z added earnestly.

My work focuses on how people use technology, and I’ve been following the rise of the emoji for a decade. With 3,353 characters available and 5 billion sent each day, emojis are now a significant language system. When the emoji database is updated, it usually reflects the needs of the time. This latest update, for instance, features a new vaccine syringe and more same-sex couples.

But if my undergraduates are anything to go by, emojis are also a generational battleground. Like skinny jeans and side partings, the ‘laughing crying emoji’, better known as 😂, fell into disrepute among the young in 2020 – just five years after being picked as the Oxford Dictionaries’ 2015 Word of the Year. For gen Z TikTok users, it’s millennials who are responsible for rendering many emojis utterly unusable – to the point that some in gen Z barely use emojis at all.

Research can help explain these spats over emojis. Because their meaning is interpreted by users, not dictated from above, emojis have a rich history of creative use and coded messaging. Apple’s 217 new emojis will be subjected to the same process of creative interpretation: accepted, rejected or repurposed by different generations based on pop culture currents and digital trends.

Face the facts

When emojis were first designed by Shigetaka Kurita in 1999, they were intended specifically for the Japanese market. But just over a decade later, the Unicode Consortium, sometimes described as “the UN for tech”, unveiled these icons to the whole world.

In 2011, Instagram tracked the uptake of emojis through user messages, watching how the emoji 🙂 eclipsed the text emoticon :) in just a few years. The Unicode Consortium now meets each year to consider new types of emoji, including emojis that support inclusivity. In 2015, a new range of skin colours was added to existing emojis. In 2021, the Apple operating system update will include mixed-race and same-sex couples, as well as men and women with beards.

The End of English?

Not everyone has been thrilled by the rise of the emoji. In 2018, a Daily Mail headline lamented that “Emojis are ruining the English language”, citing research by Google in which 94% of those surveyed felt that English was deteriorating, in part because of emoji use.

But such criticisms, which are sometimes levelled by older generations, tend to misinterpret emojis, which are after all informal and conversational, not formal and oratory. Studies have found no evidence that emojis have reduced overall literacy.

On the contrary, it appears that emojis actually enhance our communicative capabilities, including in language acquisition. Studies have shown how emojis are an effective substitute for gestures in non-verbal communication, bringing a new dimension to text. A 2013 study, meanwhile, suggested that emojis connect to the area of the brain associated with recognising facial expressions, making a 😀 as nourishing as a human smile. Given these findings, it’s likely that those who reject emojis actually impoverish their language capabilities.

Questions about the impact of emerging forms of media communication are not new. When SMS became popular amongst teenagers, there was a suggestion that highly abbreviated words, or ‘text speak’, might be harmful to literacy. Research did not find any significant negative impact; in children, the use of abbreviated or phonetic spelling generally supported literacy and language development. In particular, the research found that children who were initiating new forms of text speak demonstrated a strong grasp of the structure of language. As in jazz music, it seems that knowing the rules allows them to be broken.

Although current research has focused on emojis’ emotional and sociological effects, their usage shows that they also bring a level of creativity. There are stories, novels, celebrity biographies and even The Bible written in emoji. They have also been adopted by artists, both as a medium of expression and as a tool to bring new meaning to existing visual art.

Creative criticism

It’s not only about which emojis are used; different generations also attach different, sometimes confusing meanings to them. Although the Unicode Consortium has a definition for each icon, including the 217 Apple are due to release, out in the wild they often take on new meanings. Many emojis carry more than one meaning: a literal one and a suggested one, for instance. Subversive, rebellious meanings are often created by the young: today’s gen Z.

The aubergine 🍆 is a classic example of how an innocent vegetable has had its meaning creatively repurposed by young people. The brain 🧠 is an emerging example of the innocent-turned-dirty emoji canon, which already boasts a large corpus.

And it doesn’t stop there. With gen Z now at the forefront of emerging digital culture, the emoji encyclopaedia is developing new ironic and sarcastic double meanings. It’s no wonder that older generations can’t keep up, and keep provoking outrage from younger people who consider themselves to be highly emoji-literate.

Emojis remain a powerful means of emotional and creative expression, even if some in gen Z claim they’ve been made redundant by misuse. This new batch of 217 emojis will be adopted across generations and communities, with each staking their claim to different meanings and combinations. The stage is set for a new round of intergenerational mockery.

A version of this article first appeared in The Conversation on 25.2.21.

What’s New for 2021?

What happened last year might be best summed up by a quote attributed to Lenin: ‘there are decades when nothing happens and there are weeks when decades happen’. In the initial few months of the pandemic, McKinsey found that video calling for work more than doubled, in what they described as a five-year technology leap. Zoom, the clear winner amongst video platforms, reported a jump in daily meeting participants from 10 million in December 2019 to 200 million by March 2020. Teams and Google Meet also saw large user increases, and Slack’s paid customer base doubled in 2020.

More Tools for Digital Work and Personal Lives

With the significant increase in online working, there has been much discussion as to whether this shift represents a permanent change. Previously, companies treated office attendance as a form of compliance, with working from home often regarded as a less productive option. The experience of 2020 has demonstrated that the compliance argument is largely false: you don’t need to be sitting at an office desk to send emails or join meetings. Observers suggest that post-pandemic, up to 20% of face-to-face work will permanently move online and many other jobs will be done through a hybrid online/office model. Even with some movement back into physical offices, 2021 will continue to see growth in online work-related tools. One trend is the use of plug-ins that enhance the presenter experience for video conferences and meetings. A good example is mmhmm (chosen, presumably, because you can say the brand name with your mouth full). The software replaces the camera feed in Zoom or Google Meet, with the presenter, slides and videos managed in a single, integrated screen. It avoids the ‘can anyone see my screen?’ situation, offering a much slicker presenter experience.

Changes in 2020 weren’t just about work. Ofcom data revealed that UK personal video calling jumped from 34% of users to over 70%, with a majority made on Facebook’s platforms, WhatsApp and Messenger. The app downloads reported by Apple and Google are also telling. In 2020 Zoom, TikTok and Disney+ were the most downloaded apps, highlighting our need to stay connected and be entertained. Amongst their most recommended apps were Endel (stress reduction) and Loona (sleep management), indicating an unsurprising trend for digitally-based well-being.

User-driven Innovation

Most of us faced a rapid learning curve with online working, both getting to grips with the technology and finding the most effective ways to use it. That was often a process of individual discovery, supported by a greater sharing of life-hack solutions. It meant that in 2020 we all became innovators. Accenture’s futures consultancy, Fjord, identified this trend as Do It Yourself Innovation, highlighting bicycle repair pop-ups and online workout platforms as examples. This innovation trend has also led to burgeoning hyperlocal businesses, exemplified by artisanal food production such as micro-bakeries. It is an area that will likely buck the downward trend in physical retail. Although initially driven by necessity, DIY innovation offers considerable potential in the coming year. Digital platforms are providing opportunities to monetize innovative or creative endeavours. One example is TikTok’s partnership with Shopify – a significant development now that the platform has matured beyond lip-sync and dance challenges.

Changing Cities

More working from home has led to a shift in commuting patterns, alongside a desire to avoid infection-risky public transport (not to mention the urgent need to reduce carbon footprints). The drop in physical retail has also raised significant questions about the role of city centres. In many places there was an explosion of cycling, not just for commuting but also for pleasure, along with increased sales of electric bikes and scooters. Even in the UK, where electric scooters are largely not yet street legal, the retailer Halfords reported a three-fold increase in sales, and the country trialled electric scooter hire schemes in some cities. The evidence from both public transport use and house purchases shows a move away from busy city centres towards more localised living. Somewhat prophetically, in early 2020 the mayor of Paris proposed a concept called the “ville du quart d’heure” – the quarter-hour city – in which all the main amenities are available within a 15-minute walk or cycle ride. Many of these behaviour shifts appear permanent, so expect to see a rise in sales of electric personal vehicles alongside more localised, specialised retail in 2021.

High Anxiety

Inevitably, the high level of video calling in 2020 brought its own specific problems. Broadly referred to as ‘Zoom anxiety’, there are considerable negative consequences to staring at a screen for hours: with fewer non-verbal signals, anxiety is much greater. Studies also found high levels of stress associated with the technology challenges that most of us experienced. 2020 also highlighted a digital divide. Moving meetings or education online reveals a host of inequalities in terms of connectivity, technology and even working and living spaces. Ofcom, for example, reported that over 50% of those aged 75+ do not use the internet. That is a worrying sign in an ageing population, where isolation leads to poor mental health and increased mortality rates. In 2021, governments, technology providers, businesses and educators will need to take significant steps if they want to prevent the digital divide from deepening these inequalities.

AI – The Good, The Bad and The Ugly

2020 also saw the inevitable march of artificial intelligence. Although we are currently at the machine learning, or weak AI, stage, the last year saw many new examples of the potential the technology has to offer. DeepMind’s AI found the solution to a long-standing conundrum in protein folding. Whilst that’s a specific application, a good demonstration of the possibilities for broader use was Adobe Photoshop’s Neural Filters. Although face swapping and ageing have been available in social media platforms for a while, Photoshop’s filters applied them with greater sophistication to high-resolution images. Inevitably, AI also had its fair share of blame. Its use to predict A level results led to a debacle in which the UK Prime Minister blamed a ‘rogue algorithm’. And herein lies the challenge for AI: there was nothing rogue about the algorithm; it functioned as it was programmed to, based on the parameters and the data it was provided. Technology will often be blamed for human problems, but as we move into the next year, there needs to be a broader understanding of the bias that is built into all algorithms and data*. Further challenges of AI were highlighted by Channel 4 with their alternative Christmas message for 2020: they created a deepfake version of The Queen to deliver a manipulated Christmas speech, demonstrating some of the dangers associated with the spread of these technologies.

The Dopamine Effect

Whilst the challenge of digital device addiction has been recognised for some years, the Netflix documentary The Social Dilemma brought it to the fore. That was especially pertinent in a year when we spent more time online than ever before. There are many facets to the digital addiction challenge, but it can be summed up by the Like. These social affirmations generate small hits of dopamine that build addiction in much the same way as recreational drugs. It puts users in a constant state of low-level anxiety in which they continually seek more dopamine hits – more likes. This addictive behaviour is monetized by the social media platforms, who package it for advertisers to the point at which the user becomes the product. Understandably concerned, Facebook wrote a rebuttal to some of the points made in the documentary, accusing the programme of lacking nuance and scapegoating social media for wider societal problems. Arguably, if Facebook were that concerned about these issues they might simply remove or limit the Like button. Author and academic Shoshana Zuboff was interviewed in The Social Dilemma; her book The Age of Surveillance Capitalism offers a more detailed argument on the challenges that social media create. 2021 will undoubtedly see the debate continue on the responsibility of the social media platforms, the need for greater moderation and continued calls to break up Facebook’s properties. It looks like it will be a year in which our relationship with technology will move forward and be questioned in equal measure.

* I would thoroughly recommend reading Hannah Fry’s excellent book Hello World, which gives a good understanding of how algorithms are made. And if you want to know more about data bias, have a look at Caroline Criado Perez’s Invisible Women and Safiya Umoja Noble’s Algorithms of Oppression.

Does Length Matter? Twitter’s 10K Character Dilemma

Twitter recently suggested that it might increase the length of Tweets to 10,000 characters. Unsurprisingly, it created something of a Twitter storm. Social media users are passionate about their networks and rarely like change – the same happened when the ‘faves’ star was changed to hearts. But what will longer Tweets mean? Some commentators suggested that it will make the social media site no different to any other blogging platform. That points to the real challenge: Twitter has an identity crisis. It doesn’t know what it is any longer. Celebrities and their audiences have mostly left Twitter for Instagram. Perhaps they are simply driven by narcissism, but it’s very telling that four of the top ten Instagram accounts belong to the Kardashian clan. Twitter, though, seems to have become the place for politicians’ indiscretions, journalists Tweeting their own articles and the middle class moaning at brands over service failures.

That is Twitter’s broad problem. Over the last year its growth has slowed considerably: it had just over 300m active users in 2015, well below expectations. Compare that to WhatsApp, a messaging platform rapidly approaching 1 billion users. Since its IPO, Twitter has seen a fall in its share price, so it needs to raise revenue (and investor confidence). For the micro-blogging site, that means bringing in more advertising, but it has not managed to deliver the expected revenues. Although it has grown, its advertising remains a bit-part player next to Facebook’s highly successful offering. In part, that’s because it lacks the reach of its competitor, but the key to Facebook’s ad success has been to create a walled garden and keep users within the site. Twitter is trying a number of formats to address this issue. It recently launched a Conversational Ad format with call-to-action options. In a similar vein, longer Tweets mean that users should (in theory) spend more time in the channel. And that’s good for advertising.

But what about the users? The complaints about the changes are, in part, a reflection that the audience cares about Twitter. Ultimately, though, social media sites must evolve. Twitter has regularly added new features – from the (user-driven) hashtag to the recent Moments. However, I think the problem with longer Tweets is that they go against the prevailing trend: we are moving to shorter, message-based content.

Snapchat is a good example of where social media is going. The ten-second life of pictures and videos has caught the imagination of 200m+ users. The FT reported in September 2015 that the app had 6 billion video views per day – a three-fold increase in seven months, and rapidly approaching Facebook’s figure of 8 billion views per day. The fact is that from content to our attention spans, everything’s getting shorter (as a Microsoft study found). Certainly Twitter has to evolve, but the answer probably doesn’t lie with longer Tweets.

Why the Internet of Things Needs More Personality

It seems as though everything is becoming connected. It’s not just smartwatches from the likes of Apple or Samsung; it’s also cars, homes, health, industry and agriculture. We have connected babies (well, onesies), WiFi-sniffing cats and even a (slightly pointless) connected yoga mat. That’s all very well, but the mere existence of technology does not equal adoption. CueCat is my favourite example of a large technology investment with no user take-up. When it comes to the IoT, Michael Humphrey, writing in Forbes, summed it up well – we have an ‘enthusiasm gap’.

Clearly, going from innovation to adoption is not easy. Bill Buxton talks about The Long Nose of Innovation. Development happens over many decades until we create a truly usable product. The computer mouse and smartphone touch screens are two examples. How could we apply the long nose to the IoT? Some people suggest it will reach true innovation when it becomes invisible and we don’t know it’s there. That might be true in part, but I think there is a flip side – we need to create more enthusiasm by making the IoT more visible and giving objects a personality.

Things That Tweet
The micro-blogging channel has been put to good use, not just by people but also by Tweeting objects. We have Mars Curiosity (@marscuriosity), the tunnelling machine Big Bertha (@BerthaDigsCR99) and Tom Coates’ Tweeting house (@houseofcoates). Fun? Yes. But it seems to go deeper than that. @houseofcoates has 1,400 followers (slightly more than I do), and some of them get into conversations with the house (and very occasionally, it replies).
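
To give a sense of how little plumbing a Tweeting object actually needs, here’s a minimal Python sketch using the Tweepy library’s classic OAuth 1.0a interface. The credentials and the sensor-reading function are placeholders for illustration – this is not the code behind any of the accounts above.

```python
import tweepy

# Placeholder credentials: a real object would use its own Twitter app keys.
CONSUMER_KEY = "your-consumer-key"
CONSUMER_SECRET = "your-consumer-secret"
ACCESS_TOKEN = "your-access-token"
ACCESS_TOKEN_SECRET = "your-access-token-secret"


def read_temperature():
    """Stand-in for a real sensor read (e.g. from a thermostat or an Arduino)."""
    return 19.5


def tweet_house_status():
    # Authenticate and post a short status on behalf of the 'object'.
    auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
    auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
    api = tweepy.API(auth)
    api.update_status(f"It's a cosy {read_temperature()}°C in the living room right now.")


if __name__ == "__main__":
    tweet_house_status()
```

Run something like this on a schedule and a house, a kettle or a tunnelling machine can keep its followers up to date.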

Enchanted Objects
MIT Media Lab scientist David Rose harks back to the days of beautifully crafted artifacts that fulfilled specific tasks. He worries that the future of most objects will be little more than a black slab of glass without any enchantment (and without personality). He is on a mission to create and promote more enthusiasm through enchanted objects. Often, these objects have fewer functions, but they do them beautifully. He gives the example of an umbrella whose handle glows when it is going to rain, or a medicine bottle that chirps to remind you to take a pill. Simplicity and delight are the key to the engagement.

Simple, Fun Experiences
Taking a cue from David Rose, if we are to engage with the IoT then we need to focus on simplicity and fun. The Smart Crossing, a recent Cannes Lions winner for Smart cars, did just that. To discourage pedestrians from crossing in front of the traffic, they created a light where the red ‘stop’ figure danced. Not only that, but the moves were created by real people in a booth nearby. Of course, everyone waited at the lights, entertained for a few minutes by a dancing person.

More Personality
Brad The Toaster is a more anthropomorphic incarnation. Though an artistic concept rather than a real product, it brings a personality to the problem of overconsumption. Brad is one of many connected toasters that can’t be owned (he’s more like a cat in that respect). You can look after Brad and use him, but if he is neglected then he will simply give himself to someone else. This idea could be applied to other products, such as self-driving cars. Given that the vehicles we own spend most of their time parked up, it makes little sense to own a car. However, we have a strong emotional relationship with them. Even in a self-driving world where the car just appears when you want it, giving them up won’t be so simple. Perhaps, though, if they have a personality more like Brad The Toaster’s, then we’ll be more likely to switch to a simple rental model.

‘Clothes have Feelings Too’
Taking the Brad concept further, I’ve been developing an idea called The Internet of Clothes. In developed nations we buy too many clothes and wear very few of them. One solution is that your clothes will ask to be worn. They will Tweet you based on the weather, frequency of wear or occasion. And if you ignore them? They will contact a charity for recycling.
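
To make the idea concrete, here’s a minimal sketch of the kind of rule an Internet of Clothes garment might follow. It’s purely illustrative – the wardrobe data, the neglect threshold and the decide_action helper are all hypothetical, and a real version would pull a forecast from a weather API and Tweet (or email a charity) rather than print to the console.

```python
from datetime import date, timedelta

# Hypothetical wardrobe data: each garment knows what weather suits it
# and when it was last worn.
wardrobe = [
    {"name": "red raincoat", "suits": "rain", "last_worn": date(2021, 1, 10)},
    {"name": "linen shirt", "suits": "sun", "last_worn": date(2020, 6, 2)},
]

NEGLECT_LIMIT = timedelta(days=180)  # ignored for this long? offer to charity


def decide_action(garment, todays_weather, today):
    """Return what a connected garment should do today (illustrative rule only)."""
    unworn_for = today - garment["last_worn"]
    if unworn_for > NEGLECT_LIMIT:
        return f"Contact charity: '{garment['name']}' has been ignored for {unworn_for.days} days."
    if garment["suits"] == todays_weather:
        return f"Tweet owner: '{garment['name']}' is perfect for today's {todays_weather}."
    return None  # nothing to say today


if __name__ == "__main__":
    for garment in wardrobe:
        action = decide_action(garment, todays_weather="rain", today=date(2021, 2, 25))
        if action:
            print(action)
```

The point is that the ‘personality’ amounts to a handful of legible rules: ask to be worn when you’re useful, and quietly offer yourself to charity when you’re ignored.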

I hope that giving clothes a sense of personality can help people make better use of the resources they represent. There’s no reason why we can’t do the same for other IoT objects. At the simplest level, it makes us feel more engaged, but at a deeper level, it’s also about building an anthropomorphic relationship. For us humans, it makes the whole IoT easier to comprehend.

Facebook’s Dislike Button. What’s Not To Like?

Speaking at a recent event in California, Mark Zuckerberg suggested that the social network would be introducing a new button. He said: “We have an idea that we’re going to be ready to test soon, and depending on how that does, we’ll roll it out more broadly”. Although the Facebook CEO didn’t name it as such, it has been branded the ‘Dislike’ button.


If it is implemented, this will be an interesting new step for Facebook. The current Like button was famously the result of a 2007 hackathon, where it was proposed as an ‘Awesome’ button. Realising that many posts of cats and people’s children were less than awesome, it transformed into the Like that we know today. The success of the button lies in both its binary simplicity and the fact that it is a positive acknowledgement of the post. Even when a post is more serious or tragic, the action of Liking is widely understood to be positive and supportive.

For Facebook, there is a need to move forwards. At a time when many young users are switching to Instagram and WhatsApp (both owned by Facebook), they need to innovate to encourage retention. The challenge of a Dislike button, though, comes from its very nature: it’s a negative action. In a Wired article, Brian Barrett suggested that it would create a negative atmosphere that would simply put people off posting. Given the personal nature of these networks, it’s easy to understand why users would be discouraged if disapproval were as simple as clicking a button.

The negativity of a Dislike button could potentially run even deeper, though. Unlike on Reddit, one of the benefits of Facebook is that posts are not ranked. Once there are two options, Like and Dislike, there will be an inevitable sense of competitiveness around posts, discouraging yet more users.

I’m sure Facebook are aware of the challenges, but they will need to tread carefully. Posts and shares are the lifeblood of Facebook and that in turn is what drives their advertisers. So in the end, the success of a Dislike button will probably come down to money.

Why Google Needs Brillo, Their OS for the IoT

With Google’s I/O announcement of Brillo, things are hotting up for operating systems to run the Internet of Things (IoT). We are witnessing considerable growth in connected objects – from watches to cars to homes. Some of these are from established manufacturers, but low-cost, rapid development means that an increasing number of startups are delivering new devices. With such a broad range of smart objects, the real challenge of the IoT is how to make a fragmented landscape work together.

Google believes that Brillo is the answer (the irony of the similarity to my name is not lost on me). They announced an operating system that is largely Android-based, with an additional communications layer called Weave. The overarching premise is a consistent experience. Senior VP Sundar Pichai said in his announcement that with “any Android device [connected to] a device based on Brillo or Weave, a user will see the same thing no matter what.”

The company is already busy in the connected world – it owns Android, which powers the majority of the world’s smartphones, and has built Android Wear for wearable devices. Google purchased Nest, the connected home system, last year, and in the future its driverless car development will naturally connect to the IoT. The development of a complete operating system makes sense for Google.

However, what underpins most of Google’s strategy is its search engine and, with it, paid advertising. Android, for example, puts that search at the heart of mobile. Although smartphones will be the core device for the IoT, the proliferation of connected objects means Google needs to ensure its search-giant status is future-proof.

Success is not guaranteed for Google, though. Look at the challenges they’ve had in other areas, such as social media, to see that the power of Google does not always result in uptake. And there are many challengers in connecting the IoT. Major players including Samsung, Microsoft, Cisco and the mobile chip designer ARM have all made moves in this area. There are also a growing number of start-ups and open source projects such as Contiki, RIOT and Onion.io. Perhaps the most interesting project is IFTTT (‘if this then that’). Many people will know it as a tool for cross-posting on social media, but IFTTT offers much more than that. It uses ‘recipes’ to create a codeless method of connecting channels and devices such as Nest, Philips Hue or Fitbit. With millions of recipes already running on its apps, the company has a head start on Google, supported by a $35m VC funding round in 2014.

Brillo was just one of a number of interesting announcements at Google I/O, but there is no question that the operating system has added to the increased interest (and possibly hype) around our rapidly developing world of connected objects.

What Have Hackathons Ever Done for Us?

I don’t get the point of the hack days (or hackathons or whatever they’re called this week) that brands or ad agencies organise. I’ve been to a few and my experience is that they produce very little. Has a viable product or service ever been delivered as a result of a hack day? Not that I know of.

The lack of real innovation is hardly surprising. A typical agency hackathon seems to consist of mostly people from the marketing team and a couple of put-upon developers, who are expected to do a month’s coding in a few hours. Maybe the hackathons made famous by Facebook delivered something useful, but I believe that the brand or agency sessions are largely a PR exercise. At best they might deliver the grain of an idea. There’s nothing wrong with a PR exercise, but there needs to be an element of realism to acknowledge that they are unlikely to deliver innovation.

Moan over, now for a shameless plug… We’re trying a different approach to hack days called Maker Monday. Instead of a day or two stuck in a room, it’s a regular monthly event that brings together creatives and technologists to develop long-term projects that deliver creativity or solutions to problems. The first one was held in May in Birmingham. Backed by BCU with funding from the EU, we’ve managed to blag some kit (Arduinos, sensors, Raspberry Pis, an Oculus Rift and even a 3D printer). We’ve got access to an open innovation space called Birmingham Open Media (BOM), so collaborators can work on projects in their own time.

Each monthly session will be presented by an expert in their field – we’ve got people doing VR, holographic projection, Raspberry Pi, EEG inputs and gesture control. In addition to a short presentation, they will also run a workshop on their specialism. We have artists working with technology lined up to come to the event (and of course, free beer and pizza).

The inaugural event focused on developing ideas to deliver for an innovation week in November. There were a number of projects, including a speaking keyboard for autistic children, a holographic interactive sculpture and clothes that automatically offer themselves to charity if they’re not worn (you can find more project concepts here).

We’re hoping that the regularity of Maker Monday will create more meaningful results than a hack day. The spread between creatives and technologists is pretty even, but as an open innovation event, anyone is welcome (even people from outside Birmingham). The advantage of the monthly approach is that people can collaborate and develop their skills where needed. Maker Monday is free, but tickets need to be reserved (see our Eventbrite page for details).

The next event is at BOM on Monday 29th June at 5.30pm.

See our Tumblr page (http://makermondaybrum.tumblr.com/) for project details or Tweet us @maker_monday

Teenagers, Facebook and The Rise of Visual Messaging

“It’s Dead to Us. Facebook is something we all got in middle school because it was cool but now is seen as an awkward family dinner party we can’t really leave.” That’s how a 19-year-old American student described his generation’s relationship with the social media site in a widely circulated blog. This is not really a revelation. His views were evidenced by a teenage trend away from Facebook first identified by Pew Research in 2013 (and confirmed by the social media site itself). In October 2014, a study by GlobalWebIndex found that Facebook’s user base had grown just 2% in the previous six months. The low growth is hardly surprising when you consider their user base is close to saturation point. However, the significant stat from the study was that teens were using the channel much less: 37% of young respondents said that they were ‘bored’ with the social network. Over the same period, Tumblr saw its usage increase by 120%. Popular with teens (and ad agency folk), its uptake has been driven by the humble ‘gif’. The ancient web format has gained a new lease of life with highly shareable animated gifs of cats and celebrities.

Facebook has been aware of their teenage problem for some time. They understand that young, early adopters are fickle when it comes to their digital channel choices. And thanks to mass smartphone adoption, that switching is happening faster than ever. There has been, for example, a shift in messaging from SMS to WhatsApp. The teen messaging channel of choice has quickly grown to over 700m users – nearly three times Twitter’s active user base. Fundamentally, teenage audiences are most active in messaging channels – and they’ll go where it is easiest and cheapest, but above all, they’ll go where their friends are. A few years ago, they were using BBM. Before that, MSN was popular. It’s interesting to see, therefore, that the one Facebook product that remains relevant is their messaging app. A GWI study found that social messaging use grew by 50% in 2014, across all age groups.

Whilst messaging is still the driver of teenage online activity, the significant change has been the growth of visual messaging. For today’s teens, pictures are better than words. This newfound popularity has been driven by smartphone cameras and apps such as Snapchat. GWI found that the picture app grew 57% – the fastest of any messaging app. UK teens especially love Snapchat, with 39% of them saying they use it, compared to 15% globally (GWI). There’s an element of teenage rebellion about Snapchat: part of the attraction is that their parents (who are all on Facebook these days) don’t see the point of it. However, Snapchat is also a bona fide messaging app. Whilst it has gained a reputation as a place for ‘sexting’, the tag is unwarranted – a 2014 University of Washington study found that the behaviour represented only 1.6% of users. The main use for Snapchat is not to share amazing portraits or beautiful sunset pictures, but to share quick snaps with added comments or scribbles.

The real winner in the visual messaging channels, though, is Instagram. Sure, it’s good for showing nicely filtered photos, but spurred on by hashtags, selfies and numerous celebrity accounts, it has become the channel of choice for teenagers. By the end of 2014 it had overtaken Twitter’s user base and it continues to grow. Understanding the teen challenges, Facebook has been pretty shrewd in addressing them. When they bought Instagram for $1bn in 2012, observers thought it was an excessive sum for a company with just 13 people. In hindsight, given the level of uptake, that price seems like a bargain. After sniffing around Snapchat for a while (who reportedly turned them down), Facebook ended up buying WhatsApp for $19bn in 2014. Facebook are aware that ultimately, no site is safe from a mass exodus of its users. Just look at the fate of Friendster or Myspace (and BBM or MSN for that matter). However, if Facebook are simply going to buy their most popular competitors, then the chances are they’ll still be going in a few years’ time.