Linkedin Archives — Carrington Malin

July 10, 2021
nine-deepfake-video-predictions-1280x688.jpg

The UAE Council for Digital Wellbeing and the UAE National Programme for Artificial Intelligence this week published a Deepfake Guide to help raise social awareness about the technology. But what are deepfakes and are they all bad? Here are my top 9 deepfake video predictions!

Deepfake videos first hit the headlines six years ago when Stanford University created a model that could change the facial expressions of famous people in video. In 2017, University of Washington researchers released a fake video of former President Barack Obama making a speech using an artificial intelligence neural network model. It wasn’t long before consumers could create their own deepfake videos using a variety of tools, including deepfakesweb.com’s free online deepfake video application. Deepfake videos of presidents, actors, pop stars and many other famous people are now commonplace on social media.

So, what’s all the fuss about? Simply, that fakes of any nature can be used to deceive people or organisations with malicious intent or for ill-gotten gains. This could include cyber-bullying, fraud, defamation, revenge-porn or simply misuse of video for profit. It perhaps comes as no surprise that a 2019 study found that 96 per cent of all deepfake videos online were pornographic.

However, using technology to produce fakes is not a new thing. The swift rise of the photocopier in the 1970s and 80s allowed office workers all over the world to alter and reproduce copies of documents, letters and certificates. The ease with which printed information could be copied and altered prompted changes in laws, bank notes, business processes and the use of anti-counterfeit measures such as holograms and watermarks.

Like photocopying, deepfake video technology is getting better and better at what it does as time goes on, but at a much faster speed of development. This means that the cutting edge of deepfake technology is likely to remain ahead of AI systems developed to detect fake video for a long time to come.

Any technology can be used for good or evil. However, in a few short years deepfake technology has got itself a terrible reputation. So, what is it good for? My take is that deepfake video technology – or synthetic video for the commercially-minded – is just one aspect of artificial intelligence that is going to change the way that we use video, but it will be an important one. Here are my top nine deepfake video predictions.

1. Deepfake tech is going to get easier and easier to use

It’s now quite easy to create a deepfake video using the free and paid-for consumer apps that are already in the public domain. However, as the world learns to deal with deepfake video, the technology will eventually be embedded into more and more applications, such as your mobile device’s camera app.

2. Deepfake technology’s reputation is going to get worse

There’s an awful lot of potential left for deepfake scandal! The mere fact that developers are creating software that can detect deepfake video means that the small percentage of deepfake video that cannot be identified as fake may be seen as having a virtual rubber stamp of approval! And believing that an influential deepfake video is authentic is where the problem starts.

3. Policymakers are going to struggle to regulate usage

Artificial intelligence is testing policymakers’ ability to develop and implement regulation like no other force before it. The issues are most obvious when deepfakes are used for criminal activity (the courts are already having to deal with deepfake video). In the near future, regulators are also going to have to legislate on the rise of ‘legitimate’ use, seamlessly altering video for education, business, government, politics and other spheres.

4. One:one messaging

One of the most exciting possibilities is how deepfake modelling might be used to create personalised one:one messaging. Today, it’s possible to create a video of yourself voicing a cute animation via an iPhone. Creating and sending a deepfake video of your real self will soon be as easy as sending a Whatsapp message. If that sounds too frivolous, imagine that you’re stuck in a meeting and want to send a message to your five-year-old.

5. Personalisation at scale

As the technology becomes easier to use and manipulate, and as the processing power becomes available to automate that process further, we’re going to be able to create extremely lifelike deepfake videos – or synthetic videos, if you’d rather – at scale. London-based Synthesia is already testing personalised AI video messages. That will open the doors for marketers to personalise a new generation of video messages at scale and deliver a whole new experience to consumers. Imagine if every new Tesla owner received a personal video message from Elon Musk (well, ok, imagine something else then!).

6. Deepfakes on the campaign trail

As marketers get their hands on new tools to create personalised video messages for millions, there may be no stopping political parties from doing so too. Has your candidate been banned from social media? No problem! Send out a personalised appeal for support directly to your millions of supporters! In fact, this is one use that I could see being banned outright before it even gets started.

7. Video chatbots

There are already a number of developers creating lifelike synthetic video avatars for use as customer service chatbots, including Soul Machines and Synthesia. As AI-generated avatars become more lifelike, the lines between different types of video avatars and AI-altered deepfake videos are going to blur. The decisions on what platform, what AI technology, what video experience and what type of voice to add are going to be based on creative preferences or brand goals, not technology.

8. Deepfake entertainment

Although some deepfake videos can be entertaining, their novelty value already seems to be fading. In the future, whether a deepfake is entertaining or not will depend on the idea and creativity behind it. We seem to be headed for some kind of extended reality music world, where music, musicians, voices, characters and context are all interchangeable, manipulated by increasingly sophisticated technology. The Korean music industry is already investing heavily in virtual pop stars and mixed reality concerts. Deepfake representations will not be far behind. After all, they’re already reading the news! The Chinese national news service (Xinhua) has been using an AI news anchor for the past two years.

9. Your personal AI avatar

In 2019, Biz Stone, co-founder of Twitter, and Lars Buttler, CEO of San Francisco-based The AI Foundation, announced that they were working on a new technology that would allow anyone to create an AI avatar of themselves. The AI avatar would look like them, talk like them and act like them, autonomously. In comparison, creating personal avatars using deepfake technology (i.e. manipulating already existing video) could be a lot easier to do. It remains to be seen how long it will take before we have the capability to have our own autonomous AI avatars, but creating our own personal AI video chatbot using deepfake tech is just around the corner!

I hope that you liked my top deepfake video predictions! But, what do you think? Will AI altered deepfakes and AI generated video avatars soon compete for our attention? Or will one negate the need for the other? And how long do you think it will be before consumers are over-targeted by personalised AI generated video? Be the first to comment below and win a tube of Pringles!

This article was first posted on Linkedin. If you’re interested in this sort of thing, I also wrote about deepfakes for The National a couple of years ago. You can find that article here.


January 10, 2021
artificial-intelligence-ethical-concerns-2020.jpg

My New Year’s Linkedin poll about how people’s ethical concerns regarding AI have changed doesn’t prove much, but it does show that 2020 did little to ease those concerns.

Opinions and our level of understanding about artificial intelligence can vary a great deal from person to person. For example, I consider myself a bit of a technophile and an advocate of many technologies including AI, with a higher than average level of understanding. However, I harbour many concerns about the ethical application, usage and the lack of governance for some AI technologies. My knowledge doesn’t stop me having serious concerns, nor do those concerns stop me from seeing the benefits of technology applied well. I also expect my views on the solutions to ethical issues to differ from others. AI ethics is a complex subject.

So, my intention in running this limited Linkedin poll over the past week (96 people responded) was not to analyse the level of concern that people feel about AI, nor the reasons behind it, but simply to find out whether the widespread media coverage about AI during the pandemic had either heightened or alleviated people’s concerns.

The results of the poll show that few people (9%) felt that their ethical concerns about AI were alleviated during 2020. Meanwhile, a significant proportion (38%) felt that 2020’s media coverage had actually heightened their ethical concerns about AI. We can’t guess the level of concern among the third and largest group – the 53% that voted 2020 ‘hasn’t changed anything’ – however, it’s clear that 2020 media coverage about AI brought no news to alleviate any concerns they might have either.

Artificial intelligence ethical concerns poll 2020

Media stories about the role of AI technologies in responding to the coronavirus pandemic began to appear early on in 2020, with governments, corporations and NGOs providing examples of where AI was being put to work and how it was benefiting citizens, customers, businesses, health systems, public services and society in general. Surely, this presented a golden opportunity for proponents of AI to build trust in its applications and technologies?

Automation and AI chat bots allowed private and public sector services, including healthcare systems, to handle customer needs as live person-to-person communications became more difficult to ensure. Meanwhile, credit was given to AI for helping to speed up data analysis, research and development to find new solutions, treatments and vaccines to protect society against the onslaught of Covid-19. Then there was the wave of digital adoption by retail companies (AI powered or not) in an effort to provide digital, contactless access to their services, boosting consumer confidence in services and increasing usage of online ordering and contactless payments.

On the whole, trust in the technology industry remains relatively high compared to other industries, but, nevertheless, trust is being eroded and it’s not a surprise that new, less understood and less regulated technologies such as AI are fueling concerns. Fear of AI-driven job losses is a popular concern, but so are privacy, security and data issues. However, many people around the world are broadly positive about AI, in particular those in Asia. According to Pew Research Center, two thirds or more of people surveyed in India, Japan, Singapore, South Korea and Taiwan say that AI has been a good thing for society.

Since the beginning of the pandemic, AI’s public image has had wins and losses. For example, research from Amadeus found that 56% of Indians believe new technologies will boost their confidence in travel. Meanwhile, a study of National Health Service (NHS) workers in London found that although 70% believed that AI could be useful, 80% of participants believed that there could be serious privacy issues associated with its use. However, despite a relatively high level of trust in the US for government usage of facial recognition, the Black Lives Matter protests of 2020 highlighted deep concerns, prompting Amazon, IBM and Microsoft to halt the sale of facial recognition to police forces.

Overall, I don’t think that AI has been seen as the widespread buffer against the spread of Covid-19 that it, perhaps, could have turned out to be. Renowned global AI expert Kai-Fu Lee commented in a webinar last month that AI wasn’t really prepared to make the decisive difference in combating the spread of the new coronavirus. With no grand victory over Covid-19 to attribute to AI’s role over the past year, it’s understandable that there was no grand victory for AI’s public image either. Meanwhile, all the inconvenient questions about AI’s future and the overall lack of clear policies that fuel concerns about AI remain, some even attracting greater attention during the pandemic.

This article was first posted on Linkedin.


December 24, 2020
Donald-Trump-missing-from-Google-year-in-search.jpg

The high volume of U.S. election-related Google searches during 2020 is no surprise, but the absence of Donald Trump from the Google Trends U.S. 2020 Year in Search is one!

There tend to be few surprises in Google’s ‘Year in Search’ trends reports, which are routinely dominated by celebrities, entertainment and sports. The Internet has, after all, become an essential utility for many and the first point of enquiry for any information need. However, in U.S. election years, some of that public attention naturally turns to politics.

Each election year, as one would expect, there are more searches for U.S. politicians and, in particular, presidential candidates and their running mates. ‘Sarah Palin’, the late John McCain’s running mate in his 2008 presidential election campaign, was listed as Google’s fastest-growing global search term that year, beating out searches for the ‘Beijing 2008’ Summer Olympics. Presidential candidates drive high search volumes during election years and normally feature in Google’s Year in Search top ten lists.

Obama’s election campaigns

In his debut presidential election year, Barack Obama dominated U.S. Google searches, with Obama becoming the fastest-rising search term and the volume of searches eclipsing most other search terms, including McCain, Palin and Democratic vice-presidential candidate Joe Biden.

Famed for the success of his 2008 election social media campaign, Obama was also a big spender with Google, spending an estimated $7.5 million with the Internet giant, or about 45 percent of his campaign’s total digital ad spending. Obama again drove high Google search volumes during the 2012 presidential campaign, outranking Mitt Romney.

All in all, in terms of online campaigning, Obama was a hard act to follow.

Enter Donald Trump

One year before the 2016 U.S. presidential election, Donald Trump’s search volumes were not only trending, but topping those of Barack Obama’s famous 2008 presidential campaign! During some weeks, Trump’s search volume even topped Obama’s 2008 record by four to five times.

Trump versus Obama search (Vox)

Donald Trump appeared at the top of Google’s 2016 Year in Search tables, ranking as the number one search in the People category, followed by his opponent Hillary Clinton in second place. Neither Trump’s nor Clinton’s running mates appeared in the top ten list though.

2020 presidential election year

In Google’s 2020 U.S. Year in Search trends report, one name is conspicuous by its absence in the People category: Donald Trump.

As is usually the case during non-election years, Trump and other politicians were largely absent from the top ten rankings in Google’s 2017, 2018 and 2019 Year in Search reports, although first lady Melania Trump ranked high in search volume during 2017, mainly due to publicity around her first official engagements in January of that year.


However, the top ten list of the most searched for people in 2020, according to Google, doesn’t include Donald Trump. The now president-elect Joe Biden is the clear winner in Google’s list, ranking first in the People category. Kamala Harris, the Democratic vice-presidential candidate, ranks fourth, after North Korea’s leader Kim Jong Un and U.K. prime minister Boris Johnson.

Even music artist Kanye West, who ran his own independent 2020 United States presidential election campaign, ranked 9th in the year’s top People searches.

Trump not a popular search term in 2020?

I’m not a big Trump fan and I’m not trying to seed yet another conspiracy theory, but doesn’t that seem a bit odd? There’s certainly been no shortage of Trump press coverage this year. Spikes in Google search volume aren’t driven by sentiment alone, so neither Trump’s popularity as a president nor the election results would automatically reduce search volumes. Keyword searches are typically prompted by news media or social media coverage, both of which Trump has received in ample measure throughout 2020.

Google’s own data on Google Trends doesn’t seem to support either Joe Biden’s top search volume ranking or Donald Trump’s absence. Throughout the past 52 weeks, Trump’s Google search volumes have exceeded Biden’s every week, according to one Google Trends search. However, results seem to be inconsistent: in an identical search a few hours earlier, Trump searches exceeded Biden’s in 40 out of 52 weeks. In that Google Trends query, there were eight weeks in which Biden searches exceeded Trump’s and four weeks where search volumes were roughly the same for each. Either way, Trump searches overall exceeded Biden’s.

From the data visible via Google Trends, it seems highly improbable that Biden’s overall 2020 search volume exceeded Trump’s. For that to happen, the few weeks prior to election day would have had to drive more search volume for Biden than Trump generated during most of the rest of the year combined. This is not what the Google Trends charts show (including the chart above).
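If you’d like to run the week-by-week comparison yourself, a minimal sketch along the following lines will do it. It assumes the unofficial pytrends Python library (Google Trends has no official public API, and this is not how the figures above were produced), and, as the inconsistent results described above suggest, the relative scores it returns can shift from one request to the next.

# Sketch (assumption: the unofficial pytrends library): compare weekly US
# search interest for 'Donald Trump' vs 'Joe Biden' over the past 52 weeks.
# Google Trends scores are relative (0-100) and may vary between requests.
from pytrends.request import TrendReq

pytrends = TrendReq(hl='en-US', tz=0)
pytrends.build_payload(['Donald Trump', 'Joe Biden'], timeframe='today 12-m', geo='US')

interest = pytrends.interest_over_time()  # weekly rows, one column per search term

trump_weeks = (interest['Donald Trump'] > interest['Joe Biden']).sum()
biden_weeks = (interest['Joe Biden'] > interest['Donald Trump']).sum()
ties = (interest['Donald Trump'] == interest['Joe Biden']).sum()

print(f"Trump higher: {trump_weeks} weeks, Biden higher: {biden_weeks} weeks, roughly equal: {ties} weeks")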

I’m sure that there’s a logical explanation, isn’t there now Google?

This article was first posted on Linkedin.


December 3, 2020
Will-your-marketing-plan-help-win-1280x640.jpg

You may have worked long and hard on your marketing plan, but how well does it support ongoing communication with your internal stakeholders?

The phrase ‘hearts and minds’ was first used by a French general during the French Indochina-Chinese border rebellion in the 19th century. It’s been used as a military strategy ever since, making emotional or intellectual appeals to the other side, in recognition of the fact that military superiority does not always provide the best or the swiftest victory in armed conflict. And, like many other military concepts, the ‘hearts and minds’ strategy was appropriated by marketing and communications strategists long ago!

However, marketing is by no means alone in borrowing from military strategy. Human resources has proved to be another big user of ‘hearts and minds’ and there are good reasons for this: budget and organisational dynamics. HR never receives the budget that it feels it deserves and so is forced to choose its battles carefully. Meanwhile, communications that put forth company ambitions, messages and cultural achievements simply fall flat if they are widely disputed in hushed tones around the water cooler. If internal stakeholders don’t believe and feel emotionally involved in plans, policies and practices, they’re far less likely to unite behind your cause.

And so it is with the internal communications from any department, marketing included.

Much of marketing’s internal communications routinely focuses on approvals and successes – i.e. the milestones at the beginning and at the end of any marketing campaign. Once the big bang of the final presentation is over and approvals are secured, participation of other stakeholders can fade away rapidly. Marketing plans, strategies and budgets are rightly presented as business cases for the careful consideration of decision makers. Far less effort tends to be invested in making sure that plans are easy to understand, highly useable and appeal to the ‘hearts and minds’ of other departments.

‘People just don’t understand marketing’

A common complaint of marketing heads the world over is that their work is so little understood by the rest of the organisation. There is scarce appreciation for all the work that goes into research, product positioning, creative concepts, or running effective campaigns. As a result, marketing successes are not always met with the thunderous applause that the marketing team believes is due! However, if your full year of internal communications consists of approvals and successes, then surely this is to be expected?

Accelerated by digital transformation and the breaking down of information silos, marketing and communications today not only maps to almost every part of the organisation, but also now shares data with it. All the more reason to have key internal stakeholders not only invested in approval and success milestones, but also emotionally and intellectually invested in the strategy and marketing activities themselves.

So then, am I trying to tell you that everyone in your organisation should be constantly referring to your marketing plan? No, but I am saying that your marketing plan should be a thoughtfully crafted communications tool that informs and supports marketing’s internal narrative throughout the year.

It should be something that helps frame marketing leadership’s communications with senior management, department heads, internal stakeholders, business partners and agencies. For this, your plan should be structured in a way that makes it easy-to-use, a valuable reference, useful to abstract from, and relevant to your wider audience of stakeholders.

Review your plan like it’s ‘external’

Ideally, your marketing plan will be pyramidal in structure – or a pyramid of pyramids – that presents key goals, findings and strategies towards the start and cascades more detail afterward. Ideally too, those top-level goals and strategies will be written in a self-explanatory way that is easily understandable by non-marketing professionals. If you strive to make your goals and strategies memorable and to clearly show relevance to the other functions in the organisation, so much the better. Anything that helps promote greater understanding of your goals, challenges and strategies has got to be a good thing, right?

A useful way to review your marketing plan is to imagine that you’ve written it for an external audience. Marketing content for external audiences normally goes through a very different process to internal communications. There tends to be a great deal of scrutiny of key messages and what perceptions will be formed by customers, partners, the media and other key audiences. The form, style, colour and simplicity of external communications are brainstormed, ideated, iterated, tested and optimised. In contrast, internal communications are often deemed as good enough if they are honest, free of typos and don’t over-commit!

Your annual marketing plan is a core document for marketing planning, budgeting and approvals. However, it’s a valuable communications exercise, helping to frame marketing’s internal messaging for the year. The more effectively your plan communicates your goals, plans and strategies, the more key points both marketing and non-marketing stakeholders are going to understand, retain and refer to later. Beyond the simple benefit of ensuring that everyone’s reading from the same manual, you may find that focusing a little more on ‘hearts and minds’ could even turn your internal critics into advocates. And wouldn’t that be something?

This article was first posted on Linkedin.

Also read: Is your marketing plan presentation the best it can be?


August 11, 2020
Will-AI-replace-human-creativity.jpg

Will AI replace human creativity? Or help humans take creativity to the next level? It could simply depend on how we choose to use it.

Those that know me well will know that I have become obsessed with how artificial intelligence will impact brands, communications and consumers. Last week, I was inspired by an article by The Drum’s Brands Editor Jen Faull, which explored the current state of AI in creative work and asked the question “Will artificial intelligence replace human creatives?”

It’s a great question to ask, because no one really knows the answer. Rephrase the question slightly and ask “Could artificial intelligence replace human creatives?” and I’d argue that the answer is, most definitely, yes (obviously, leaving aside the question of “when?”). Is AI destined to take over the creative brief entirely and replace human creatives and creative processes? I’d say that, at the end of the day, this is largely going to be up to us to decide.

The meteoric rise of so-called artificial intelligence – which, these days, is used synonymously with the many applications, systems and devices powered by machine learning – is as impressive as it is scary. And, as with most up-and-coming technologies, it’s often very difficult to differentiate the reality from the hype.

Will AI replace creatives?

By all accounts, AI is by no means ready to fill our creative boots. We can train AI systems to learn things from data sets, analyse trends, make recommendations and actually create outputs of different kinds, including “creative work”. However, AI hasn’t yet been able to even convincingly mimic the complexities of human thought and creativity. Some would argue that it is only a matter of time before that data too is assimilated. Imagine an AI system trained on the experiences, thoughts and dreams of the planet’s top 100 advertising creative professionals? It could happen, just not quite yet.

Today, AI systems have been used to produce original creative advertising work with, at best, moderate success. However, AI is much better at targeting, deploying and optimising advertising assets. There is also an increasingly wide range of tools becoming available to inform, analyse, optimise and fast-track creative projects. As AI voice becomes ubiquitous, using those tools is going to become more intuitive and seamless – and so better able to assist creative development.

Inspired by the article on The Drum, I posted some further questions on Linkedin last week – “Will AI fast-track the training and development of creative professionals? Or will AI’s efficiency strangle that essential pipeline of new creative talent that would have traditionally developed up through the ranks?”

‘You can’t box creativity’

A variety of advertising, marketing and technology professionals responded in comments and via messaging. You can read all the comments in full on my post from last week here. Meanwhile, it could be useful to summarise some key points here. Although there was consensus that AI is nowhere near ready to take over human creative work, I was interested to find that there were also some quite divergent opinions.

From some, there was certainty that AI could not and will not replace human creatives. Sherif El Ghamrawy at Photovision Plus believes that “there will always be certain things that remain uniquely human that no machine will ever be able to truly replicate”, citing emotion and imagination as key differentiators. Ramesh Naidu Garikamokkala at PAGO Analytics agrees that AI is not going to replace the role of our emotions.

Ibrahim Lahoud of Brand Lounge also seems to be in agreement with this, sharing that AI could fast-track training and development of creatives, but that’s where he draws the line. “AI can create a logo where human creatives will create a brand. AI can analyze shapes and colors where humans can read emotions.”

Jad Hindy at MRM/McCann noted (via messaging) that you can’t box creativity or confine it within a standard process. He says “ideation can’t be AI-ed, but the creation of assets can.”

Some of the futurists out there do believe that AI could replace human creativity sometime in the future, although, as Steven Gare of AI Blockchain Service puts it, “defining AI in this context is pure speculation at this time”.

Most professionals agree that AI does promise to both empower and change the creative process, including career development. Lahoud’s take is that “AI will not replace creatives, but will rather be an incredibly powerful assistive tool that will act as an extension to their boiling minds”. Kassem Nasser, American University of Beirut, agrees and says that AI “is a technology that will open new challenges and opportunities to our minds not replace them.”

Meanwhile, Robert McGovern at Horizontal Digital notes that new AI tools could help with brainstorming, idea generation and connecting different concepts together, plus fast-tracking research work.

‘Think of AI as an exoskeleton for brains!’

In my mind, how creative professions – and creative industries as a whole – adapt to the arrival of AI and other new technologies is going to play the deciding role in determining whether we are empowered to create greater things or get used to accepting what AI creates for us. A point well made by Gowri Selka from Volantsys Analytics Inc.: “it is critical for humans from all backgrounds of career to gain new skills and leverage these technologies to their benefit.”

For sure, the clock is ticking. AI and related technologies are developing at a pace that we’ve never experienced before. Like it or not, change is absolutely the only constant that we can look forward to. As Jürn-Christian Hocke at Select World urges, “we have to think about the new dealt cards NOW.” He also says creatives must learn what creativity and creative careers will look like in the future.

“Think of AI as an exoskeleton for brains!” is Lahoud’s advice for creatives. And, I think, he’s hit the nail on the head here.

As the capabilities of AI continue to grow, the creative process may look less and less like the process of old. However, whether this process remains human-centric is going to depend on how we frame AI’s future role. If AI is to super-charge human creativity, it’s up to creative professionals to take firm hold of the controls and remain at the very centre of the creative process. Time to suit up!

This story was originally published on Linkedin

Read the Arabic language version here: نظرة على مستقبل الإبداع


July 9, 2020
Dubai-RTA-drone-test-1200.jpg

The roll-out of Dubai’s new drone regulations and Dubai Sky Dome initiative will be watched closely by policy makers, aviation regulators and smart city planners worldwide.

Last Saturday, His Highness Sheikh Mohammed bin Rashid Al Maktoum, Vice President and Prime Minister of the UAE, in his capacity as Ruler of Dubai, announced the issue of Law No. 04 of 2020, outlining new regulations governing drone activity in the emirate. The new law promises to make Dubai a commercially-friendly environment for drone services, manufacturing and innovation.

It is well publicised that ecommerce firms, logistics companies and drone delivery startups have been trialling drone delivery services for years in Asia, Europe and the U.S., but governments have moved slowly to solve regulatory issues.

There is also a need for airspace to be managed safely, securely and effectively for drone usage, both controlling the flight paths of drones in city areas and ensuring that they don’t interfere with civil and military aviation. The complexities of managing drone traffic, public and private liabilities, and the number of different stakeholders that must be involved and coordinated with, have led some regulatory processes to simply grind to a halt.

Dubai now seems to be ahead of the game. The new Dubai drone law paves the way for Dubai Department of Civil Aviation (DCAA) to implement its ‘Dubai Sky Dome’ initiative, which aims to create a virtual airspace infrastructure and ecosystem for commercial drone use. It appears that an awful lot of work has gone into finding solutions for the practicalities of drone airspace management and supporting a drone ecosystem.

The new law could allow Dubai to leapfrog the global competition and kick-start a whole new industry that is right at the forefront of innovation. The Dubai Sky Dome looks set to underpin the flying taxi services planned by the RTA, allow commercial drone delivery services and establish Dubai as an ideal location for global drone ventures to test, trial and launch their products and services.

In addition to encouraging local and global drone startup ventures to establish themselves in Dubai, we can also expect the Dubai Sky Dome to become somewhat of a Petri dish for global policy makers in aviation, smart cities and R&D. Let’s see how fast it grows!

This story first appeared on Linkedin


January 6, 2020
2020-marketing-planning.jpg

Now that the New Year has arrived, I’m not about to tell you how to develop your 2020 marketing plan. I’m guessing that this is, at least, completed in draft and perhaps already approved and has been used for other 2020 briefing and planning. However, could you improve your marketing plan’s presentation?

Although you may well have worked long and hard on your marketing plan, you may still be in the process of improving it before sharing a final version with your wider internal audience. Perhaps you intended to add a few tweaks over the holidays, or maybe you’re creating a shorter version of your plan in slide format to help communicate your plan internally. Whatever you choose to do, it’s important to have a marketing plan ready that is easy to understand for internal stakeholders across your organisation. We’re all ‘in marketing’ these days, so making the effort to improve your marketing plan presentation is time well spent!

Continue reading this story on Linkedin.


December 30, 2019
will-AI-take-your-job.jpg

Will AI take your job? Of course not, but that’s hardly the right question.

The semantics used by the technology industry about AI and its impact on jobs have started to grate on me a bit. The future of work is changing faster than ever before and it will drive many new opportunities and new career paths. In the short term, the reality is that a lot of people will lose their jobs, but that’s something no technology leader wants to be quoted as saying, in particular when they could be holding forth on our bright AI-powered future.

IBM CEO Ginni Rometty said – a couple of years ago now – that AI will impact 100 percent of current jobs, which, of course, is now common sense. AI’s impact on jobs is also a complex subject and it’s dangerous to try to sum it up in one simple concept. However, by and large, that’s what many tech leaders are doing, with “AI won’t take your job” as the reassuring umbrella message that the whole drive towards AI adoption seems to fly under. The answer is both straightforward and misleading. No, AI won’t take your job, any more than a gun will shoot you: that requires a human.

The fly in the tech industry’s ointment is that their customers are not always ‘on message’. Many large employers have already commented over the past year that one of the benefits that AI brings to them is the ability to do more with less staff, some even going further and stating plainly that the technologies are allowing them to cut volumes of staff.

There are now a growing number of studies that highlight huge changes in the number of current jobs that will be phased-out due to the introduction of automation. In October, a report on the banking sector from Wells Fargo & Co. estimated 200,000 job cuts across the US banking industry over the next decade, including many customer service functions. Often, the big numbers in such reports are necessarily ‘fuzzy’. Statistics often include jobs that employers will phase out by head-count freezes, jobs that will no longer be specified for new operations, plus actual redundancies.

Forecasts for the elimination of certain jobs are embraced by the technology industry as evidence that the nature of work is changing and that old jobs must die in order for new, technology-enabled jobs to be created. One can already see from Linkedin’s top emerging jobs lists for 2019 that specialist roles in artificial intelligence development, robotics, data science and data security are all fast-growing. This is the crux of the now commonplace – but, as yet, unsubstantiated – argument that AI will create more jobs than it eliminates.

How much of ‘the future’s so bright’ narrative is used by the tech industry to distract us from the here and now? On conference platforms all over the world, big tech typically urges employers to focus on how AI can enhance productivity, help define new business models and benefit customers, and not to simply save costs by replacing workers. However, any business that aims to be competitive in our global economy must look at ways to cut costs as well as ways to increase efficiency. As more AI-powered solutions are developed that reduce the need for human workers, more jobs are cut.

Food delivery platform Zomato announced that it was laying off some 600 people in September, claiming that most of these jobs would be automated following continued investment in technology systems.

Earlier in the year budget airline AirAsia confirmed that it had closed nine call centres as a result of its AI chatbot customer service project. No redundancies were mentioned and it’s assumed that most, if not all, call centres were outsourced.

Banks all over the world have used automation to cut countless thousands of jobs over the past ten years and AI will allow them to cut thousands more.

Meanwhile, global economic analysis firm Oxford Economics estimates that automation will eliminate up to 20 million manufacturing jobs worldwide by 2030.

According to the 2019 Harvey Nash / KPMG CIO Survey, one third of CIOs say their companies plan to replace more than 20 percent of job roles with AI/automation within 5 years, although 69 percent also believe new job roles will compensate for those lost. Many agree that the new technology-powered job roles created will compensate for current jobs lost. This also, clearly, means different things to different people. A new data science role may sound great when you’re at college or, perhaps, already involved in digital data, but not so much if you’re a call centre agent with 10 years’ experience who’s just been let go.

So, to me at least, it seems disingenuous for technology leaders to hide behind technicalities, calling out warnings of job losses as a result of AI as being misinformed, unjustified or not presenting the entire picture. Cost savings are a powerful driver of AI adoption and, for many organisations, those savings will be made by cutting jobs. There’s room for the tech industry to be a little more honest about that.

This story first appeared on Linkedin.


December 22, 2019
ai-search-popularity-2.png

The answer to that probably depends on where you live, where you’re from and what you do for a living.

For those of us following developments in emerging technologies closely, it might seem like the past year has been the year in which news, discussion and debate about artificial intelligence (AI) has come to the fore. Deepfakes, AI surveillance, facial recognition, smart robotics, chatbots and social media bots have all been in the news, some of them associated with highly controversial issues. There’s also been plenty of debate about the impact of AI on political campaigning, data privacy, human rights, jobs and skills, plus, needless to say, the steady flow of industry messages about business efficiency.

However, the truth is that the amount of attention that AI has received depends very much on which part of the world you live in and what you do for a living. One of the most interesting takeaways from looking at AI-related searches on Google over the past year is that many global search volumes for terms related to artificial intelligence haven’t changed that much, but the differences in interest shown from country to country are striking.

No prizes for guessing that China is among the countries that show the most interest in artificial intelligence. Google Trends awards a score of 62 out of 100 for its search volume, despite the fact that Google’s services remain blocked for most Internet users in the country.

Google search volumes for artificial intelligence in India (45), Pakistan (65) and the UAE (53) all compare favourably with China’s high level of interest, although, for some reason, Google Trends credits Zimbabwe (100) with being the country most interested in AI.

In Europe, the United Kingdom (15) and Ireland (17) are among the countries most interested in artificial intelligence, roughly on a par with the Netherlands (18), Switzerland (15), US (17), Australia (19) and New Zealand (15), while behind Canada (20) and South Africa (27).

Meanwhile, much of the world seems to be focused on other things. For those populations on the other side of the great digital divide, that’s perfectly understandable. Most of South America, Africa and a significant part of Asia appear to remain backwaters in terms of interest in AI. However, quite a number of European countries score below 10 on Google Trends’ 0 to 100 scale, including France, Italy, Spain, Poland and others.

So, as we eagerly consume news and comment about how AI is going to change our world and herald sweeping changes that affect every aspect of our lives, it’s perhaps as well to remember that these changes won’t be uniform across the globe and, for many, artificial intelligence is going to seem largely irrelevant for a long time to come.

Note: figures from Google Trends’ 0 to 100 scale for search volume seem to change frequently, but the ranking from high to low volumes remains largely the same.
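Out of interest, the same country-level breakdown can be pulled programmatically rather than read off the Google Trends website. Here is a rough sketch, again assuming the unofficial pytrends Python library (not something used for this article); as noted above, the relative scores drift between queries even though the ranking stays broadly stable.

# Sketch (assumption: the unofficial pytrends library): country-level relative
# search interest for 'artificial intelligence' over the past 12 months.
from pytrends.request import TrendReq

pytrends = TrendReq(hl='en-US', tz=0)
pytrends.build_payload(['artificial intelligence'], timeframe='today 12-m')

by_country = pytrends.interest_by_region(resolution='COUNTRY', inc_low_vol=True)

# Show the 15 countries with the highest relative interest (0-100 scale)
top = by_country.sort_values('artificial intelligence', ascending=False).head(15)
print(top)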

This story first appeared on Linkedin


November 2, 2019
ai-brand-avatar.jpg

Do brands need AI avatars of themselves? Last week at London’s One Young World Summit, Biz Stone, co-founder of Twitter, and Lars Buttler, CEO of San Francisco-based The AI Foundation, announced a new concept they called ‘personal media’ and claimed that artificial intelligence is the future of social change. The Foundation is working on new technology that Buttler says will allow anyone to create an AI avatar of themselves, which would look like them, talk like them and act like them. Empowered by AI avatars, people will then be able to, potentially, have billions of conversations at the same time.

So, what does this new kind of AI communications mean for brands?

Continue reading this story on Linkedin