Media Archives — Carrington Malin

December 22, 2019
[Image: ai-search-popularity-2.png]

Has this been the year of artificial intelligence? The answer probably depends on where you live, where you're from and what you do for a living.

For those of us following developments in emerging technologies closely, it might seem like the past year has been the year in which news, discussion and debate about artificial intelligence (AI) has come to the fore. Deepfakes, AI surveillance, facial recognition, smart robotics, chatbots and social media bots have all been in the news, some of them linked to highly controversial issues. There's also been plenty of debate about the impact of AI on political campaigning, data privacy, human rights, jobs and skills and, needless to say, a steady flow of industry messages about business efficiency.

However, the truth is that the amount of attention AI has received depends very much on which part of the world you live in and what you do for a living. One of the most interesting takeaways from looking at AI-related searches on Google over the past year is that global search volumes for many terms related to artificial intelligence haven't changed that much, but the differences in interest from country to country are striking.

No prizes for guessing that China is among the countries showing the most interest in artificial intelligence. Google Trends awards it a score of 62 out of 100 for search volume, despite the fact that Google's services remain blocked for most internet users in the country.

Google search volumes for artificial intelligence in India (45), Pakistan (65) and the UAE (53) all compare favourably with China's high level of interest, although, for some reason, Google Trends credits Zimbabwe (100) with being the country most interested in AI.

In Europe, the United Kingdom (15) and Ireland (17) are among the countries most interested in artificial intelligence, roughly on a par with the Netherlands (18) and Switzerland (15), as well as the US (17), Australia (19) and New Zealand (15), though behind Canada (20) and South Africa (27).

Meanwhile, much of the world seems to be focused on other things. For those populations on the other side of the great digital divide, that's perfectly understandable. Much of South America, Africa and a significant part of Asia appears to remain a backwater in terms of interest in AI. However, quite a number of European countries, including France, Italy, Spain and Poland, score below 10 on Google Trends' 0-to-100 scale.
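The scores quoted here are Google Trends' relative 0-to-100 index, where 100 marks the region with the highest share of searches for a term, not an absolute count of searches. For readers who want to pull the current figures themselves, here is a minimal sketch using the third-party pytrends library, an unofficial Google Trends wrapper; the keyword, timeframe and top-ten display are illustrative assumptions, not the exact query behind this article.

```python
# Minimal sketch: country-level interest in "artificial intelligence"
# via pytrends, an unofficial Google Trends wrapper (pip install pytrends).
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=0)

# Query worldwide search interest over the past 12 months (assumed timeframe).
pytrends.build_payload(["artificial intelligence"], timeframe="today 12-m")

# interest_by_region() returns a DataFrame of relative 0-100 scores per
# country; 100 is the country with the highest share of searches.
scores = pytrends.interest_by_region(resolution="COUNTRY", inc_low_vol=True)
top10 = scores.sort_values("artificial intelligence", ascending=False).head(10)
print(top10)
```

As the note below points out, these figures shift frequently, so a run today won't reproduce the exact numbers quoted above, although the high-to-low ranking should look broadly similar.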

So, as we eagerly consume news and comment about how AI is going to change our world and herald sweeping changes that affect every aspect of our lives, it’s perhaps as well to remember that these changes won’t be uniform across the globe and, for many, artificial intelligence is going to seem largely irrelevant for a long time to come.

Note: figures on Google Trends' 0-to-100 scale for search volume seem to change frequently, but the ranking from high to low volumes remains largely the same.

This story first appeared on LinkedIn


December 21, 2019
[Image: UAE-Pepper-Robot.jpg]

The UAE’s all abuzz about AI! We’re exposed to more and more news about artificial intelligence, or AI, these days – from stories about talking robots to deepfake videos of celebrities and self-driving cars. AI has become a buzzword and popular interest has been steadily growing over the past few years. However, it would be a mistake to assume that everyone shares the same level of interest, learning about new technologies and increasing their understanding of AI at the same rate. After all, more than 40 percent of the world’s population isn’t even connected to the Internet yet.

Here in the UAE, the media seems to provide residents with a daily diet of news about artificial intelligence. This is no accident. Although it's never a perfect replica, the media is a reflection of society, and interest in AI-related topics has grown as business, government and education investment in AI has scaled up.

Continue reading this story on The National


December 6, 2019
[Image: Donald-Trump-deepfake-video.jpg]

Deepfake videos are becoming more abundant and increasingly difficult to spot.

Deepfake videos are back in the news again this week, as China criminalised the publication of deepfakes that do not carry a clear warning to viewers. California also recently introduced an anti-deepfake law in an attempt to prevent such content from influencing the 2020 US elections.

Deepfakes are videos that make fake content look convincingly real, produced by software using machine learning algorithms. Videos like this started to pop up online a few years ago and, since then, regulators around the world have been scrambling to prevent the spread of such malicious content. While deepfake laws mean different things in different jurisdictions, what has changed to make deepfakes an urgent priority for policymakers? And will such laws be sufficient to keep pace with the spread of fake information?

First, there is the sheer quantity. The number of deepfake videos is growing fast as new technology makes them easier to create. Deeptrace, an Amsterdam-based cybersecurity company, found that the number of deepfake videos on the web increased by 84 per cent between December 2018 and July this year, identifying 14,698 deepfakes online during this period.

In 2018, internet media group BuzzFeed grabbed attention with a video it dubbed "a public service announcement": a deepfake video of former US president Barack Obama "speaking" about fake news, voiced by American actor Jordan Peele. At first glance, the video appeared authentic, but on closer inspection it was clearly manipulated.

Racking up nearly seven million views on YouTube to date, the BuzzFeed stunt was a stark warning about the dangers of deepfakes, in which anyone can appear to say anything. While the results so far have been relatively crude and easy to identify, future deepfake videos are likely to be much harder for the human eye to spot. The artificial intelligence (AI) used to make deepfakes is getting better, making it more and more difficult to distinguish a deepfake video from an original. In fact, machine learning algorithms already allow deepfake applications to mimic facial movements in ways that are virtually undetectable as fake to human viewers.

This combination of easy-to-use deepfake software and the increasing sophistication of those applications means that the overall quality of deepfakes will rise, and we're soon likely to see tens of thousands of different deepfakes, perhaps hundreds of thousands. Experts believe that the technology to make deepfake videos that look perfectly real will be widely available within a year.

So, how will we be able to tell what’s real and what’s fake?

When we see a video news report of a head of state, politician, doctor or subject matter expert saying something, how will we be able to trust that it’s authentic? This is now the subject of concern for leaders in business, technology, government and non-governmental organisations.

Undetectable deepfakes have the potential to mislead the media and the general public, and so to affect every aspect of business, government and society. As the risk of malicious deepfakes increases, they could represent a threat to everyone from celebrities to lawmakers, from scientists to schoolchildren, and perhaps even to the world's legal systems.

Original videos can be manipulated so that spokespeople appear to say things that undermine their credibility. Likewise, inadvisable remarks made by public figures can be edited out, or video evidence of a crime removed.

What's more, the deepfake revolution is just beginning. As new technologies continue to develop, it may be only a matter of years before it is possible to create deepfakes in real time, opening up opportunities for bad actors to deceive global audiences and manipulate public perceptions within moments. With a few more years of technological development, it's also conceivable that deepfakes could be produced at scale, altering a video to deliver different versions of it to different audiences.

In today's digital world, deepfakes don't need to fool the mainstream media to have a significant impact. With nearly five billion videos watched on YouTube every day, and another eight billion on Facebook, deepfake producers have an eager global audience that is not yet accustomed to questioning whether trending videos are real or fake.

Facebook and Google are both developing AI to automatically detect deepfakes. But this technology currently lags far behind the development of deepfake tech itself. Until anti-deepfake software catches up, it’s likely the average internet user may have no way of knowing if a video is real or fake.

As scary as the future may sound, the most dangerous time for deepfakes may actually be the present.

This story was first published by The National


October 31, 2019
[Image: one-young-world-summit-2019.jpg]

Twitter co-founder Biz Stone and Lars Buttler, chief executive of San Francisco-based The AI Foundation, introduced a new, AI-enabled concept of 'personal media' at last week's One Young World Summit in London. The company is developing technology to allow anyone to create an artificial version of themselves to represent their interests anytime, anywhere. These personal avatars will look, sound and act like their creators.

According to Stone and Buttler, just as the world moved from the mass media era to the social media era, it will now begin to move into the age of 'personal media'.

Continue reading this story on The National.


February 27, 2018
[Image: arab-news-27-February-2018.jpg]

The Wall Street Journal (WSJ) is seeking more partnerships globally as it endeavours to build a sustainable digital business and a share of global online voice. In the Middle East, it has signed a partnership with Abu Dhabi Media-owned publication Al-Ittihad.

Arab News asked for my take on why the WSJ is trying to develop such partnerships, and I made the point that the paper will be counting on deals like this to help put its own digital platform in front of new audiences globally.

Read the full article here.


November 28, 2017
[Image: fake-content-bots-1200.jpg]

Donald Trump's habit of repeatedly 'crying wolf' over media stories has played a big part in pushing 'fake news' into the public consciousness this year. However, fake news is a real problem. In fact, fake content of all kinds is a problem, and one that is going to reach enormous proportions over the next few years, misleading consumers, journalists, students, voters and other seekers of truth in the process. A content war is coming, and fake content is already starting to carry the big guns.

Global research and advisory firm Gartner predicts that by 2020, "most people in mature economies will consume more false information than true information". This onslaught of fake content is being enabled by artificial intelligence (AI).

Continue reading this story on the Spot On blog.