AI regulation Archives — Carrington Malin

January 10, 2021
[Image: artificial intelligence ethical concerns, 2020]

My New Year’s LinkedIn poll about changes in how people feel about their ethical concerns regarding AI doesn’t prove much, but it does show that 2020 did little to ease those concerns.

Opinions about artificial intelligence, and our levels of understanding of it, can vary a great deal from person to person. For example, I consider myself a bit of a technophile and an advocate of many technologies, including AI, with a higher than average level of understanding. However, I harbour many concerns about the ethical application and usage of some AI technologies, and about the lack of governance around them. My knowledge doesn’t stop me from having serious concerns, nor do those concerns stop me from seeing the benefits of technology applied well. I also expect my views on the solutions to ethical issues to differ from others’. AI ethics is a complex subject.

So, my intention in running this limited LinkedIn poll over the past week (96 people responded) was not to analyse the level of concern that people feel about AI, nor the reasons behind it, but simply to gauge whether the widespread media coverage of AI during the pandemic had heightened or alleviated people’s concerns.

The results of the poll show that few people (9%) felt that their ethical concerns about AI were alleviated during 2020. Meanwhile, a significant proportion (38%) felt that 2020’s media coverage had actually heightened their ethical concerns about AI. We can’t guess the level of concern among the third and largest group – the 53% who voted that 2020 ‘hasn’t changed anything’ – but it’s clear that 2020’s media coverage of AI brought no news to alleviate any concerns they might already have held either.

[Image: artificial intelligence ethical concerns poll, 2020]

Media stories about the role of AI technologies in responding to the coronavirus pandemic began to appear early on in 2020, with governments, corporations and NGOs providing examples of where AI was being put to work and how it was benefiting citizens, customers, businesses, health systems, public services and society in general. Surely, this presented a golden opportunity for proponents of AI to build trust in its applications and technologies?

Automation and AI chatbots allowed private and public sector services, including healthcare systems, to handle customer needs as live person-to-person communication became more difficult to ensure. Meanwhile, credit was given to AI for helping to speed up the data analysis, research and development needed to find new solutions, treatments and vaccines to protect society against the onslaught of Covid-19. Then there was the wave of digital adoption by retail companies (AI-powered or not) in an effort to provide digital, contactless access to their services, boosting consumer confidence and increasing usage of online ordering and contactless payments.

On the whole, trust in the technology industry remains relatively high compared to other industries. Nevertheless, that trust is being eroded, and it’s no surprise that newer, less understood and less regulated technologies such as AI are fuelling concerns. Fear of AI-driven job losses is a common concern, but so are privacy, security and data issues. However, many people around the world are broadly positive about AI, in particular those in Asia. According to the Pew Research Center, two-thirds or more of people surveyed in India, Japan, Singapore, South Korea and Taiwan say that AI has been a good thing for society.

Since the beginning of the pandemic, AI’s public image has had wins and losses. For example, research from Amadeus found that 56% of Indians believe new technologies will boost their confidence in travel. Meanwhile, a study of National Health Service (NHS) workers in London found that although 70% believed AI could be useful, 80% of participants believed there could be serious privacy issues associated with its use. And despite a relatively high level of trust in the US for government use of facial recognition, the Black Lives Matter protests of 2020 highlighted deep concerns, prompting Amazon, IBM and Microsoft to halt sales of facial recognition technology to police forces.

Overall, I don’t think that AI turned out to be the widespread buffer against the spread of Covid-19 that it, perhaps, could have been. Renowned global AI expert Kai-Fu Lee commented in a webinar last month that AI wasn’t really prepared to make a decisive difference in combating the spread of the new coronavirus. With no grand victory over Covid-19 to attribute to AI’s role over the past year, it’s understandable that there was no grand victory for AI’s public image either. Meanwhile, all the inconvenient questions about AI’s future and the overall lack of clear policies that fuel concerns about AI remain, with some even attracting greater attention during the pandemic.

This article was first posted on LinkedIn.


December 6, 2019
[Image: Donald Trump deepfake video]

Deepfake videos are becoming more abundant and increasingly difficult to spot.

Deepfake videos are back in the news again this week as China criminalised their publication without a warning to viewers. California also recently introduced an anti-deepfake law in an attempt to prevent such content from influencing the US elections in 2020.

Deepfakes are videos that make fake content look convincingly real, produced by software using machine learning algorithms. Videos like this started to pop up online a few years ago and, since then, regulators around the world have been scrambling to prevent the spread of such malicious content. While deepfake laws mean different things in different jurisdictions, what has changed to make deepfakes an urgent priority for policymakers? And will such laws be sufficient to keep pace with the spread of fake information?

First, there is the sheer quantity. The number of deepfake videos is growing fast as new technology makes it easier to create them. Deeptrace, an Amsterdam-based cybersecurity company, found the occurrence of deepfake videos on the web increased 84 per cent from December 2018 to July this year. The company identified 14,698 deepfakes online during this time.

In 2018, internet media group Buzzfeed grabbed attention with a video it dubbed “a public service announcement”: a deepfake video of former US president Barack Obama “speaking” about fake news, voiced by American actor Jordan Peele. At first glance, the video appeared authentic, but on closer inspection it was clear that it had been manipulated.

Racking up nearly 7 million views on YouTube to date, the Buzzfeed stunt was a stark warning about the dangers of deepfakes, in which anyone can appear to say anything. While the results so far have been relatively crude and easy to identify, future deepfake videos are likely to be much harder for the human eye to spot as fake. The artificial intelligence (AI) used to make deepfakes is getting better, making it more and more difficult to distinguish a deepfake video from an original. In fact, machine learning algorithms already allow deepfake applications to mimic facial movements in ways that are virtually undetectable as fake to human viewers.

This combination of easy-to-use deepfake software and the increasing sophistication of those applications means that the overall quality of deepfakes will rise, and we’re soon likely to see tens of thousands of different deepfakes, perhaps hundreds of thousands. Experts believe that the technology to make deepfake videos that seem perfectly real will be widely available within a year.

So, how will we be able to tell what’s real and what’s fake?

When we see a video news report of a head of state, politician, doctor or subject matter expert saying something, how will we be able to trust that it’s authentic? This is now the subject of concern for leaders in business, technology, government and non-governmental organisations.

Undetectable deepfakes have the potential to mislead the media and the general public and so impact every aspect of business, government and society. As the risk of malicious deepfakes increases, they could represent a threat to everyone from celebrities to lawmakers, from scientists to schoolchildren, and perhaps even to the world’s legal systems.

Original videos can be manipulated so that spokespeople appear to say things that undermine their credibility. Likewise, inadvisable remarks made by public figures can be edited out, or video evidence of a crime removed.

What’s more, the deepfake revolution is just beginning. As new technologies continue to develop, it is thought to be only a matter of years before it will be possible to create deepfakes in real-time, opening up opportunities for bad actors to deceive global audiences and manipulate public perceptions in a few moments. With a few more years of technology development, it’s also conceivable that it will become possible to create deepfakes at scale, altering video to deliver different versions of it to different audiences.

In today’s digital world, deepfakes don’t need to fool the mainstream media to have a significant impact. With nearly 5 billion videos watched on YouTube each day and another 8 billion on Facebook, deepfake producers have an eager global audience that is not yet accustomed to questioning whether trending videos are real or fake.

Facebook and Google are both developing AI to automatically detect deepfakes, but this technology currently lags far behind the development of deepfake tech itself. Until anti-deepfake software catches up, it’s likely that the average internet user will have no way of knowing whether a video is real or fake.

As scary as the future may sound, the most dangerous time for deepfakes may actually be the present.

This story was first published by The National.


November 8, 2019
[Image: WEF technology policy]

One of the topics discussed at this week’s annual World Economic Forum Global Future Council in Dubai was how governments are coping with setting the rules for disruptive technologies. Given the now-daily news coverage of divergent opinions on the threats and opportunities brought by artificial intelligence (AI), it should come as no surprise that policymakers are coming under extreme pressure to do better.

It’s not simply the scale of the challenge, with government, industry and other institutions having to consider a wider range of technologies and technology implications than ever before. It is also the sheer pace of change, which drives an unrelenting cycle of innovation and disruption and, as AI becomes more commonplace, will only accelerate further.

Continue reading this story on The National.


September 6, 2019
[Image: emotional AI and GDPR]

The increasing spread of emotional AI brings with it new concerns about privacy, personal data rights and, well, freedom of expression (although that last one has, in a sense, perhaps not been thought about much in the past). The data that emotion recognition captures could be considered biometric data, just like a scan of your fingerprint. However, there is little legislation globally that specifically addresses the capture, usage rights and privacy considerations of biometric data. Until GDPR.

Continue reading this story on LinkedIn.