December 2019 — Carrington Malin

Blogs, writing, published articles, media interviews and other news
December 30, 2019

Will AI take your job? Of course not, but that’s hardly the right question.

The semantics used by the technology industry about AI and its impact on jobs have started to grate on me a bit. The future of work is changing faster than ever before and it will drive many new opportunities and new career paths. In the short term, the reality is that a lot of people will lose their jobs, but that’s something no technology leader wants to be quoted as saying, in particular when they could be holding forth on our bright AI-powered future.

IBM CEO Ginni Rometty said – a couple of years ago now – that AI will impact 100 percent of current jobs, which, of course, is now common sense. AI’s impact on jobs is also a complex subject and it’s dangerous to try to sum it up in one simple concept. However, by and large, that’s what many tech leaders are doing, with “AI won’t take your job” as the reassuring umbrella message that the whole drive towards AI adoption seems to fly under. That answer is both straightforward and misleading. No, AI won’t take your job, any more than a gun will shoot you: that requires a human.

The fly in the tech industry’s ointment is that its customers are not always ‘on message’. Many large employers have commented over the past year that one of the benefits AI brings them is the ability to do more with fewer staff, and some have gone further, stating plainly that the technology is allowing them to cut large numbers of staff.

A growing number of studies now highlight huge changes in the number of current jobs that will be phased out due to the introduction of automation. In October, a report on the banking sector from Wells Fargo & Co. estimated 200,000 job cuts across the US banking industry over the next decade, including many customer service functions. The big numbers in such reports are necessarily ‘fuzzy’: they often include jobs that employers will phase out through head-count freezes and jobs that will no longer be specified for new operations, as well as actual redundancies.

Forecasts for the elimination of certain jobs are embraced by the technology industry as evidence that the nature of work is changing and that old jobs must die in order for new, technology-enabled jobs to be created. One can already see from LinkedIn’s top emerging jobs lists for 2019 that specialist roles in artificial intelligence development, robotics, data science and data security are all fast-growing. This is the crux of the now commonplace – but, as yet, unsubstantiated – argument that AI will create more jobs than it eliminates.

How much of the ‘the future’s so bright’ narrative is used by the tech industry to distract us from the here and now? On conference platforms all over the world, big tech typically urges employers to focus on how AI can enhance productivity, help define new business models and benefit customers, not simply to save costs by replacing workers. However, any business that aims to be competitive in our global economy must look at ways to cut costs as well as ways to increase efficiency. As more AI-powered solutions are developed that reduce the need for human workers, more jobs are cut.

Food delivery platform Zomato announced in September that it was laying off some 600 people, saying that most of these jobs would be automated following continued investment in technology systems.

Earlier in the year, budget airline AirAsia confirmed that it had closed nine call centres as a result of its AI chatbot customer service project. No redundancies were mentioned and it’s assumed that most, if not all, of the call centres were outsourced.

Banks all over the world have used automation to cut countless thousands of jobs over the past ten years and AI will allow them to cut thousands more.

Meanwhile, global economic analysis firm Oxford Economics estimates that automation will eliminate up to 20 million manufacturing jobs worldwide by 2030.

According to the 2019 Harvey Nash / KPMG CIO Survey, one third of CIOs say their companies plan to replace more than 20 percent of job roles with AI and automation within five years, although 69 percent also believe new job roles will compensate for those lost. Clearly, though, that compensation means different things to different people. A new data science role may sound great when you’re at college or, perhaps, already working with digital data, but not so much if you’re a call centre agent with 10 years’ experience who’s just been let go.

So, to me at least, it seems disingenuous for technology leaders to hide behind technicalities, dismissing warnings of job losses as a result of AI as misinformed, unjustified or not presenting the entire picture. Cost savings are a powerful driver of AI adoption and, for many organisations, those savings will be made by cutting jobs. There’s room for the tech industry to be a little more honest about that.

This story first appeared on LinkedIn.


December 22, 2019

The answer to that probably depends on where you live, where you’re from and what you do for a living.

For those of us following developments in emerging technologies closely, it might seem like the past year has been the year in which news, discussion and debate about artificial intelligence (AI) came to the fore. Deepfakes, AI surveillance, facial recognition, smart robotics, chatbots and social media bots have all been in the news, some of them tied to highly controversial issues. There’s also been plenty of debate about the impact of AI on political campaigning, data privacy, human rights, jobs and skills, and, needless to say, a steady flow of industry messages about business efficiency.

However, the truth is that the amount of attention AI has received depends very much on which part of the world you live in and what you do for a living. One of the most interesting takeaways from looking at AI-related searches on Google over the past year is that many global search volumes for terms related to artificial intelligence haven’t changed that much, but the differences in interest from country to country are striking.

No prizes for guessing that China is among the countries that show the most interest in artificial intelligence. Google Trends awards it a score of 62 out of 100 for search volume, despite the fact that Google’s services remain blocked for most Internet users in the country.

Volumes of Google searches for artificial intelligence in India (45), Pakistan (65) and the UAE (53) all compare favourably with China’s high level of interest, although, for some reason, Google Trends credits Zimbabwe (100) with being the country most interested in AI.

In Europe, the United Kingdom (15) and Ireland (17) are among the countries most interested in artificial intelligence, roughly on a par with the Netherlands (18), Switzerland (15), the US (17), Australia (19) and New Zealand (15), though behind Canada (20) and South Africa (27).

Meanwhile, much of the world seems to be focused on other things. For those populations on the other side of the great digital divide, that’s perfectly understandable. Most of South America, Africa and a significant part of Asia appear to remain backwaters in terms of interest in AI. However, quite a number of European countries score below 10 on Google Trends’ 0 to 100 scale, including France, Italy, Spain and Poland.

So, as we eagerly consume news and comment about how AI is going to change our world and herald sweeping changes that affect every aspect of our lives, it’s perhaps as well to remember that these changes won’t be uniform across the globe and, for many, artificial intelligence is going to seem largely irrelevant for a long time to come.

Note: figures on Google Trends’ 0 to 100 scale for search volume seem to change frequently, but the ranking from high to low volumes remains largely the same.
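For anyone who wants to poke at these numbers themselves, the country scores above can be pulled programmatically. Below is a minimal sketch in Python using pytrends, an unofficial Google Trends client (the library choice and parameters are my assumptions; any Trends client would do). As the note says, the raw scores drift, so the ranking is the thing to watch.

from pytrends.request import TrendReq

# Connect to Google Trends (unofficial client: pip install pytrends).
pytrends = TrendReq(hl="en-US", tz=0)

# Ask for the past 12 months of searches for the term.
pytrends.build_payload(["artificial intelligence"], timeframe="today 12-m")

# Relative interest by country on Google's 0-100 scale
# (100 = the highest-interest country, Zimbabwe at the time of writing).
by_country = pytrends.interest_by_region(resolution="COUNTRY")

# The ranking is more stable than the raw scores, so sort and inspect it.
print(by_country.sort_values("artificial intelligence", ascending=False).head(15))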

This story first appeared on LinkedIn.


December 21, 2019

The UAE’s all abuzz about AI!

We’re exposed to more and more news about artificial intelligence, or AI, these days – from stories about talking robots to deepfake videos of celebrities and self-driving cars. AI has become a buzzword and popular interest has been steadily growing over the past few years. However, it would be a mistake to assume that everyone shares the same level of interest, learning about new technologies and increasing their understanding of AI at the same rate. After all, more than 40 percent of the world’s population isn’t even connected to the Internet yet.

Here in the UAE, the media seems to provide residents with a daily diet of news about artificial intelligence. This is no accident. Although it’s never a perfect replica, the media is a reflection of society, and interest in AI-related topics has grown as business, government and education investment in AI has scaled up.

Continue reading this story on The National


December 19, 2019

The potential for emotion recognition is huge, but scientists at New York University argue the technology is still too new to be reliable

A growing number of employers are requiring job candidates to complete video interviews that are screened by artificial intelligence (AI) to determine whether they move on to another round. However, many scientists claim that the technology is still in its infancy and cannot be trusted. This month, a new report from New York University’s AI Now Institute goes further and recommends a ban on the use of emotion recognition for important decisions that impact people’s lives and access to opportunities.

Emotion recognition systems are a subset of facial recognition, developed to track micro-expressions on people’s faces and interpret their emotions and intent. Systems use computer vision technologies to track human facial movements and algorithms to map these expressions to a defined set of measures. These measures allow the system to identify typical facial expressions and so infer what human emotions and behaviours are being exhibited.
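To make that pipeline concrete, here is a minimal sketch in Python of the three steps just described: locate the face, map the expression to a set of measures, infer an emotion. The OpenCV face detector is real; classify_emotion is a hypothetical stand-in for the trained expression model a commercial system would use.

import cv2

EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise", "neutral"]

# Step 1: a computer vision detector locates faces in each video frame.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def classify_emotion(face_pixels):
    # Hypothetical stand-in for a trained expression model that maps
    # tracked facial movements to scores over the defined set of labels.
    return {label: 1.0 / len(EMOTIONS) for label in EMOTIONS}

def analyse_frame(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    results = []
    for (x, y, w, h) in faces:
        # Step 2: map the cropped expression to emotion scores.
        scores = classify_emotion(gray[y:y + h, x:x + w])
        # Step 3: infer the 'exhibited' emotion - the contested leap
        # from facial movement to inner state.
        results.append(max(scores, key=scores.get))
    return results

It is that final step, from facial movement to inferred emotion, that the researchers cited below say lacks a solid scientific footing.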

The potential for emotion recognition is huge. According to Indian market intelligence firm Mordor Intelligence, emotion recognition has already become a $12 billion (Dh44bn) industry and is expected to grow rapidly to exceed $90bn per year by 2024. The field has drawn the interest of big tech firms such as Amazon, IBM and Microsoft, startups around the world and venture capitalists.

Advertisers want to know how consumers respond to their advertisements, retail stores want to know how shoppers feel about their displays, law enforcement authorities want to know how suspects react to questioning, and the list of customers goes on. Both business and government entities want to harness the promise of emotion recognition.

As businesses the world over look to AI to improve processes, increase efficiency and reduce costs, it should come as no surprise that AI is already being applied at scale to recruitment. Automation has the strongest appeal when an organisation has a large volume of repetitive tasks and large volumes of data to process, and both apply to recruitment. Some 80 per cent of Fortune 500 firms now use AI technologies for recruitment.

Emotion recognition has been hailed as a game-changer by some members of the recruitment industry. It aims to identify non-verbal behaviours in videos of candidate interviews, while speech analysis tracks key words and changes in tone of voice. Such systems can track hundreds of thousands of data points for analysis, from eye movements to the words and phrases used. Developers claim that such systems are able to single out the top candidates for any particular job by identifying candidate knowledge, social skills, attitude and level of confidence – all in a matter of minutes.

As with the adoption of many new AI applications, cost savings and speed are the two core drivers of AI-enabled recruitment. Potential savings for employers include the time spent screening candidates and the number of HR staff required to manage recruitment, plus another safeguard against the costly mistake of hiring the wrong candidate for a position. Meanwhile, the message for candidates is that AI can aid better job placement, ensuring that their new employer is a good fit for them.

However, the consensus among scientific researchers is that the algorithms developed for emotion recognition lack a solid scientific foundation. Critics claim that it is premature to rely on AI to accurately assess human behaviour, primarily because most systems are built on widespread assumptions, not independent research.

Emotion recognition was the focus of a report published earlier this year by a group of researchers from the Association for Psychological Science. The researchers spent two years reviewing more than 1,000 studies on facial expression and emotions. The study found that how people communicate their emotions varies significantly across cultures and situations, and across different people within a single situation. The report concluded that, for the time being, our understanding of the link between facial expression and emotions is tenuous at best.

Unintentional bias has become the focus of growing scrutiny from scientists, technology developers and human rights activists.

Many algorithms used by global businesses have already been found to show bias related to age, gender, race and other factors, due to the assumptions made whilst programming them and the type of data used to train their machine learning models. Last year, Amazon shut down an AI recruiting platform after finding that it discriminated against women.

One thing is for sure: regardless of the potential merits of emotion recognition and whether it hurts or helps your chances of being offered a job, it is likely to remain the subject of debate for some time to come.

This story was first published by The National.


December 6, 2019

Deepfake videos are becoming more abundant and increasingly difficult to spot.

Deepfake videos are back in the news this week as China criminalised their publication without a warning to viewers. California also recently introduced an anti-deepfake law in an attempt to prevent such content from influencing the US elections in 2020.

Deepfakes are videos that make fake content look convincingly real, produced by software using machine learning algorithms. Videos like this started to pop up online a few years ago and, since then, regulators around the world have been scrambling to prevent the spread of such malicious content. While deepfake laws mean different things in different jurisdictions, what has changed to make deepfakes an urgent priority for policymakers? And will such laws be sufficient to keep pace with the spread of fake information?

First, there is the sheer quantity. The number of deepfake videos is growing fast as new technology makes them easier to create. Deeptrace, an Amsterdam-based cybersecurity company, found that the number of deepfake videos on the web increased by 84 per cent from December 2018 to July this year. The company identified 14,698 deepfakes online during this period, implying a baseline of roughly 8,000 at the end of 2018.

In 2018, internet media group Buzzfeed grabbed attention with a video it dubbed “a public service announcement”: a deepfake video of US president Barack Obama “speaking” about fake news, voiced by American actor Jordan Peele. At first glance, the video appeared authentic, but on closer inspection it was clear that it had been manipulated.

Racking up nearly 7 million views on YouTube to date, the Buzzfeed stunt was a stark warning about the dangers of deepfakes, where anyone can appear to say anything. While the results so far have been relatively crude and easy to identify, future deepfake videos are likely to be much harder for the human eye to spot. The artificial intelligence (AI) used to make deepfakes is getting better, making it more and more difficult to distinguish a deepfake video from an original. In fact, machine learning algorithms already allow deepfake applications to mimic facial movements that are virtually undetectable as fake to human viewers.
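For the curious, the original face-swap deepfakes were built on a surprisingly simple idea: a shared encoder that learns what all faces have in common, plus one decoder per person. The Python sketch below, using PyTorch, illustrates that architecture only; the layer sizes are arbitrary assumptions and no specific tool’s design is implied.

import torch
import torch.nn as nn

class FaceSwapper(nn.Module):
    def __init__(self, latent_dim=256, img_dim=64 * 64 * 3):
        super().__init__()
        # Shared encoder: compresses any face (pose, expression, lighting)
        # into a common latent representation.
        self.encoder = nn.Sequential(
            nn.Linear(img_dim, 1024), nn.ReLU(),
            nn.Linear(1024, latent_dim))
        # One decoder per identity, each trained only on that person's faces.
        self.decoder_a = nn.Sequential(
            nn.Linear(latent_dim, 1024), nn.ReLU(),
            nn.Linear(1024, img_dim), nn.Sigmoid())
        self.decoder_b = nn.Sequential(
            nn.Linear(latent_dim, 1024), nn.ReLU(),
            nn.Linear(1024, img_dim), nn.Sigmoid())

    def swap_a_to_b(self, face_a):
        # The swap: encode person A's expression, render it as person B.
        return self.decoder_b(self.encoder(face_a))

model = FaceSwapper()
fake_frame = model.swap_a_to_b(torch.rand(1, 64 * 64 * 3))  # toy input

Training such a model on thousands of frames of each face is what produces the mimicry described above, and much of the recent improvement comes from adversarial training and better face alignment.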

This combination of easy-to-use deepfake software and the increasing sophistication of those applications means that the overall quality of deepfakes will increase and we’re soon likely to see tens of thousands of different deepfakes, perhaps hundreds of thousands. Experts believe that the technology to make deepfake videos that seem perfectly real will be widely available within a year.

So, how will we be able to tell what’s real and what’s fake?

When we see a video news report of a head of state, politician, doctor or subject matter expert saying something, how will we be able to trust that it’s authentic? This is now the subject of concern for leaders in business, technology, government and non-governmental organisations.

Undetectable deepfakes have the potential to mislead the media and the general public and so impact every aspect of business, government and society. As the risk of malicious deepfakes increases, they could come to threaten everyone from celebrities to lawmakers, from scientists to schoolchildren, and perhaps even the world’s legal systems.

Original videos can be manipulated so that spokespeople appear to say things that undermine their credibility. Likewise, inadvisable remarks made by public figures can be edited out, or video evidence of a crime removed.

What’s more, the deepfake revolution is just beginning. As new technologies continue to develop, it is thought to be only a matter of years before it becomes possible to create deepfakes in real time, opening up opportunities for bad actors to deceive global audiences and manipulate public perceptions in moments. With a few more years of technology development, it’s also conceivable that deepfakes could be created at scale, with altered versions of a video delivered to different audiences.

In today’s digital world, deepfakes don’t need to fool the mainstream media to have a significant impact. With nearly 5 billion videos watched on YouTube per day and another 8 billion through Facebook, deepfake producers have an eager global audience that is not yet accustomed to questioning whether trending videos are real or fake.

Facebook and Google are both developing AI to automatically detect deepfakes. But this technology currently lags far behind deepfake generation itself. Until anti-deepfake software catches up, the average internet user is likely to have no way of knowing whether a video is real or fake.

As scary as the future may sound, the most dangerous time for deepfakes may actually be the present.

This story was first published by The National.