Articles Archives — Page 5 of 8 — Carrington Malin

December 19, 2019

The potential for emotion recognition is huge, but researchers at New York University's AI Now Institute argue the technology is still too new to be reliable

A growing number of employers are requiring job candidates to complete video interviews that are screened by artificial intelligence (AI) to determine whether they move on to another round. However, many scientists claim that the technology is still in its infancy and cannot be trusted. This month, a new report from New York University’s AI Now Institute goes further and recommends a ban on the use of emotion recognition for important decisions that impact people’s lives and access to opportunities.

Emotion recognition systems are a subset of facial recognition technology, developed to track micro-expressions on people's faces and interpret their emotions and intent. These systems use computer vision to track human facial movements and algorithms to map those movements to a defined set of measures. The measures allow a system to identify typical facial expressions and so infer the human emotions and behaviours being exhibited.
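The two-stage pipeline described above can be pictured in a few lines of code. The sketch below is purely illustrative: the measure extractor and emotion classifier are hypothetical stand-ins with untrained placeholder values, whereas production systems train both stages on large annotated datasets.

```python
# A minimal, hypothetical sketch of a two-stage emotion recognition pipeline:
# (1) map a video frame to a defined set of facial-movement measures,
# (2) map those measures to emotion scores. Placeholder values throughout.
import numpy as np

EMOTIONS = ["happiness", "sadness", "anger", "fear", "surprise", "disgust", "neutral"]

def extract_measures(frame: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in: track facial movements (e.g. brow raise,
    lip-corner pull) and return a fixed-length vector of intensities."""
    rng = np.random.default_rng(0)
    return rng.random(17)  # placeholder: 17 facial-movement measures

def classify_emotion(measures: np.ndarray) -> dict:
    """Hypothetical stand-in: map measures to emotion probabilities
    using an untrained linear model followed by a softmax."""
    rng = np.random.default_rng(1)
    weights = rng.normal(size=(len(EMOTIONS), len(measures)))
    scores = weights @ measures
    probs = np.exp(scores) / np.exp(scores).sum()
    return dict(zip(EMOTIONS, probs.round(3)))

frame = np.zeros((480, 640, 3))  # placeholder for one video frame
print(classify_emotion(extract_measures(frame)))
```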

The potential for emotion recognition is huge. According to Indian market intelligence firm Mordor Intelligence, emotion recognition has already become a $12 billion (Dh44bn) industry and is expected to grow rapidly to exceed $90bn per year by 2024. The field has drawn the interest of big tech firms such as Amazon, IBM and Microsoft, startups around the world and venture capitalists.

Advertisers want to know how consumers respond to their advertisements, retail stores want to know how shoppers feel about their displays, law enforcement authorities want to know how suspects react to questioning, and the list of customers goes on. Both business and government entities want to harness the promise of emotion recognition.

As businesses the world over look to AI to improve processes, increase efficiency and reduce costs, it should come as no surprise that AI is already being applied at scale to recruitment. Automation has the strongest appeal when an organisation has a large number of repetitive tasks and large volumes of data to process, and both conditions apply to recruitment. Some 80 per cent of Fortune 500 firms now use AI technologies for recruitment.

Emotion recognition has been hailed as a game-changer by some members of the recruitment industry. It aims to identify non-verbal behaviours in videos of candidate interviews, while speech analysis tracks key words and changes in tone of voice. Such systems can track hundreds of thousands of data points for analysis, from eye movements to the words and phrases used. Developers claim that such systems can single out the top candidates for any particular job by assessing candidate knowledge, social skills, attitude and level of confidence – all in a matter of minutes.

As with the adoption of many new AI applications, cost savings and speed are the two core drivers of AI-enabled recruitment. Potential savings for employers include less time spent screening candidates and fewer HR staff required to manage recruitment, as well as an additional safeguard against the costly mistake of hiring the wrong candidate for a position. Meanwhile, the message for candidates is that AI can aid better job placement, ensuring that their new employer is a good fit for them.

However, the consensus among scientific researchers is that the algorithms developed for emotion recognition lack a solid scientific foundation. Critics claim that it is premature to rely on AI to accurately assess human behaviour, primarily because most systems are built on widespread assumptions, not independent research.

Emotion recognition was the focus of a report published earlier this year by a group of researchers from the Association for Psychological Science. The researchers spent two years reviewing more than 1,000 studies on facial expression and emotions. The study found that how people communicate their emotions varies significantly across cultures and situations, and across different people within a single situation. The report concluded that, for the time being, our understanding of the link between facial expression and emotions is tenuous at best.

Unintentional bias has become the focus of growing scrutiny from scientists, technology developers and human rights activists.

Many algorithms used by global businesses have already been found to exhibit bias related to age, gender, race and other factors, due to the assumptions made when programming them and the data used to train their machine learning models. Last year, Amazon shut down an AI recruiting platform after finding that it discriminated against women.

One thing is for sure: regardless of the potential merits of emotion recognition, and whether it hinders or helps your chances of being offered a job, it is likely to remain the subject of debate for some time to come.

This story was first published by The National


December 6, 2019

Deepfake videos are becoming more abundant and increasingly difficult to spot.

Deepfake videos are back in the news again this week as China criminalised their publication without a warning to viewers. California also recently introduced an anti-deepfake law in an attempt to prevent such content from influencing the US elections in 2020.

Deepfakes are videos that make fake content look convincingly real, produced by software using machine learning algorithms. Videos like these began popping up online a few years ago and, since then, regulators around the world have been scrambling to prevent the spread of such malicious content. Deepfake laws mean different things in different jurisdictions, so what has changed to make deepfakes an urgent priority for policymakers? And will such laws be sufficient to keep pace with the spread of fake information?
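To make the underlying technique more concrete, one design commonly described in explanations of early face-swap deepfakes pairs a single shared encoder with a separate decoder per person: swapping decoders renders one person's expressions on another's face. The sketch below is a hypothetical illustration of that idea, with untrained placeholder weights; it is not the method of any particular tool.

```python
# Illustrative sketch of the shared-encoder / two-decoder face-swap idea:
# a shared encoder learns expression and pose; per-person decoders render
# each identity. Swapping decoders at inference time produces person B's
# face wearing person A's expression. Weights here are untrained placeholders;
# real systems learn them from thousands of face crops.
import numpy as np

rng = np.random.default_rng(42)
DIM, LATENT = 64 * 64, 256                        # flattened face crop, latent size

W_enc = rng.normal(size=(LATENT, DIM)) * 0.01     # shared encoder weights
W_dec_a = rng.normal(size=(DIM, LATENT)) * 0.01   # decoder for person A
W_dec_b = rng.normal(size=(DIM, LATENT)) * 0.01   # decoder for person B

def encode(face: np.ndarray) -> np.ndarray:
    """Compress a face crop to an identity-agnostic expression/pose code."""
    return np.tanh(W_enc @ face)

def decode(latent: np.ndarray, W_dec: np.ndarray) -> np.ndarray:
    """Render a face from the code using one person's decoder."""
    return W_dec @ latent

face_a = rng.random(DIM)                  # placeholder frame of person A
swap = decode(encode(face_a), W_dec_b)    # A's expression, rendered as person B
print(swap.shape)                         # (4096,) -> reshape to a 64x64 image
```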

First, there is the sheer quantity. The number of deepfake videos is growing fast as new technology makes them easier to create. Deeptrace, an Amsterdam-based cybersecurity company, found that the number of deepfake videos on the web increased 84 per cent from December 2018 to July this year. The company identified 14,698 deepfakes online during this time.

In 2018, internet media group Buzzfeed grabbed attention with a video it dubbed “a public service announcement”: a deepfake of former US president Barack Obama “speaking” about fake news, voiced by American actor Jordan Peele. At first glance, the video appeared authentic, but on closer inspection it was clear that it had been manipulated.

Racking up nearly 7 million views on YouTube to date, the Buzzfeed stunt was a stark warning about the dangers of deepfakes, where anyone can appear to say anything. While the results so far have been relatively crude and easy to identify, future deepfake videos are likely to be much harder for the human eye to spot as fake. The artificial intelligence (AI) used to make deepfakes is getting better, making it more and more difficult to distinguish a deepfake video from an original. In fact, machine learning algorithms already allow deepfake applications to mimic facial movements that are virtually undetectable as fake to human viewers.

This combination of easy-to-use deepfake software and the increasing sophistication of those applications means that the overall quality of deepfakes will rise, and that we are soon likely to see tens of thousands of different deepfakes, perhaps hundreds of thousands. Experts believe that the technology to make deepfake videos that appear perfectly real will be widely available within a year.

So, how will we be able to tell what’s real and what’s fake?

When we see a video news report of a head of state, politician, doctor or subject matter expert saying something, how will we be able to trust that it’s authentic? This is now the subject of concern for leaders in business, technology, government and non-governmental organisations.

Undetectable deepfakes have the potential to mislead the media and the general public, and so impact every aspect of business, government and society. As the risk of malicious deepfakes increases, they could represent a threat to everyone from celebrities to lawmakers, from scientists to schoolchildren, and perhaps even to the world's legal systems.

Original videos can be manipulated so that spokespeople appear to say things that undermine their credibility. Likewise, inadvisable remarks made by public figures can be edited out, or video evidence of a crime removed.

What’s more, the deepfake revolution is just beginning. As new technologies continue to develop, it is thought to be only a matter of years before it will be possible to create deepfakes in real-time, opening up opportunities for bad actors to deceive global audiences and manipulate public perceptions in a few moments. With a few more years of technology development, it’s also conceivable that it will become possible to create deepfakes at scale, altering video to deliver different versions of it to different audiences.

In today's digital world, deepfakes do not need to fool the mainstream media to have a significant impact. With nearly 5 billion videos watched on YouTube per day and another 8 billion on Facebook, deepfake producers have an eager global audience that is not yet accustomed to questioning whether trending videos are real or fake.

Facebook and Google are both developing AI to automatically detect deepfakes. But this technology currently lags far behind the development of deepfake tech itself. Until anti-deepfake software catches up, it’s likely the average internet user may have no way of knowing if a video is real or fake.

As scary as the future may sound, the most dangerous time for deepfakes may actually be the present.

This story was first published by The National


November 28, 2019

The British computer scientist and inventor of the web has just launched a global plan to save it.

Sir Tim Berners-Lee on Monday launched the World Wide Web Foundation's 'Contract For The Web', a plan that lays out a set of core principles and a road map for business, government and individuals to follow. He says such a concept is urgently needed to save the web and help prevent humanity from sliding into a new age of 'digital dystopia'. So, why does Berners-Lee think the world wide web needs saving?

Regardless of how much time you may personally spend browsing the web, it is now fundamental to life as we know it. Over the past 30 years, the world wide web has truly changed the world, helping to democratise access to information and education, accelerating global scientific progress, enabling the development of numerous other digital technologies, becoming the biggest driver of the global economy and even helping humanity to develop a broader understanding of itself.

The vision of the academic, scientific and technology communities that built the web was always to create an open and neutral system, available to all. However, according to Berners-Lee, the web is now coming under increasing attack from government and commercial interests, threatening its neutrality, freedom and universal access.

In some respects, this has always been the case. In the 1990s, some governments were more enthusiastic than others about allowing public access to the internet in the first place. Many government officials and politicians had concerns about the perceived dangers of the web, such as its use by organised crime, terrorists, political activists or publishers of pornography.

Business has also long influenced usage of the web and has been a deciding factor in the rate of internet adoption. The capitalist nature of the internet’s global roll-out is a primary reason why more than half of the world’s population is still offline. For those fortunate enough to be online, their web experience is heavily influenced by huge investment promoting commercial interests.

So, given that the web has already been shaped by long-standing commercial and government interests, what is the impetus behind the heightened concern and the Web Foundation's global action plan?

Berners-Lee maintains that never before has the web's power for good been under more threat, and that it has now reached a tipping point from which it could very well descend into dystopia. Firstly, with 46 per cent of the world's population still unable to access the internet, he believes the digital divide threatens to become one of the greatest sources of inequality in our time.

Speaking at the Internet Governance Forum in Berlin this week, Berners-Lee put forward his case for the web as a force that truly serves humanity, but was also at pains to impress upon delegates that the complex challenges we face require action across the gamut of business, government, society and individual internet users.

The concept for the plan was first introduced by Berners-Lee five years ago as a ‘Magna Carta for the web’. Since then, there can be no doubt that new dangers continue to present themselves.

From foreign powers accused of trying to manipulate the 2016 US election via social media, to numerous data breaches exposing the private data of millions of internet users, through to the epidemic of fake news and fake content, and the rise in hate speech online, the web has never had the power to negatively affect so many. Importantly, dangers like these know no national boundaries and arguably make the internet less safe, even in those countries that have long championed online freedoms and protection.

The global, borderless nature of the web means that laws such as the European Union’s General Data Protection Regulation can only ever provide part of the solution to ensure it works for all. For this reason, Berners-Lee believes that public participation is critical, if we are to have the web that we want.

In addition to providing a framework of guiding principles for business and government, the Web Foundation’s plan urges internet users to be active participants in shaping the web, including the content and systems made available through it. Individuals can help build strong communities that respect civil discourse and human dignity, and be active citizens of the web, creating awareness among their peers regarding threats, supporting positive initiatives and holding regulators to account.

As with other pressing global issues, such as climate change and rising social inequality, it may be up to humanity to save itself.

This story was first published by The National


November 22, 2019

With the US and China dominating artificial intelligence development, what chances do smaller nations have?

Over the past two years, a national artificial intelligence (AI) strategy has come to be seen as a pre-requisite for digital competitiveness and an essential pillar of national governance for the Fourth Industrial Revolution. So, Singapore's unveiling of a new, updated national AI strategy last week received global attention.

In common with the UAE, Singapore was one of the first countries to announce a national AI strategy, back in 2017. The new one, unveiled by Deputy Prime Minister Heng Swee Keat on the last day of Singapore's FinTech Festival last week, is holistic and zeros in on some specific national goals. Importantly, it also leverages investments already made by the government in education, technology development, infrastructure and innovation.

Developed by the Smart Nation and Digital Government Office (SNDGO), the AI strategy not only identifies key areas that can be enabled by AI and the resources necessary to support nation-wide AI adoption, but also aims to set out Singapore's stall as a leading global hub for the development, testing and export of AI applications. Recently ranked by the think tank Oliver Wyman Forum as the city most ready for AI, Singapore has a great deal going for it in its play for a larger role in the development of commercial and government AI systems.

Against the backdrop of the China-US trade war, Singapore is geographically and politically well placed to encourage both Chinese and American investment in AI ventures, at a time when cross-border foreign direct investment and venture capital between the two AI powerhouses is at its lowest level since 2014. Meanwhile, the combination of the country's willingness to implement AI and its small size makes it an ideal testbed for AI developers to try out their solutions before exporting them to larger countries, where implementation may face more obstacles and higher costs.

Singapore's strategy identifies key enablers for AI innovation and adoption, including the development of talent, data infrastructure and a progressive, trusted environment for AI. Crucially, however, it also picks five core development projects designed to bring early benefits and create opportunities for local innovation and investment. By choosing AI-enabled projects that both address national challenges and deliver a visible impact on society and the economy, Singapore is also preparing the proof of concept for its goal of becoming a global hub for the development of AI technologies.

It’s no coincidence that the UAE, Finland and Singapore all first committed to national AI strategies in 2017, alongside large nations such as Canada and China, but well ahead of most of the world. All three countries have populations under 10 million, have relatively large economies and have been able to stay ahead of the technology curve.

The forward-looking policies and smaller size of these countries have helped to make embracing new technologies faster and more achievable than for many larger countries with bigger budgets, often allowing them to leapfrog global competitors.

Finland, Singapore and the UAE were all early pioneers of e-government, helping to develop new digital government processes. They were all also early adopters of new mobile standards and consumer services including mobile broadband.

So, it makes perfect sense that smaller digital-savvy countries should be able to take leadership positions in the fast-developing world of AI.

It is now well-known that the UAE was the first country in the world to bring AI decision-making into government at a cabinet level, naming His Excellency Omar Sultan Al Olama Minister of State for Artificial Intelligence in October 2017. In April of this year, the cabinet approved the UAE’s AI Strategy 2031.

The UAE has also made strategic investments in a number of new ventures to ensure that the UAE becomes not only an early adopter, but also a leading producer of AI applications. Last week Abu Dhabi National Oil Company (Adnoc), one of the world’s largest oil production companies, announced a joint venture with UAE AI group G42 to create artificially intelligent applications for the energy sector.

Other high profile AI investments in the UAE include a world-class AI research institute in its capital, the world’s first dedicated artificial intelligence university and Chinese AI provider SenseTime’s plans to open a Europe, Middle East and Africa research and development centre in Abu Dhabi.

Singapore’s new national AI strategy makes a convincing case for prioritising the development of a homegrown AI industry, in line with the country’s core strengths and challenges. The UAE has its own set of strengths and challenges, and these too, provide a golden opportunity for it to become one of the world’s leading AI producers.

This story was first published by The National


November 13, 2019

It should come as no surprise that Google, one of the largest AI developers in the world, this week announced a partnership agreement with Ascension, the second largest healthcare system in the US. The deal will give Google access to the health records of millions of Americans across 21 states.

What has proved a surprise to the media, the American public and other stakeholders, however, is that the partnership (code-named “Project Nightingale”) began in secret last year, without communication with doctors or patients, The Wall Street Journal reported.

Continue reading this story on The National


November 8, 2019

One of the topics discussed at this week's annual World Economic Forum Global Future Council in Dubai was how governments are coping with setting the rules for disruptive technologies. Given the now daily news coverage of divergent opinions regarding the threats and opportunities brought by artificial intelligence (AI), it should come as no surprise that policymakers are coming under extreme pressure to do better.

It's not simply the scale of the challenge, with government, industry and other institutions having to consider a wider range of technologies and technology implications than ever before. It is also the sheer pace of change, driving an unrelenting cycle of innovation and disruption that will only accelerate as AI becomes more commonplace.

Continue reading this story on The National.


November 2, 2019

Do brands need AI avatars of themselves? Last week at London's One Young World Summit, Biz Stone, co-founder of Twitter, and Lars Buttler, CEO of San Francisco-based The AI Foundation, announced a new concept they call 'personal media' and claimed that artificial intelligence is the future of social change. The Foundation is working on new technology that Buttler says will allow anyone to create an AI avatar of themselves, which would look like them, talk like them and act like them. Empowered by AI avatars, people could then, potentially, hold billions of conversations at the same time.

So, what does this new kind of AI communications mean for brands?

Continue reading this story on LinkedIn


October 31, 2019

Twitter co-founder Biz Stone and Lars Buttler, chief executive of San Francisco-based The AI Foundation, introduced a new concept of 'personal media', enabled by artificial intelligence, at last week's One Young World Summit in London. The company is developing technology to allow anyone to create an artificial version of themselves to represent their interests anytime, anywhere. These personal avatars will look, sound and act like their creators.

According to Stone and Buttler, just as the world moved from the mass media era to the social media era, it will now begin to move into the age of 'personal media'.

Continue reading this story on The National.


October 24, 2019

One of the most memorable lines from the sci-fi comedy The Hitchhiker's Guide to the Galaxy is the introduction of its tea-drinking central protagonist, Arthur Dent. Douglas Adams' original radio script says of Dent: “He no more knows his destiny than a tea leaf knows the history of the East India Company.”

Tea was a recurring theme in Adams' work, not least because it provided a point of contrast. What could be further removed from stories of space travel, robots and artificial intelligence than humble tea leaves?

Continue reading this story on The National.