The National Archives — Carrington Malin

March 13, 2020

A pandemic could be a tipping point for a technology we’ve been promised for nearly a decade.

China’s use of drones and robots in its fight to contain Covid-19 is now well publicised, with medical authorities and security forces employing autonomous agents to limit human contact and so slow the spread of the virus.

However, e-commerce companies and other technology firms have been talking up the promise of commercial drone services for some years now: services where drones deliver prepared food, groceries, medicine and other online purchases directly to the consumer in a matter of minutes.

In reality, while the technology is largely ready, commercial drone delivery, a new market that Markets and Markets estimates could be worth more than $27 billion (Dh99bn) by 2030, has long been held up by a lack of government policy and regulation. Could the demands of fighting Covid-19 actually fast-track the introduction of drones to the masses?

The benefits of drone delivery have already been clearly demonstrated by China’s response to the coronavirus emergency. In an effort to both increase the speed of medical deliveries and remove unnecessary human contact, drones have been used widely in the country’s virus-stricken provinces.

Chinese firms, such as e-commerce giant JD.com, drone delivery start-up Antwork, drone maker MicroMultiCopter and China’s largest private courier company SF Express, have all deployed drones to help deliver medical supplies and transport medical samples for analysis. By using drones, healthcare authorities are assured of faster, “contactless” delivery.

China has also used drones fitted with thermal cameras to scan crowds and identify those who may be in need of medical treatment; drones have been carrying sanitisers to spray densely populated communities; and police security drones have reminded city pedestrians to wear protective face masks. Chinese drone manufacturer DJI mounted loudspeakers on its drones to help police disperse crowds in public places.

Predictably, China’s use of drones has also sparked new concerns about electronic surveillance and infringements on human rights by private tech companies.

However, for those watching the development of drone delivery services, it appears the vast majority of drone usage during China’s health emergency has been state-sponsored and limited in scope. Although drones have been used for emergency food delivery by government authorities, commercial services haven’t moved beyond pilot projects.

Food and shopping delivery trials have been conducted by Antwork, Alibaba, JD.com and others, but, despite the new public demand for contactless delivery, no fully commercial services have yet been rolled out. Delivery app Meituan Dianping began delivering grocery orders in China using autonomous vehicles last month, as part of a contactless delivery initiative, but has not yet been able to launch its planned drone delivery service.

Elsewhere in the world, drone delivery trials have been taking place for years. Amazon began talking about plans for e-commerce deliveries using unmanned aerial vehicles (UAVs) in 2013 and launched its Prime Air aerial delivery system brand in 2016, making its first drone delivery to a customer in the city of Cambridge, England. However, despite announcements over the past year that Prime Air would be ready to begin commercial services in the UK within a matter of months, no service has been rolled out yet.

Irish drone delivery firm Manna plans to launch a new food delivery pilot this month, servicing a suburb of about 30,000 South Dublin residents. American ice cream brand Ben & Jerry’s, UK food delivery firm Just Eat, and Irish restaurant chain Camile Thai have all signed up to take part in the pilot project.

Meanwhile, Alphabet-owned Wing, which last year became the first drone operator to receive air carrier certification from the US Federal Aviation Administration, has been testing drone delivery more extensively in Australia, Finland and the US, completing over 80,000 flights. Since then, logistics giant UPS has also received US government approval to operate as a drone air carrier in preparation for making commercial, medical and industrial deliveries.

As governments across the globe look for ways to prevent the spread of a new highly contagious virus, fast-tracking contactless delivery options seems to make enormous sense. Contactless deliveries of all kinds could prove to be a vital tool to limit exposure to infected individuals, deliver essential medicine and food to high-risk locations and allow people to self-isolate, while still being able to get their groceries.

No doubt, the logistics and e-commerce industries are hoping the coronavirus crisis brings more government focus to clearing these regulatory obstacles.

This story was first published in The National.


February 6, 2020

The tech giant’s new chatbot could make AI-powered communication more conversational and even more profitable

Since the launch of Apple’s Siri nearly a decade ago, more than 1.5 billion virtual assistants have been installed on smartphones and other devices. There can be few electronics users who don’t recognise the enormous promise of conversational AI. However, our seemingly hard-of-hearing virtual assistants and awkward artificial intelligence chatbot conversations have also proven the technology’s limitations.

Anyone who uses AI assistants is sure to experience frequent misunderstandings, irrelevant answers and way too many ‘I don’t know’ responses, while many corporate chatbots simply serve up pre-defined bits of information whether you ask for them or not. So, while we have seen massive advances in natural language processing (NLP) in recent years, human-to-AI conversations remain far from ‘natural’.

But that may soon change.

Last week, a team from Google published an academic paper on ‘Meena’, an open-domain chatbot built on top of a huge neural network and trained on about 40 billion words of real social media conversations. The result, Google says, is that Meena can chat about just about anything and hold a better conversation than any other AI agent created to date.

One of the things that Google’s development team has been working on is increasing the chatbot’s ability to hold multi-turn conversations, where a user’s follow-up questions are considered by the AI in the context of the whole conversation so far. The team’s solution has been to build the chatbot on a neural network, a set of algorithms modelled loosely on the way the human brain works and designed to recognise patterns in data. This network was then trained on large volumes of conversation data to tune its 2.6 billion parameters, improving Meena’s conversation quality.
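
Meena itself has not been released, but the mechanics of multi-turn conversation are straightforward to sketch: each reply is generated from the token history of the entire dialogue, not just the latest message. Here is a minimal illustration in Python using Microsoft’s openly available DialoGPT model as a stand-in (the model choice and generation settings are illustrative assumptions, not anything Google has published about Meena):

```python
# Minimal sketch of multi-turn chat: each reply is generated from the
# token history of the WHOLE conversation, not just the latest message.
# Meena is not public, so Microsoft's open DialoGPT model stands in.
# Requires: pip install torch transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")

history = None  # running token history of the conversation

for user_turn in ["Hi there!", "What should I cook tonight?"]:
    # Append the user's new utterance to the accumulated context
    new_ids = tokenizer.encode(user_turn + tokenizer.eos_token,
                               return_tensors="pt")
    context = new_ids if history is None else torch.cat([history, new_ids],
                                                        dim=-1)

    # Generation conditions on the full context, which is what lets the
    # model treat follow-up questions as part of one conversation
    history = model.generate(context, max_length=200,
                             pad_token_id=tokenizer.eos_token_id)
    reply = tokenizer.decode(history[0, context.shape[-1]:],
                             skip_special_tokens=True)
    print(f"User: {user_turn}\nBot:  {reply}")
```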

Creating conversational computer applications that can pass for human intelligence has been a core theme of both computer science and science fiction since the fifties. In 1950, Alan Turing, the famous British World War II codebreaker and one of the founding fathers of AI theory, devised a test to measure whether a computer system can exhibit intelligent behaviour indistinguishable from that of a human. Since then, the Turing Test has been something of a Holy Grail for computer scientists and technology developers.

However, Google’s quest to develop a superior chatbot is far from academic. The global AI chatbot market offers one of the best examples of how AI can drive revenue for businesses. Business and government organisations worldwide are investing in chatbots in an effort to enhance customer service levels, decrease costs and open up new revenue opportunities. According to research company Markets and Markets, the global market for conversational AI solutions is forecast to grow from $4.2 billion (Dh15.4bn) in 2019 to $15.7bn by 2024.

Chatbot solutions built for large enterprises have the ability to carry on tens of thousands of conversations simultaneously, drawing on millions of data points. Global advisory firm Gartner has found that AI chatbots used for customer service can reduce customer calls, emails and other enquiries by up to 70 per cent.

All this industry growth and customer service success is taking place despite the innumerable issues that users encounter when trying to have customer service conversations with AI chatbots. As consumers, we are now conditioned to dealing with technology that doesn’t quite work. If the benefits outweigh the frustration, we’re happy to work around the problem. When a chatbot can’t interpret our request, we rephrase our questions or choose from the options offered, rather than expect it to solicit further information. Or, if we feel the conversation is just too much effort for the reward, we simply give up.

The latent opportunity for virtual customer assistants is that they could play an active role in defining needs and preferences in the moment, whilst in conversation with the customer, helping to create highly personalised services. Today, programmers have to limit the options that customer service chatbots offer, or too many conversations result in dead-ends, unmet requests and frustrated customers. So, the choices offered to customers by chatbots are often as simple as A, B or C.

If developers can increase a chatbot’s ability to hold a more natural human conversation, then chatbots may have the opportunity to solicit more actionable data from customer conversations, resolve a wider range of customer issues automatically and identify additional revenue opportunities in an instant.

Given how fast the chatbot technology market is growing, the payback from enabling AI chatbots to bring customer conversations to a more profitable conclusion could register in the billions of dollars.

This story was first published in The National.


February 4, 2020

The WEF worked with more than 100 companies and tech experts to develop a new framework for assessing the risks of AI.

Companies are implementing new technologies faster than ever in the race to remain competitive, often without understanding the inherent risks.

In response to a growing need to raise awareness about the risks associated with artificial intelligence, the World Economic Forum, together with the Centre for the Fourth Industrial Revolution Network Fellows from Accenture, BBVA, IBM and Suntory Holdings, worked with more than 100 companies and technology experts over the past year to create the ‘Empowering AI Toolkit’. Developed with the structure of a company board meeting in mind, the toolkit provides a framework for mapping AI policy to company objectives and priorities.

Any board director reading through WEF’s Empowering AI Toolkit will find it valuable not because it delivers any silver bullets, but because it can provide much-needed context and direction to AI policy discussions – without having to hire expensive consultants.

The new framework identifies seven priorities, such as brand strategy and cybersecurity, to be considered from an ethics, risk, audit and governance point of view. The toolkit was designed to mimic how board committees and organisations typically approach ethics, policy and risk.

Artificial intelligence promises to solve some of the most pressing issues faced by society, from ensuring fairer trade and reducing consumer waste, to predicting natural disasters and providing early diagnosis for cancer patients. But scandals such as big data breaches, exposed bias in computer algorithms and new solutions that threaten jobs can destroy brands and stock prices and irreparably damage public trust.

Facebook’s 2018 Cambridge Analytica data crisis opened the world’s eyes to the risks of trusting the private sector with detailed personal data. The fact that an otherwise unknown London analytics company had harvested data on 50 million Facebook users without their permission not only drew public backlash, it sent Facebook’s market value plunging $50 billion within a week of the episode being reported.

In addition to Facebook’s Cambridge Analytica woes, there have been a number of high-profile revelations that artificial intelligence systems used by both government and business have applied hidden bias when informing decisions that affect people’s lives. These include a number of cases where algorithms used by big companies in recruitment have been biased based on the race or gender of job candidates.

There is some awareness that new technologies can wreak havoc if not used carefully – but there isn’t enough. And it can be difficult for corporate boards to predict where a pitfall may present itself on a company’s path to becoming more tech-savvy.

Despite all the warning signs, there remains an “it can’t happen here” attitude. Customer experience company Genesys recently asked more than 5,000 employers in six countries for their opinions on AI and found that 54 per cent were not concerned about the unethical use of AI in their companies.

Many corporations have established AI working groups, ethics boards and special committees to advise on policy, risks and strategy. A new KPMG survey found that 44 per cent of businesses surveyed claimed to have implemented an AI code of ethics and another 30 per cent said that they are working on one. Since AI is an emerging technology, new risks are emerging too, and any company could use a road map.

One of today’s biggest AI risks for corporations is the use of, as WEF calls them, ‘inscrutable black box algorithms’. Simply put, most algorithms work in a manner understood only by the programmers who developed them. These algorithms are often considered to be valuable intellectual property, further reinforcing the need to keep their inner workings secret and thus removed from scrutiny and governance.

There are already a number of collaborations, groups and institutes that are helping to address some of these issues. The non-profit coalition Partnership on AI, founded by tech giants Amazon, DeepMind, Facebook, Google, IBM and Microsoft, was established to research best practices to ensure that AI systems serve society. Last year, Harvard Kennedy School’s Belfer Center for Science and International Affairs convened the inaugural meeting of The Council on the Responsible Use of Artificial Intelligence, bringing together stakeholders from government, business, academia and society to examine policymaking for AI usage.

However, the speed and ubiquitous nature of artificial intelligence mean that even accurately defining certain risks remains a challenge. Even the best policies must allow for change. The good news is that WEF’s new AI toolkit is available free of charge and so could prove to be of immediate value to commercial policymakers the world over.

This story was first published in The National.


January 17, 2020

The UAE is developing a sophisticated and far-reaching range of initiatives to attract 21st century skills.

In 2015, Klaus Schwab, the executive chairman of the World Economic Forum, coined the term ‘Fourth Industrial Revolution’ to describe our connected industrial society and its increasing reliance on intelligent information systems.

As with previous industrial periods, this revolution will have a profound impact on our world, not least of all changing the nature of work and our relationship with it. However, in the short term, many of the dynamics will appear familiar, such as the increasing demand for specialist skills that serve new, upcoming industries and the competition among employers to hire those skills.

His Highness Sheikh Mohammad Bin Rashid Al Maktoum, Vice President and Prime Minister of the UAE and Ruler of Dubai, on Sunday launched two new initiatives supporting the National AI Strategy to build capacity in AI talent. Announced at a retreat organised for AI experts by the National Programme for Artificial Intelligence, the new initiatives are part of a far-reaching policy to secure the long-term availability of talent at many levels and help ensure the country’s competitiveness in the Fourth Industrial Revolution.

In light of fierce global competition among nations for leadership positions in the Fourth Industrial Revolution and the fluid state of the global AI talent pool, winning our new talent wars will require more than simply outbidding competitors. Today’s policymakers must recognise that they need to attract both home-grown and international talent, leverage human resources that are located around the world and create ways of building long-term relationships that will continue to support the availability of talent. It’s all about building talent ecosystems, rather than simply planning to acquire more people with the right skills.

The UAE government recognised the scale of the talent challenge early on and has been developing a wide range of initiatives to attract, train and develop talent, nationally, regionally across the Arab world and globally.

But what did the previous industrial revolutions teach us? The workforce requirements of the first three changed our planet forever. In pre-industrial societies, more than 80 per cent of the population lived in rural areas. Drawn by the promise of jobs in new industries, people flocked from the countryside to towns and cities. By the year 1850, more people in the United Kingdom lived in cities than rural areas and by 1920, a majority of Americans lived in cities, too. The mass movement of people resulted in far-reaching economic, geographic and social changes that have made our world what it is today.

The changes that the Fourth Industrial Revolution will bring are also destined to shape the future of human existence. Artificial intelligence is set to transform the nature of nearly every single one of today’s existing jobs, eliminate job roles that currently employ millions of people and create millions of new jobs, including many roles that have not yet even been imagined. Furthermore, the pace of change is accelerating, powered by faster technology development and so putting more pressure on business, economic, political and government systems than ever before.

Critically, for the global competitiveness of both business and nations themselves, the talent needed to fuel the development and implementation of artificial intelligence systems is in short supply. It’s a highly dynamic pool of talent that is changing rapidly, following different rules to past waves of tech-related talent, and it includes people who are more independent of industry and location.

At a UAE government level, an AI Programme has been created in partnership with Kellogg College at Oxford University to train UAE nationals and help them accelerate the delivery of the national AI strategy. The first batch of 94 participants graduated in April 2019.

On a regional level, the One Million Arab Coders programme launched in 2017 incentivises Arab youth at large to acquire new skills, graduating 22,000 programmers in its first year. In 2019, several new modules were added to the curriculum, including an ‘Introduction to AI’ module. The UAE also launched a One Million Jordanian Coders initiative in Jordan and a One Million Uzbek Coders initiative in Uzbekistan.

Meanwhile, in the country’s tertiary education system, a number of AI education programmes, degree courses and research centres have been introduced to UAE colleges and universities over the past couple of years. In October, the UAE announced the world’s first graduate AI university, the Mohamed bin Zayed University of Artificial Intelligence. The research-based academic institution offers fully paid scholarships for master’s and PhD courses starting September 2020.

The two new initiatives launched this week add further appeal to aspiring AI talent. The AI Talent Hunt programme will create an AI laboratory drawing together national and global expertise to solve real world issues, while a competitive AI Challenge Programme will be rolled out in partnership with Microsoft.

In the race to attract 21st century skills, the UAE is already engaging talent at multiple levels and has begun to build a reputation as an enabler of talent, rather than simply a destination. This effort, combined with its goals to become a global hub for AI research and entrepreneurship, could well encourage much sought-after talent to stay in the UAE, or, at least, keep coming back.

This story was first published in The National.


January 10, 2020

New ‘Thank My Farmer’ app will help coffee drinkers further support the farmers who grow their beans

The global coffee industry is now worth some $200 billion (Dh734.64bn) a year, yet the average income for coffee farmers has not changed in two decades, according to UK advocacy group Fairtrade. Meanwhile, most coffee drinkers are blissfully unaware that for every $3 to $5 cup of coffee they buy, the original coffee producer may actually make less than 1 cent.

Consumers, as always, wield all the power. They not only play a pivotal role in pushing for higher service standards, but also higher standards of corporate, social and environmental responsibility. In response, many companies have invested in making it easier for consumers to learn more about the products they are buying and the production process. However, transparency has proven difficult to deliver for many food products, including commodities such as coffee and tea.

So, how does a socially conscious consumer make informed choices about what coffee or tea they buy? How does one have any certainty about the sustainability of farming practices or the impact of their purchase on the farmers themselves?

A new blockchain initiative unveiled at this week’s Consumer Electronics Show (CES) 2020 in Las Vegas may shine a light on the way forward. Farmer Connect, an independent ecosystem linking coffee farmers and the coffee industry, teamed up with IBM to roll out ‘Thank My Farmer’, a mobile app that allows consumers to view information drawn from a network of farmers, traders, roasters and brands.

Built using IBM’s blockchain food safety solution, the new app helps close the gap between a consumer’s coffee purchase and the farmer who grew the coffee beans. Using blockchain to ensure the integrity and security of the data, IBM Food Trust allows all coffee industry partners to share food information, creating a more transparent and trustworthy global food supply chain.
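
How does a blockchain make such records trustworthy? The core idea is that each new record carries a cryptographic hash of the record before it, so altering any earlier entry breaks every link that follows. The toy Python sketch below (with invented farm and event names) illustrates the principle only; IBM Food Trust’s production system, built on Hyperledger Fabric, is far more elaborate:

```python
# Toy illustration of tamper-evident supply-chain records: each block
# stores the hash of the previous block, so altering any earlier record
# breaks every hash that follows. Farm/event names are invented.
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents, excluding its own stored hash."""
    payload = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, record: dict) -> None:
    """Link a new supply-chain record to the tip of the chain."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"record": record, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    chain.append(block)

chain: list = []
for event in [{"event": "harvested", "farm": "Sidama smallholder co-op"},
              {"event": "exported", "port": "Djibouti"},
              {"event": "roasted", "roaster": "ExampleRoastCo"},
              {"event": "retailed", "shop": "corner cafe"}]:
    append_block(chain, event)

# Verification: recompute every hash and check the links between blocks.
# Any altered record changes its hash and trips the assertion.
for prev, block in zip(chain, chain[1:]):
    assert block["prev_hash"] == block_hash(prev), "tampered record detected"
print(f"all {len(chain)} records intact")
```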

According to founder and president of Farmer Connect, David Behrends, the aim is to humanise each coffee drinker’s relationship with their daily cup of coffee. The app will allow consumers to play an active role in sustainability governance by supporting coffee farmers associated with their coffee brands. The app also gives consumers an opportunity to donate funds directly to farmers around the world, or to help fund sustainability projects in the farmers’ local communities.

Many in the food industry have been developing blockchain solutions to help make the supply chain traceable and more transparent. Last year, the 180-year-old tea producer Assam Company and US-based technology firm SmartFarms unveiled plans to develop a blockchain solution to trace tea leaves from the farm to the cup, together with a consumer app that would also allow consumers to thank farm workers directly.

Agricultural commodities typically pass through many intermediaries before being offered to consumers as a packaged product. For example, smaller coffee and tea farmers in developing nations may sell crops to larger producers, before produce changes hands between a number of exporters, importers, traders, roasters, distributors and retailers. The complexity of the supply chain and the lack of technology at the source make it prohibitively difficult to inform the consumer exactly how and where coffee was farmed.

In our globalised economy, commodity prices may change with seasonal highs and lows in consumption, exceptionally good or poor crop harvests in the countries where they are grown, currency exchange rates and new import tariffs. Coffee prices enjoyed a high of $3.06 per pound in 2011, but have been unstable ever since, falling by more than 40 per cent over the past three years to a low of less than $1 per pound last June.

Although prices have increased during the past few months, the extended instability of global coffee prices has left farmers in Africa, Asia and South America struggling to stay in business. Consequently, many farmers simply aren’t able to pay workers the wages that they would like to.

Under normal circumstances, a coffee or tea drinker in Europe or the US may only be aware of the product brand and price, or perhaps the commodity prices reported in the news. Consumers may feel the impact via changes in product pricing, but few are normally aware of how swings in commodity prices affect the farmers and farm workers.

Fair trade labelling has helped promote some transparency and has benefited fair trade farmers. However, it doesn’t solve the issue of how to bring transparency to global supply chains.

If these new initiatives using blockchain are successful, then consumers may soon be offered more certainty that the food brands they buy not only taste good, but are good for the farmers too.

This story was first published in The National.


December 21, 2019

The UAE’s all abuzz about AI!

We’re exposed to more and more news about artificial intelligence, or AI, these days – from stories about talking robots to deepfake videos of celebrities and self-driving cars. AI has become a buzzword and popular interest has been steadily growing over the past few years. However, it would be a mistake to assume that everyone shares the same level of interest, learning about new technologies and increasing their understanding of AI at the same rate. After all, more than 40 per cent of the world’s population isn’t even connected to the internet yet.

Here in the UAE, the media seems to provide residents with a daily diet of news about artificial intelligence. This is no accident. Although it’s never a perfect replica, the media is a reflection of society, and interest in AI-related topics has grown as business, government and education investment in AI has scaled up.

Continue reading this story on The National


December 19, 2019

The potential for emotion recognition is huge, but scientists at New York University argue the technology is still too new to be reliable

A growing number of employers are requiring job candidates to complete video interviews that are screened by artificial intelligence (AI) to determine whether they move on to another round. However, many scientists claim that the technology is still in its infancy and cannot be trusted. This month, a new report from New York University’s AI Now Institute goes further and recommends a ban on the use of emotion recognition for important decisions that impact people’s lives and access to opportunities.

Emotion recognition systems are a subset of facial recognition, developed to track micro-expressions on people’s faces and interpret their emotions and intent. These systems use computer vision technologies to track human facial movements and apply algorithms to map the resulting expressions to a defined set of measures. These measures allow the system to identify typical facial expressions and so infer what human emotions and behaviours are being exhibited.
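
In code, that mapping step amounts to a supervised classifier: a vector of facial measurements in, an emotion label out. The Python sketch below uses random numbers in place of real landmark features and an invented label set, purely to show the shape of the pipeline rather than a working recogniser:

```python
# Compressed sketch of the mapping step: facial measurements in,
# inferred emotion label out. The feature count, labels and random
# "training data" are all invented stand-ins, not a real recogniser.
# Requires: pip install numpy scikit-learn
import numpy as np
from sklearn.ensemble import RandomForestClassifier

EMOTIONS = ["neutral", "happy", "sad", "angry", "surprised"]
rng = np.random.default_rng(seed=0)

# Stand-in for a labelled dataset: 500 faces x 68 landmark features
X_train = rng.normal(size=(500, 68))
y_train = rng.integers(len(EMOTIONS), size=500)

clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)

# "Inference": map one new face's measurements to an emotion label.
# The scientific objection described in this article lands exactly
# here: the prediction is only as sound as the assumptions behind
# the labels the system was trained on.
face_features = rng.normal(size=(1, 68))
print("inferred emotion:", EMOTIONS[clf.predict(face_features)[0]])
```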

The potential for emotion recognition is huge. According to Indian market intelligence firm Mordor Intelligence, emotion recognition has already become a $12 billion (Dh44bn) industry and is expected to grow rapidly to exceed $90bn per year by 2024. The field has drawn the interest of big tech firms such as Amazon, IBM and Microsoft, startups around the world and venture capitalists.

Advertisers want to know how consumers respond to their advertisements, retail stores want to know how shoppers feel about their displays, law enforcement authorities want to know how suspects react to questioning, and the list of customers goes on. Both business and government entities want to harness the promise of emotion recognition.

As businesses the world over look to AI to improve processes, increase efficiency and reduce costs, it should come as no surprise that AI is already being applied at scale in recruitment. Automation has the strongest appeal when an organisation has a volume of repetitive tasks and large volumes of data to process, and recruitment involves both. Some 80 per cent of Fortune 500 firms now use AI technologies for recruitment.

Emotion recognition has been hailed as a game-changer by some members of the recruitment industry. It aims to identify non-verbal behaviours in videos of candidate interviews, while speech analysis tracks key words and changes in tone of voice. Such systems can track hundreds of thousands of data points for analysis from eye movements to what words and phrases are used. Developers claim that such systems are able to screen out the top candidates for any particular job by identifying candidate knowledge, social skills, attitude and level of confidence – all in a matter of minutes.

As with the adoption of many new AI applications, cost savings and speed are the two core drivers of AI-enabled recruitment. Potential savings for employers include the time spent screening candidates and the number of HR staff required to manage recruitment, as well as another safeguard against the costly mistake of hiring the wrong candidate for a position. Meanwhile, the message for candidates is that AI can aid better job placement, ensuring that their new employer is a good fit for them.

However, the consensus among scientific researchers is that the algorithms developed for emotion recognition lack a solid scientific foundation. Critics claim that it is premature to rely on AI to accurately assess human behaviour, primarily because most systems are built on widespread assumptions, not independent research.

Emotion recognition was the focus of a report published earlier this year by a group of researchers from the Association for Psychological Science. The researchers spent two years reviewing more than 1,000 studies on facial expression and emotions. The study found that how people communicate their emotions varies significantly across cultures and situations, and across different people within a single situation. The report concluded that, for the time being, our understanding of the link between facial expression and emotions is tenuous at best.

Unintentional bias has become the focus of growing scrutiny from scientists, technology developers and human rights activists.

Many algorithms used by global businesses have already been found to have bias related to age, gender, race and other factors, due to the assumptions made whilst programming them and the type of data that has been used to feed machine learning. Last year, Amazon shut down an AI recruiting platform after finding that it discriminated against women.

One thing is for sure: regardless of the potential merits of emotion recognition and whether it prevents or promotes your chances of being offered a job, it is likely to remain the subject of debate for some time to come.

This story was first published in The National.


December 6, 2019

Deepfake videos are becoming more abundant and increasingly difficult to spot.

Deepfake videos are back in the news again this week as China criminalised their publication without a warning to viewers. California also recently introduced an anti-deepfake law in an attempt to prevent such content from influencing the US elections in 2020.

Deepfakes are videos that make fake content look convincingly real, produced by software using machine learning algorithms. Videos like this started to pop up online a few years ago and, since then, regulators around the world have been scrambling to prevent the spread of malicious content. While deepfake laws mean different things in different jurisdictions, what has changed to make deepfakes an urgent priority for policymakers? And will such laws be sufficient to keep pace with the spread of fake information?

First, there is the sheer quantity. The number of deepfake videos is growing fast as new technology makes it easier to create them. Deeptrace, an Amsterdam-based cybersecurity company, found the occurrence of deepfake videos on the web increased 84 per cent from December 2018 to July this year. The company identified 14,698 deepfakes online during this time.

In 2018, internet media group Buzzfeed grabbed attention with a video it dubbed “a public service announcement”: a deepfake video of former US president Barack Obama “speaking” about fake news, voiced by American actor Jordan Peele. At first glance, the video appeared authentic, but on closer inspection it was clear to see the video had been manipulated.

Racking up nearly 7 million views on YouTube to date, the Buzzfeed stunt was a stark warning about the dangers of deepfakes, where anyone can appear to say anything. While the results so far have been mostly crude and relatively easy to identify, future deepfake videos are likely to be much harder for the human eye to identify as fake. The artificial intelligence (AI) used to make deepfakes is getting better, making it more and more difficult to distinguish a deepfake video from an original. In fact, machine learning algorithms already allow deepfake applications to mimic facial movements that are virtually undetectable as fake to human viewers.

This combination of easy-to-use deepfake software and the increasing sophistication of those applications means that we’ll see the overall quality of deepfakes increase, and we’re soon likely to see tens of thousands of different deepfakes, perhaps hundreds of thousands. Experts believe that technology to make deepfake videos that seem perfectly real will be widely available within a year.

So, how will we be able to tell what’s real and what’s fake?

When we see a video news report of a head of state, politician, doctor or subject matter expert saying something, how will we be able to trust that it’s authentic? This is now the subject of concern for leaders in business, technology, government and non-governmental organisations.

Undetectable deepfakes have the potential to mislead the media and the general public and so impact every aspect of business, government and society. As the risk of malicious deepfakes increases, it could represent a threat to everyone from celebrities to lawmakers, and from scientists to schoolchildren, and perhaps even the world’s legal systems.

Original videos can be manipulated so that spokespeople appear to say things that undermine their credibility. Likewise, inadvisable remarks made by public figures can be edited out, or video evidence of a crime removed.

What’s more, the deepfake revolution is just beginning. As new technologies continue to develop, it is thought to be only a matter of years before it will be possible to create deepfakes in real-time, opening up opportunities for bad actors to deceive global audiences and manipulate public perceptions in a few moments. With a few more years of technology development, it’s also conceivable that it will become possible to create deepfakes at scale, altering video to deliver different versions of it to different audiences.

In today’s digital world, it’s not necessary that deepfakes fool mainstream media to have a significant impact. With nearly 5 billion videos watched on YouTube per day and another 8 billion through Facebook, deepfake producers have an eager global audience that is not yet accustomed to questioning whether trending videos are real or fake.

Facebook and Google are both developing AI to automatically detect deepfakes. But this technology currently lags far behind the development of deepfake tech itself. Until anti-deepfake software catches up, it’s likely the average internet user may have no way of knowing if a video is real or fake.

As scary as the future may sound, the most dangerous time for deepfakes may actually be the present.

This story was first published in The National.


November 28, 2019

The British computer scientist and inventor of the web has just launched a global plan to save it.

Sir Tim Berners-Lee on Monday launched the World Wide Web Foundation’s ‘Contract For The Web’, a plan that lays out a set of core principles and a road map for business, government and individuals to follow. He says such a concept is urgently needed to save the web and help prevent humanity sliding into a new age of ‘digital dystopia’. So, why does Berners-Lee think the world wide web needs saving?

Regardless of how much time you may personally spend browsing the web, it is now fundamental to life as we know it. Over the past 30 years, the world wide web has truly changed the world, helping to democratise access to information and education, accelerating global scientific progress, enabling the development of numerous other digital technologies, becoming the biggest driver of the global economy and even helping humanity to develop a broader understanding of itself.

The vision of the academic, scientific and technology communities that built the web was always to create an open and neutral system, available to all. However, according to Berners-Lee, the web now comes under increasing attack from government and commercial interests, threatening its neutrality, freedom and universal access.

In some respects, this has always been the case. In the 1990s, some governments were more enthusiastic than others in allowing public access to the internet to begin with. Many government officials and politicians had concerns about the perceived dangers of the web, such as use by organised crime, terrorists, political activists or publishers of pornography.

Business has also long influenced usage of the web and has been a deciding factor in the rate of internet adoption. The capitalist nature of the internet’s global roll-out is a primary reason why more than half of the world’s population is still offline. For those fortunate enough to be online, their web experience is heavily influenced by huge investment promoting commercial interests.

So, given that the web has already been shaped by long standing commercial and government interests, what’s the impetus that has led to the heightened concern and the Web Foundation’s global action plan?

Berners-Lee maintains that never before has the web’s power for good been under more threat and that it has now reached a tipping point from which it could very well descend into dystopia. Firstly, with 46 per cent of the world’s population still unable to access the internet, he believes the digital divide threatens to be one of the greatest sources of inequality in our time.

Speaking at the Internet Governance Forum in Berlin this week, Berners-Lee put his case forward for the web as a force that truly serves humanity, but was also at pains to impress upon delegates that the complex challenges we face require action across the gamut of business, government, society and individual internet users.

The concept for the plan was first introduced by Berners-Lee five years ago as a ‘Magna Carta for the web’. Since then, there can be no doubt that new dangers continue to present themselves.

From foreign powers accused of trying to manipulate the 2016 US election via social media, to numerous data breaches exposing the private data of millions of internet users, through to the epidemic of fake news and fake content, and the rise in hate speech online, the web has never had the power to negatively affect so many. Importantly, dangers like these know no national boundaries and arguably make the internet less safe, even in those countries that have long championed online freedoms and protection.

The global, borderless nature of the web means that laws such as the European Union’s General Data Protection Regulation can only ever provide part of the solution to ensure it works for all. For this reason, Berners-Lee believes that public participation is critical, if we are to have the web that we want.

In addition to providing a framework of guiding principles for business and government, the Web Foundation’s plan urges internet users to be active participants in shaping the web, including the content and systems made available through it. Individuals can help build strong communities that respect civil discourse and human dignity, and be active citizens of the web, creating awareness among their peers regarding threats, supporting positive initiatives and holding regulators to account.

In common with other pressing global issues such as climate change and increasing social inequalities, it may be up to humanity to save itself.

This story was first published in The National.


November 22, 2019

With the US and China dominating artificial intelligence development, what chances do smaller nations have?

Over the past two years, a national artificial intelligence (AI) strategy has come to be seen as a pre-requisite for digital competitiveness and an essential pillar of national governance for the Fourth Industrial Revolution. So, Singapore’s unveiling of a new, updated national AI strategy last week received global attention.

In common with the UAE, Singapore was one of the first countries to announce a national AI strategy, back in 2017. The new one, unveiled by Deputy Prime Minister Heng Swee Keat on the last day of Singapore’s FinTech Festival last week, is holistic and zeroes in on some specific national goals. Importantly, it also leverages investments already made by the government in education, technology development, infrastructure and innovation.

Developed by the Smart Nation Digital Government Office (SNDGO), the AI strategy not only identifies key areas that can be enabled by AI and the necessary resources to support nation-wide AI adoption, but also aims to set out Singapore’s stall as a leading global hub for the development, testing and export of AI applications. Recently ranked by the think tank Oliver Wyman Forum as the city most ready for AI, Singapore’s play for a greater role in the development of commercial and government AI systems has many things going for it.

Against the backdrop of the China-US trade war, Singapore is geographically and politically well placed to encourage both Chinese and American investment in AI ventures, at a time when cross-border foreign direct investment and venture capital between the two AI powerhouses is at its lowest level since 2014. Meanwhile, the combination of the country’s willingness to implement AI and the small size of the nation itself makes it an ideal testbed for AI developers to try out their solutions before exporting them to larger countries, where implementation may face more obstacles and higher costs.

Singapore’s strategy identifies key enablers for AI innovation and adoption, including the development of talent, data infrastructure and creating a progressive and trusted environment for AI. However, crucially, it also picks five core development projects designed to bring early benefits, plus create opportunities for local innovation and investment. By choosing AI-enabled projects that both address national challenges and deliver a visible impact on society and the economy, Singapore is also preparing the proof of concept for its goal of becoming a global hub for the development of AI technologies.

It’s no coincidence that the UAE, Finland and Singapore all first committed to national AI strategies in 2017, alongside large nations such as Canada and China, but well ahead of most of the world. All three countries have populations under 10 million, have relatively large economies and have been able to stay ahead of the technology curve.

The forward-looking policy and smaller size of these countries have helped make embracing new technologies faster and more achievable than for many larger countries with bigger budgets, often allowing them to leapfrog global competitors.

Finland, Singapore and the UAE were all early pioneers of e-government, helping to develop new digital government processes. They were all also early adopters of new mobile standards and consumer services including mobile broadband.

So, it makes perfect sense that smaller digital-savvy countries should be able to take leadership positions in the fast-developing world of AI.

It is now well-known that the UAE was the first country in the world to bring AI decision-making into government at a cabinet level, naming His Excellency Omar Sultan Al Olama Minister of State for Artificial Intelligence in October 2017. In April of this year, the cabinet approved the UAE’s AI Strategy 2031.

The UAE has also made strategic investments in a number of new ventures to ensure that the UAE becomes not only an early adopter, but also a leading producer of AI applications. Last week Abu Dhabi National Oil Company (Adnoc), one of the world’s largest oil production companies, announced a joint venture with UAE AI group G42 to create artificially intelligent applications for the energy sector.

Other high profile AI investments in the UAE include a world-class AI research institute in its capital, the world’s first dedicated artificial intelligence university and Chinese AI provider SenseTime’s plans to open a Europe, Middle East and Africa research and development centre in Abu Dhabi.

Singapore’s new national AI strategy makes a convincing case for prioritising the development of a homegrown AI industry, in line with the country’s core strengths and challenges. The UAE has its own set of strengths and challenges, and these too, provide a golden opportunity for it to become one of the world’s leading AI producers.

This story was first published in The National.