AI policy Archives — Carrington Malin

April 21, 2023

The recent passing of a new autonomous vehicle (AV) law in Dubai highlights the emirate’s tenacious commitment to innovation. But there is much to do before we see driverless taxi services on the roads.

Dubai’s ambition to place itself at the forefront of autonomous transport is exciting because it is unprecedented. However, this very same lack of precedent means it cannot lean on the experiences of others to develop new regulations, technologies or infrastructure.

Dubai’s plans switched into high gear in 2021 when the Roads and Transport Authority (RTA) signed an agreement with General Motors’ autonomous vehicle company Cruise to operate its self-driving taxis and ride-hailing services. The agreement will make Dubai the first city outside the US to offer Cruise’s driverless taxi services, with a goal of putting 4,000 AVs on the road by 2030.

The Dubai Smart Mobility Strategy aims to convert 25 percent of total transportation journeys into trips via self-driving transportation by 2030, including driverless rail transport.

The new Law No. 9 of 2023, passed last week by Sheikh Mohammed Bin Rashid Al Maktoum, vice president, prime minister and ruler of Dubai, is not the first legislation to support the emirate’s future driverless vehicle services sector. The law follows resolutions made by the emirate’s executive council, local legislation development led by the RTA and UAE laws issued at a federal level to allow temporary licensing of AV trials.

But on the legal front, Dubai must innovate: there are simply no comprehensive laws or guidelines in place for the public use of autonomous vehicles anywhere else.

For example, the UK government plans to allow autonomous vehicles on the road by 2025. However, a closer look shows that British regulation so far only covers testing, insurance and liability. Further afield, the EU has implemented a framework for approving level 3 and 4 AVs but offers little detail on their operation on the road. Meanwhile, the world’s most extensive driverless vehicle trials, which are taking place in China and the US, have only been authorised via case-by-case permissions issued by authorities. This is also true of the UAE’s trials.

Given the emirate’s ambitious roadmap, Dubai’s partnership with Cruise is a smart move. The US company was one of two operators to receive a permit to offer paid driverless taxi services in California in 2021. Early last year it began offering services within designated areas of San Francisco, but only between the hours of 22:00 and 06:00. The company now operates about 300 robotaxis across San Francisco, Austin and Phoenix.

However, no new technology is without its teething problems. US media have reported a variety of complaints related to the San Francisco trial, including immobile cars blocking traffic, multiple vehicles stopping in the middle of the road and incorrect signalling. In March a Cruise robotaxi bumped into the rear of a San Francisco bus, prompting an urgent software recall across its entire fleet of AVs.

While Dubai’s modern road infrastructure and digital traffic management are both big pluses for future AV services, the city still has its own unique set of physical and digital characteristics, operational needs and system integration requirements to consider. City-specific behavioural factors also apply relating to motorists, pedestrians and passengers. So, notwithstanding Cruise’s past two years of service trials in California and the five years of testing before that, only so much can be taken for granted as the US AV firm and the RTA work together on Dubai’s driverless services.

The RTA is, at this point, perhaps the only public transport authority in the world developing an ecosystem to put 4,000 driverless taxis on the road in a single city.

This is why innovation, not best practice, must drive Dubai’s autonomous vehicle plans.

This article first appeared in Arabian Gulf Business Insight.


April 16, 2023

Sam Altman is expected to meet policymakers in Dubai, as part of his world tour, but we’re only one big scandal away from a global crackdown.

The OpenAI CEO is expected to visit Dubai as part of his 16-stop global tour in May and June to meet customers, developers and policymakers. Since Altman’s visit follows the Elon Musk-backed open letter calling for a halt to further development and training of LLMs like GPT, and Italy’s ban on ChatGPT at the end of March, the question of AI regulation is, no doubt, being pushed quickly up regulators’ agendas.

Arabian Gulf Business Insight (AGBI) asked me why Altman is making this world tour and why now is the right time to talk to policymakers. In short, time is of the essence!

Italy’s ChatGPT ban, imposed over concerns about data privacy, the lack of age restrictions and ChatGPT’s potential to misinform people at scale, provides a clear signal that OpenAI needs to open up channels with regulators worldwide to ensure that they understand ChatGPT and the company’s plans a little better. Other regulators share these same concerns, and it is a significant challenge for them to keep abreast of how this fast-moving technology will affect existing laws, rights and data regulations.

If OpenAI expects to keep releasing new, more powerful versions, it needs to help set expectations now. So, it would be natural to expect dialogue between OpenAI and regulators, with OpenAI sharing what regulators can expect from its platforms, and regulators sharing their needs and concerns.

The more regulators feel ill-informed, or that laws are being ignored, the greater the risk of further bans. As with any new, little-understood technology, we’re only one big scandal away from a crackdown. So it’s well worth OpenAI’s time to put some work in now to keep regulators informed.

You can read UAE-based journalist Megha Merani’s full story in AGBI here.


April 14, 2021

Brazil’s national artificial intelligence strategy – a Estratégia Brasileira de Inteligência Artificial or EBIA in Portuguese – was published in the Diário Oficial da União, the government’s official gazette, last week. Publication follows a year of public consultation via an online platform, recommendations gathered from the technology industry and academic experts, and benchmarking (2019-2020), plus a further year of planning and development. The strategy focuses on the policy, investment and initiatives necessary to stimulate innovation and promote the ethical development and application of AI technologies in Brazil, including education and research and development.

As a country that has struggled with both racial equality and gender equality, it’s no surprise that EBIA makes ethical concerns and policies a priority. Core to the strategy is that AI should not create or reinforce prejudices, putting the onus on the developers of artificial intelligence systems to follow ethical principles, meet regulatory requirements and, ultimately, take responsibility for how their systems function in the real world. Ethical principles will also be applied by the government in issuing tenders and contracts for solutions and services powered by AI. The strategy also embraces the OECD’s five principles for a human-centric approach to AI.

Chart: Brazil’s national artificial intelligence strategy

It’s important when reviewing the new EBIA to take into account the Brazilian Digital Transformation Strategy (2018), or E-Digital, which puts in place some foundational policy relevant to AI. E-Digital defines five key goals: 1) promoting open government data availability; 2) promoting transparency through the use of ICT; 3) expanding and innovating the delivery of digital services; 4) sharing and integrating data, processes, systems, services and infrastructure; and 5) improving social participation in the lifecycle of public policies and services. This last goal was clearly embraced in the development of EBIA, which included the year-long public consultation as part of the process.

More to follow on Brazil’s national artificial intelligence strategy…

Download A Estratégia Brasileira de Inteligência Artificial (EBIA) summary (PDF, Portuguese)

Also read about last year’s publication of the Indonesia National AI Strategy (Stranas KA) and Saudi Arabia’s National Strategy for Data & AI.


October 21, 2020

The Saudi national AI strategy was announced today at the virtual Global AI Summit by Saudi Data and Artificial Intelligence Authority (SDAIA) president Dr. Abdullah bin Sharaf Al-Ghamdi. The National Strategy for Data & AI (NSDAI) includes ambitious goals for skilling-up Saudi talent, growing the nation’s startup ecosystem and attaining global leadership in the AI space. It also aims to raise $20 billion in investment for data and AI initiatives.

Dr. Al-Ghamdi today gave a brief introduction to some of the strategy’s key goals. Speaking at the inaugural Global AI Summit, he said that Saudi Arabia has set ambitious targets, including a goal of attracting $20 billion in investments by 2030, in both foreign direct investment (FDI) and local funding, for data and artificial intelligence initiatives.

As detailed by Dr. Al-Ghamdi, the Kingdom aims to rank among the top 15 nations for AI by 2030, train 20,000 data and AI specialists and experts, and grow an ecosystem of 300 active data and AI startups. He also urged participants in the virtual event to challenge themselves, to think and work together, and to shape the future of AI for the good of humanity.

Formed last year, with a mandate to drive the national data and AI agenda, the SDAIA developed a national AI strategy which was approved by King Salman bin Abdulaziz Al Saud in August 2020. No details of the National Strategy for Data & AI were shared until today.

According to an official SDAIA statement, the NSDAI will roll out a multi-phase plan that both addresses urgent requirements for the next five years and contributes to Vision 2030 strategic development goals. In the short term, the strategy will aim to accelerate the use of AI in the education, energy, government, healthcare and mobility sectors.

Saudi National Strategy for Data & AI goals
Source: Saudi Data and Artificial Intelligence Authority (SDAIA)

Six strategic areas have been identified in the NSDAI:

  • Ambition – positioning Saudi Arabia as a global leader and enabler for AI, with a goal of ranking among the top 15 countries in AI by 2030.
  • Skills – transforming the Saudi workforce and skilling-up talent, with a target of creating 20,000 AI and Data specialists and experts by 2030.
  • Policy & regulation – developing a world-class regulatory framework, including for the ethical use of data and AI that will underpin open data and economic growth.
  • Investment – attracting FDI and local investment into the data and AI sector, with a goal of securing a total of $20 billion (SAR 75b) in investments.
  • Research and innovation – driving the development of research and innovation institutions in data and AI, with an objective of ranking among the top 20 countries in the world for peer-reviewed data and AI publications.
  • Digital ecosystem – driving the commercialisation and industry application of data and AI, creating an ecosystem of at least 300 AI and data startups by 2030.

Over the past year, SDAIA has established three specialised centres of expertise: the National Information Center, the National Data Management Office and the National Center for AI. It has also begun building perhaps the largest government data cloud in the region, merging 83 data centres owned by over 40 Saudi government bodies. More than 80 percent of government datasets have so far been consolidated under a national data bank.

The formation of the SDAIA follows the adoption of the government’s ‘ICT Strategy 2023’ in 2018, which aims to transform the Kingdom into a digital and technological powerhouse. The government identified technology as a key driver for its Vision 2030 blueprint for economic and social reform. Digitisation and artificial intelligence are seen as key enablers of the wide-ranging reforms.

Artificial intelligence, big data and IoT are also pivotal for the massive $500 billion smart city, Neom, announced by Saudi Crown Prince Mohammed bin Salman in 2017. Infrastructure work on the 26,000 square kilometre city began earlier this year.

Meanwhile, the authority has been using AI to identify opportunities for improving the Kingdom’s government processes, which may result in some $10 billion in government savings and additional revenues.

More than fifty government officials and global AI leaders are speaking at this week’s Global AI Summit, which takes place today and tomorrow. The online event coincides with the year of Saudi’s presidency of the G20.

Download the National Strategy for Data & AI Strategy Narrative – October 2020 (PDF)

Watch the NSDAI promotion video from the Global AI Summit (YouTube)

Updated 23 October 2020


January 17, 2020

The UAE is developing a sophisticated and far-reaching range of initiatives to attract 21st century skills.

In 2015, Klaus Schwab, the executive chairman of the World Economic Forum, coined the term ‘Fourth Industrial Revolution’ to describe our connected industrial society and its increasing reliance on intelligent information systems.

As with previous industrial periods, this revolution will have a profound impact on our world, not least of all changing the nature of work and our relationship with it. However, in the short term, many of the dynamics will appear familiar, such as the increasing demand for specialist skills that serve new, upcoming industries and the competition among employers to hire those skills.

His Highness Sheikh Mohammad Bin Rashid Al Maktoum, Vice President and Prime Minister of the UAE and Ruler of Dubai, on Sunday launched two new initiatives supporting the National AI Strategy to build capacity in AI talent. Announced at a retreat organised for AI experts by the National Programme for Artificial Intelligence, the new initiatives are part of a far-reaching policy to ensure the long-term availability of talent at many levels, to help ensure the country’s competitiveness in the Fourth Industrial Revolution.

In light of fierce global competition among nations for leadership positions in the Fourth Industrial Revolution and the fluid state of the global AI talent pool, winning our new talent wars will require more than simply outbidding competitors. Today’s policymakers must recognise that they need to attract both home-grown and international talent, leverage human resources that are located around the world and create ways of building long-term relationships that will continue to support the availability of talent. It’s all about building talent ecosystems, rather than simply planning to acquire more people with the right skills.

The UAE government recognised the scale of the talent challenge early on and has been developing a wide range of initiatives to attract, train and develop talent, nationally, regionally across the Arab world and globally.

But what did the previous industrial revolutions teach us? The workforce requirements of the first three changed our planet forever. In pre-industrial societies, more than 80 per cent of the population lived in rural areas. Drawn by the promise of jobs in new industries, people flocked from the countryside to towns and cities. By the year 1850, more people in the United Kingdom lived in cities than rural areas and by 1920, a majority of Americans lived in cities, too. The mass movement of people resulted in far-reaching economic, geographic and social changes that have made our world what it is today.

The changes that the Fourth Industrial Revolution will bring are also destined to shape the future of human existence. Artificial intelligence is set to transform the nature of nearly every single one of today’s existing jobs, eliminate job roles that currently employ millions of people and create millions of new jobs, including many roles that have not yet even been imagined. Furthermore, the pace of change is accelerating, powered by faster technology development and so putting more pressure on business, economic, political and government systems than ever before.

Critically, for the global competitiveness of both businesses and nations themselves, the talent needed to fuel the development and implementation of artificial intelligence systems is in short supply. It’s a highly dynamic talent pool that is changing rapidly, follows different rules to past waves of tech-related talent and includes people who are more independent of industry and location.

At a UAE government level, an AI Programme has been created in partnership with Kellogg College at Oxford University to train UAE nationals and help them accelerate the delivery of the national AI strategy. The first batch of 94 participants graduated in April 2019.

On a regional level, the One Million Arab Coders programme launched in 2017 incentivises Arab youth at large to acquire new skills, graduating 22,000 programmers in its first year. In 2019, several new modules were added to the curriculum, including an ‘Introduction to AI’ module. The UAE also launched a One Million Jordanian Coders initiative in Jordan and a One Million Uzbek Coders initiative in Uzbekistan.

Meanwhile, in the country’s tertiary education system, a number of AI education programmes, degree courses and research centres have been introduced to UAE colleges and universities over the past couple of years. In October, the UAE announced the world’s first graduate AI university — Mohamed bin Zayed University of Artificial Intelligence. The research-based academic institution offers fully paid scholarships for masters and PhD courses starting September 2020.

The two new initiatives launched this week add further appeal to aspiring AI talent. The AI Talent Hunt programme will create an AI laboratory drawing together national and global expertise to solve real world issues, while a competitive AI Challenge Programme will be rolled out in partnership with Microsoft.

In the race to attract 21st century skills, the UAE is already engaging talent at multiple levels and has begun to build a reputation as an enabler of talent, rather than simply a destination. This effort, combined with its goals to become a global hub for AI research and entrepreneurism, could well encourage much sought-after talent to stay in the UAE, or, at least, keep coming back.

This story was first published on The National


December 19, 2019

The potential for emotion recognition is huge, but scientists at New York University argue the technology is still too new to be reliable

A growing number of employers are requiring job candidates to complete video interviews that are screened by artificial intelligence (AI) to determine whether they move on to another round. However, many scientists claim that the technology is still in its infancy and cannot be trusted. This month, a new report from New York University’s AI Now Institute goes further and recommends a ban on the use of emotion recognition for important decisions that impact people’s lives and access to opportunities.

Emotion recognition systems are a subset of facial recognition, developed to track micro-expressions on people’s faces and interpret their emotions and intent. These systems use computer vision technologies to track human facial movements, then use algorithms to map those movements to a defined set of measures. The measures allow the system to identify typical facial expressions and so infer what human emotions and behaviours are being exhibited.
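
In the simplest possible terms, the pipeline looks something like the sketch below, which uses Google’s open-source MediaPipe library to extract facial landmarks from an image and maps a single crude geometric measure to an expression label. The landmark indices, the measure and the threshold are illustrative assumptions, not a validated emotion model; the gap between facial geometry and genuine emotion is precisely what the researchers quoted below take issue with.

```python
# Illustrative emotion-recognition pipeline: landmarks -> measures -> label.
# Assumes `pip install mediapipe opencv-python` and a local face.jpg;
# indices, the measure and the threshold are toy assumptions for illustration.
import cv2
import mediapipe as mp

def mouth_openness(landmarks) -> float:
    """Vertical lip gap relative to mouth width (one toy 'measure')."""
    top, bottom = landmarks[13], landmarks[14]    # inner-lip landmarks
    left, right = landmarks[61], landmarks[291]   # mouth-corner landmarks
    width = abs(right.x - left.x) or 1e-6         # avoid divide-by-zero
    return abs(bottom.y - top.y) / width

image = cv2.imread("face.jpg")                    # hypothetical input frame
with mp.solutions.face_mesh.FaceMesh(static_image_mode=True,
                                     max_num_faces=1) as mesh:
    results = mesh.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

if results.multi_face_landmarks:
    measure = mouth_openness(results.multi_face_landmarks[0].landmark)
    # Crude rule mapping a measure to an emotion label: exactly the kind
    # of assumption-laden step that researchers criticise as unscientific.
    label = "surprised" if measure > 0.35 else "neutral"
    print(f"measure={measure:.2f} -> {label}")
```

Commercial systems track far more measures and learn the mapping from data rather than hand-written rules, but the basic structure, and the underlying assumption that expressions reliably signal emotion, is the same.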

The potential for emotion recognition is huge. According to Indian market intelligence firm Mordor Intelligence, emotion recognition has already become a $12 billion (Dh44bn) industry and is expected to grow rapidly to exceed $90bn per year by 2024. The field has drawn the interest of big tech firms such as Amazon, IBM and Microsoft, startups around the world and venture capitalists.

Advertisers want to know how consumers respond to their advertisements, retail stores want to know how shoppers feel about their displays, law enforcement authorities want to know how suspects react to questioning, and the list of customers goes on. Both business and government entities want to harness the promise of emotion recognition.

As businesses the world over look to AI to improve processes, increase efficiency and reduce costs, it should come as no surprise that AI is already being applied at scale for recruitment processes. Automation has the strongest appeal when an organisation has a volume of repetitive tasks and large volumes of data to process, and both issues apply to recruitment. Some 80 per cent of Fortune 500 firms now use AI technologies for recruitment.

Emotion recognition has been hailed as a game-changer by some in the recruitment industry. It aims to identify non-verbal behaviours in videos of candidate interviews, while speech analysis tracks key words and changes in tone of voice. Such systems can track hundreds of thousands of data points for analysis, from eye movements to the words and phrases used. Developers claim that such systems can pick out the top candidates for any particular job by assessing candidate knowledge, social skills, attitude and level of confidence – all in a matter of minutes.

As with the adoption of many new AI applications, cost savings and speed are the two core drivers of AI-enabled recruitment. Employers stand to spend less time screening candidates, need fewer HR staff to manage recruitment and gain another safeguard against the costly mistake of hiring the wrong candidate for a position. Meanwhile, the message for candidates is that AI can aid better job placement, ensuring that their new employer is a good fit for them.

However, the consensus among scientific researchers is that the algorithms developed for emotion recognition lack a solid scientific foundation. Critics claim that it is premature to rely on AI to accurately assess human behaviour, primarily because most systems are built on widespread assumptions, not independent research.

Emotion recognition was the focus of a report published earlier this year by a group of researchers from the Association for Psychological Science. The researchers spent two years reviewing more than 1,000 studies on facial expression and emotions. The study found that how people communicate their emotions varies significantly across cultures and situations, and across different people within a single situation. The report concluded that, for the time being, our understanding of the link between facial expression and emotions is tenuous at best.

Unintentional bias has become the focus of growing scrutiny from scientists, technology developers and human rights activists.

Many algorithms used by global businesses have already been found to have bias related to age, gender, race and other factors, due to the assumptions made whilst programming them and the type of data that has been used to feed machine learning. Last year, Amazon shut down an AI recruiting platform after finding that it discriminated against women.

One thing is for sure: regardless of the potential merits of emotion recognition and whether it prevents or promotes your chances of being offered a job, it is likely to remain the subject of debate for some time to come.

This story was first published by The National


December 6, 2019

Deepfake videos are becoming more abundant and increasingly difficult to spot.

Deepfake videos are back in the news again this week as China criminalised their publication without a warning to viewers. California also recently introduced an anti-deepfake law in an attempt to prevent such content from influencing the US elections in 2020.

Deepfakes are videos that make fake content look convincingly real, produced by software using machine learning algorithms. Videos like this started to pop up online a few years ago and, since then, regulators around the world have been scrambling to prevent the spread of such malicious content. While deepfake laws mean different things in different jurisdictions, what has changed to make deepfakes an urgent priority for policymakers? And will such laws be sufficient to keep pace with the spread of fake information?

First, there is the sheer quantity. The number of deepfake videos is growing fast as new technology makes it easier to create them. Deeptrace, an Amsterdam-based cybersecurity company, found the occurrence of deepfake videos on the web increased 84 per cent from December 2018 to July this year. The company identified 14,698 deepfakes online during this time.

In 2018, internet media group Buzzfeed grabbed attention with a video it dubbed “a public service announcement”: a deepfake video of former US president Barack Obama “speaking” about fake news, voiced by American actor Jordan Peele. At first glance, the video appeared authentic, but on closer inspection it was clear the video had been manipulated.

Racking up nearly 7 million views on YouTube to date, the Buzzfeed stunt was a stark warning about the dangers of deepfakes — where anyone can appear to say anything. While results so far have been relatively crude and easy to identify, future deepfake videos are likely to be much harder for the human eye to spot as fake. The artificial intelligence (AI) used to make deepfakes is getting better, making it more and more difficult to distinguish a deepfake video from an original. In fact, machine learning algorithms already allow deepfake applications to mimic facial movements that are virtually undetectable as fake to human viewers.

This combination of easy-to-use deepfake software and the increasing sophistication of those applications means that the overall quality of deepfakes will rise, and we are soon likely to see tens of thousands of different deepfakes, perhaps hundreds of thousands. Experts believe that technology to make deepfake videos that seem perfectly real will be widely available within a year.
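
For readers curious about the mechanics, the sketch below shows the classic face-swap autoencoder idea behind the best-known deepfake tools: a shared encoder learns a common representation of faces, one decoder is trained per identity, and the swap comes from decoding person A’s frames with person B’s decoder. It is a toy PyTorch illustration with random stand-in data and simplified dimensions; real tools add face detection and alignment, blending, and increasingly adversarial (GAN) losses.

```python
# Toy sketch of the shared-encoder / per-identity-decoder architecture
# used by classic face-swap deepfakes. Sizes, data and the training loop
# are stand-ins for illustration only.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()   # one decoder per identity
params = (list(encoder.parameters()) + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
optimiser = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.L1Loss()

faces_a = torch.rand(8, 3, 64, 64)  # stand-in for person A's face crops
faces_b = torch.rand(8, 3, 64, 64)  # stand-in for person B's face crops

for _ in range(10):                 # real training runs far longer
    optimiser.zero_grad()
    # Each decoder learns to reconstruct its own person via the shared encoder.
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    loss.backward()
    optimiser.step()

# The "swap": person A's expressions rendered with person B's face.
with torch.no_grad():
    fake_b = decoder_b(encoder(faces_a))
```

Because the encoder is shared, it learns pose and expression common to both people, while each decoder learns one person’s appearance; that division of labour is what makes the swap convincing.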

So, how will we be able to tell what’s real and what’s fake?

When we see a video news report of a head of state, politician, doctor or subject matter expert saying something, how will we be able to trust that it’s authentic? This is now the subject of concern for leaders in business, technology, government and non-governmental organisations.

Undetectable deepfakes have the potential to mislead the media and the general public and so impact every aspect of business, government and society. As the risk of malicious deepfakes increases, it could represent a threat to everyone from celebrities to lawmakers, and from scientists to schoolchildren, and perhaps even the world’s legal systems.

Original videos can be manipulated so that spokespeople appear to say things that undermine their credibility. Likewise, inadvisable remarks made by public figures can be edited out, or video evidence of a crime removed.

What’s more, the deepfake revolution is just beginning. As new technologies continue to develop, it is thought to be only a matter of years before it will be possible to create deepfakes in real-time, opening up opportunities for bad actors to deceive global audiences and manipulate public perceptions in a few moments. With a few more years of technology development, it’s also conceivable that it will become possible to create deepfakes at scale, altering video to deliver different versions of it to different audiences.

In today’s digital world, it’s not necessary that deepfakes fool mainstream media to have a significant impact. With nearly 5 billion videos watched on YouTube per day and another 8 billion through Facebook, deepfake producers have an eager global audience that is not yet accustomed to questioning whether trending videos are real or fake.

Facebook and Google are both developing AI to automatically detect deepfakes. But this technology currently lags far behind the development of deepfake tech itself. Until anti-deepfake software catches up, it’s likely the average internet user may have no way of knowing if a video is real or fake.

As scary as the future may sound, the most dangerous time for deepfakes may actually be the present.

This story was first published by The National


November 8, 2019

One of the topics discussed at this week’s annual World Economic Forum Global Future Council in Dubai was how governments are coping with setting the rules for disruptive technologies. Given the now daily news coverage on the divergent opinions regarding the threats and opportunities brought by artificial intelligence (AI), it should come as no surprise that policymakers are coming under extreme pressure to do better.

It’s not simply the scale of the challenge, with government, industry and other institutions having to consider a wider range of technologies and technology implications than ever before. It is the sheer pace of change driving an unrelenting cycle of innovation and disruption, and, as AI becomes more commonplace, the pace of innovation and disruption will only further accelerate.

Continue reading this story on The National.


September 6, 2019

The increasing spread of emotional AI brings with it new concerns about privacy, personal data rights and, in a sense, freedom of expression, a consideration that has received little attention in the past. The data that emotion recognition captures could be considered biometric data, just like a scan of your fingerprint. However, there is little legislation globally that specifically addresses the capture, usage rights and privacy considerations of biometric data. Until GDPR.

Continue reading this story on Linkedin