Brazil’s national artificial intelligence strategy – the Estratégia Brasileira de Inteligência Artificial, or EBIA – was published in the Diário Oficial da União, the government’s official gazette, last week. Its publication follows a year of public consultation via an online platform, gathering recommendations from the technology industry and academic experts, and international benchmarking (2019-2020), together with a year of planning and development. The strategy focuses on the policy, investment and initiatives needed to stimulate innovation and promote the ethical development and application of AI technologies in Brazil, including education and research and development.
As a country that has struggled with both racial equality and gender equality, it’s no surprise that ethical concerns and policies are made a priority by the EBIA. Core to the strategy is that AI should not create or reinforce prejudices, putting the onus on the developers of artificial intelligence systems to follow ethical principles, meet regulatory requirements and ultimately take responsibility for how their systems function in the real world. Ethical principles will also be applied by the government when issuing tenders and contracts for solutions and services powered by AI. The strategy also embraces the OECD’s five principles for a human-centric approach to AI.
It’s important when reviewing the new EBIA to take into account the Brazilian Digital Transformation Strategy (2018), or E-Digital, which puts in place some foundational policy relevant to AI. E-Digital defines five key goals: 1) promoting open government data availability; 2) promoting transparency through the use of ICT; 3) expanding and innovating the delivery of digital services; 4) sharing and integrating data, processes, systems, services and infrastructure; and 5) improving social participation in the lifecycle of public policies and services. This last goal was clearly embraced in the development of the EBIA, which included a year-long public consultation as part of the process.
More to follow on Brazil’s national artificial intelligence strategy…
Dr. Abdullah bin Sharaf Al-Ghamdi, President of the Saudi Data and Artificial Intelligence Authority (SDAIA) today gave a brief introduction to some of the key goals of Saudi Arabia’s national AI strategy, now named the National Strategy for Data & AI (NSDAI). Speaking at the inaugural Global AI Summit, he advised that Saudi Arabia has set ambitious targets for its national AI strategy, including a goal of attracting $20 billion in investments by 2030, both in foreign direct investment (FDI) and local funding for data and artificial intelligence initiatives.
As detailed by Dr. Al-Ghamdi, the Kingdom aims to rank among the top 15 nations for AI by 2030, to train 20,000 data and AI specialists and experts, and to grow an ecosystem of 300 active data and AI startups. He also urged participants in the virtual event to challenge themselves, to think and work together, and to shape the future of AI for the good of humanity.
Formed last year, with a mandate to drive the national data and AI agenda, the SDAIA developed a national AI strategy which was approved by King Salman bin Abdulaziz Al Saud in August 2020. No details of the National Strategy for Data & AI were shared until today.
According to an official SDAIA statement, the NSDAI will roll out a multi-phase plan that both addresses urgent requirements for the next five years and contributes to the Vision 2030 strategic development goals. In the short term, the strategy aims to accelerate the use of AI in the education, energy, government, healthcare and mobility sectors.
Six strategic areas have been identified in the NSDAI:
Ambition – positioning Saudi Arabia as a global leader and enabler for AI, with a goal of ranking among the first 15 countries in AI by 2030.
Skills – transforming the Saudi workforce and skilling-up talent, with a target of creating 20,000 data and AI specialists and experts by 2030.
Policy & regulation – developing a world-class regulatory framework, including for the ethical use of data and AI that will underpin open data and economic growth.
Investment – attracting FDI and local investment into the data and AI sector, with a goal of securing a total of $20 billion (SAR 75b) in investments.
Research and innovation – driving the development of research and innovation institutions in data and AI, with an objective of ranking among the top 20 countries in the world for peer-reviewed data and AI publications.
Digital ecosystem – driving the commercialisation and industry application of data and AI, creating an ecosystem of at least 300 data and AI startups by 2030.
Over the past year, SDAIA has established three specialised centres of expertise: the National Information Center, the National Data Management Office and the National Center for AI. It has also begun building perhaps the largest government data cloud in the region, merging 83 data centres owned by over 40 Saudi government bodies. More than 80 percent of government datasets have so far been consolidated under a national data bank.
The formation of the SDAIA follows the adoption of the government’s ‘ICT Strategy 2023’ in 2018, which aims to transform the Kingdom into a digital and technological powerhouse. The government identified technology as a key driver for its Vision 2030 blueprint for economic and social reform. Digitisation and artificial intelligence are seen as key enablers of the wide-ranging reforms.
Artificial intelligence, big data and IoT are also pivotal to the massive $500 billion smart city, Neom, announced by Saudi Crown Prince Mohammed bin Salman in 2017. Infrastructure work on the 26,000 square kilometre city began earlier this year.
Meanwhile, the authority has been using AI to identify opportunities for improving the Kingdom’s government processes, which may result in some $10 billion in government savings and additional revenues.
More than fifty government officials and global AI leaders are speaking at this week’s Global AI Summit, which takes place today and tomorrow. The online event coincides with Saudi Arabia’s presidency of the G20.
The fluid state of the global AI talent pool and fierce international competition among nations for leadership positions in the Fourth Industrial Revolution mean that winning new talent wars will require more than simply outbidding competitors.
Today’s policymakers must recognise that they need to attract both home-grown and international AI talent, leverage human resources that are located around the world and create ways of building long-term relationships that will continue to support the availability of talent. It’s all about building talent ecosystems, rather than simply planning to acquire more people with the right skills.
Should emotion recognition be banned? A growing number of employers are requiring job candidates to complete video interviews that are screened by artificial intelligence (AI) to determine whether they move on to another round. However, many scientists claim that the technology is still in its infancy and cannot be trusted. This month, a new report from New York University’s AI Now Institute goes further and recommends a ban on the use of emotion recognition for important decisions that impact people’s lives and access to opportunities.
Deepfakes are videos that make fake content look convincingly real, produced by software using machine learning algorithms. Such videos began popping up online a few years ago, and since then regulators around the world have been scrambling to prevent the spread of this malicious content.
While deepfake laws mean different things in different jurisdictions, what has changed to make deepfakes an urgent priority for policymakers? And will such laws be sufficient to keep pace with the spread of fake information?
One of the topics discussed at this week’s annual World Economic Forum Global Future Council in Dubai was how governments are coping with setting the rules for disruptive technologies. Given the now daily news coverage on the divergent opinions regarding the threats and opportunities brought by artificial intelligence (AI), it should come as no surprise that policymakers are coming under extreme pressure to do better.
It’s not simply the scale of the challenge, with government, industry and other institutions having to consider a wider range of technologies and technology implications than ever before. It is the sheer pace of change driving an unrelenting cycle of innovation and disruption, and, as AI becomes more commonplace, the pace of innovation and disruption will only further accelerate.
What with British Prime Minister Boris Johnson’s ‘limbless chickens’ speech at Tuesday’s U.N. General Assembly and Pope Francis urging Silicon Valley to make sure that artificial intelligence (AI) does not lead to a new ‘form of barbarism’, government AI policy has been thrust back into the headlines.
The increasing spread of emotional AI brings with it new concerns about privacy, personal data rights and even freedom of expression, an issue rarely considered in this context in the past. The data that emotion recognition captures could be considered biometric data, just like a scan of your fingerprint. However, there is little legislation globally that specifically addresses the capture, usage rights and privacy considerations of biometric data. Until GDPR.