Governance Archives — Carrington Malin

April 16, 2023

Sam Altman is expected to meet policymakers in Dubai, as part of his world tour, but we’re only one big scandal away from a global crackdown.

The OpenAI CEO is expected to visit Dubai as part of his 16-stop global tour in May and June to meet with customers, developers and policymakers. Since Altman’s visit follows the Elon Musk-backed open letter calling for a halt to further development and training of LLMs like GPT, and Italy’s ban on ChatGPT at the end of March, the question of AI regulation is, no doubt, being pushed quickly up regulators’ agendas.

Arabian Gulf Business Insight (AGBI) asked me why Altman is making this world tour and why now is the right time to talk to policymakers. In short, time is of the essence!

Italy’s ChatGPT ban, prompted by concerns about data privacy, the lack of age restrictions and ChatGPT’s potential to misinform people at scale, is a clear signal that OpenAI needs to open up channels with regulators worldwide and make sure they understand ChatGPT and the company’s plans a little better. Other regulators share these same concerns, and it’s a significant challenge for them to keep abreast of how this fast-moving technology will affect existing laws, rights and data regulations.

If OpenAI expects to keep releasing new, more powerful versions, it needs to help set expectations now. So it would be natural to expect dialogue between OpenAI and regulators, with OpenAI sharing what regulators can expect from its platforms, and regulators sharing their needs and concerns.

The more regulators feel ill-informed, or feel that laws are being ignored, the greater the risk of further bans. As with any new, little-understood technology, we’re only one big scandal away from a crackdown. So it’s well worth OpenAI’s time to put some work now into keeping regulators informed.

You can read UAE-based journalist Megha Merani’s full story in AGBI here.


February 4, 2020

The WEF worked with more than 100 companies and tech experts to develop a new framework for assessing AI risk.

Companies are implementing new technologies faster than ever in the race to remain competitive, often without understanding the inherent risks.

In response to a growing need to raise awareness about the risks associated with artificial intelligence, the World Economic Forum, together with the Centre for the Fourth Industrial Revolution Network Fellows from Accenture, BBVA, IBM and Suntory Holdings, worked with more than 100 companies and technology experts over the past year to create the ‘Empowering AI Toolkit’. Developed with the structure of a company board meeting in mind, the toolkit provides a framework for mapping AI policy to company objectives and priorities.

Any board director reading through WEF’s Empowering AI Toolkit will find it valuable not because it delivers any silver bullets, but because it can provide much-needed context and direction to AI policy discussions – without having to hire expensive consultants.

The new framework identifies seven priorities, such as brand strategy and cybersecurity, each to be considered from an ethics, risk, audit and governance point of view. The toolkit was designed to mimic how board committees and organisations typically approach ethics, policy and risk.

Artificial intelligence promises to solve some of the most pressing issues faced by society, from ensuring fairer trade and reducing consumer waste, to predicting natural disasters and providing early diagnosis for cancer patients. But scandals such as big data breaches, exposed bias in computer algorithms and new solutions that threaten jobs can destroy brands and stock prices and irreparably damage public trust.

Facebook’s 2018 Cambridge Analytica data crisis opened the world’s eyes to the risks of trusting the private sector with detailed personal data. The fact that an otherwise unknown London analytics company had harvested data on 50 million Facebook users without their permission not only drew a public backlash, it sent Facebook’s market value plunging by $50 billion within a week of the episode being reported.

In addition to Facebook’s Cambridge Analytica woes, there have been a number of high-profile revelations that artificial intelligence systems used by both government and business have applied hidden bias when informing decisions that affect people’s lives. These include a number of cases where algorithms used by big companies in recruitment have been biased based on the race or gender of job candidates.

There is some awareness that new technologies can wreak havoc if not used carefully, but there isn’t enough, and corporate boards can find it hard to predict where a pitfall may present itself on a company’s path to becoming more tech-savvy.

Despite all the warning signs, there remains an “it can’t happen here” attitude. Customer experience company Genesys recently asked more than 5,000 employers in six countries for their opinions on AI and found that 54 per cent were not concerned about its unethical use in their companies.

Many corporations have established AI working groups, ethics boards and special committees to advise on policy, risks and strategy. A new KPMG survey found that 44 per cent of businesses surveyed claimed to have implemented an AI code of ethics, and another 30 per cent said that they are working on one. Since AI is an emerging technology, new risks are emerging too. Any company could use a road map.

One of today’s biggest AI risks for corporations is the use of, as the WEF calls them, ‘inscrutable black box algorithms’. Simply put, most algorithms work in a manner understood only by the programmers who developed them. These algorithms are often considered valuable intellectual property, further reinforcing the need to keep their inner workings secret and thus removed from scrutiny and governance.

There are already a number of collaborations, groups and institutes that are helping to address some of these issues. The non-profit coalition Partnership on AI, founded by tech giants Amazon, DeepMind, Facebook, Google, IBM and Microsoft, was established to research best practices to ensure that AI systems serve society. Last year, Harvard Kennedy School’s Belfer Center for Science and International Affairs convened the inaugural meeting of The Council on the Responsible Use of Artificial Intelligence, bringing together stakeholders from government, business, academia and society to examine policymaking for AI usage.

However, the speed and ubiquity of artificial intelligence mean that even accurately defining certain risks remains a challenge, and even the best policies must allow for change. The good news is that the WEF’s new AI toolkit is available free of charge, so it could prove to be of immediate value to commercial policymakers the world over.

This story was first published in The National.