artificial intelligence Archives — Carrington Malin

October 5, 2023
f16-us-air-force-1280.png

Like it or not, autonomous systems are going to play an increasing role in defence capabilities across all domains. Autonomous aerial systems, from small, low-cost drones through to AI pilot systems, could soon be relied on heavily by defence programmes.

Over the past twenty years, efforts have intensified to create artificial intelligence pilots for military use. Last month, US Deputy Secretary of Defense Kathleen Hicks revealed that the US military’s Replicator programme plans to deploy thousands of autonomous weapons systems, including autonomous unmanned aircraft systems (UAS), over the next 18 to 24 months. The programme is primarily focused on lower-cost ‘attritable’ systems that the Pentagon would be happy to sacrifice in return for achieving a mission’s objective.

Using autonomous pilots for fighter jets that can cost upwards of $100 million each is another matter entirely. However, there are clear future advantages to doing so, including enhanced combat capabilities, faster response to threats, strategic advantage over other militaries, and reduced risk to human pilots.

Automated systems, of course, have been used in commercial aviation for many years. Commercial autopilot systems ultimately give pilots more control and enhanced ability to respond to new demands in the air, whilst autothrottle systems help optimise both throttle control and fuel economy. The capabilities of autopilot systems have grown dramatically over the past decade, as better sensors have become available. A modern jet airliner can have tens of thousands of electronic sensors, collecting terabytes of data during a flight.

Air forces clearly have different requirements to airline fleets, but there are lessons to be learned from the commercial sector. Introducing new autopilot systems carries many attendant risks, as was illustrated by Boeing’s introduction of the Maneuvering Characteristics Augmentation System (MCAS) into commercial aircraft.

Military interest in AI-piloted planes began to grow in the 2010s, culminating in the US Air Force’s use of an AI algorithm to help co-pilot a successful test flight of a Lockheed U-2 Dragon Lady high-altitude reconnaissance aircraft in December 2020. The ‘ARTUµ’ algorithm was developed by a small team of researchers at the Air Combat Command’s U-2 Federal Laboratory.

In 2020, the US Defense Advanced Research Projects Agency (DARPA) announced that, in a simulated dogfighting competition run by its Air Combat Evolution (ACE) programme in an F-16 simulator, an AI model defeated an experienced F-16 fighter pilot.

Now things have begun to get even more interesting. DARPA revealed earlier this year that an AI model developed under ACE had successfully piloted an F-16 in flight. The AI pilot flew multiple flights in a specially modified F-16, over several days of tests at Edwards Air Force Base in California.

Notwithstanding successful tests such as these, it is still premature to announce the advent of autonomous AI combat pilots. Tests conducted to date have all been carried out in controlled conditions, while in real combat scenarios many different operational factors apply. We can expect to see increasingly sophisticated AI applications enter the cockpit in the near term, but fighter pilots shouldn’t worry about their jobs just yet.

This article first appeared in Armada International

Image credit: U.S. Air Force.


September 4, 2023
think-your-ai-content-is-fooling-everyone-1280.png

So, you think that your AI generated content is fooling everyone? Think again.

If you are happily creating articles, posts and comments using Generative AI, feeling safe in the knowledge that no one will ever guess that your content is ‘AI’, dream on! Your audience is already developing a sixth sense to instantly tell human and GenAI content apart.

I’m telling you this to be kind. The more people that dismiss what you share as ‘fake’ AI content, the more chance there is that you are harming, not enhancing, your personal brand.

So, as a well-known advocate of AI solutions and an intensive user of AI, why am I, of all people, telling you to be wary of posting AI generated content? To explain further, we have to consider the dynamics of today’s social media, the value of ‘Likes’ and how digital content impacts your ideal audience.

A common misconception is that more Likes equal greater validation of the content that you share. In reality, people Like your content for different reasons, and the volume of Likes often has more to do with how the platform’s algorithm treats your piece of content than with its particular merits.

So, who Likes your posts and articles?

    • The people who know you best, or consider themselves to be your fellow travelers on the same journey, may give your content a Like purely to be supportive.
    • People who follow the topics that you post about may Like your content because it’s within their main area of focus, but that doesn’t mean they have to read it!
    • Similarly, people who use LinkedIn or other social media to keep up to date with the news may Like your content if it delivers an interesting headline.
    • If you tag people or companies, then you may receive Likes in return, on the basis that all publicity is good publicity.
    • If your followers include a lot of coworkers, subordinates or students that you teach, you may receive a lot of Likes, either because (hopefully!) they like the job that you’re doing, or because they are seeking recognition themselves.
    • Then there are those that Like your content because they have read it, enjoyed reading it, or have derived value from doing so.

Make no mistake, that last category (the readers) is the minority!

If you’re a LinkedIn user, then you will know that LinkedIn gives you the option to react to a post using different Likes (Celebrate, Support, Love, Insightful and Funny). I can’t count the number of times that I’ve seen the ‘Insightful’ Like used on posts with an erroneous or broken link to the content that people apparently found ‘Insightful’! Social media is a world where Love doesn’t mean love, Insightful doesn’t necessarily mean insightful, and Like doesn’t even have to mean like! In itself, the value of a Like is nothing.

Another factor to consider in assessing how well your content is doing is the fact that your biggest fans may not react on social media at all! I frequently get comments about my articles, newsletters and reports via direct messages, WhatsApp, or offline during ‘real life’ conversations, from people that never, or almost never, Like, comment or share on LinkedIn. Typically, these are my most valuable connections, such as senior decision makers, subject matter experts and public figures. It’s sometimes frustrating that they don’t Like or comment, but it’s far more important and valuable to me that they take the time to read my content.

AI generated content

So, returning to our topic of AI generated content, what is your measure for how successful your content is?

This obviously depends a lot on your own goals for creating that content to begin with. My goal, for example, is typically to provide value and insight to my targeted senior decision makers and subject matter experts. These are my most valuable readers and their time is scarce, so I’m careful to ensure that their time will be well spent reading my posts and articles.

Let’s consider your own goals, audiences and approach to content for a moment. Who are you trying to impress? What will encourage your top target audience to read your content and return to do so again and again? What is the key message that you want to reinforce? And what forms of content is your key audience most likely to consume and respond to?

Now, the big question is where does AI content fit in?

What’s the impact of one of your most valuable connections finding that your latest post or article is actually quite generic and clearly not written by you? Will that realisation affect how your connection thinks about you? And is that connection now more likely, or less likely, to spend time reading your content in future? It probably depends on the format and purpose of that piece of content, and how appropriate the information used in it is for the reader in question.

However, let me be clear before we proceed further, in case it sounds like I am dismissing all AI generated content: I am not. I use AI generated content in my work all the time, although rarely in the form it is first generated. I routinely edit and re-write most pieces of AI content.

What value does GenAI written content have?

Today’s AI generated text content (and I say today’s, because the quality and value of AI content is constantly changing) has different value depending on the format, purpose and type of information offered.

Format

  • Due to the way that generative AI models work, the shorter the piece of text, the more convincing and accurate they are. They can generate full blog posts and articles to an average quality, but the longer the piece, the more apparent it becomes that the article lacks the nuance that a human writer would add. Meanwhile, where context is needed, most generative AI chat services draw primarily on content that may be months or years old. Finally, since AI creates articles based on other articles that have been written by many other people (including both good writers and poor ones), originality is not GenAI’s strong suit.

Purpose

  • The usefulness of GenAI written text to you and your readers is going to depend heavily on the purpose of the content or communication. If your purpose is simply to inform, then GenAI provides a fast and efficient way of organising information and communicating key points. At the other end of the scale, if your purpose is to share new thinking, or influence the opinion of others, then there are definite pros and cons. If your purpose in using GenAI is to win recognition for being a great writer, then please, just don’t do it!

Information

  • What type of information you wish to include in your content is also key to the value and usefulness that GenAI can provide. For example, if you wish to present an argument in favour of something, is this a logical argument based purely on the facts, or an opinion-led argument with few facts to rely on? Does the content you wish to share come from the public domain, or from the beliefs and values that you hold inside? AI is clearly going to be much better equipped to create content without opinion, beliefs or values. Where such thinking is important, GenAI needs careful input, guidance and revision, if it is to create content that is close to your own opinion, beliefs and values.

If you’ve been following my thinking so far, then it will probably be obvious to you where the cracks begin to appear when you start publishing AI generated content, or try to pass it off as your own.

What are the risks?

Now ask yourself, where are the biggest risks for your personal brand in using GenAI to create your content and communications? What’s the worst that can happen if your contacts, connections, colleagues, peers and readers identify your content as AI generated? Again, I believe it depends entirely on the context.

As an avid consumer of content via LinkedIn, my problem with AI generated content is two-fold: emotional and logical.

Why do I have an emotional problem with AI content? When I open and read a short post, a long post, or an article from a connection, I feel that I have some measure of vested interest. So, when I read their insight or opinion, only to find that it’s GenAI, I often feel a negative emotional response. My immediate reaction is that ‘this is fake’. It’s emotional because I often take the time to read such content to learn about, or to understand the other person’s opinion. So, it’s basically disappointment.

Secondly, there are a number of logical problems that I now have when discovering GenAI content out of context, or being passed off as original thinking. If I consider the content to be valuable, then I treat it the same as human generated content. Why wouldn’t I? However, life is rarely that simple! Here are some of the new social media quandaries that I come up against:

  • When someone that I know and respect posts GenAI content believing that it will pass as their own original written content, and it clearly fails to do so, should I tell them? Should I Like their content, even though I don’t? Do I have time to explain to them carefully and respectfully what the problem is?
  • When someone posts an AI generated comment on one of my social media posts, blogs or articles, that simply repeats a fact from my content without sharing an opinion, posing a question or adding value, should I Like it? How should I reply? Or should I delete it to save embarrassment all round?
  • When someone messages me and asks me to endorse a piece of content that looks like it was generated by ChatGPT in about 60 seconds, what do I say to them?

For what it’s worth, my own personal guidelines for using AI are to be as honest and transparent about my GenAI usage as I can. So, anything I use that has a significant element of GenAI created content in it, I now share with a credit or disclaimer.

It is true that GenAI can prove valuable to people who are not great writers, but it’s also true that only by gaining experience as a writer or editor will you have the tools to edit AI text content to be more human, and to represent your personal brand better.

The famous horror-fiction writer Stephen King says this in his book about writing:

“If you don’t have time to read, you don’t have the time (or the tools) to write.”

This is true of any form of writing.

When you’re learning to write better, writing ‘does what it says on the tin’. Reading and writing more comments will make you better at writing comments; reading and writing more social media posts will make you better at writing posts; while reading and writing more long-form articles will make you better at that. And each of those things will make you better equipped to more effectively use, edit and filter AI generated content to build your personal brand, rather than dilute it.

If you believe that you can skip that learning process and automate your content generation, without becoming its thoughtful moderator, then your GenAI content is probably only fooling one person: yourself.



April 16, 2023
AGBI-story-1280x719.png

Sam Altman is expected to meet policymakers in Dubai, as part of his world tour, but we’re only one big scandal away from a global crackdown.

The OpenAI CEO is expected to visit Dubai as part of his 16-stop global tour in May-June to meet with customers, developers and policymakers. Since Altman’s visit follows the Elon Musk-backed open letter calling for a halt to further development and training of LLMs like GPT, and Italy’s ban on ChatGPT at the end of March, the question of AI regulation is, no doubt, being pushed quickly up regulators’ agendas.

Arabian Gulf Business Insight (AGBI) asked me why he is making this world tour and why now is the right time to talk to policymakers. In short, time is of the essence!

Italy’s ChatGPT ban, over concerns about data privacy, lack of age restrictions and ChatGPT’s potential to misinform people at scale, provides a clear signal that OpenAI needs to open up channels with regulators worldwide to ensure that they feel they understand ChatGPT and the company’s plans a little better. Other regulators share these concerns, and it is a significant challenge for them to keep abreast of how this fast-moving technology will affect existing laws, rights and data regulations.

If OpenAI expects to keep releasing new, more powerful versions, it needs to help set expectations now. So, it would be natural to expect dialogue between OpenAI and regulators, with OpenAI sharing what regulators can expect from its platforms, and regulators sharing their needs and concerns.

The more regulators feel ill-informed, or that laws are being ignored, the greater the risk of further bans.

As with any new, little understood technology, we’re only one big scandal away from a crackdown. So, it’s well worth OpenAI’s time to put some work now into keeping regulators informed.

You can read UAE-based journalist Megha Merani‘s full story in AGBI here


January 31, 2023
chatgpt-image.png

Will ChatGPT take our jobs? The truth is that it gets an awful lot right and it gets an awful lot wrong!

Communications professionals, writers, journalists and researchers all seem to be asking if ChatGPT and other new AI platforms will take their jobs. I’ve been using AI writing apps for a couple of years now to help overcome ‘writer’s block’, sketch out ideas and, well, just for fun! So, what do I think? Will AI take my job? Here’s a little analysis that I did in January 2023.

A recent survey of UAE residents by communications consultancy duke+mir and research firm YouGov found that 55% were concerned about losing their job to AI (rising to 66% in the under-25 age group). It’s a notion that tech firms and some governments have done their best to deride over the last five years, but the evidence is very clear: artificial intelligence allows companies to do more with less – and that means fewer people too.

It is true that new technologies, including AI, are also creating new categories of jobs, but that’s not much consolation if you don’t have, or can’t acquire, the new skills required to take one of those jobs. That said, there are many things that AI simply cannot do, because those jobs require human thinking, dexterity or other nuances that are particular to human beings.

However, for some, the arguments are a little too academic. Most people don’t know much about how AI works, how it’s used, or what the benefits and risks might be. Many have relied on tech firms, scientists, journalists and government spokespeople for insights on how AI technologies will affect them. Now, suddenly, this may have changed a little.

That change has come with the public beta release of the latest version of OpenAI’s conversational AI chatbot, ChatGPT. Now, anyone with an Internet connection can play with AI, discover its potential, explore its boundaries and ask it about the future.

So, just for fun, I asked ChatGPT to write this week’s weekly article. Will it take my job? You be the judge!

I gave ChatGPT the following instruction:

[Screenshot: the instruction given to ChatGPT]

This is the first answer that the chatbot returned. It is unedited. I’ve fact checked this and put my comments under the article.


The Gulf Cooperation Council (GCC) states, which include Bahrain, Kuwait, Oman, Qatar, Saudi Arabia, and the United Arab Emirates (UAE), have been actively investing in and adopting artificial intelligence (AI) and other emerging technologies in recent years. These efforts are driven by a desire to diversify their economies, improve efficiency and productivity, and enhance the overall competitiveness of their countries.(1)

One key trend in the adoption of AI in the GCC states is the increasing focus on the development of smart cities. For example, the UAE has launched several initiatives aimed at creating smart cities, such as the Dubai Smart City project and the Abu Dhabi Smart City project. These initiatives aim to use technology to improve the quality of life for citizens, enhance urban planning, and create new business opportunities.(2)

Another trend is the use of AI in the healthcare sector. For example, in Qatar, the government has launched the Qatar Genome Project, which aims to use genomics and AI to improve the diagnosis and treatment of diseases. Additionally, Saudi Arabia has launched the Misk Medical AI Program, which aims to use AI to improve the efficiency and effectiveness of healthcare in the country.(3)

Another key trend is the use of AI and other emerging technologies in the financial sector. For example, the Central Bank of Bahrain has launched the Bahrain FinTech Bay, which aims to promote the development of fintech and other innovative technologies in the country. Additionally, the Dubai International Financial Centre (DIFC) has launched the Dubai Blockchain Strategy, which aims to use blockchain technology to improve the efficiency and security of financial transactions in the UAE.(4)

In addition to these specific examples, GCC countries have also been investing in other areas such as energy, transportation, and retail. For instance, in Oman, the government has been working with companies like Siemens to implement AI-enabled systems in the oil and gas sector. In Saudi Arabia, the government has been working to develop autonomous vehicles and drones for transportation and delivery purposes.(5)

Overall, the adoption of AI and other emerging technologies in the GCC states is being driven by the desire to improve efficiency and productivity, diversify their economies, and enhance the overall competitiveness of their countries. These efforts are also supported by significant investments in research and development, as well as the development of supportive ecosystems, such as innovation hubs and accelerators. As a result, we can expect to see continued growth in the adoption of AI and other emerging technologies in the GCC states in the coming years.(6)

It is worth noting that the GCC countries have a lot of potential to be the leader in AI, but there is also a lot of work that needs to be done to overcome challenges such as lack of data privacy laws and shortage of local talent. It is important for the GCC governments to work on these challenges and to also focus on building an ecosystem that supports innovation, investment and collaboration in AI.(7)


Fact check & analysis

  1. Paragraph one is correct and provides a useful introduction.
  2. Paragraph two provides a useful example and is essentially correct, but contains a little misinformation. The Dubai Smart City project was announced in 2014, but has since developed into several initiatives with different names, such as Smart Dubai, which is now Digital Dubai. I don’t remember there ever being an “Abu Dhabi Smart City” project by name.
  3. Paragraph three is also useful, although the language is repetitive. However, again, ChatGPT inserts its own names. The Qatar Genome Project is the name of an actual project. As far as I know, “Misk Medical AI Program” is not, although Saudi Arabia’s Misk Foundation does have medical programs.
  4. Paragraph four contains misinformation. Again, the topic is useful in the context of the article, but the Dubai Blockchain Strategy was not launched by DIFC; it was announced by Smart Dubai and the Dubai Future Foundation in 2017.
  5. These paragraphs are now following a formula, but even so paragraph five is a useful contribution to the report. That said, the claim that Saudi Arabia is “working to develop autonomous vehicles” is overstated and possibly misleading.
  6. Paragraph six is correct, although I would prefer if it had also mentioned some examples to make it more specific.
  7. Paragraph seven is correct and useful.

So, should I give up, or stick around for a while longer?


This story was originally published on LinkedIn.


January 24, 2022
tech-in-2022-zdnet-24Jan22.png

GCC tech in 2022: another big year for innovation? Last year saw a long list of government initiatives in the GCC to accelerate digital transformation, encourage innovation in government and create policies to encourage Fourth Industrial Revolution industries. Recently, I was interviewed by ZDNet’s Damian Radcliffe for his article on ‘the biggest trends shaping the digital future of the Middle East‘.

I have no doubt that 2022 will prove to be an exciting year for artificial intelligence and emerging technologies in the Middle East. There have been so many government initiatives, policy moves, proof of concepts and trials across public and private sectors, not to mention investments in new ventures and R&D over the past year, it’s difficult to simply keep track of the developments already in motion! However, I believe we’ll have plenty of new ventures, projects and initiatives to look forward to in 2022 too.

However, beyond the addition of more impressive-sounding new government initiatives, I believe that we’re going to see more real evidence of initiatives and programmes set in motion during the past 2-3 years bearing fruit. For example, the UAE published its first AI strategy in 2017. Now, nearly five years on, the strategy (which has been updated a number of times) has informed the launch of new initiatives across UAE education, skills development in government, investment, new projects and new organisations, public services and regulation. In Saudi Arabia, the progress made on data and AI at a government level has paved the way for a new wave of government and private sector initiatives, companies, partnerships and investments.

As Damian’s article helps to illuminate, there are rapid changes taking place across the Middle East’s tech and telecom ecosystems, making the region an exciting place to be at the moment.

You can read Damian’s full article on Tech in 2022 in the Middle East region here.


July 10, 2021
nine-deepfake-video-predictions-1280x688.jpg

The UAE Council for Digital Wellbeing and the UAE National Programme for Artificial Intelligence this week published a Deepfake Guide to help raise social awareness about the technology. But what are deepfakes, and are they all bad? Here are my top nine deepfake video predictions!

Deepfake videos first hit the headlines six years ago, when Stanford University created a model that could change the facial expressions of famous people in video. In 2017, University of Washington researchers released a fake video of former President Barack Obama making a speech, using an artificial intelligence neural network model. It wasn’t long before consumers could create their own deepfake videos using a variety of tools, including deepfakesweb.com’s free online deepfake video application. Deepfake videos of presidents, actors, pop stars and many other famous people now circulate routinely on social media.

So, what’s all the fuss about? Simply, that fakes of any nature can be used to deceive people or organisations with malicious intent or for ill-gotten gains. This could include cyber-bullying, fraud, defamation, revenge-porn or simply misuse of video for profit. It perhaps comes as no surprise that a 2019 study found that 96 per cent of all deepfake videos online were pornographic.

However, using technology to produce fakes is not a new thing. The swift rise of the photocopier in the 1970s and 80s allowed office workers all over the world to alter and reproduce copies of documents, letters and certificates. The ease with which printed information could be copied and altered, prompted changes in laws, bank notes, business processes and the use of anti-counterfeit measures such as holograms and watermarks.

Like photocopying, deepfake video technology is getting better and better at what it does as time goes on, but at a much faster speed of development. This means that the cutting edge of deepfake technology is likely to remain ahead of AI systems developed to detect fake video for a long time to come.

Any technology can be used for good or evil. However, in a few short years deepfake technology has got itself a terrible reputation. So, what is it good for? My take is that deepfake video technology – or synthetic video for the commercially-minded – is just one aspect of artificial intelligence that is going to change the way that we use video, but it will be an important one. Here are my top nine deepfake video predictions.

1. Deepfake tech is going to get easier and easier to use

It’s now quite easy to create a deepfake video using the free and paid-for consumer apps that are already in the public domain. However, as the world learns to deal with deepfake video, the technology will eventually be embedded into more and more applications, such as your mobile device’s camera app.

2. Deepfake technology’s reputation is going to get worse

There’s an awful lot of potential left for deepfake scandal! The mere fact that developers are creating software that can detect deepfake video means that the small percentage of deepfake videos that cannot be identified as fake may be seen as having a virtual rubber stamp of approval! And believing that an influential deepfake video is authentic is where the problem starts.

3. Policymakers are going to struggle to regulate usage

Artificial intelligence is testing policymakers’ ability to develop and implement regulation like no other force before it. The issues are most obvious when deepfakes are used for criminal activity (the courts are already having to deal with deepfake video). In the near future, regulators are also going to have to legislate on the rise of ‘legitimate’ use: seamlessly altering video for education, business, government, politics and other spheres.

4. One:one messaging

One of the most exciting possibilities is how deepfake modelling might be used to create personalised one:one messaging. Today, it’s possible to create a video of yourself voicing a cute animation via an iPhone. Creating and sending a deepfake video of your real self will soon be as easy as sending a WhatsApp message. If that sounds too frivolous, imagine that you’re stuck in a meeting and want to send a message to your five-year-old.

5. Personalisation at scale

As the technology becomes easier to use and manipulate, and as the processing power becomes available to automate the process further, we’re going to be able to create extremely lifelike deepfake videos – or synthetic videos, if you prefer – at scale. London-based Synthesia is already testing personalised AI video messages. That will open the doors for marketers to personalise a new generation of video messages at scale and deliver a whole new experience to consumers. Imagine if every new Tesla owner received a personal video message from Elon Musk (well, ok, imagine something else then!).

6. Deepfakes on the campaign trail

As marketers get their hands on new tools to create personalised video messages for millions, there may be no stopping political parties from doing the same. Has your candidate been banned from social media? No problem! Send out a personalised appeal for support directly to your millions of supporters! In fact, this is one use that I could see being banned outright before it even gets started.

7. Video chatbots

There are already a number of developers creating lifelike synthetic video avatars for use as customer service chatbots, including Soul Machines and Synthesia. As AI generated avatars become more lifelike, the lines between different types of video avatars and AI altered deepfake videos are going to blur. The decisions on what platform, what AI technology, what video experience and what type of voice to add, are going to be based on creative preferences or brand goals, not technology.

8. Deepfake entertainment

Although some deepfake videos can be entertaining, their novelty value already seems to be fading. In the future, whether a deepfake is entertaining or not will depend on the idea and creativity behind it. We seem to be headed for some kind of extended-reality music world, where music, musicians, voices, characters and context are all interchangeable, manipulated by increasingly sophisticated technology. The Korean music industry is already investing heavily in virtual pop stars and mixed reality concerts. Deepfake representations will not be far behind. After all, deepfakes are already reading the news: China’s national news service, Xinhua, has been using an AI news anchor for the past two years.

9. Your personal AI avatar

In 2019, Biz Stone, co-founder of Twitter, and Lars Buttler, CEO of San Francisco-based The AI Foundation, announced that they were working on a new technology that would allow anyone to create an AI avatar of themselves. The AI avatar would look like them, talk like them and act like them, autonomously. In comparison, creating personal avatars using deepfake technology (i.e. manipulating already existing video) could be a lot easier to do. It remains to be seen how long it will take before we have our own autonomous AI avatars, but creating a personal AI video chatbot using deepfake tech is just around the corner!

I hope that you liked my top deepfake video predictions! But, what do you think? Will AI altered deepfakes and AI generated video avatars soon compete for our attention? Or will one negate the need for the other? And how long do you think it will be before consumers are over-targeted by personalised AI generated video? Be the first to comment below and win a tube of Pringles!

This article was first posted on Linkedin. If you’re interested in this sort of thing, I also wrote about deepfakes for The National a couple of years ago. You can find that article here.


April 14, 2021
brazils-national-artificial-intelligence-strategy-1280x587.jpg

Brazil’s national artificial intelligence strategy – a Estratégia Brasileira de Inteligência Artificial, or EBIA, in Portuguese – was published in the Diário Oficial da União, the government’s official gazette, last week. The publication of the Brazilian national AI strategy follows a year of public consultation (via an online platform) and benchmarking (2019-2020), which sought recommendations from the technology industry and academic experts, plus a year of planning and development. The strategy focuses on the policy, investment and initiatives necessary to stimulate innovation and promote the ethical development and application of AI technologies in Brazil, including education and research and development.

As a country that has struggled with both racial and gender equality, it’s no surprise that EBIA makes ethical concerns and policies a priority. Core to the strategy is that AI should not create or reinforce prejudices, putting the onus on developers of artificial intelligence systems to follow ethical principles, meet regulatory requirements and, ultimately, take responsibility for how their systems function in the real world. Ethical principles will also be applied by the government in issuing tenders and contracts for AI-powered solutions and services. The strategy also embraces the OECD’s five principles for a human-centric approach to AI.

Brazil's national artificial intelligence strategy chart

It’s important when reviewing the new EBIA to take into account the Brazilian Digital Transformation Strategy (2018), or E-Digital, which puts in place some foundational policy relevant to AI. E-Digital defines five key goals: 1) promoting open government data availability; 2) promoting transparency through the use of ICT; 3) expanding and innovating the delivery of digital services; 4) sharing and integrating data, processes, systems, services and infrastructure; and 5) improving social participation in the lifecycle of public policies and services. This last goal was clearly embraced in the development of EBIA, which included the year-long public consultation as part of the process.

More to follow on Brazil’s national artificial intelligence strategy…

Download A Estratégia Brasileira de Inteligência Artificial (EBIA) summary (PDF, Portuguese)

Also read about last year’s publication of the Indonesia National AI Strategy (Stranas KA) and Saudi Arabia’s National Strategy for Data & AI.


April 4, 2021
Can-the-GCC-leapfrog-the-West-in-AI.jpg

Budgets fueled by oil revenues and a relative lack of legacy systems offer distinct advantages to technology master planners. So, can the GCC leapfrog the West in AI adoption?

First-time visitors to Saudi Arabia, the United Arab Emirates or any of the other Gulf Cooperation Council (GCC) states cannot fail to be impressed by the pristine international airports, awe-inspiring highways and comprehensive digital government systems.

The region’s state-of-the-art infrastructure and ability to roll out advanced technology owe much not only to oil revenues, but also to the lack of legacy infrastructure and systems. This has allowed the Gulf states to leapfrog many countries in the West, embracing new technologies faster. Now they’re hoping to do the same with artificial intelligence, by adopting AI faster than anyone else.

If the past month’s news is anything to go by, the GCC has recently switched its adoption of emerging technologies up a gear.

UAE reveals 4IR development strategy

Notably, amongst the many tech-related government announcements in March, the UAE last week revealed its new industrial development strategy, ‘Operation 300bn’.  The plan aims to create a new industrial ecosystem consisting primarily of high-tech and Fourth Industrial Revolution (4IR) ventures. The past five years have seen the Emirates push technological innovation to the top of the national agenda. The UAE was one of the first countries to announce a national AI strategy in 2017 and the primary motivation behind its widely publicised Mars Hope Probe is actually to help catalyse innovation at home.

‘Operation 300bn’, which aims to increase the industrial sector’s contribution to the UAE’s GDP from AED 133 billion ($36bn) to AED 300 billion ($82bn) by 2031, confirms the central position of an advanced technology agenda at the heart of the country’s policymaking.

Qatar and Saudi Arabia have also increased their 4IR focus during the past few years, with Saudi Arabia forming the Saudi Data and AI Authority (SDAIA) in 2019 and announcing its national AI strategy in October last year. This month Qatar signaled readiness to proceed with its own AI strategy, forming a new committee to help drive implementation.

Fast-tracking digital transformation

Meanwhile, we’ve seen both public and private sectors increase the rate of adoption of AI and other emerging technologies, further accelerated by the onset of Covid-19.

According to new results released from Dell Technologies’ 2020 Digital Transformation Index, Saudi Arabia and the UAE seem to be accelerating ahead of the rest of the world in implementing digital transformation and cutting-edge technologies. The research found that 90 percent of organisations in the two countries fast-tracked digital transformation programmes last year, ahead of the index’s global benchmark of 80 percent.

This fast adoption is evidenced by news of some massive technology projects that we’ve heard about during the past few weeks.

DP World, Dubai’s multinational logistics and container terminal operator, has now implemented a fully-automated Terminal Operating System for one of its key container terminals in Jebel Ali Port. The home-developed system includes autonomous systems and remote control functionality for all of the facilities in the terminal.

In the energy sector, Aramco Trading Company, or ATC, which is responsible for transporting Saudi Aramco’s oil supplies to worldwide markets, and developer Maana have implemented an AI maritime fleet optimisation application purpose-built for the oil and gas industry. The application runs a digital twin of ATC’s global maritime operations, using AI to automatically optimise schedules across the fleet with a single click and offer scenarios and insights to aid planning.

Desert smart cities

There was also no shortage of smart city news this month, with Kuwait, Saudi Arabia and the UAE, in particular, forging ahead with initiatives to improve the lives of city residents, boost competitiveness and develop urban sustainability. Dubai International Airport’s use of iris scanner ID systems for automated passport control made headlines in February. This month, a Dubai 2040 Urban Master Plan was announced to leverage city planning and new technologies to create greater urban sustainability.

In Kuwait, Hong Kong group Wai Hung and Investment Projects General Trading Company signed a deal to build one million smart parking spaces in nine countries across the Middle East. Meanwhile, in neighbouring Saudi Arabia, the holy city of Makkah (Mecca) is deploying solar-powered smart bins to collect and autonomously sort empty plastic bottles.

Abu Dhabi’s AI powerhouse Group 42 announced a new partnership with the UK’s Serco Group to develop AI and IoT solutions for facilities management and support the outsourcing company’s shift towards data-driven operations. We may well see the future impact of this partnership reach far beyond the Gulf.

In another Group 42-backed initiative announced this month, Abu Dhabi’s first public trial of driverless vehicle services will begin by the end of 2021. Initially, three autonomous vehicles will provide transport services to tourists and residents visiting the Yas Mall area, but the plan is to increase both the coverage and the number of AVs involved during 2022.

Building the Gulf’s first quantum computer

Quantum computing has already been identified as an area of opportunity by GCC states, with a number of quantum computing research groups being formed in universities in Qatar, Saudi Arabia and the United Arab Emirates.  This year, King Abdullah University of Science and Technology (KAUST) has been recruiting for a Professor of Devices for Quantum Technologies, who will ultimately lead the university’s efforts to build quantum devices.

However, in Abu Dhabi, the newly formed Technology Innovation Institute (TII) is already building its own quantum computer at its Quantum Research Centre, in collaboration with Barcelona-based deep-tech startup Qilimanjaro. TII is the applied research arm of the emirate’s Advanced Technology Research Council, which both formulates research policy and orchestrates projects, resources and funding.

It’s research and development ventures such as this that symbolise the latest dreams of Gulf policy-makers. Over the years, the Gulf states have proved to be astute buyers of advanced technology, while taking none of the risks necessary to develop innovation at home.

Today, along with ambitious policies to embrace emerging technologies, build smart cities and leverage AI, there is also now momentum behind policies that actively encourage home-grown technology development. The region’s nascent R&D sector has already become an early beneficiary of this policy shift and it’s a sector that the world can expect to hear much more from during the coming years.

This article was originally published as ‘A letter from the Gulf’ in The AI Journal.


March 31, 2021
impact-of-ai-in-the-middle-east.gif

The impact of AI in the Middle East special report is out from Middle East Economic Digest (MEED), which includes features covering innovation, digital transformation in the construction industry, an update on Qatar’s national artificial intelligence strategy and MEED’s own Digital Transformation Index.

I was name-checked in the ‘Creating an artificial intelligence ecosystem‘ feature by Jennifer Aguinaldo, which explores the region’s quest to drive home-grown innovation and create an AI ecosystem that does more than simply buy technology from overseas. All the national AI strategies developed by countries around the region include plans to encourage innovation, incentivise startups and nurture local research and development. However, it is Saudi Arabia and the United Arab Emirates that have fast-tracked more initiatives, policy and supporting government programmes over the past few years.

As is normally the case with Middle East Economic Digest, the impact of AI in the Middle East report is behind the paywall. If you are a MEED subscriber, you can read Jennifer’s full article here.


January 10, 2021
artificial-intelligence-ethical-concerns-2020.jpg

My New Year’s Linkedin poll about how people’s ethical concerns regarding AI have changed doesn’t prove much, but it does show that 2020 did little to ease those concerns.

Opinions and our level of understanding about artificial intelligence can vary a great deal from person to person. For example, I consider myself a bit of a technophile and an advocate of many technologies including AI, with a higher than average level of understanding. However, I harbour many concerns about the ethical application, usage and the lack of governance for some AI technologies. My knowledge doesn’t stop me having serious concerns, nor do those concerns stop me from seeing the benefits of technology applied well. I also expect my views on the solutions to ethical issues to differ from others. AI ethics is a complex subject.

So, my intention in running this limited Linkedin poll over the past week (96 people responded) was not to analyse the level of concern that people feel about AI, nor the reasons behind it, but simply to ask whether the widespread media coverage of AI during the pandemic had either heightened or alleviated people’s concerns.

The results of the poll show that few people (9%) felt that their ethical concerns about AI were alleviated during 2020. Meanwhile, a significant proportion (38%) felt that 2020’s media coverage had actually heightened their ethical concerns about AI. We can’t guess the level of concern among the third and largest group – the 53% that voted 2020 ‘hasn’t changed anything’ – however, it’s clear that 2020 media coverage about AI brought no news to alleviate any concerns they might have either.

Artificial intelligence ethical concerns poll 2020

Media stories about the role of AI technologies in responding to the coronavirus pandemic began to appear early on in 2020, with governments, corporations and NGOs providing examples of where AI was being put to work and how it was benefiting citizens, customers, businesses, health systems, public services and society in general. Surely, this presented a golden opportunity for proponents of AI to build trust in its applications and technologies?

Automation and AI chatbots allowed private and public sector services, including healthcare systems, to handle customer needs as live person-to-person communication became more difficult to ensure. Meanwhile, credit was given to AI for helping to speed up the data analysis, research and development needed to find new solutions, treatments and vaccines to protect society against the onslaught of Covid-19. Then there was the wave of digital adoption by retail companies (AI-powered or not) in an effort to provide digital, contactless access to their services, boosting consumer confidence and increasing usage of online ordering and contactless payments.

On the whole, trust in the technology industry remains relatively high compared to other industries but, nevertheless, trust is being eroded, and it’s no surprise that new, less understood and less regulated technologies such as AI are fuelling concerns. Fear of AI-driven job losses is a common concern, but so are privacy, security and data issues. However, many people around the world are broadly positive about AI, in particular those in Asia. According to Pew Research Center, two-thirds or more of people surveyed in India, Japan, Singapore, South Korea and Taiwan say that AI has been a good thing for society.

Since the beginning of the pandemic, AI’s public image has had wins and losses. For example, research from Amadeus found that 56% of Indians believe new technologies will boost their confidence in travel. Meanwhile, a study of National Health Service (NHS) workers in London found that although 70% believed that AI could be useful, 80% of participants believed that there could be serious privacy issues associated with its use. And despite a relatively high level of trust in the US for government usage of facial recognition, the Black Lives Matter protests of 2020 highlighted deep concerns, prompting Amazon, IBM and Microsoft to halt the sale of facial recognition technology to police forces.

Overall, I don’t think that AI has been the widespread buffer to the spread of Covid-19 that it, perhaps, could have been. Renowned global AI expert Kai-Fu Lee commented in a webinar last month that AI wasn’t really prepared to make a decisive difference in combating the spread of the new coronavirus. With no grand victory over Covid-19 to attribute to AI’s role over the past year, it’s understandable that there was no grand victory for AI’s public image either. Meanwhile, all the inconvenient questions about AI’s future and the overall lack of clear policies continue to fuel concerns about AI, with some attracting even greater attention during the pandemic.

This article was first posted on Linkedin.