Articles Archives — Carrington Malin

March 17, 2024
microsoft-venture-accelerators.png

Without methodically planning your sales and marketing, you are just as methodically planning to fail. Yes, even if you’re just an early stage venture with little budget.

I’ve talked to dozens of AI startup founders over the past few years. All of them passionate about their technology. All obsessed with what their technology can do for customers. Most, fortunate enough to have one or two customers that are innovation leaders and have helped them develop their proofs of concept. And, sadly, most of them seem to think that sales and marketing is not a priority. They often believe it’s something that new ventures don’t need to prioritise, or perhaps only once they have more funding than they need for product development. They are wrong.

Before I explain, let me begin by saying that I don’t blame tech founders for not prioritising sales and marketing. I’ve worked in sales and marketing for my whole career and the fundamentals are obvious to me mainly because of that. I can’t code and I don’t know how to build tech products. So, it’s hardly a surprise to me that sales and marketing is not the strong suit of many highly educated, experienced and talented technology developers. Why would it be?

If sales and marketing were both limited to outreach without strategy, both would be highly inefficient and unproductive.

In my experience, coders, software developers and system engineers tend to associate sales and marketing with what they see. Sales might seem like it’s all about salespeople getting out there and meeting people. So, human resources and a lot of talking!

Meanwhile, the most visible output from marketing is content. I’ve met many technical people that confuse creativity and creative content with marketing. They think marketing is advertisements, website copy and social media posts. So, therefore, in their minds, it’s a creative endeavour.

Now, of course those assumptions are mostly true. However, if sales and marketing were both limited to outreach without strategy, both would be highly inefficient and unproductive. If you want a vehicle to travel from point A to point B, it needs a steering wheel, to chart a course, and someone who can oversee the journey, direct actions, maintain the course and accomplish the goals of the trip.

Taking your message to people every day is the lifeblood of sales. But, it has to be the right message and you need to be talking to the right people. Sales requires a strategy that externalises your company proposition, product benefits and your vision effectively. Sales must also position your product appropriately against the alternatives, create a dialogue with customers that identifies needs, and present your product as the best-fit solution for the customer.

Likewise, much of the ongoing effort in a marketing department is spent on developing and running campaigns, and so that does include creative work and content. As with sales success, marketing is dependent on taking the right message to the right people. Marketing also plays a key role in defining and positioning your product in the terms that the market will understand, taking care to position it as relevant in the market context, and taking steps to help create a better environment to sell it in.

Common misconceptions often shape new ventures’ first investments in sales and marketing.

If you believe that sales is talking to people and the main goal is to talk to more people, then perhaps young, energetic sales executives with good communication skills are a good hire. But these sales hires will not help you with aligning the sales proposition, presentations, targeting and pricing of your product in the context of the market.

If you believe that marketing’s job is to make noise and give you more visibility, then you may opt for young hires that know how to post on social media, write media content and organise events. But these hires will not help you with developing a robust strategy that aligns with market segments, ideal customer profiles, competitor pricing and your product vision.

In big global companies, the research, strategy and definition is often managed by different teams from those handling implementation and tactics. This highlights another problem: the availability of those skill sets in your region. Often big US or European tech leaders hold that expertise at their head offices; everything else around the world is mostly execution. So, hiring someone from such firms isn’t necessarily a solution, if their main expertise is effective execution of corporate strategy.

It is not a question of scale or maturity. It is a matter of measure.

Although expertise and experience usually play a key role in shaping sales and marketing, sales is not dependent on salespeople, and marketing is not dependent on having a marketing team, or a big advertising budget. It is not a question of scale or maturity. It is a matter of measure.

Every tech venture should have a written sales and marketing strategy. Depending on the sector, product and stage of the business, this could be a few slides, a single document of just a few pages, or a comprehensive set of plans, budgets, research, roadmaps and schedules. The role of such plans is to inform your day-to-day activities, whilst helping you plan and position your business for the future.

So, why do you need to be this methodical, if you only have a small team with limited sales capability and/or marketing budget?

1 – Many founders are great visionaries, but poor salespeople!

I’ve met many founders that can hold forth for hours on their product, the technology market and their vision for how technology will change the world. Invariably, this impresses other techies, but their pitch is often not focused on what their product brings customers now, or why it meets customer needs today. Founders can prove to be the best salespeople in the company, but what they say, and to whom, needs to be clearly defined (and practised).

2 – Founders who are great salespeople must often close every single sale!

There’s no doubt, most founders get better at selling as they grow their business. They get used to the things that customers ask and enjoy giving them chapter and verse on how the product was developed to meet their challenges. The problem with this is that, often, even long after multiple salespeople have been onboarded, the only person in the company who can close a sale is the founder (because no one else has an effective sales story to tell!). Furthermore, it may be the case that the founder can only do this with a customer that is an innovation leader and so is perhaps more inclined to listen to the founder’s life story in order to understand what the product does. Most customers won’t really want to do this.

3 – Early stage customers are often innovation leaders, but that changes!

A tech venture’s first customers are often innovation leaders, which appear at the beginning of the technology adoption lifecycle. These are the organisations focused on innovation, with tech-savvy specialists and decision makers that are prepared to take risks in order to embrace innovation. Although ideal for ventures with new technology to develop and sell, these customers can prove to be few and far between. Just because you have found a customer that is very focused on your technological advantages and is happy to discuss them for hours, it doesn’t follow that the customer’s counterparts in other companies are similarly disposed. Most customers require that you make it easy for them to understand benefits, win internal support, justify the purchase and buy.

4 – When you’re a new or fast-growing business, everyone should be selling!

We all need to be in sales. When you’re a small business everyone in the company needs to be equipped to help spread the word, position your business in the market and sell your product. That’s how you can ‘punch above your weight’, begin building a reputation in the wider market and also become a sought-after employer. To do this, you will need to have developed a powerful story that everyone can tell (i.e. not just the founders). This is where sales and marketing meets human resources. What you do, how you sell and who your product helps, should be ingrained in your organisation.

The first step in meeting all four of these challenges is to distill what it is that you are selling down to simple plans, goals, messages and proof points. It is also important to develop a strategy to take this to market, in the context of the market environment, competition, prevalent pricing, customer pain points, and the vision for your product.

A technology venture that can build a sustainable, growing business on product development alone, is a very rare animal. Your business only becomes a leader when everyone else says it is. So, first you need an effective strategy to reach and convince the right customers, partners, opinion formers and opinion leaders, whilst growing your revenue. Big budget, small budget, or no budget, that’s what sales and marketing is for.

Image: Microsoft Ventures Seattle Accelerator (Credit: Microsoft)


October 5, 2023
f16-us-air-force-1280.png

Like it or not, autonomous systems are going to play an increasing role in defence capabilities moving forward, across all domains and using a wide variety of automated systems. Autonomous aerial systems could soon be relied on heavily by defence programmes, from small, low-cost drones through to AI pilot systems.

Over the past twenty years, efforts have intensified to create artificial intelligence pilots for military use. Last month, US Deputy Secretary of Defense Kathleen Hicks revealed that the US military’s Replicator programme plans to deploy thousands of autonomous weapons systems, including autonomous unmanned aircraft systems (UAS), over the next 18 to 24 months. This programme, however, is primarily focused on lower-cost ‘attritable’ systems that the Pentagon would be happy to sacrifice in return for achieving a mission’s objective.

Using autonomous pilots for fighter jets that can cost upwards of $100 million each is another matter entirely. However, there are clear future advantages for doing this, including enhanced combat capabilities, speed of response to threats, strategic advantage over other militaries, and reduced risk to human pilots.

Automated systems, of course, have been used in commercial aviation for many years. Commercial autopilot systems ultimately give pilots more control and enhanced ability to respond to new demands in the air, whilst autothrottle systems help optimise both throttle control and fuel economy. The capabilities of autopilot systems have grown dramatically over the past decade, as better sensors have become available. A modern jet airliner can have tens of thousands of electronic sensors, collecting terabytes of data during a flight.

Air forces clearly have different requirements to airline fleets, but there are lessons to be learned from the commercial sector. Introducing new autopilot systems carries many attendant risks, as was illustrated by Boeing’s introduction of the Maneuvering Characteristics Augmentation System (MCAS) into commercial aircraft.

Military interest in AI-piloted planes began to grow in the 2010s, culminating in the US Air Force using an AI algorithm to help co-pilot a successful test flight of a Lockheed U-2 Dragon Lady high-altitude reconnaissance aircraft in 2019. The ‘ARTUµ’ algorithm was developed by a small team of researchers at the Air Combat Command’s U-2 Federal Laboratory.

In 2020, the US Defense Advanced Research Projects Agency (DARPA) announced that, in a simulated dogfighting competition run by its Air Combat Evolution (ACE) programme in an F-16 simulator, an AI model defeated an experienced F-16 fighter pilot.

Now things have begun to get even more interesting. DARPA revealed earlier this year that an AI model developed by ACE had successfully piloted an F-16 in flight. The AI pilot flew multiple flights in a specially modified F-16, over several days of tests at Edwards Air Force Base in California.

Notwithstanding successful tests such as these, it is still premature to announce the advent of autonomous AI combat pilots. Tests conducted to date have all been carried out in controlled conditions, and in real combat scenarios many different operational factors apply. We can expect to see increasingly sophisticated AI apps enter the cockpit in the near term, but fighter pilots shouldn’t worry about their jobs just yet.

This article first appeared in Armada International

Image credit: U.S. Air Force.


September 4, 2023
think-your-ai-content-is-fooling-everyone-1280.png

So, you think that your AI generated content is fooling everyone? Think again.

If you are happily creating articles, posts and comments using Generative AI, feeling safe in the knowledge that no one will ever guess that your content is ‘AI’, dream on! Your audience is already developing a sixth sense to instantly tell human and GenAI content apart.

I’m telling you this to be kind. The more people who dismiss what you share as ‘fake’ AI content, the more chance there is that you are harming, not enhancing, your personal brand.

So, as a well-known advocate of AI solutions and an intensive user of AI, why am I, of all people, telling you to be wary of posting AI generated content? To explain further, we have to consider the dynamics of today’s social media, the value of ‘Likes’ and how digital content impacts your ideal audience.

A common misconception is that more Likes equal greater validation of the content that you share. In reality, people Like your content for different reasons, while the volume of Likes can often have more to do with how the platform’s algorithm treats your piece of content, rather than its own particular merits.

So, who Likes your posts and articles?

    • The people that know you best, or consider themselves to be your fellow travellers on the same journey, may give your content a Like purely to be supportive.
    • People that follow the topics that you post about, may Like your content because it’s within their main area of focus, but that doesn’t mean they have to read it!
    • Similarly, people that use LinkedIn or other social media to keep up-to-date with the news, may Like your content if it delivers an interesting headline.
    • If you tag people or companies, then you may receive Likes in return, just on the basis that all publicity is good publicity.
    • If your followers include a lot of coworkers, subordinates or students that you teach, you may receive a lot of Likes, because either (hopefully!) they like the job that you’re doing, or are seeking recognition, themselves.
    • Then there are those that Like your content because they have read it, enjoyed reading it, or have derived value from doing so.

Make no mistake, that last category (the readers) is the minority!

If you’re a LinkedIn user then you will know that LinkedIn gives you the option to react to a post using different Likes (Celebrate, Support, Love, Insightful and Funny). I can’t count the number of times that I’ve seen the ‘Insightful’ Like used on posts with an erroneous or broken link to the content that they apparently found ‘Insightful’! Social media is a world where Love doesn’t mean love, Insightful doesn’t necessarily mean insightful, and Like doesn’t even have to mean like! In itself, the value of a Like is nothing.

Another factor to consider in assessing how well your content is doing is the fact that your biggest fans may not react on social media at all! I frequently get comments about my articles, newsletters and reports via direct messages, WhatsApp, or offline during ‘real life’ conversations from people that never, or almost never, Like, comment or share on LinkedIn. Typically, these are my most valuable connections, such as senior decision makers, subject matter experts and public figures. It’s sometimes frustrating that they don’t Like or comment, but it’s far more important and valuable to me that they take the time to read my content.

AI generated content

So, returning to our topic of AI generated content, what is your measure for how successful your content is?

This obviously depends a lot on your own goals for creating that content to begin with. My goal, for example, is typically to provide value and insight to my targeted senior decision makers and subject matter experts. Their time permitting, these are my most valuable readers, and so I’m careful to ensure that their time will be well-spent reading my posts and articles.

Let’s consider your own goals, audiences and approach to content for a moment. Who are you trying to impress? What will encourage your top target audience to read your content and return to do so again and again? What is the key message that you want to reinforce? And what forms of content is your key audience most likely to consume and respond to?

Now, the big question is where does AI content fit in?

What’s the impact of one of your most valuable connections finding that your latest post or article is actually quite generic, and clearly not written by you? Will that realisation affect how your connection thinks about you? And is that connection now more likely, or less likely, to spend time reading your content in future? It probably depends on the format and purpose of that piece of content, and how appropriate the information used in it is for the reader in question.

However, let me be clear before we proceed further, in case it sounds like I am dismissing all AI generated content: I am not. I use AI generated content in my work all the time, although rarely in the form it is first generated. I routinely edit and re-write most pieces of AI content.

What value does GenAI written content have?

Today’s AI generated text content (and I say today’s, because the quality and value of AI content is constantly changing) has different value depending on the format, purpose and type of information offered.

Format

  • Due to the way that generative AI models work, the shorter the piece of text, the more convincing and accurate it tends to be. They can generate full blog posts and articles to an average quality, but the longer these are, the more apparent it becomes that the article lacks the nuance that a human writer would add. Meanwhile, where context is needed, most generative AI chat services draw primarily on content that may be months, or even years, old. Finally, since AI creates articles based on other articles that have been written by many other people (including both good writers and poor ones), originality is not GenAI’s strong suit.

Purpose

  • The usefulness of GenAI written text to you and your readers is going to depend heavily on the purpose of the content or communication. If your purpose is simply to inform, then GenAI provides a fast and efficient way of organising information and communicating key points. At the other end of the scale, if your purpose is to share new thinking, or influence the opinion of others, then there are definite pros and cons. If your purpose in using GenAI is to win recognition for being a great writer, then please, just don’t do it!

Information

  • What type of information you wish to include in your content is also key to the value and usefulness that GenAI can provide. For example, if you wish to present an argument in favour of something, is this a logical argument based purely on the facts, or an opinion-led argument with few facts to rely on? Does the content you wish to share come from the public domain, or from the beliefs and values that you hold inside? AI is clearly going to be much better equipped to create content without opinion, beliefs or values. Where such thinking is important, GenAI needs careful input, guidance and revision, if it is to create content that is close to your own opinion, beliefs and values.

If you’ve followed my thinking so far, then it will probably be obvious to you where the cracks begin to appear when you start publishing AI generated content, or try to pass it off as your own.

What are the risks?

Now ask yourself, where are the biggest risks for your personal brand in using GenAI to create your content and communications? What’s the worst that can happen if your contacts, connections, colleagues, peers and readers identify your content as AI generated? Again, I believe it depends entirely on the context.

As an avid consumer of content via Linkedin, my problem with AI generated content is two-fold: emotional and logical.

Why do I have an emotional problem with AI content? When I open and read a short post, a long post, or an article from a connection, I feel that I have some measure of vested interest. So, when I read their insight or opinion, only to find that it’s GenAI, I often feel a negative emotional response. My immediate reaction is that ‘this is fake’. It’s emotional because I often take the time to read such content to learn about, or to understand the other person’s opinion. So, it’s basically disappointment.

Secondly, there are a number of logical problems that I now have when discovering GenAI content out of context, or being passed off as original thinking. If I consider the content to be valuable, then I treat it the same as human generated content. Why wouldn’t I? However, life is rarely that simple! Here are some of the new social media quandaries that I come up against:

  • When someone that I know and respect, posts GenAI content believing that it will pass as their own original written content, and it clearly fails to do so, should I tell them? Should I Like their content, even though I don’t? Do I have time to explain to them carefully and respectfully what the problem is?
  • When someone posts an AI generated comment on one of my social media posts, blogs or articles, that simply repeats a fact from my content without sharing an opinion, posing a question or adding value, should I Like it? How should I reply? Or should I delete it to save embarrassment all round?
  • When someone messages me and asks me to endorse a piece of content that looks like it was generated by ChatGPT in about 60 seconds, what do I say to them?

For what it’s worth, my own personal guidelines for using AI are to be as honest and transparent about my GenAI usage as I can. So, anything I use that has a significant element of GenAI created content in it, I now share with a credit or disclaimer.

It is true that GenAI can prove to be valuable to people that are not great writers, but it’s also true that only by gaining experience as a writer or editor will you have the tools to edit AI text content to be more human and represent your personal brand better.

The famous horror-fiction writer Stephen King says this in his book about writing:

“If you don’t have time to read, you don’t have the time (or the tools) to write.”

This is true of any form of writing.

When you’re learning to write better, writing ‘does what it says on the tin’. Reading and writing more comments will make you better at writing comments; reading and writing more social media posts will make you better at writing posts; while reading and writing more long-form articles will make you better at that. And each of those things will make you better equipped to more effectively use, edit and filter AI generated content to build your personal brand, rather than dilute it.

If you believe that you can skip that learning process and automate your content generation, without becoming its thoughtful moderator, then your GenAI content is probably only fooling one person: yourself.


April 21, 2023
cruise-chevy-2.png

The recent passing of a new autonomous vehicle (AV) law in Dubai highlights the emirate’s tenacious commitment to innovation. But there is much to do before we see driverless taxi services on the roads.

Dubai’s ambition to place itself at the forefront of autonomous transport is exciting because it is unprecedented. However, this very same lack of precedent means it cannot lean on the experiences of others to develop new regulations, technologies or infrastructure.

Dubai’s plans switched into high gear in 2021 when the Roads and Transport Authority (RTA) signed an agreement with General Motors’ autonomous vehicle company Cruise to operate its self-driving taxis and ride-hailing services. The agreement will make Dubai the first city outside the US to offer Cruise’s driverless taxi services, with a goal of putting 4,000 AVs on the road by 2030.

The Dubai Smart Mobility Strategy aims to convert 25 percent of total transportation journeys into trips via self-driving transportation by 2030, including driverless rail transport.

The new Law No.9 of 2023, passed last week by Sheikh Mohammed Bin Rashid Al Maktoum, vice president, prime minister and ruler of Dubai, is not the first legislation to support the emirate’s future driverless vehicle services sector. The law follows resolutions made by the emirate’s executive council, local legislation development led by the RTA and UAE laws issued at a federal level to allow temporary licensing of AV trials.

But on the legal front Dubai must innovate when it comes to regulation. There are simply no comprehensive laws or guidelines in place for the public use of autonomous vehicles anywhere else.

For example, the UK government plans to allow autonomous vehicles on the road by 2025. However, a closer look shows that British regulation so far only covers testing, insurance and liability. Further afield, the EU has implemented a framework for approving level 3 and 4 AVs but offers little detail on their operation on the road. Meanwhile, the world’s most extensive driverless vehicle trials, which are taking place in China and the US, have only been authorised via case-by-case permissions issued by authorities. This is also true of the UAE’s trials.

Given the emirate’s ambitious roadmap, Dubai’s partnership with Cruise is a smart move. The US company was one of two operators to receive a permit to offer paid driverless taxi services in California in 2021. Early last year it began offering services within designated areas of San Francisco, but only between the hours of 22:00 and 06:00. The company now operates about 300 robotaxis across San Francisco, Austin and Phoenix.

However, no new technology is without its teething problems. US media have reported a variety of complaints related to the San Francisco trial, including immobile cars blocking traffic, multiple vehicles stopping in the middle of the road and incorrect signalling. In March a Cruise robotaxi bumped into the rear of a San Francisco bus, prompting an urgent software recall across its entire fleet of AVs.

While Dubai’s modern road infrastructure and digital traffic management are both big pluses for future AV services, the city still has its own unique set of physical and digital characteristics, operational needs and system integration requirements to consider. City-specific behavioural factors also apply relating to motorists, pedestrians and passengers. So, notwithstanding Cruise’s past two years of service trials in California and the five years of testing before that, only so much can be taken for granted as the US AV firm and the RTA work together on Dubai’s driverless services.

At this point in time, the RTA is perhaps the only public transport authority in the world that is developing an ecosystem to put 4,000 driverless taxis on the road in one city.

This is why innovation, not best practice, must drive Dubai’s autonomous vehicle plans.

This article first appeared in Arabian Gulf Business Insight.


March 18, 2023
tii-atrc.jpg

In a week full of big technology news, globally and regionally, the Abu Dhabi government’s leading applied research centre has announced that it has developed one of the highest performing large language models (or LLMs) in the world. It’s a massive win for the research institute and proof positive for Abu Dhabi’s fast growing R&D ecosystem.

Technology firms have gone to great lengths since the public beta of ChatGPT was introduced in November to make it clear that OpenAI’s GPT series of models are not the only highly advanced LLMs in development.

Google in particular has moved fast to try to demonstrate that its supremacy in search is not under threat from OpenAI’s models. On Tuesday, the search giant also announced a host of AI features that will be brought to its Gmail platform, apparently in answer to Microsoft‘s plans to build both its own AI models and OpenAI’s into products across its portfolio. The same day, however, OpenAI revealed that it had begun introducing its most advanced and human-like AI model to date: GPT-4.

Whatever way you look at it, OpenAI’s GPT-3 was an incredible feat of R&D. It has many limitations, as users of its ChatGPT public beta can attest, but it also showcases powerful capabilities and hints at the future potential of advanced AI models. The triumph for OpenAI though, and perhaps the whole AI sector in general, was the enormous publicity and public recognition of AI’s potential. Now everyone thinks they understand what AI can do, even though they are sure to be further educated by GPT-4 and the new wave of applications built on new advanced AI models heading their way.

So, what does this emerging wave of LLMs mean for other research labs and R&D institutions developing their own AI models around the world? To begin with, the bar to entry into LLMs is set high, in terms of both the technology and the budget required.

Advanced AI models today are trained, not born. GPT-3 was trained on hundreds of billions of words, numbers and computer code. According to a blog from San Francisco-based Lambda Labs, training GPT-3 might take 355 ‘GPU-years’ at a cost of $4.6 million for a single training run. Meanwhile, running ChatGPT reportedly costs OpenAI more than $100,000 per day. R&D labs competing in the world of LLMs clearly need deep pockets.
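As a rough, back-of-the-envelope check on those published figures (and taking the Lambda Labs estimate of 355 GPU-years and $4.6 million per training run at face value), the implied hardware rate and the annualised inference bill work out as follows:

```python
# Back-of-the-envelope arithmetic on the GPT-3 cost figures quoted above.
# Both inputs come from the Lambda Labs estimate cited in the article.

GPU_YEARS = 355                 # estimated training time for one run
COST_PER_RUN_USD = 4_600_000    # estimated cost of that single run

gpu_hours = GPU_YEARS * 365 * 24                 # GPU-years -> GPU-hours
usd_per_gpu_hour = COST_PER_RUN_USD / gpu_hours  # implied hourly GPU rate

print(f"{gpu_hours:,} GPU-hours per training run")   # 3,109,800
print(f"${usd_per_gpu_hour:.2f} per GPU-hour")       # ~$1.48

# The reported ~$100,000/day cost of running ChatGPT compounds quickly:
annual_inference_usd = 100_000 * 365
print(f"${annual_inference_usd:,} per year to keep the service running")
```

In other words, even at a bargain cloud rate of roughly $1.50 per GPU-hour, a single training run ties up millions of GPU-hours, and serving the model adds tens of millions of dollars per year on top.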

Then, it perhaps goes without saying, but institutions planning to develop breakthrough LLMs, must also have the right talent. And of course, global competition for the type of top researchers needed to develop new AI models is fierce, to say the least!

Just as critical as having the right talent and the right budget is having the right vision. R&D institutions are often bogged down in bureaucracy, while most tech firms are, necessarily, focused on short-term rewards. In this game, to win, the players must have the vision to invest in developing AI models that are ‘ahead of the curve’ and the commitment to stick with it.

Therefore, for those following Abu Dhabi’s R&D story, it comes as no great surprise that the Technology Innovation Institute (TII) has been investing heavily in the development of LLMs.

Formed in 2020 as the applied research arm of the Abu Dhabi Government’s Advanced Technology Research Council, TII was founded to deliver discovery science and breakthrough technologies that have a global impact. An AI research centre was created in 2021, now called the AI and Digital Science Research Centre (AIDRC), to both support AI plans across the institute’s domain-focused labs and develop its own research. Overall, TII now employs more than 600 people, based at its Masdar City campus.

This week TII announced the launch of Falcon LLM, a foundational large language model with 40 billion parameters, developed by the AIDRC’s AI Cross-Centre Unit. The unit’s team previously built ‘NOOR’, the world’s largest Arabic natural language processing (NLP) model, announced less than one year ago.

However, Falcon is no copy of GPT, nor of other LLMs recently announced by global research labs, and has innovations of its own. Falcon uses only 75 percent of GPT-3’s training compute (i.e. the amount of computing resources needed for training), 80 percent of the compute required by Google’s PaLM-62B and 40 percent of that required by DeepMind‘s Chinchilla AI model.

According to TII, Falcon’s superior performance is due to its state-of-the-art data pipeline: the model was kept relatively modest in size, while being trained on data of unprecedented quality.
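The trade-off TII describes here, a smaller model trained on more and better data, can be illustrated with the widely used rule of thumb that training compute scales as roughly 6 × parameters × tokens. The GPT-3 figures below are published; the Falcon token count is an illustrative assumption on my part, not a TII figure:

```python
# Illustrative training-compute comparison using the common
# C ~= 6 * N * D approximation (N = parameters, D = training tokens).
def train_flops(params: float, tokens: float) -> float:
    """Approximate total training compute in FLOPs."""
    return 6 * params * tokens

gpt3 = train_flops(175e9, 300e9)      # GPT-3: 175B params, ~300B tokens
falcon = train_flops(40e9, 1000e9)    # assumed: 40B params, 1T tokens

print(f"Falcon/GPT-3 compute ratio: {falcon / gpt3:.0%}")
```

Under these assumptions the ratio comes out at roughly 76 per cent, close to the ~75 per cent figure cited above: a 40-billion-parameter model can absorb far more training data than GPT-3 did and still cost less compute overall.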

Although no official third-party ranking has been published yet, it is thought that Falcon will rank among the world’s top five large language models in a classical benchmark evaluation, and may even rank number one for some specific benchmarks (not counting the newly arrived GPT-4).

Large language models have proved to be good at generating text, creating computer code and solving complex problems. The models can be used to power a wide range of applications, such as chatbots, virtual assistants, language translation and content generation. As demonstrated by OpenAI’s ChatGPT public beta testing, they can also be trained to process natural language commands from humans.

Now that the UAE has one of the best and highest-performing large language models in the world, what is the potential impact of TII’s Falcon LLM?

First, like all LLMs, Falcon could be used for a variety of applications. Although plans for the commercialisation of the new model have not been announced, Falcon could provide a platform for both TII and potential technology partners to develop new use cases across many industry sectors and many functional areas. For development teams in the region, it’s a plus to have the core technology developer close at hand.

Second, Falcon also has technological advantages that businesses and government organisations can benefit from, which aren’t available via existing global platforms. The model’s economic use of compute means that it lends itself to use as an on-premises solution far more than models that require more system capacity. In addition, if you’re a government organisation, implementing that on-premises solution means that no national data is transferred outside the country for processing.

Finally, Falcon is intellectual property developed in the UAE and a huge milestone for a research institute less than three years old. The emirate is funding scientifically and commercially significant research and attracting some of the brightest minds from around the world to make it happen.

Of equal importance, if Falcon is anything to go by, at both a government policy level and an institutional level, Abu Dhabi has the vision and the drive to develop breakthrough research.

I don’t think that yesterday’s announcement will be the last we will hear about Falcon LLM! Stay tuned!

This article first appeared in Middle East AI News.


January 31, 2023
chatgpt-image.png

Will ChatGPT take our jobs? The truth is that it gets an awful lot right and it gets an awful lot wrong!

Communications professionals, writers, journalists and researchers all seem to be asking if ChatGPT and other new AI platforms will take their job. I’ve been using AI writing apps for a couple of years now to help overcome ‘writer’s block’, sketch out ideas and, well, just for fun! So, what do I think? Will AI take my job? Here’s a little analysis that I did in January 2023.

A recent survey of UAE residents by communications consultancy duke+mir and research firm YouGov found that 55% were concerned about losing their job to AI (and this went up to 66% in the under 25 age group). It’s a notion that tech firms and some governments have done their best to deride over the last five years, but the evidence is very clear: artificial intelligence allows companies to do more with less – and that means fewer people too.

It is true that new technologies, including AI, are also creating new categories of jobs, but that’s not much of a consolation if you don’t have, or can’t acquire, the new skills required to take one of those jobs. This said, there are many things that AI simply cannot do, because those jobs require human thinking, dexterity or other nuances that are particular to human beings.

However, for some, the arguments are a little too academic. Most people don’t know much about how AI works, how it’s used, or what the benefits and risks might be. Many have relied on tech firms, scientists, journalists and government spokespeople for insights on how AI technologies will affect them. Now, suddenly, this may have changed a little.

With the public beta release of the latest version of OpenAI’s conversational AI chatbot ChatGPT, anyone with an Internet connection can now play with AI, discover its potential, explore its boundaries and ask it about the future.

So, just for fun, I asked ChatGPT to write this week’s weekly article. Will it take my job? You be the judge!

I gave ChatGPT the following instruction:

(The instruction was shared as a screenshot in the original post.)

This is the first answer that the chatbot returned. It is unedited. I’ve fact checked this and put my comments under the article.


The Gulf Cooperation Council (GCC) states, which include Bahrain, Kuwait, Oman, Qatar, Saudi Arabia, and the United Arab Emirates (UAE), have been actively investing in and adopting artificial intelligence (AI) and other emerging technologies in recent years. These efforts are driven by a desire to diversify their economies, improve efficiency and productivity, and enhance the overall competitiveness of their countries.(1)

One key trend in the adoption of AI in the GCC states is the increasing focus on the development of smart cities. For example, the UAE has launched several initiatives aimed at creating smart cities, such as the Dubai Smart City project and the Abu Dhabi Smart City project. These initiatives aim to use technology to improve the quality of life for citizens, enhance urban planning, and create new business opportunities.(2)

Another trend is the use of AI in the healthcare sector. For example, in Qatar, the government has launched the Qatar Genome Project, which aims to use genomics and AI to improve the diagnosis and treatment of diseases. Additionally, Saudi Arabia has launched the Misk Medical AI Program, which aims to use AI to improve the efficiency and effectiveness of healthcare in the country.(3)

Another key trend is the use of AI and other emerging technologies in the financial sector. For example, the Central Bank of Bahrain has launched the Bahrain FinTech Bay, which aims to promote the development of fintech and other innovative technologies in the country. Additionally, the Dubai International Financial Centre (DIFC) has launched the Dubai Blockchain Strategy, which aims to use blockchain technology to improve the efficiency and security of financial transactions in the UAE.(4)

In addition to these specific examples, GCC countries have also been investing in other areas such as energy, transportation, and retail. For instance, in Oman, the government has been working with companies like Siemens to implement AI-enabled systems in the oil and gas sector. In Saudi Arabia, the government has been working to develop autonomous vehicles and drones for transportation and delivery purposes.(5)

Overall, the adoption of AI and other emerging technologies in the GCC states is being driven by the desire to improve efficiency and productivity, diversify their economies, and enhance the overall competitiveness of their countries. These efforts are also supported by significant investments in research and development, as well as the development of supportive ecosystems, such as innovation hubs and accelerators. As a result, we can expect to see continued growth in the adoption of AI and other emerging technologies in the GCC states in the coming years.(6)

It is worth noting that the GCC countries have a lot of potential to be the leader in AI, but there is also a lot of work that needs to be done to overcome challenges such as lack of data privacy laws and shortage of local talent. It is important for the GCC governments to work on these challenges and to also focus on building an ecosystem that supports innovation, investment and collaboration in AI.(7)


Fact check & analysis

  1. Paragraph one is correct and provides a useful introduction.
  2. Paragraph two provides a useful example and is essentially correct, but contains a little misinformation. Dubai Smart City project was announced in 2014, but since then has developed into several initiatives with different names, such as Smart Dubai, which is now Digital Dubai. I don’t remember there ever being an “Abu Dhabi Smart City” project by name.
  3. Paragraph three is also useful, although the language is repetitive. However, again, ChatGPT inserts its own names. The Qatar Genome Project is the name of an actual project. As far as I know, “Misk Medical AI Program” is not, although Saudi Arabia’s Misk Foundation does have medical programs.
  4. Paragraph four contains misinformation. Again the content topic here is useful for the context of the article, but Dubai Blockchain Strategy was not launched by DIFC, it was announced by Smart Dubai and Dubai Future Foundation in 2017.
  5. These paragraphs are now following a formula, but even so paragraph five is a useful contribution to the report. Although, I would say that the claim that Saudi Arabia is “working to develop autonomous vehicles” is overstated and possibly misleading.
  6. Paragraph six is correct, although I would prefer if it had also mentioned some examples to make it more specific.
  7. Paragraph seven is correct and useful.

So, should I give up, or stick around for a while longer?


This story was originally published on Linkedin


September 29, 2022
Dubai-metaverse-North-Star.png

Dubai Metaverse Assembly took place this week, drawing together technology experts, innovators and government policymakers from all over the world and connecting with tens of thousands of virtual attendees. Many visions, opinions and predictions were shared at the event, but on one thing they all agreed: Dubai is leading the way in creating metaverse policy!

The Dubai Metaverse Assembly took place at the Museum of the Future and the Jumeirah Emirates Towers in Dubai this week (28-29 September), drawing 500 technology experts, innovators and government policymakers together from more than 40 prominent local and global organisations. Following on from the Dubai Metaverse Strategy announced in July, the event certainly did its job of capturing the attention of global metaverse and Web3 leaders. More than 20,000 people worldwide watched the event virtually.

Many visions, opinions and predictions were shared, including new metaverse phrases such as Gross Metaverse Product and B2A (standing for business-to-avatar). However, the broad consensus at this event was that the future remains hard to predict! No one really knows how long the technology industry’s grand metaverse concepts will take to come together, when our new and existing virtual worlds will become interoperable, or what regulation is required to govern the metaverse. Meta intimated that many of the concepts being talked about today will take decades to become virtual reality.

One thing that technologists, investors, businesses and finance professionals did seem to agree on, was that the metaverse demands forward-looking policymaking and for governments to proactively set the agenda. They also all agreed that this was exactly what the Government of Dubai was doing, as one delegate put it ‘at light speed’, fast-developing the environment that metaverse, Web3 and DeFi businesses need to create and grow.

Speaking in the presence of His Highness Sheikh Hamdan bin Mohammed bin Rashid Al Maktoum, Crown Prince of Dubai, Chairman of The Executive Council of Dubai and Chairman of the Board of Trustees of the Dubai Future Foundation, H.E. Omar Sultan Al Olama, UAE Minister of State for Artificial Intelligence, delivered a compelling case for why Dubai is well-positioned to become a global metaverse hub. The Dubai Metaverse Strategy will both capitalise on Dubai’s strengths and invite metaverse firms to help shape how Dubai leverages the new technologies and develops new policy.

The Dubai Metaverse Strategy echoes some of the Dubai Blockchain Strategy announced in 2016, which included a goal for all Dubai government documents to be moved to blockchain. In recognition of the fact that there were both huge opportunities and many unknowns, the government formed the Global Blockchain Council. Inviting global blockchain leaders and local innovators to contribute, the new council set about identifying how blockchain technology could help government, specific use cases, and the steps that would need to be taken to move forwards with the emerging technology.

As touched on in last week’s Middle East AI News, the Government of Dubai has worked hard to position itself to better identify, assess and act on upcoming digital and technology opportunities. Its open engagement with the private sector, combined with its agility in policymaking, now allows the emirate to move forward quickly and purposefully with initiatives like the Dubai Metaverse Strategy, together with supporting policy and regulatory frameworks. This, in turn, gives developers, innovators and platforms the confidence to use Dubai as a base to push ahead with their own plans.

Keith Jordan, Vice President Innovation, Mastercard Labs summed it up in comments made on Day Two of the Dubai Metaverse Assembly, “What’s really amazing is that the [Dubai] vision is being set from the top down. That’s really important, because you need to set that North Star.”

And that’s what Dubai’s become extremely good at. Finding that ‘North Star’ and setting the coordinates.

This article was first posted in my weekly Middle East AI News on Linkedin.


September 16, 2022
global-ai-summit-2022.png

The Global AI Summit 2022 brought together thousands of business leaders, policymakers and technology experts in Riyadh this week. Organised by the Saudi Data and Artificial Intelligence Authority, or SDAIA, under the theme of ‘Artificial intelligence for the good of humanity’, the event showcased the breadth and depth of the Kingdom’s National Strategy for Data and AI.

This week saw the second edition of the Global AI Summit take place in Riyadh (September 13th – 15th), bringing together a world-class roster of speakers and thousands of influential delegates under the theme of ‘Artificial intelligence for the good of humanity’.

Some may be quick to dismiss such conference themes for being a little too ambitious, or perhaps not representative of the conference content itself (often speakers at these things end up talking about what they want to talk about, regardless of any theme). However, this week’s Summit seemed to truly serve its stated purpose!

At the same time, the event gave us a glimpse of how the Kingdom’s National Strategy for Data & AI – which was officially launched at the first Global AI Summit in 2020 – is beginning to affect all aspects of Saudi Arabia’s public and private sectors, society and culture, education, R&D and policymaking.

For those close to the Kingdom’s data and AI initiatives, a lot of what was shared was perhaps already common knowledge. However, the Summit clearly went to great lengths to create a platform to engage decision makers and policymakers from across all sectors of business, government and society. Government departments, Saudi businesses and global technology firms were able to showcase a wide range of data and AI projects, many of which have been fast-tracked to begin delivering results at the earliest point possible.

Under the ‘for the good of humanity’ theme, the Summit also gave the government the opportunity to show that it is striving to ensure policymaking takes into account all aspects of local society and positions the Kingdom as a desirable partner for global organisations, businesses, plus other sovereign nations that want to embrace digital transformation.

Alongside the big deals such as SCAI‘s investment in a $207 million SenseTime Middle East and Africa joint venture, and Saudi Aramco‘s new $250 million ‘Global AI Corridor’ initiative, the government announced a new partnership with the I.T.U. to develop a new Global AI Readiness Framework, and that it was joining The World Bank’s Digital Development Partnership (DDP), which helps developing countries leverage digital innovations.

Was the Global AI Summit a big public relations exercise then? Well, of course it was, and by many accounts, a very successful one. However, it is the carefully curated content and discussion of the Summit that made it especially meaningful to the national AI strategy’s broad objectives and to other nations trying to reap the benefits of AI.

When one considers that the organiser of the conference, the Saudi Data and AI Authority (SDAIA), was first formed just three years ago, and that the National Strategy for Data & AI was approved by the Saudi King little more than two years ago, the progress made since is quite astonishing. The speed and effectiveness of government digital transformation programmes, not to mention the enormous investment in digital infrastructure, has also inspired Saudi businesses to ‘step up to the plate’.

Equally impressive is the public support that AI has in the Kingdom of Saudi Arabia. It may come as no surprise that, in sync with many countries worldwide, 77 per cent of Saudi Arabia’s government IT decision-makers are prioritising AI (YouGov/SAP 2022). After all, this is becoming the norm.

What is more unexpected is the level of support for AI technologies amongst the Saudi public. According to a World Economic Forum survey conducted by Ipsos at the end of last year, some 80 per cent of respondents from the Kingdom expected AI to change their lives, compared with less than half of respondents from Canada, Germany, France, the U.K., or from the U.S.

‘Artificial intelligence for the good of humanity’ becomes all the more meaningful, when your whole country is engaged in the objective.

This article was first posted in my weekly Middle East AI News on Linkedin.


July 13, 2021
The-rise-of-AI-diplomacy-expo-2020-dubai.png

AI diplomacy has added a new dimension to international relations and the United Arab Emirates is working hard to build bilateral ties that boost its AI capabilities. It’s also using its AI successes in government and forward-looking AI policies to enhance its international reputation.

The UAE’s appetite for artificial intelligence is plain to see for anyone browsing its daily news media. The country’s leadership was one of the world’s first to identify AI as a top priority for government planning and policy, announcing the UAE’s national AI strategy in 2017 and appointing the world’s first minister for AI.

This focus at the top has helped make AI a priority across business, education and the whole public sector. These days, the prospect of AI seems to be embedded into every government programme, public initiative and commercial deal.

It has also now become commonplace to see diplomatic communiqués that mention artificial intelligence. It seems, AI diplomacy is on the rise. The UAE’s foreign relations meetings and forums over the past few weeks with Azerbaijan, Japan, France, Greece, Luxembourg and others have all touched on artificial intelligence.

The GCC has always relied on foreign technology firms and so technology has always had its place in the region’s diplomacy. Over the past few years, the AI race has brought new focus to technology in foreign policy, in particular after the arrival of the Covid-19 pandemic.

Israel-UAE tech collaboration on the fast-track

The historic agreement to normalise bilateral relations between Israel and the UAE last August became the diplomatic event of the year. The agreement considered many economic, trade and security issues: cooperation in energy, water and developing a coronavirus vaccine were pinpointed at the time. However, much of the engagement between the two countries that followed has been tech-centric. In fact, the UAE’s minister of state for AI, Omar bin Sultan Al Olama, last year called the new technology collaboration between Israel and the UAE an ‘undeniable need’.

Shortly after the accord was signed, Abu Dhabi-based AI firm Group 42 announced the opening of a wholly-owned subsidiary in Israel, following memorandums of understanding with Israel Aerospace Industries (IAI) and Rafael Advanced Defense Systems Ltd., and this year formed a joint venture with the latter. The group’s Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) signed an agreement with Israel’s Weizmann Institute of Science to create joint AI research and development programmes.

A number of other significant partnerships were signed between Israeli and UAE business groups in 2020. Businessman Abdullah Saeed Al Naboodah partnered with Israeli venture capital giant OurCrowd to form Phoenix Capital, a $100 million fund to back technology investments between the two countries. Meanwhile, the UAE’s in-person tech trade events have seen planeloads of Israeli businessmen attend over the past year.

However, it would be a mistake to assume that the UAE is inclined to ‘put all its eggs in one basket’. The Emirates has a long-standing policy of building bilateral relations with almost all countries across the world, but AI diplomacy has given some of these new purpose. Reem bint Ibrahim Al Hashemy, the UAE’s Minister of State for International Cooperation, recently described Finland as one of the country’s most important partners in innovation and artificial intelligence. The UAE also seems to have stepped up engagement with Estonia on the back of the Baltic state’s success in leveraging big data and smart systems.

China-UAE relations bring R&D to the fore

One of the UAE’s most significant technology collaborations is its deepening relationship with China. Although diplomatic relations have been established for many years, President Xi Jinping’s 2018 visit to the UAE seems to have taken relations with China to a new level. This has included cooperation on combatting the coronavirus pandemic (for example, the UAE fast-tracked human trials of Sinopharm’s Covid-19 vaccine), energy, trade, investment, infrastructure, and high technologies including 5G, big data, and AI.

Increasingly, Chinese and Emirati institutions are collaborating on technology, research and innovation. An M.o.U. on higher education was signed with China’s ministry of education during an official visit to Beijing by the Crown Prince of Abu Dhabi Sheikh Mohammed bin Zayed Al Nahyan in 2015. That M.o.U. has paved the way for partnerships between universities and research institutions. Abu Dhabi’s Masdar Institute of Science and Technology (now part of Khalifa University) signed an agreement with Tsinghua University, sometimes referred to as China’s MIT, during the state visit.

Sheikh Mohammed bin Zayed returned to Beijing in 2019, where he met Wang Zhigang, China’s Minister of Science and Technology at Tsinghua University and was awarded an honorary professorship by the university for his role in supporting science, technology and innovation.

Khalifa University of Science and Technology signed a joint research agreement with Tsinghua University, during the same visit. Khalifa University, which opened its Robotics and Intelligent Systems Institute the same year, also has agreements with Georgia Tech, Korea Advanced Institute of Science and Technology (KAIST) and Massachusetts Institute of Technology (MIT), among others.

The UAE mission to Beijing also returned with a deal signed between Abu Dhabi Investment Office (Adio) and Chinese AI giant SenseTime to locate its Europe, Middle East and Africa (EMEA) AI Centre of Excellence in the emirate.

New collaboration opportunities are being reviewed on a regular basis. China’s top scientific institution, the Chinese Academy of Sciences, signed a joint agreement on scientific research earlier this year with the United Arab Emirates University. Meanwhile, a few weeks ago, Dr. Sultan bin Ahmed Al Jaber, UAE Minister of Industry and Advanced Technology, and managing director & group CEO of ADNOC Group, spoke about the two countries’ growing technological cooperation at the 2021 Pujiang Innovation Forum in Shanghai.

Expo 2020 Dubai to grandstand tech cooperation

The biggest opportunity for the UAE to strengthen and build on state-backed technology cooperation this year is Expo 2020 Dubai (1 October 2021 to 31 March 2022), at which China has one of the largest national pavilions. Covering 4,600 square metres of space, the ‘Light of China’ pavilion will highlight the country’s achievements in information, science, technology and transportation. Exhibits include FAST (the five hundred metre aperture spherical telescope), the Beidou satellite navigation system, plus the latest 5G and artificial intelligence technologies.

Expo 2020, which is taking place under the theme ‘Connecting Minds, Creating the Future’, is sure to be the most heavily tech-focused World Expo ever to take place, where visitors will be able to experience how AI, A.R., V.R. and other future technologies can be used for education, green energy, urban mobility and many other fields.

Many of the GCC’s biggest trading partners are naturally using Expo 2020 to showcase their countries’ science and technology capabilities. For example, the USA Pavilion will showcase innovations and technology from urban mobility to quantum computing. Meanwhile, according to Simon Penney, Her Majesty’s Trade Commissioner for the Middle East, artificial intelligence will be a key theme for the United Kingdom Pavilion at Expo 2020 Dubai, reflecting already strong collaboration between the UK and UAE in AI and advanced technologies.

Half of Expo 2020’s twelve premium partners are technology-related, including Chinese AI and Internet of Things company Terminus Technologies, which, as the official robotics partner, will deploy 150 service robots across the expo. Other premium partners include Accenture, Cisco, SAP, Siemens and UAE telecom provider Etisalat.

With 190 countries expected to participate in the six-month long exhibition, we can expect Expo 2020 Dubai to facilitate plenty of opportunities for AI diplomacy, AI collaborations and new AI deals.

This article was originally published as ‘A letter from the Gulf’ in The AI Journal.

Also see the previous ‘Letter from the Gulf’


July 10, 2021
nine-deepfake-video-predictions-1280x688.jpg

The UAE Council for Digital Wellbeing and the UAE National Programme for Artificial Intelligence this week published a Deepfake Guide to help raise social awareness about the technology. But what are deepfakes and are they all bad? Here are my top 9 deepfake video predictions!

Deepfake videos first hit the headlines six years ago when Stanford University created a model that could change the facial expressions of famous people in video. In 2017, University of Washington researchers released a fake video of former President Barack Obama making a speech using an artificial intelligence neural network model. It wasn’t long before consumers could create their own deepfake videos using a variety of tools, including deepfakesweb.com’s free online deepfake video application. Deepfake videos of presidents, actors, pop stars and many other famous people are now commonplace on social media.

So, what’s all the fuss about? Simply, that fakes of any nature can be used to deceive people or organisations with malicious intent or for ill-gotten gains. This could include cyber-bullying, fraud, defamation, revenge-porn or simply misuse of video for profit. It perhaps comes as no surprise that a 2019 study found that 96 per cent of all deepfake videos online were pornographic.

However, using technology to produce fakes is not a new thing. The swift rise of the photocopier in the 1970s and 80s allowed office workers all over the world to alter and reproduce copies of documents, letters and certificates. The ease with which printed information could be copied and altered, prompted changes in laws, bank notes, business processes and the use of anti-counterfeit measures such as holograms and watermarks.

Like photocopying, deepfake video technology is getting better and better at what it does as time goes on, but at a much faster speed of development. This means that the cutting edge of deepfake technology is likely to remain ahead of AI systems developed to detect fake video for a long time to come.

Any technology can be used for good or evil. However, in a few short years deepfake technology has got itself a terrible reputation. So, what is it good for? My take is that deepfake video technology – or synthetic video for the commercially-minded – is just one aspect of artificial intelligence that is going to change the way that we use video, but it will be an important one. Here are my top nine deepfake video predictions.

1. Deepfake tech is going to get easier and easier to use

It’s now quite easy to create a deepfake video using the free and paid-for consumer apps that are already in the public domain. However, as the world learns to deal with deepfake video, the technology will eventually be embedded into more and more applications, such as your mobile device’s camera app.

2. Deepfake technology’s reputation is going to get worse

There’s an awful lot of potential left for deepfake scandal! The mere fact that developers are creating software that can detect deepfake video means that the small percentage of deepfake video that cannot be identified as fake may be seen as having a virtual rubber stamp of approval! And believing that an influential deepfake video is authentic is where the problem starts.

3. Policymakers are going to struggle to regulate usage

Artificial intelligence is testing policymakers’ ability to develop and implement regulation like no other force before it. The issues are most obvious when deepfakes are used for criminal activity (the courts are already having to deal with deepfake video). In the near future, regulators are also going to have to legislate on the rise of ‘legitimate’ use, seamlessly altering video for education, business, government, politics and other spheres.

4. One:one messaging

One of the most exciting possibilities is how deepfake modelling might be used to create personalised one:one messaging. Today, it’s possible to create a video of yourself voicing a cute animation via an iPhone. Creating and sending a deepfake video of your real self will soon be as easy as sending a Whatsapp message. If that sounds too frivolous, imagine that you’re stuck in a meeting and want to send a message to your five-year-old.

5. Personalisation at scale

As the technology becomes easier to use and manipulate, and as the processing power becomes available to automate that process further, we’re going to be able to create extremely lifelike deepfake videos – or synthetic videos, if you prefer – at scale. London-based Synthesia is already testing personalised AI video messages. That will open the doors for marketers to personalise a new generation of video messages at scale and deliver a whole new experience to consumers. Imagine if every new Tesla owner received a personal video message from Elon Musk (well, ok, imagine something else then!).

6. Deepfakes on the campaign trail

As marketers get their hands on new tools to create personalised video messages for millions, then there may be no stopping political parties from doing so too. Has your candidate been banned from social media? No problem! Send out a personalised appeal for support directly to your millions of supporters! In fact, this is one use that I could see being banned outright before it even gets started.

7. Video chatbots

There are already a number of developers creating lifelike synthetic video avatars for use as customer service chatbots, including Soul Machines and Synthesia. As AI generated avatars become more lifelike, the lines between different types of video avatars and AI altered deepfake videos are going to blur. The decisions on what platform, what AI technology, what video experience and what type of voice to add, are going to be based on creative preferences or brand goals, not technology.

8. Deepfake entertainment

Although some deepfake videos can be entertaining, their novelty value already seems to be fading. In the future, whether a deepfake is entertaining or not will depend on the idea and creativity behind it. We seem to be headed for some kind of extended reality music world, where music, musicians, voices, characters and context are all interchangeable, manipulated by increasingly sophisticated technology. The Korean music industry is already investing heavily in virtual pop stars and mixed reality concerts. Deepfake representations will not be far behind. After all, they’re already reading the news! The Chinese national news service (Xinhua) has been using an AI news anchor for the past two years.

9. Your personal AI avatar

In 2019, Biz Stone, co-founder of Twitter, and Lars Buttler, CEO of San Francisco-based The AI Foundation, announced that they were working on a new technology that would allow anyone to create an AI avatar of themselves. The AI avatar would look like them, talk like them and act like them, autonomously. In comparison, creating personal avatars using deepfake technology (i.e. manipulating already existing video) could be a lot easier to do. It remains to be seen how long it will take before we have the capability to have our own autonomous AI avatars, but creating our own personal AI video chatbot using deepfake tech is just around the corner!

I hope that you liked my top deepfake video predictions! But, what do you think? Will AI altered deepfakes and AI generated video avatars soon compete for our attention? Or will one negate the need for the other? And how long do you think it will be before consumers are over-targeted by personalised AI generated video? Be the first to comment below and win a tube of Pringles!

This article was first posted on Linkedin. If you’re interested in this sort of thing, I also wrote about deepfakes for The National a couple of years ago. You can find that article here.