Articles Archives — Carrington Malin

January 10, 2025
office-clapping-1280.jpg

It’s becoming a new communications quandary – When do you tell your audience that you’ve used AI in creating something?

When do you announce proudly that your new creation was produced using the latest AI technologies? When do you need a disclaimer? And is it ethical to keep quiet about it altogether? These are questions that I’ve given quite a lot of thought to over the past couple of years.

At this point, two years after the launch of OpenAI’s ChatGPT, it’s not hard to figure out that very soon everyone is going to use Generative AI tools to help them with everyday communications, writing and creative work.

However, I believe that we are still at the messy stage of GenAI!

The messy stage of GenAI!

The quality of GenAI-generated content still varies greatly due to differences in technology platforms, the skills of the end user and the type of job at hand. This means that we’re going to continue to see a wide variety of content at varying levels of quality and effectiveness, and that most of us will be able to identify a high percentage of AI content when we see it. Spotting AI content is becoming a sort of superpower! Once you begin noticing AI content, you just can’t stop seeing it. So, in this environment, it can be a judgement call deciding when to be proud of your AI content and tell everyone what you’ve done, and when to keep quiet.

There are also, of course, ethical dilemmas which accompany AI content, including how to decide when AI has had a positive impact (added value) or a negative one (e.g. done someone out of a job). Then there is copyright, fair use of data, and the potential for AI plagiarism.

Timing

As with most things concerning communications, what you say and don’t say has a lot to do with timing. Firstly, many of the issues that we wrestle with today could be a thing of the past in five years’ time. For example, the negative connotations of your multi-million dollar business cancelling your photography agency’s contract, because you’re going to save money by creating all your catalogue shots using AI. This is a very present-day issue. In ten years’ time, whatever photographers remain in business will have adjusted to the new reality and no one will bat an eyelid if you never hire an agency of any kind, ever again.

Secondly, like any other communications requirement, with a little forethought and planning you should be able to work out what messages and policies to put in place now when talking about AI in today’s environment and then map out how these might change over the next year or two, according to potential changes in perceptions and reputational risks. Just because AI has some unknowns, it doesn’t mean that it can’t be planned for.

A little empathy goes a long way

The biggest risk, as usual, is not taking into account the perceptions of employees, customers and other stakeholders in your use of AI, and communications about it. Part of the problem here is that many organisations these days have a team of people that are well-versed in AI, but this often does not include the communications and marketing team!

So, does one announce “AI campaigns”? For me, it’s all about whether this helps meet the goals, resonates with the target audience and doesn’t risk upsetting other audiences. Whilst all your marketing counterparts may be jolly impressed that you created your latest campaign in one day and made it home in time for tea, your customers are likely to care more about your message and what that campaign means to them. It’s easy to let the ‘humble AI brag’ creep into communications because we all want to be seen moving with the times, but unless there’s a clear benefit for your key audiences, it really doesn’t belong there.

Transparency and authenticity

As with many corporate reputation risks, reviewing how and where more transparency should be offered on AI usage can help mitigate some of that risk. For example, making it clear that your website chat support is handled by an AI chatbot and not a human can help avoid customers making false assumptions (and perhaps being unnecessarily annoyed or upset).

What about marketing content? Should you be transparent about what content was created using AI? My experience is that the more personal the communication, the more sensitivity there is. I may not care if your $100,000 billboard was created entirely by AI, but when I receive a personal email from you, I probably expect more authenticity.

A personal perspective

Last year, I began labelling my LinkedIn content to show where and how I used AI. The use of ChatGPT and other Generative AI tools to write posts, articles and comments has started to proliferate on LinkedIn. As you have probably seen yourself, sometimes people use GenAI to great effect and sometimes content lacks context, nuance and the human touch that makes it engaging. So, I’ve found that posting in this environment can invite scrutiny – and occasionally accusations as to whether you are using AI to post, or not.

I use AI extensively when planning, creating and repurposing content, but I still create more content with little or no help from AI. Although AI-generated content rarely accounts for more than 50% of any written work, I don’t want my audience either to assume that I’m using AI to generate everything, or to assume that I don’t use AI at all. Additionally, I would much rather that the focus remains on what my content communicates, rather than what role AI played. So, I now add a footnote at the end of all my LinkedIn posts and articles, which mentions whether I’ve used AI and what I’ve used it for.

If you are guided by your goals, your audience, the context and the potential risks, then deciding on how and when to communicate your use of AI can be very straightforward.

This article first appeared in my monthly AI First newsletter.

Image credit: Drazen Zigic via Freepik.


April 6, 2024
carrington-malin-linkedin-profile-06Apr24.png

It’s 20 years now since I signed up for LinkedIn, so here are 20 things I’ve learned from the B2B social networking platform.

To mark my 20 years on LinkedIn, here are 20 things I’ve learned from LinkedIn, since I joined the B2B social networking platform on April 6th, 2004. Of course, my experience does extend beyond LinkedIn, and the lessons I’ve learned are also not exclusive to LinkedIn either.

  1. LinkedIn is hands-down the best social network for B2B. I’ve lost count of the number of customers I’ve signed as a result of LinkedIn outreach, inbound connections and referrals from my connections, not to mention paid-for LinkedIn marketing campaigns. Although it’s critically important to respect the people that you target.
  2. Everyone has their own motivations for using LinkedIn. For some, it’s simply where they look for their next job. For some, it’s their way of staying up-to-date on business news and trends. Some want to sell. Some want to network. Some don’t actually know!
  3. Networking is great, but trust wins business and that doesn’t come quickly. I love connecting with interesting people on LinkedIn and I’m happy to try to help people that connect with me. However, any meaningful business involves a commitment on both sides and that requires trust. Just because someone accepts your invite to connect, doesn’t mean that they have any reason to trust you.
  4. Different people can help you in different ways. A connection’s value to me often has nothing to do with their job title or where they work. Sure, it’s great to be connected with leaders in exciting roles around the world, but these are generally the connections that I have the least frequent contact with. When I’m asking for help, they are often too busy, or simply not online.
  5. When someone likes your post, it doesn’t mean that they’ve read it! People like LinkedIn posts for different reasons. They could be reciprocating your Likes on their posts, they might have the intention to read it later or it could be just a random impulse to like something whilst scrolling! The people that actually read your posts are the minority.
  6. Success isn’t related to how successful people say they are. Or the number of likes someone gets! Some companies seem to develop cultures where liking your boss’s post is something you do, even if you disagree with everything he’s ever done! Likewise, someone who has just achieved a success that others may only dream of, may not even mention it on LinkedIn.
  7. Opinions are like – well, you know! Everyone is entitled to their opinion. Some like to share it widely, some only share their opinion when prompted by someone else’s content, some seem to bestow it on their followers like it is a rare gift to mankind! I often end up learning things from opinions that I completely disagree with.
  8. Some people just don’t Like or comment. However, some of them do read. Many times I’ve assumed, wrongly, that someone hasn’t read my article, blog or seen my LinkedIn post. Then, when I mention it to them later, they say “yes, of course, I read it”. That’s actually more valuable to me than a Like.
  9. LinkedIn’s algorithm favours mass media. I can’t count the number of times I’ve casually shared a news story from the mass media and seen it get ten, twenty or even thirty times the Likes than when I’ve shared original content that I’ve spent the whole day on. And this doesn’t seem to depend on the quality of the mass media story either!
  10. Don’t assume that connections actually read your profile! I receive messages virtually every day requesting calls and meetings to discuss potential projects, opportunities or collaboration ideas. Every week at least one of those people has asked for the meeting, because they’ve assumed they know something about me without reading my profile.
  11. LinkedIn profiles are two-dimensional, people are not. I’ve worked hard on my LinkedIn profile to ensure that it represents me well, appears in the right searches and highlights what I offer to followers, connections and business contacts. However, whilst profiles are a great tool to help you learn more about someone, there’s much that you cannot learn about someone from their LinkedIn profile. Don’t assume too much!
  12. Your story is your story. Your brand is your brand. Still, it may take you a while to figure out how to tell your story via LinkedIn. Do seek advice and listen to feedback or suggestions for your LinkedIn activities. However, remember that what works great for someone else on LinkedIn may not suit you. Don’t adopt someone else’s tactics or content ideas, unless you think they will resonate with your most valuable audiences.
  13. Automating poorly harms your brand. There are many tools that promise to make your life easier on LinkedIn, notably GenAI content tools, message automation tools and LinkedIn marketing tools themselves. You have to ask yourself what you would feel when seeing such content or receiving such messages. If you don’t actually know, because you don’t pay that much attention to your own automated content, then you’re likely doing yourself and/or your brand more harm than good.
  14. Yes, there are fakes and frauds on LinkedIn. As with all social platforms, LinkedIn gets its fair share of fakes and frauds. Sometimes these are easy to spot, sometimes not so easy. A new profile with few details, few connections and a professionally taken photo of a pretty woman is one of the easiest fakes to spot! When in doubt, there is always the option to decline the invite to connect, or even delete the connection!
  15. Connection without conversation is pointless. I always start a conversation with new connections via a brief intro message. I want them to know what my focus is in case there is an opportunity to cooperate with them in the future, and I want to understand what they do for the same reason. At the very least, I’ll be able to go back to that conversation later to remind myself why we connected in the first place.
  16. Referrals have more value than cold approaches. Any 2nd-level LinkedIn connection is connected to one of your existing connections. That means it’s possible you could get introduced by one of your existing connections. When you actually have an opportunity to talk to a 2nd-level connection, a referral from one of your 1st-level connections will help you build trust faster.
  17. A little empathy and respect cost you nothing! Everyone is at a different point in their own journey. LinkedIn has members that are students through to retirees. There are over 200 nationalities on LinkedIn and so many users that don’t have English as their first language. It’s easy to be dismissive of posts, messages or comments because they don’t fit your own world-view. It’s just as easy to be kind.
  18. Most people will say nothing. Regardless of whether they agree or disagree, or approve or disapprove, most people won’t comment on your posts. Just like most people won’t comment on that new “visionary leader” title that you’ve just given yourself. Nor tell you how impressed they were by something you shared last week. So, don’t read too much into people’s feedback on LinkedIn, or lack of it.
  19. Persistence, resilience and balance! Everyone needs to pick their own comfort level with LinkedIn. However, if you expect to get business results from LinkedIn, it pays to be persistent. Big followings and engaged connections can take years to build – this includes periods of time when your efforts may not be rewarded. How much effort you put in, needs to align with your ultimate goals. There’s nothing wrong with just spending an hour on LinkedIn each week, just don’t expect to become a shooting star.
  20. You reap what you sow. What value you get out of LinkedIn depends heavily on what value you put in. Be interesting, people will be interested. Be helpful, people will be helpful in return. Post frequently and even LinkedIn’s algorithm will be more supportive of your efforts.

I hope that you enjoyed my list of ‘20 things I’ve learned from LinkedIn’. If you haven’t yet found me on LinkedIn, click here.


March 17, 2024
microsoft-venture-accelerators.png

Without methodically planning your sales and marketing, you are just as methodically planning to fail. Yes, even if you’re just an early stage venture with little budget.

I’ve talked to dozens of AI startup founders over the past few years. All of them passionate about their technology. All obsessed with what their technology can do for customers. Most, fortunate enough to have one or two customers that are innovation leaders and have helped them develop their proofs of concept. And, sadly, most of them seem to think that sales and marketing is not a priority. They often think it’s something that new ventures don’t need to prioritise, or perhaps only once they have more funding than they need for product development. They are wrong.

Before I provide an explanation, let me begin by saying that I don’t blame tech founders for not prioritising sales and marketing. I’ve worked in sales and marketing for my whole career and the fundamentals are obvious to me mainly because of that. I can’t code, I don’t know how to build tech products. So, it’s hardly a surprise to me that sales and marketing is not the strong suit for many highly educated, experienced and talented technology developers. Why would it be?

In my experience, coders, software developers and system engineers tend to associate sales and marketing with what they see. Sales might seem like it’s all about salespeople getting out there and meeting people. So, human resource and a lot of talking!

Meanwhile, the most visible output from marketing is content. I’ve met many technical people that confuse creativity and creative content with marketing. They think marketing is advertisements, website copy and social media posts. So, therefore, in their minds, it’s a creative endeavour.

Now, of course those assumptions are mostly true. However, if sales and marketing were both limited to outreach without strategy, both would be highly inefficient and unproductive. If you want a vehicle to travel from point A to point B, it needs a steering wheel to chart a course, and someone who can oversee the journey, direct actions, maintain the course and accomplish the goals of the trip.

Taking your message to people every day is the lifeblood of sales. But, it has to be the right message and you need to be talking to the right people. Sales requires a strategy that externalises your company proposition, product benefits and your vision effectively. Sales must also position your product appropriately against the alternatives, create a dialogue with customers that identifies needs, and present your product as the best-fit solution for the customer.

Likewise, much of the ongoing effort in a marketing department is spent on developing and running campaigns, and so that does include creative work and content. As with sales success, marketing is dependent on taking the right message to the right people. Marketing also plays a key role in defining and positioning your product in the terms that the market will understand, taking care to position it as relevant in the market context, and taking steps to help create a better environment to sell it in.

Common misconceptions often shape new ventures’ first investments in sales and marketing.

If you believe that sales is talking to people and the main goal is to talk to more people, perhaps young, energetic sales executives with good communication skills are a good hire. But these sales hires will not help you with aligning the sales proposition, presentations, targeting and pricing of your product in the context of the market.

If you believe that marketing’s job is to make noise and give you more visibility, then you may opt for young hires that know how to post on social media, write media content and organise events. But these hires will not help you with developing a robust strategy that aligns with market segments, ideal customer profiles, competitor pricing and your product vision.

In big global companies, the research, strategy and definition is often managed by different teams from those handling implementation and tactics. This highlights another problem, which is the availability of those skill sets in your region. Often big US or European tech leaders hold that expertise at their head offices: everything else around the world is mostly execution. So, hiring someone from such firms isn’t necessarily a solution, if their main expertise is effective execution of corporate strategy.

Although expertise and experience usually play a key role in shaping sales and marketing, sales is not dependent on salespeople, and marketing is not dependent on having a marketing team, or a big advertising budget. It is not a question of scale or maturity. It is a matter of measure.

Every tech venture should have a written sales and marketing strategy. Depending on the sector, product and stage of the business, this could be a few slides, a single document of just a few pages, or a comprehensive set of plans, budgets, research, roadmaps and schedules. The role of such plans is to inform your day-to-day activities, whilst helping you plan and position your business for the future.

So, why do you need to be this methodical, if you only have a small team with a limited sales capability and/or marketing budget?

1 – Many founders are great visionaries, but poor salespeople!

I’ve met many founders that can hold forth for hours on their product, the technology market and their vision for how technology will change the world. Invariably, this impresses other techies, but their pitch is often not focused on what their product brings customers now, or why it meets customer needs today. Founders can prove to be the best salesmen in the company, but what they say to whom, needs to be clearly defined (and practiced).

2 – Founders who are great salespeople must often close every single sale!

There’s no doubt, most founders get better at selling as they grow their business. They get used to the things that customers ask and enjoy giving them chapter and verse on how the product was developed to meet their challenges. The problem with this is that, often, even long after multiple salespeople have been onboarded, the only person in the company who can close a sale is the founder (because no one else has an effective sales story to tell!). Furthermore, it may be the case that the founder can only do this with a customer that is an innovation leader and so is maybe more inclined to listen to the founder’s life story, in order to understand what the product does. Most customers won’t really want to do this.

3 – Early stage customers are often innovation leaders, but that changes!

A tech venture’s first customers are often innovation leaders, which appear at the beginning of the technology adoption lifecycle. These are the organisations focused on innovation, with tech-savvy specialists and decision makers that are prepared to take risks in order to embrace innovation. Although ideal for ventures with new technology to develop and sell, these customers can prove to be few and far between. Just because you have found a customer that is very focused on your technological advantages and is happy to discuss them for hours, it doesn’t follow that the customer’s counterparts in other companies are similarly disposed. Most customers require that you make it easy for them to see and understand the benefits, win internal support, justify the purchase and buy.

4 – When you’re a new or fast-growing business, everyone should be selling!

We all need to be in sales. When you’re a small business, everyone in the company needs to be equipped to help spread the word, position your business in the market and sell your product. That’s how you can ‘punch above your weight’, begin building a reputation in the wider market and also become a sought-after employer. To do this, you will need to have developed a powerful story that everyone can tell (i.e. not just the founders). This is where sales and marketing meets human resources. What you do, how you sell and who your product helps should be ingrained in your organisation.

The first step in meeting all four of these challenges is to distill what it is that you are selling down to simple plans, goals, messages and proof points. It is also important to develop a strategy to take this to market, in the context of the market environment, competition, prevalent pricing, customer pain points, and the vision for your product.

A technology venture that can build a sustainable, growing business on product development alone is a very rare animal. Your business only becomes a leader when everyone else says it is. So, first you need an effective strategy to reach and convince the right customers, partners, opinion formers and opinion leaders, whilst growing your revenue. Big budget, small budget, or no budget, that’s what sales and marketing is for.

Image: Microsoft Ventures Seattle Accelerator (Credit: Microsoft)


October 5, 2023
f16-us-air-force-1280.png

Like it or not, autonomous systems are going to play an increasing role in defence capabilities moving forward, across all domains and using a wide variety of automated systems. Autonomous aerial systems could soon be relied on heavily by defence programmes, from small, low cost drones, through to AI pilot systems.

Over the past twenty years, efforts have intensified to create artificial intelligence pilots for military use. Last month, US Deputy Secretary of Defense Kathleen Hicks revealed that the US military’s Replicator programme plans to deploy thousands of autonomous weapons systems, including autonomous unmanned aircraft systems (UAS), over the next 18 to 24 months. The programme is primarily focused on lower-cost ‘attritable’ systems that the Pentagon would be happy to sacrifice in return for achieving a mission’s objective.

Using autonomous pilots for fighter jets that can cost upwards of $100 million each is another matter entirely. However, there are clear future advantages to doing this, including enhanced combat capabilities, speed of response to threats, strategic advantage over other militaries, and reduced risk to human pilots.

Automated systems, of course, have been used in commercial aviation for many years. Commercial autopilot systems ultimately give pilots more control and enhanced ability to respond to new demands in the air, whilst autothrottle systems help optimise both throttle control and fuel economy. The capabilities of autopilot systems have grown dramatically over the past decade, as better sensors have become available. A modern jet airliner can have tens of thousands of electronic sensors, collecting terabytes of data during a flight.

Air forces clearly have different requirements to airline fleets, but there are lessons to be learned from the commercial sector. Introducing new autopilot systems has many attendant risks, as was illustrated by Boeing’s introduction of the Maneuvering Characteristics Augmentation System (MCAS) into commercial aircraft.

Military interest in AI-piloted planes began to grow in the 2010s, culminating in the US Air Force’s use of an AI algorithm to help co-pilot a successful test flight of a Lockheed U-2 Dragon Lady high-altitude reconnaissance aircraft in 2019. The ‘ARTUµ’ algorithm was developed by a small team of researchers at the Air Combat Command’s U-2 Federal Laboratory.

In 2020, the US Defense Advanced Research Projects Agency (DARPA) announced that, in a simulated dogfighting competition run by its Air Combat Evolution (ACE) programme in an F-16 simulator, an AI model defeated an experienced F-16 fighter pilot.

Now things have begun to get even more interesting. DARPA revealed earlier this year that an AI model developed by ACE had successfully piloted an F-16 in flight. The AI pilot flew multiple flights in a specially modified F-16, over several days of tests at Edwards Air Force Base in California.

Notwithstanding successful tests such as these, it is still premature to announce the advent of autonomous AI combat pilots. Tests conducted to date have all been carried out in controlled conditions, and in real combat scenarios many different operational factors apply. We can expect to see increasingly sophisticated AI apps enter the cockpit in the near term, but fighter pilots shouldn’t worry about their jobs just yet.

This article first appeared in Armada International

Image credit: U.S. Air Force.


September 4, 2023
think-your-ai-content-is-fooling-everyone-1280.png

So, you think that your AI generated content is fooling everyone? Think again.

If you are happily creating articles, posts and comments using Generative AI, feeling safe in the knowledge that no one will ever guess that your content is ‘AI’, dream on! Your audience is already developing a sixth sense to instantly tell human and GenAI content apart.

I’m telling you this to be kind. The more people that dismiss what you share as ‘fake’ AI content, the more chance there is that you are harming, not enhancing, your personal brand.

So, as a well-known advocate of AI solutions and an intensive user of AI, why am I, of all people, telling you to be wary of posting AI generated content? To explain further, we have to consider the dynamics of today’s social media, the value of ‘Likes’ and how digital content impacts your ideal audience.

A common misconception is that more Likes equal greater validation of the content that you share. In reality, people Like your content for different reasons, while the volume of Likes can often have more to do with how the platform’s algorithm treats your piece of content, rather than its own particular merits.

So, who Likes your posts and articles?

    • The people that know you best, or consider themselves to be your fellow travelers on the same journey, may give your content a Like purely to be supportive.
    • People that follow the topics that you post about, may Like your content because it’s within their main area of focus, but that doesn’t mean they have to read it!
    • Similarly, people that use LinkedIn or other social media to keep up-to-date with the news, may Like your content if it delivers an interesting headline.
    • If you tag people or companies, then you may receive Likes in return, just on the basis that all publicity is good publicity.
    • If your followers include a lot of coworkers, subordinates or students that you teach, you may receive a lot of Likes, because either (hopefully!) they like the job that you’re doing, or they are seeking recognition themselves.
    • Then there are those that Like your content because they have read it, enjoyed reading it, or have derived value from doing so.

Make no mistake, that last category (the readers) is the minority!

If you’re a LinkedIn user then you will know that LinkedIn gives you the option to react to a post using different Likes (Celebrate, Support, Love, Insightful and Funny). I can’t count the number of times that I’ve seen the ‘Insightful’ Like used on posts with an erroneous or broken link to the content that they apparently found ‘Insightful’! Social media is a world where Love doesn’t mean love, Insightful doesn’t necessarily mean insightful, and Like doesn’t even have to mean like! In itself, the value of a Like is nothing.

Another factor to consider in assessing how well your content is doing is the fact that your biggest fans may not react on social media at all! I frequently get comments about my articles, newsletters and reports via direct messages, WhatsApp, or offline during ‘real life’ conversations from people that never, or almost never, Like, comment or share on LinkedIn. Typically, these are my most valuable connections, such as senior decision makers, subject matter experts and public figures. It’s sometimes frustrating that they don’t Like or comment, but it’s far more important and valuable to me that they take the time to read my content.

AI generated content

So, returning to our topic of AI generated content, what is your measure for how successful your content is?

This obviously depends a lot on your own goals for creating that content to begin with. My goal, for example, is typically to provide value and insight to my targeted senior decision makers and subject matter experts. These are my most valuable readers, and their time is scarce, so I’m careful to ensure that their time will be well spent reading my posts and articles.

Let’s consider your own goals, audiences and approach to content for a moment. Who are you trying to impress? What will encourage your top target audience to read your content and return to do so again and again? What is the key message that you want to reinforce? And what forms of content is your key audience most likely to consume and respond to?

Now, the big question is where does AI content fit in?

What’s the impact of one of your most valuable connections finding that your latest post, or article, is actually quite generic and clearly not written by you? Will that realisation affect how your connection thinks about you? And is that connection now more likely, or less likely, to spend time reading your content in future? It probably depends on the format and purpose of that piece of content, and how appropriate the information used in it is for the reader in question.

However, let me be clear before we proceed further, in case it sounds like I am dismissing all AI generated content: I am not. I use AI generated content in my work all the time, although rarely in the form it is first generated. I routinely edit and re-write most pieces of AI content.

What value does GenAI written content have?

Today’s AI generated text content (and I say today’s, because the quality and value of AI content is constantly changing) has different value depending on the format, purpose and type of information offered.

Format

  • Due to the way that generative AI models work, the shorter the piece of text, the more convincing and accurate it is likely to be. They can generate full blog posts and articles to an average quality, but the longer they are, the more apparent it becomes that the article lacks the nuance that a human writer would add. Meanwhile, where context is needed, most generative AI chat services draw primarily on content that may be months or years old. Finally, since AI creates articles based on other articles that have been written by many other people (including both good writers and poor ones), originality is not GenAI’s strong suit.

Purpose

  • The usefulness of GenAI written text to you and your readers is going to heavily depend on the purpose of the content or communication. If your purpose is simply to inform, then GenAI provides a fast and efficient way of organising information and communicating key points. At the other end of the scale, if your purpose is to share new thinking, or influence the opinion of others, then there are definite pros and cons. If your purpose in using GenAI is to win recognition for being a great writer, then please, just don’t do it!

Information

  • What type of information you wish to include in your content is also key to the value and usefulness that GenAI can provide. For example, if you wish to present an argument in favour of something, is this a logical argument based purely on the facts, or an opinion-led argument with few facts to rely on? Does the content you wish to share come from the public domain, or from the beliefs and values that you hold inside? AI is clearly going to be much better equipped to create content without opinion, beliefs or values. Where such thinking is important, GenAI needs careful input, guidance and revision, if it is to create content that is close to your own opinion, beliefs and values.

If you’ve been following my thinking so far, then it will probably be obvious to you where the cracks begin to appear when you start publishing AI generated content, or try to pass it off as your own.

What are the risks?

Now ask yourself, where are the biggest risks for your personal brand in using GenAI to create your content and communications? What’s the worst that can happen if your contacts, connections, colleagues, peers and readers identify your content as AI generated? Again, I believe it depends entirely on the context.

As an avid consumer of content via LinkedIn, my problem with AI generated content is two-fold: emotional and logical.

Why do I have an emotional problem with AI content? When I open and read a short post, a long post, or an article from a connection, I feel that I have some measure of vested interest. So, when I read their insight or opinion, only to find that it’s GenAI, I often feel a negative emotional response. My immediate reaction is that ‘this is fake’. It’s emotional because I often take the time to read such content to learn about, or to understand the other person’s opinion. So, it’s basically disappointment.

Secondly, there are a number of logical problems that I now have when discovering GenAI content out of context, or passed off as original thinking. If I consider the content to be valuable, then I treat it the same as human generated content. Why wouldn’t I? However, life is rarely that simple! Here are some of the new social media quandaries that I come up against:

  • When someone that I know and respect posts GenAI content believing that it will pass as their own original written content, and it clearly fails to do so, should I tell them? Should I Like their content, even though I don’t like it? Do I have time to explain to them carefully and respectfully what the problem is?
  • When someone posts an AI generated comment on one of my social media posts, blogs or articles, that simply repeats a fact from my content without sharing an opinion, posing a question or adding value, should I Like it? How should I reply? Or should I delete it to save embarrassment all round?
  • When someone messages me and asks me to endorse a piece of content that looks like it was generated by ChatGPT in about 60 seconds, what do I say to them?

For what it’s worth, my own personal guidelines for using AI are to be as honest and transparent about my GenAI usage as I can. So, anything I use that has a significant element of GenAI created content in it, I now share with a credit or disclaimer.

It is true that GenAI can prove valuable to people who are not great writers, but it’s also true that it is only by gaining experience as a writer or editor that you will have the tools to edit AI text content to be more human, and to represent your personal brand better.

The famous horror-fiction writer Stephen King says this in On Writing, his book about the craft:

“If you don’t have time to read, you don’t have the time (or the tools) to write.”

This is true of any form of writing.

When you’re learning to write better, writing ‘does what it says on the tin’. Reading and writing more comments will make you better at writing comments; reading and writing more social media posts will make you better at writing posts; while reading and writing more long-form articles will make you better at that. And each of those things will make you better equipped to more effectively use, edit and filter AI generated content to build your personal brand, rather than dilute it.

If you believe that you can skip that learning process and automate your content generation, without becoming its thoughtful moderator, then your GenAI content is probably only fooling one person: yourself.



April 21, 2023
cruise-chevy-2.png

The recent passing of a new autonomous vehicle (AV) law in Dubai highlights the emirate’s tenacious commitment to innovation. But there is much to do before we see driverless taxi services on the roads.

Dubai’s ambition to place itself at the forefront of autonomous transport is exciting because it is unprecedented. However, this very same lack of precedent means it cannot lean on the experiences of others to develop new regulations, technologies or infrastructure.

Dubai’s plans switched into high gear in 2021 when the Roads and Transport Authority (RTA) signed an agreement with General Motors’ autonomous vehicle company Cruise to operate its self-driving taxis and ride-hailing services. The agreement will make Dubai the first city outside the US to offer Cruise’s driverless taxi services, with a goal of putting 4,000 AVs on the road by 2030.

The Dubai Smart Mobility Strategy aims to convert 25 percent of total transportation journeys into trips via self-driving transportation by 2030, including driverless rail transport.

The new Law No.9 of 2023, passed last week by Sheikh Mohammed Bin Rashid Al Maktoum, vice president, prime minister and ruler of Dubai, is not the first legislation to support the emirate’s future driverless vehicle services sector. The law follows resolutions made by the emirate’s executive council, local legislation development led by the RTA and UAE laws issued at a federal level to allow temporary licensing of AV trials.

But on the legal front Dubai must innovate when it comes to regulation. There are simply no comprehensive laws or guidelines in place for the public use of autonomous vehicles anywhere else.

For example, the UK government plans to allow autonomous vehicles on the road by 2025. However, a closer look shows that British regulation so far only covers testing, insurance and liability. Further afield, the EU has implemented a framework for approving level 3 and 4 AVs but offers little detail on their operation on the road. Meanwhile, the world’s most extensive driverless vehicle trials, which are taking place in China and the US, have only been authorised via case-by-case permissions issued by authorities. This is also true of the UAE’s trials.

Given the emirate’s ambitious roadmap, Dubai’s partnership with Cruise is a smart move. The US company was one of two operators to receive a permit to offer paid driverless taxi services in California in 2021. Early last year it began offering services within designated areas of San Francisco, but only between the hours of 22:00 and 06:00. The company now operates about 300 robotaxis across San Francisco, Austin and Phoenix.

However, no new technology is without its teething problems. US media have reported a variety of complaints related to the San Francisco trial, including immobile cars blocking traffic, multiple vehicles stopping in the middle of the road and incorrect signalling. In March a Cruise robotaxi bumped into the rear of a San Francisco bus, prompting an urgent software recall across its entire fleet of AVs.

While Dubai’s modern road infrastructure and digital traffic management are both big pluses for future AV services, the city still has its own unique set of physical and digital characteristics, operational needs and system integration requirements to consider. City-specific behavioural factors also apply relating to motorists, pedestrians and passengers. So, notwithstanding Cruise’s past two years of service trials in California and the five years of testing before that, only so much can be taken for granted as the US AV firm and the RTA work together on Dubai’s driverless services.

At this point in time, the RTA is perhaps the only public transport authority in the world that is developing an ecosystem to put 4,000 driverless taxis on the road in one city.

This is why innovation, not best practice, must drive Dubai’s autonomous vehicle plans.

This article first appeared in Arabian Gulf Business Insight.


March 18, 2023
tii-atrc.jpg

In a week full of big technology news, globally and regionally, the Abu Dhabi government’s leading applied research centre has announced that it has developed one of the highest performing large language models (or LLMs) in the world. It’s a massive win for the research institute and proof positive for Abu Dhabi’s fast growing R&D ecosystem.

Technology firms have gone to great lengths since the public beta of ChatGPT was introduced in November to make it clear that OpenAI’s GPT series of models are not the only highly advanced LLMs in development.

Google in particular has moved fast to try and demonstrate that its supremacy in search is not under threat from OpenAI’s models. On Tuesday, the search giant also announced a host of AI features that will be brought to its Gmail platform, apparently in answer to Microsoft‘s plans to build both its own AI models and OpenAI’s into products across its portfolio. However, on the same day, OpenAI revealed that it has begun introducing its most advanced and human-like AI model to date: GPT-4.

Whatever way you look at it, OpenAI’s GPT-3 was an incredible feat of R&D. It has many limitations, as users of its ChatGPT public beta can attest, but it also showcases powerful capabilities and hints at the future potential of advanced AI models. The triumph for OpenAI though, and perhaps the whole AI sector in general, was the enormous publicity and public recognition of AI’s potential. Now everyone thinks they understand what AI can do, even though they are sure to be further educated by GPT-4 and the new wave of applications built on new advanced AI models heading their way.

So, what does this emerging wave of LLMs mean for other research labs and R&D institutions developing their own AI models around the world? To begin with, the bar to entry into LLMs is set high, in terms of both the technology and the budget required.

Advanced AI models today are trained, not born. GPT-3 was trained on hundreds of billions of words, numbers and computer code. According to a blog from San Francisco-based Lambda Labs, training GPT-3 might take 355 ‘GPU-years’ at a cost of $4.6 million for a single training run. Meanwhile, running ChatGPT reportedly costs OpenAI more than $100,000 per day. R&D labs competing in the world of LLMs clearly need deep pockets.
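As a quick back-of-envelope check of those reported figures, the two numbers can be combined into an implied hourly rate. This is only a sketch using the estimates quoted above; the per-GPU-hour cost is my own derived figure, not one published by Lambda Labs:

```python
# Back-of-envelope check of the Lambda Labs estimate quoted above.
# Inputs are the article's reported figures, not measured values.
gpu_years = 355            # reported training compute for one GPT-3 run
total_cost_usd = 4.6e6     # reported cost of a single training run

gpu_hours = gpu_years * 365 * 24          # convert GPU-years to GPU-hours
cost_per_gpu_hour = total_cost_usd / gpu_hours

print(f"{gpu_hours:,} GPU-hours at ~${cost_per_gpu_hour:.2f} per GPU-hour")
```

That works out to roughly 3.1 million GPU-hours at about $1.48 each, so the two reported figures are at least mutually consistent with cloud GPU pricing of the period.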

Then, it perhaps goes without saying, but institutions planning to develop breakthrough LLMs, must also have the right talent. And of course, global competition for the type of top researchers needed to develop new AI models is fierce, to say the least!

Just as critical as having the right talent and the right budget is having the right vision. R&D institutions are often bogged down in bureaucracy, while most tech firms are, necessarily, focused on short-term rewards. In this game, to win, the players must have the vision to invest in developing AI models that are ‘ahead of the curve’ and the commitment to stick with it.

Therefore, for those following Abu Dhabi’s R&D story, it is not an entirely unexpected discovery that the Technology Innovation Institute (TII) has been investing heavily in the development of LLMs.

Formed in 2020 as the applied research arm of the Abu Dhabi Government’s Advanced Technology Research Council, TII was founded to deliver discovery science and breakthrough technologies that have a global impact. An AI research centre was created in 2021, now called the AI and Digital Science Research Centre (AIDRC), to both support AI plans across the institute’s domain-focused labs and develop its own research. Overall, TII now employs more than 600 people, based at its Masdar City campus.

This week TII announced the launch of Falcon LLM, a foundational large language model with 40 billion parameters, developed by the AIDRC’s AI Cross-Centre Unit. The unit’s team previously built ‘NOOR’, the world’s largest Arabic natural language processing (NLP) model, announced less than one year ago.

However, Falcon is no copy of GPT, nor of other LLMs recently announced by global research labs, and has innovations of its own. Falcon uses only 75 percent of GPT-3’s training compute (i.e. the amount of computing resources needed), 80 percent of the compute required by Google’s PaLM-62B and 40 percent of that required by DeepMind‘s Chinchilla AI model.
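To put those ratios another way, inverting each fraction shows how much more training compute each rival model consumed relative to Falcon. This is a small illustration using only the percentages quoted above; the absolute compute figures are not given here:

```python
# Falcon's reported training compute as a fraction of each model's,
# per the figures quoted in the article (treated here as illustrative).
falcon_share = {"GPT-3": 0.75, "PaLM-62B": 0.80, "Chinchilla": 0.40}

for model, share in falcon_share.items():
    # Inverting the fraction gives that model's compute relative to Falcon's.
    print(f"{model} used {1 / share:.2f}x Falcon's training compute")
```

So, on these reported figures, Chinchilla needed two and a half times Falcon’s training compute, and GPT-3 about a third more.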

According to TII, Falcon’s superior performance is due to its state-of-the-art data pipeline. The AI model was kept relatively modest in size with unprecedented data quality.

Although no official third party ranking has been published yet, it is thought that Falcon will rank in the world’s top 5 large language models in a classical benchmark evaluation and may even rank number one for some specific benchmarks (not counting the newly arrived GPT-4).

Large language models have proved to be good at generating text, creating computer code and solving complex problems. The models can be used to power a wide range of applications, such as chatbots, virtual assistants, language translation and content generation. As demonstrated by OpenAI’s ChatGPT public beta testing, they can also be trained to process natural language commands from humans.

Now that the UAE has one of the best and highest performing large language models in the world, what is the potential impact of TII’s Falcon LLM?

First, like all LLMs, Falcon could be used for a variety of applications. Although plans for the commercialisation of the new model have not been announced, Falcon could provide a platform for both TII and potential technology partners to develop new use cases across many industry sectors and many functional areas. For development teams in the region, it’s a plus to have the core technology developer close at hand.

Second, Falcon also has technological advantages that businesses and government organisations can benefit from, which aren’t available via existing global platforms. The model’s economic use of compute means that it lends itself to use as an on-premise solution, far more than other models that use more system capacity. In addition, if you’re a government organisation, implementing that on-premise solution means that no national data is going to be transferred outside of the country for processing.

Finally, Falcon is intellectual property developed in the UAE and a huge milestone for a less than three year-old research institute. The emirate is funding scientifically and commercially significant research and attracting some of the brightest minds from around the world to make it happen.

Of equal importance, if Falcon is anything to go by, at both a government policy level and an institutional level, Abu Dhabi has the vision and the drive to develop breakthrough research.

I don’t think that yesterday’s announcement will be the last we will hear about Falcon LLM! Stay tuned!

This article first appeared in Middle East AI News.


January 31, 2023
chatgpt-image.png

Will ChatGPT take our jobs? The truth is that it gets an awful lot right and it gets an awful lot wrong!

Communications professionals, writers, journalists and researchers all seem to be asking if ChatGPT and other new AI platforms will take their jobs. I’ve been using AI writing apps for a couple of years now to help overcome ‘writer’s block’, sketch out ideas and, well, just for fun! So, what do I think? Will AI take my job? Here’s a little analysis that I did in January 2023.

A recent survey of UAE residents by communications consultancy duke+mir and research firm YouGov found that 55% were concerned about losing their job to AI (and this went up to 66% in the under-25 age group). It’s a notion that tech firms and some governments have done their best to deride over the last five years, but the evidence is very clear: artificial intelligence allows companies to do more with less – and that means fewer people too.

It is true that new technologies, including AI, are also creating new categories of jobs, but that’s not much of a consolation if you don’t have, or can’t acquire, the new skills required to take one of those jobs. This said, there are many things that AI simply cannot do, because those jobs require human thinking, dexterity or other nuances that are particular to human beings.

However, for some, the arguments are a little too academic. Most people don’t know much about how AI works, how it’s used, or what the benefits and risks might be. Many have relied on tech firms, scientists, journalists and government spokespeople for insights on how AI technologies will affect them. Now, suddenly, this may have changed a little.

With the public beta release of the latest version of OpenAI’s conversational AI chatbot, ChatGPT, anyone with an Internet connection can now play with AI, discover its potential, explore its boundaries and ask it about the future.

So, just for fun, I asked ChatGPT to write this week’s weekly article. Will it take my job? You be the judge!

I gave ChatGPT the following instruction:

[Screenshot of the prompt given to ChatGPT]

This is the first answer that the chatbot returned. It is unedited. I’ve fact checked this and put my comments under the article.


The Gulf Cooperation Council (GCC) states, which include Bahrain, Kuwait, Oman, Qatar, Saudi Arabia, and the United Arab Emirates (UAE), have been actively investing in and adopting artificial intelligence (AI) and other emerging technologies in recent years. These efforts are driven by a desire to diversify their economies, improve efficiency and productivity, and enhance the overall competitiveness of their countries.(1)

One key trend in the adoption of AI in the GCC states is the increasing focus on the development of smart cities. For example, the UAE has launched several initiatives aimed at creating smart cities, such as the Dubai Smart City project and the Abu Dhabi Smart City project. These initiatives aim to use technology to improve the quality of life for citizens, enhance urban planning, and create new business opportunities.(2)

Another trend is the use of AI in the healthcare sector. For example, in Qatar, the government has launched the Qatar Genome Project, which aims to use genomics and AI to improve the diagnosis and treatment of diseases. Additionally, Saudi Arabia has launched the Misk Medical AI Program, which aims to use AI to improve the efficiency and effectiveness of healthcare in the country.(3)

Another key trend is the use of AI and other emerging technologies in the financial sector. For example, the Central Bank of Bahrain has launched the Bahrain FinTech Bay, which aims to promote the development of fintech and other innovative technologies in the country. Additionally, the Dubai International Financial Centre (DIFC) has launched the Dubai Blockchain Strategy, which aims to use blockchain technology to improve the efficiency and security of financial transactions in the UAE.(4)

In addition to these specific examples, GCC countries have also been investing in other areas such as energy, transportation, and retail. For instance, in Oman, the government has been working with companies like Siemens to implement AI-enabled systems in the oil and gas sector. In Saudi Arabia, the government has been working to develop autonomous vehicles and drones for transportation and delivery purposes.(5)

Overall, the adoption of AI and other emerging technologies in the GCC states is being driven by the desire to improve efficiency and productivity, diversify their economies, and enhance the overall competitiveness of their countries. These efforts are also supported by significant investments in research and development, as well as the development of supportive ecosystems, such as innovation hubs and accelerators. As a result, we can expect to see continued growth in the adoption of AI and other emerging technologies in the GCC states in the coming years.(6)

It is worth noting that the GCC countries have a lot of potential to be the leader in AI, but there is also a lot of work that needs to be done to overcome challenges such as lack of data privacy laws and shortage of local talent. It is important for the GCC governments to work on these challenges and to also focus on building an ecosystem that supports innovation, investment and collaboration in AI.(7)


Fact check & analysis

  1. Paragraph one is correct and provides a useful introduction.
  2. Paragraph two provides a useful example and is essentially correct, but contains a little misinformation. The Dubai Smart City project was announced in 2014, but has since developed into several initiatives with different names, such as Smart Dubai, which is now Digital Dubai. I don’t remember there ever being an “Abu Dhabi Smart City” project by name.
  3. Paragraph three is also useful, although the language is repetitive. However, again, ChatGPT inserts its own names. The Qatar Genome Project is the name of an actual project. As far as I know, “Misk Medical AI Program” is not, although Saudi Arabia’s Misk Foundation does have medical programs.
  4. Paragraph four contains misinformation. Again, the topic here is useful in the context of the article, but the Dubai Blockchain Strategy was not launched by DIFC; it was announced by Smart Dubai and the Dubai Future Foundation in 2017.
  5. These paragraphs are now following a formula, but even so, paragraph five is a useful contribution to the report. That said, I would say that the claim that Saudi Arabia is “working to develop autonomous vehicles” is overstated and possibly misleading.
  6. Paragraph six is correct, although I would prefer if it had also mentioned some examples to make it more specific.
  7. Paragraph seven is correct and useful.

So, should I give up, or stick around for a while longer?


This story was originally published on LinkedIn


September 29, 2022
Dubai-metaverse-North-Star.png

Dubai Metaverse Assembly took place this week, drawing together technology experts, innovators and government policymakers from all over the world and connecting with tens of thousands of virtual attendees. Many visions, opinions and predictions were shared at the event, but on one thing they all agreed: Dubai is leading the way in creating metaverse policy!

The Dubai Metaverse Assembly took place at the Museum of the Future and the Jumeirah Emirates Towers in Dubai this week (28-29 September), drawing 500 technology experts, innovators and government policymakers together from more than 40 prominent local and global organisations. Following on from the Dubai Metaverse Strategy announced in July, the event certainly did its job of capturing the attention of global metaverse and Web3 leaders. More than 20,000 people worldwide watched the event virtually.

Many visions, opinions and predictions were shared, including new metaverse phrases such as Gross Metaverse Product and B2A (standing for business-to-avatar). However, the broad consensus at this event was that the future remains hard to predict! No one really knows how long the technology industry’s grand metaverse concepts will take to come together, when our new and existing virtual worlds will become interoperable, or what regulation is required to govern the metaverse. Meta intimated that many of the concepts being talked about today will take decades to become virtual reality.

One thing that technologists, investors, businesses and finance professionals did seem to agree on, was that the metaverse demands forward-looking policymaking and for governments to proactively set the agenda. They also all agreed that this was exactly what the Government of Dubai was doing, as one delegate put it ‘at light speed’, fast-developing the environment that metaverse, Web3 and DeFi businesses need to create and grow.

Speaking in the presence of His Highness Sheikh Hamdan bin Mohammed bin Rashid Al Maktoum, Crown Prince of Dubai, Chairman of The Executive Council of Dubai and Chairman of the Board of Trustees of the Dubai Future Foundation, H.E. Omar Sultan Al Olama, UAE Minister of State for Artificial Intelligence, delivered a compelling case for why Dubai is well-positioned to become a global metaverse hub. The Dubai Metaverse Strategy will both capitalise on Dubai’s strengths and invite metaverse firms to help shape how Dubai leverages the new technologies and develops new policy.

The Dubai Metaverse Strategy echoes some of the Dubai Blockchain Strategy announced in 2016, which included a goal for all Dubai government documents to be moved to blockchain. In recognition of the fact that there were both huge opportunities, yet many unknown facts, the government formed the Global Blockchain Council. Inviting global blockchain leaders and local innovators to contribute, the new council set about identifying how blockchain technology could help government, specific use cases and steps that would need to be taken to move forwards with the emerging technology.

As touched on in last week’s Middle East AI News, the Government of Dubai has worked hard to position itself to better identify, assess and act on upcoming digital and technology opportunities. Its open engagement of the private sector, combined with its agility in policymaking, now allows the emirate to move forward quickly and purposefully with initiatives like the Dubai Metaverse Strategy, together with supporting policy and regulatory frameworks. This, in turn, provides developers, innovators and platforms with the confidence to use Dubai as a base to push ahead with their own plans.

Keith Jordan, Vice President Innovation, Mastercard Labs summed it up in comments made on Day Two of the Dubai Metaverse Assembly, “What’s really amazing is that the [Dubai] vision is being set from the top down. That’s really important, because you need to set that North Star.”

And that’s what Dubai’s become extremely good at. Finding that ‘North Star’ and setting the coordinates.

This article was first posted in my weekly Middle East AI News on LinkedIn.


September 16, 2022
global-ai-summit-2022.png

The Global AI Summit 2022 brought together thousands of business leaders, policymakers and technology experts in Riyadh this week. Organised by the Saudi Data and Artificial Intelligence Authority, or SDAIA, under the theme of ‘Artificial intelligence for the good of humanity’, the event showcased the breadth and depth of the Kingdom’s National Strategy for Data and AI.

This week the second edition of the Global AI Summit took place in Riyadh (September 13th – 15th), bringing together a world-class roster of speakers with thousands of influential delegates under the theme of ‘Artificial intelligence for the good of humanity’.

Some may be quick to dismiss such conference themes for being a little too ambitious, or perhaps not representative of the conference content itself (often speakers at these things end up talking about what they want to talk about, regardless of any theme). However, this week’s Summit seemed to truly serve its stated purpose!

At the same time, the event gave us a glimpse of how the Kingdom’s National Strategy for Data & AI – which was officially launched at the first Global AI Summit in 2020 – is beginning to affect all aspects of Saudi Arabia’s public and private sectors, society and culture, education, R&D and policymaking.

For those close to the Kingdom’s data and AI initiatives, a lot of what was shared was perhaps already common knowledge. However, the Summit clearly went to great lengths to create a platform to engage decision makers and policymakers from across all sectors of business, government and society. Government departments, Saudi businesses and global technology firms were able to showcase a wide range of data and AI projects, many of which have been fast-tracked to begin delivering results at the earliest point possible.

Under the ‘for the good of humanity’ theme, the Summit also gave the government the opportunity to show that it is striving to ensure policymaking takes into account all aspects of local society and positions the Kingdom as a desirable partner for global organisations, businesses, plus other sovereign nations that want to embrace digital transformation.

Alongside the big deals such as SCAI‘s investment in a $207 million SenseTime Middle East and Africa joint venture, and Saudi Aramco‘s new $250 million ‘Global AI Corridor’ initiative, the government announced a new partnership with the I.T.U. to develop a new Global AI Readiness Framework, and that it was joining The World Bank’s Digital Development Partnership (DDP), which helps developing countries leverage digital innovations.

Was the Global AI Summit a big public relations exercise then? Well, of course it was, and by many accounts, a very successful one. However, it is the carefully curated content and discussion of the Summit that made it especially meaningful to the national AI strategy’s broad objectives and to other nations trying to reap the benefits of AI.

When one considers that the organiser of the conference, the Saudi Data and AI Authority (SDAIA), was first formed just three years ago, and that the National Strategy for Data & AI was approved by the Saudi King little more than two years ago, the progress made since is quite astonishing. The speed and effectiveness of government digital transformation programmes, not to mention the enormous investment in digital infrastructure, has also inspired Saudi businesses to ‘step up to the plate’.

Equally impressive is the public support that AI has in the Kingdom of Saudi Arabia. It may come as no surprise that, in sync with many countries worldwide, 77 per cent of Saudi Arabia’s government IT decision-makers are prioritising AI (YouGov/SAP 2022). After all, this is becoming the norm.

What is more unexpected is the level of support for AI technologies amongst the Saudi public. According to a World Economic Forum survey conducted by Ipsos at the end of last year, some 80 per cent of respondents from the Kingdom expected AI to change their lives, compared with less than half of respondents from Canada, Germany, France, the U.K., or from the U.S.

‘Artificial intelligence for the good of humanity’ becomes all the more meaningful, when your whole country is engaged in the objective.

This article was first posted in my weekly Middle East AI News on LinkedIn.