AI ethics Archives — Carrington Malin

January 10, 2025

It’s becoming a new communications quandary: when do you tell your audience that you’ve used AI in creating something?

When do you announce proudly that your new creation was produced using the latest AI technologies? When do you need a disclaimer? And is it ethical to keep quiet about it altogether? These are questions that I’ve given quite a lot of thought to over the past couple of years.

At this point, two years after the launch of OpenAI’s ChatGPT, it’s not hard to figure out that very soon everyone is going to use Generative AI tools to help them with everyday communications and writing, and to produce creative work.

However, I believe that we are still at the messy stage of GenAI!

The messy stage of GenAI

The quality of GenAI-generated content still varies greatly due to differences in technology platforms, the skills of the end user and the type of job at hand. This means that we’re going to continue to see a wide variety of content at varying levels of quality and effectiveness, and that most of us will be able to identify a high percentage of AI content when we see it. Spotting AI content is becoming a sort of superpower! Once you begin noticing AI content, you just can’t stop seeing it. So, in this environment, deciding when to be proud of your AI content and tell everyone what you’ve done, and when to keep quiet, can be a judgement call.

There are also, of course, ethical dilemmas that accompany AI content, including how to decide when AI has had a positive impact (added value) or a negative one (e.g. done someone out of a job). Then there is copyright, fair use of data, and the potential for AI plagiarism.

Timing

As with most things concerning communications, what you say and don’t say has a lot to do with timing. Firstly, many of the issues that we wrestle with today could be a thing of the past in five years’ time. Take, for example, the negative connotations of your multi-million-dollar business cancelling your photography agency’s contract because you’re going to save money by creating all your catalogue shots using AI. This is a very present-day issue. In ten years’ time, those photographers who remain in business will have adjusted to the new reality and no one will bat an eyelid if you never hire an agency of any kind, ever again.

Secondly, as with any other communications requirement, a little forethought and planning should allow you to work out what messages and policies to put in place when talking about AI in today’s environment, and then map out how these might change over the next year or two according to potential shifts in perceptions and reputational risks. Just because AI has some unknowns doesn’t mean that it can’t be planned for.

A little empathy goes a long way

The biggest risk, as usual, is not taking into account the perceptions of employees, customers and other stakeholders in your use of AI, and in your communications about it. Part of the problem here is that many organisations these days have a team of people who are well-versed in AI, but this often does not include the communications and marketing team!

So, does one announce “AI campaigns”? For me, it’s all about whether doing so helps meet the goals, resonates with the target audience and doesn’t risk upsetting other audiences. Whilst all your marketing counterparts may be jolly impressed that you created your latest campaign in one day and made it home in time for tea, your customers are likely to care more about your message and what that campaign means to them. It’s easy to let the ‘humble AI brag’ creep into communications because we all want to be seen moving with the times, but unless there’s a clear benefit for your key audiences, it really doesn’t belong there.

Transparency and authenticity

As with many corporate reputation risks, reviewing how and where more transparency should be offered on AI usage can help mitigate some of that risk. For example, making it clear that your website chat support is handled by an AI chatbot and not a human can help prevent customers from making false assumptions (and perhaps being unnecessarily annoyed or upset).

What about marketing content? Should you be transparent about what content was created using AI? My experience is that the more personal the communication, the more sensitivity there is. I may not care if your $100,000 billboard was created entirely by AI, but when I receive a personal email from you, I probably expect more authenticity.

A personal perspective

Last year, I began labelling my LinkedIn content to show where and how I used AI. The use of ChatGPT and other Generative AI tools to write posts, articles and comments has started to proliferate on LinkedIn. As you have probably seen yourself, sometimes people use GenAI to great effect and sometimes content lacks context, nuance and the human touch that makes it engaging. So, I’ve found that posting in this environment can invite scrutiny – and occasionally accusations that you are using AI to post.

I use AI extensively when planning, creating and repurposing content, but I still create more of my content with little or no help from AI. Although AI-generated content rarely accounts for more than 50% of any written work, I don’t want my audience to assume either that I’m using AI to generate everything or that I don’t use AI at all. Additionally, I would much rather that the focus remains on what my content communicates, rather than on what role AI played. So, I now add a footnote at the end of all my LinkedIn posts and articles, which mentions whether I’ve used AI and what I’ve used it for.

If you are guided by your goals, your audience, the context and the potential risks, then deciding how and when to communicate your use of AI can be very straightforward.

This article first appeared in my monthly AI First newsletter.

Image credit: Drazen Zigic via Freepik.


April 26, 2024

The Middle East region has limited oversight and no real regulation for AI; however, the technology’s influence is growing fast and it is already having an impact in recruitment, HR, banking, healthcare and security.

The fast-growing role of artificial intelligence is creating a need for Middle East AI regulation and governance. AI technologies can already impact who gets a job, who is eligible for a loan, and even what kind of medical treatment a patient receives.

MIT Sloan Management Review Middle East asked me to comment on the region’s AI policymaking related to ethics and governance, and also how businesses in the region are dealing with these issues.

Ethics, governance and responsible AI have definitely been added to government agendas during the past few years, as countries in the Middle East strive to embrace AI effectively and take their place in the global AI ecosystem. As a result, a growing number of guidelines and policy frameworks are being developed by government departments, regulators and public authorities in the region.

The Middle East is certainly watching the progress of the EU AI Act, and we expect to see AI regulation come to the region. Just as the GDPR inspired more data protection laws worldwide, and continues to do so, the EU AI Act might play a similar role.

The question is when AI governance policies will make it into legislation. The recent passing of the EU AI Act may prompt some countries to begin introducing legislation, but the truth is that no one really knows when the EU AI Act will come into effect and what revisions may be made before that happens.

However, in my view – due to the speed at which AI is accelerating, the sheer volume of use cases and its far-reaching impact – sitting back and waiting for government regulation should not be an option. Organisations, individuals and tech firms can help ensure a better AI future today, by embracing best practices for ethics and governance, and choosing products and services that make a commitment to the ethical, responsible and transparent use of AI.

You can read May El Habachi’s full article in MIT Sloan Management Review here.


January 10, 2021

My New Year’s LinkedIn poll about how people’s ethical concerns regarding AI changed doesn’t prove much, but it does show that 2020 did little to ease those concerns.

Opinions and levels of understanding about artificial intelligence can vary a great deal from person to person. For example, I consider myself a bit of a technophile and an advocate of many technologies, including AI, with a higher than average level of understanding. However, I harbour many concerns about the ethical application and usage of some AI technologies, and about the lack of governance for them. My knowledge doesn’t stop me having serious concerns, nor do those concerns stop me from seeing the benefits of technology applied well. I also expect my views on the solutions to ethical issues to differ from those of others. AI ethics is a complex subject.

So, my intention in running this limited LinkedIn poll over the past week (96 people responded) was not to analyse the level of concern that people feel about AI, nor the reasons behind it, but simply to find out whether the widespread media coverage about AI during the pandemic had either heightened or alleviated people’s concerns.

The results of the poll show that few people (9%) felt that their ethical concerns about AI were alleviated during 2020. Meanwhile, a significant proportion (38%) felt that 2020’s media coverage had actually heightened their ethical concerns about AI. We can’t guess the level of concern among the third and largest group – the 53% who voted that 2020 ‘hasn’t changed anything’ – however, it’s clear that the year’s media coverage about AI brought no news to alleviate whatever concerns they might have had, either.

Artificial intelligence ethical concerns poll 2020

Media stories about the role of AI technologies in responding to the coronavirus pandemic began to appear early on in 2020, with governments, corporations and NGOs providing examples of where AI was being put to work and how it was benefiting citizens, customers, businesses, health systems, public services and society in general. Surely, this presented a golden opportunity for proponents of AI to build trust in its applications and technologies?

Automation and AI chatbots allowed private and public sector services, including healthcare systems, to handle customer needs as live person-to-person communications became more difficult to ensure. Meanwhile, credit was given to AI for helping to speed up the data analysis, research and development needed to find new solutions, treatments and vaccines to protect society against the onslaught of Covid-19. Then there was the wave of digital adoption by retail companies (AI-powered or not) in an effort to provide digital, contactless access to their services, boosting consumer confidence and increasing usage of online ordering and contactless payments.

On the whole, trust in the technology industry remains relatively high compared to other industries but, nevertheless, trust is being eroded and it’s no surprise that new, less understood and less regulated technologies such as AI are fuelling concerns. Fear of AI-driven job losses is a common concern, but so are privacy, security and data issues. However, many people around the world are broadly positive about AI, in particular those in Asia. According to Pew Research Center, two-thirds or more of people surveyed in India, Japan, Singapore, South Korea and Taiwan say that AI has been a good thing for society.

Since the beginning of the pandemic, AI’s public image has had wins and losses. For example, research from Amadeus found that 56% of Indians believe new technologies will boost their confidence in travel. Meanwhile, a study of National Health Service (NHS) workers in London found that although 70% believed that AI could be useful, 80% of participants believed that there could be serious privacy issues associated with its use. However, despite a relatively high level of trust in the US for government usage of facial recognition, the Black Lives Matter protests of 2020 highlighted deep concerns, prompting Amazon, IBM and Microsoft to halt the sale of facial recognition to police forces.

Overall, I don’t think that AI has been seen as the widespread buffer against the spread of Covid-19 that it, perhaps, could have turned out to be. Renowned global AI expert Kai-Fu Lee commented in a webinar last month that AI wasn’t really prepared to make the decisive difference in combating the spread of the new coronavirus. With no grand victory over Covid-19 to attribute to AI’s role over the past year, it’s understandable that there was no grand victory for AI’s public image either. Meanwhile, all the inconvenient questions about AI’s future and the overall lack of clear policies that fuel concerns about AI remain, with some attracting even greater attention during the pandemic.

This article was first posted on LinkedIn.