February 2020 — Carrington Malin

Blogs, writing, published articles, media interviews and other news
February 19, 2020
[Image: defence-of-freelancers.png]

It may be easy to write off some freelance projects as learning experiments or acceptable ‘fast fails’, but, be honest, did you set direction clearly to begin with?

Whether you’re starting to market your first venture or helping someone else promote theirs, you’re probably already making use of online talent platforms to get things done by freelancers at a lower price point.

If you’ve been doing this for a while, then you may also have found that remote freelancers are often great for solving short-term needs ‘in the moment’, but their work isn’t always that useful over the longer term. You may even write off disappointing projects as learning experiments or fast fails that help you set direction. The honest truth, though, is that the freelancers you hired probably didn’t have enough direction to begin with, and more time invested at the outset would have helped you achieve a better result.

Process, process, process

When you engage a marketing or communications agency to develop creative work, advertising or PR campaigns, they’re likely to spend quite a lot of time focusing on your brief. This involves asking all the right questions about your goals, expectations and current situation, but it also involves challenging your assumptions and those of your competitors, and considering all the available options to ensure that creative efforts pay off and campaigns perform. It can be a time-intensive process and is one of the things that tends to make using traditional agencies more expensive than independent consultants, since agencies leverage a costly multidisciplinary team to achieve this.

When you contract freelancers, the traditional briefing and planning process can get completely thrown out of the window!

  • A carefully researched and planned brand identity process that takes weeks becomes a logo delivered in a day or two at a fraction of the cost.
  • A new website can be set up in a matter of days, or even hours, because the design, layout and structure will likely be based on the developer’s past work or available templates first, and your brand’s needs second.
  • Freelance writers can develop marketing copy fast, but, again, often based on their own past work, not your brand’s story or tone of voice, and without your other content plans in mind.

Clarity and context are all up to you

However, in defence of remote freelancers around the world, they are simply serving client needs. If you want something fast and cheap, without giving your venture’s needs much thought, someone will sell you a fast and cheap service. If you ask for an image to be edited for your marketing and don’t brief the designer on the creative direction you want, or on how the image should be consistent with the rest of your work, then that’s your own fault. If you ask for a logo for your new business cards without mentioning that you’d like to use it 100 times bigger on your front-of-house signage next month, don’t blame the designer when the design doesn’t scale and looks completely awful.

As part of a startup team, the likelihood is that you’re focused on outcomes, not on individual tasks. The most important thing is whether your efforts help your startup make progress. Consultants and freelancers, on the other hand, are focused on the task at hand and it is up to you to provide all the context and direction.

In fact, the value of all outsourced marketing and communications work depends on the clarity of the brief, answering the right questions and intelligent planning. If you’re using freelancers and remote workers to reduce expenses, then that puts you in charge of your own briefing, research and planning, working out how each piece of work best supports your goals and where it fits into the bigger picture. This takes time, quality of thought and some effort on your part to articulate company needs clearly in written form.

Skill-up and improve results

Ultimately – if you’re not from a marketing background or not intimately familiar with the type of project – you’ll need to develop a deeper understanding of advertising and communications in order to get the best out of an outsourced service. So, you’ll find that it’s well worth the effort to set aside a few hours a week for online learning, reading marketing books and, perhaps, ‘going back to school’ by taking relevant short courses.

If you don’t already have the experience or skills, then it may save you wasted time, effort and budget to work with a communications and marketing expert who can help you map out current and future needs, define clear marketing projects and focus those freelance resources more effectively.

Some projects are always going to work better than others, just as some freelancers are going to be a better fit than others. However, if you can improve your process, more of those marketing projects will prove to be worthwhile investments that help your venture grow, rather than simply expenses to be minimised.

This story was first published on SME10x.com.


February 10, 2020
[Image: what-will-ai-teach-our-children-1200.jpg]

As our world becomes AI First, we’ll soon see a new generation of AI natives – those who have never known a world without AI assistance – with their own set of needs, behaviours and preferences.

My daughter learned to recite the alphabet from YouTube when she was three and taught both her mother and grandmother how to use Netflix at age four. It was then that she discovered Google Voice Search and was delighted when she searched for the children’s rhyme There was an old woman who swallowed a fly and instantly found a video of the song. Since then, of course, she’s become a user of Amazon Alexa and Google Home and — now seven years old — has her own tablet, but nevertheless still borrows mobile devices from anyone who will let her amuse herself with apps and voice queries. For parents these days, this is the new normal.

The unprecedented accessibility of today’s technology raises many questions for parents, educators, healthcare professionals and society as a whole. Until the arrival of the iPad’s tap-and-swipe interface, literacy served parental control very well. If your child couldn’t type — or at least read — then they could not do very much with the Internet: discover content, participate in digital messaging or, most importantly, use digital devices to get into any trouble.

In the 80s, access to computers was mostly limited to those who were willing to learn MS-DOS commands. With the proliferation of Microsoft Windows in the late 90s, users had to, at least, be able to read. In the 2000s, rich visual cues for point-and-click navigation began to take over on the Internet, but engaging still required a basic level of technical skill. Fast forward to 2019 and many homes have multiple, always-on devices that can be activated by voice commands. The only requirement the system makes of the user is that they can speak a few words.

In the early 2000s, educational institutions, government departments and child welfare groups began campaigning in earnest for child safety on the Internet, raising awareness, for the most part, of the dangers facing children aged nine and upwards who might be using the Internet unsupervised. Today, with the increasing popularity of artificial intelligence-powered virtual assistants and other smart devices, your child could be accessing the Internet at age three or four. At first, they won’t be able to do very much with that access, but they learn fast!

So, now that our globally-networked, AI-powered technology has become accessible even to tiny tots, what impact does this have on parenting, learning and a child’s cognitive development?

Throughout most of the past two decades, the American Academy of Pediatrics stood by its strict recommendation to parents of absolutely no screen time of any kind before the age of two. For parents with iPads and TV sets in the house, trying to enforce this increasingly controversial rule was both frustrating and perplexing. It was hard to understand what the harm was in a one-year-old watching an hour of TV or nursery rhymes on an iPad. In 2016, the AAP repealed its no-screen rule and instead introduced a more practical set of guidelines for parents raising children in a multi-media environment.

Unfortunately for the AAP, it is likely that its new set of technology guidelines for parents will be obsolete quite soon. AI voice technologies are being rapidly adopted around the world, with the likes of Alexa and Google Assistant being incorporated into an ever wider range of devices and becoming commonplace in households globally.

As any family that has these devices at home will already know, children can turn out to be the biggest users of virtual assistants, both via mobile devices and via smart speakers. Whilst the language barrier prevents one- and two-year-olds from accessing the technology, today’s parents can expect that it won’t be too long after hearing baby’s first words that baby starts talking to AI.

Although circumstances obviously vary from child to child, according to their development and affinity for the technology, having an always-on AI voice in the room raises its own set of questions.

For example, when does a child become aware that an AI voice device is not actually human? Is feeling empathy for a software programme a problem?

Should we, in the process of teaching our young children to be courteous, insist that they say please and thank you when giving voice commands? If not, what are the implications of children getting used to issuing commands from an early age, while their parents are trying to teach them to be polite?

Young children today are our first generation of AI natives. They will be the first generation to grow up never having known a world that wasn’t assisted by artificial intelligence. Like the digital-native generations before them, their needs and behaviours will be shaped by, and in tune with, the prevailing technologies.

Whilst we can expect many false starts, artificial intelligence is going to be widely embraced by education systems to teach, tutor, test and grade school children and their work. In fact, it will prove to be pivotal to 21st century education.

Today, China is far in the lead in piloting new AI-powered school programmes. Some 60,000 schools in China — or nearly a quarter of those in the country — are currently piloting an AI system that grades student papers, identifies errors and makes recommendations to students on improvements such as writing style and the structure or theme of essays. Developed under a government programme led by scientists, the AI system is not intended to replace human teachers, but to improve efficiency and reduce the time spent reviewing and marking student papers. Teachers can then invest more time and energy in teaching itself.

Chinese after-school tutoring platform Knowbox has raised over $300 million in funding since its launch in 2014 to help school students learn via apps that provide highly personalised, curated lessons. It’s already working with 100,000 schools in China and has its sights set on the global education market.

Meanwhile, China is in the advanced stages of developing curricula on AI theory and coding for primary and secondary schools. Guangdong province, which borders Hong Kong and Macau, introduced courses on artificial intelligence for primary and middle school students from September 2019. The programme will be piloted in about 100 schools in the province, but by 2022 all primary and middle schools in the provincial capital, Guangzhou, will have AI courses incorporated into their regular curriculum.

Singapore launched its Code for Fun (CFF) schools programme in 2014 in selected schools, at first targeting about 93,000 students. Developed by the Ministry of Education and IMDA (the Infocomm Media Development Authority), the 10-hour programme teaches children core computing and coding concepts via simple visual programming-based lessons. All primary schools in Singapore will have adopted the programme by 2020.

Children growing up during the next decade will simply take AI for granted, as a pervasive new wave of AI-powered services supports their every want and need. However, just as this new generation will find it hard to understand what life was like before AI, older generations will find some of the new habits and behaviours of AI natives unfathomable.

For better or for worse, the drivers for AI development and deployment are economic and commercial, so we can expect brands and commercial services to continue to be at the forefront of innovation in AI. This means that, just as previous generations have been characterised as self-involved — beginning with the original ‘Me Generation’ of Baby Boomers — AI natives are likely to struggle to explain themselves in a world that seemingly revolves around them.

There’s been much public comment over the past ten years suggesting that Millennials — the age group born between 1981 and 1996 — have developed to be more narcissistic than previous generations. The familiar argument credits social media and ecommerce with driving young people’s need for excessive attention and instant gratification. Then again, it is true that every generation of adults seems to view the population’s youth as narcissistic.

“The children now love luxury; they have bad manners, contempt for authority; they show disrespect for elders and love chatter in place of exercise. Children are now tyrants, not the servants of their households. They no longer rise when elders enter the room. They contradict their parents, chatter before company, gobble up dainties at the table, cross their legs, and tyrannise their teachers.”

– Socrates, 5th Century B.C. Greek philosopher.

University researchers in Europe and the U.S. have spent the past decade trying to ascertain whether there has been a clear increase in narcissism, but the truth has turned out to be less straightforward than common prejudice suggests.

A study by a joint European-U.S. university research team, published in Psychological Science, suggested that there was a ‘small and continuous decline’ in narcissism among college students from 1992 to 2015. A recent study led by Eunike Wezel, then a researcher at the University of Mannheim, and due to be published in the Journal of Language and Social Psychology found that, overall, narcissism seems to decline with age.

What is clear is that young people in our globally-connected and information-rich world do appear to be better educated and more worldly-wise than previous generations, often showing more confidence and far more concern about climate change, the destruction of our environment and the future of our planet.

So, as our technology becomes AI first, we can hope that ubiquitous access to knowledge, education and tools to empower individual aspirations is going to be a positive thing.

On the home front, a big part of the problem with parental control is that, until very recently, computer systems were never developed with the under-tens in mind, let alone the under-fives. In the past, due to the technical knowledge required and the convenient literacy barrier, software developers rarely had to take children into account. This is now changing quite swiftly.

Amazon introduced a child-focused version of its Echo smart speaker a year or two ago, with a parental control dashboard that gives parents options to limit access, set a cut-off for bedtime and choose which Alexa skills their children are permitted to use. It also released a ‘Say the Magic Words’ skill to help teach children good manners.

Meanwhile, Google is continuing to develop the capabilities of Family Link, a parental control hub for family Google accounts introduced in 2017. It boasts features such as setting screen-time limits, approving Android apps and even the ability to lock children’s devices remotely. Google also allows parents to set up Google Home voice profiles for their children.

Both Google and Amazon allow virtual assistant users to turn off payment features to avoid accidental Barbie doll or remote-controlled toy orders.

The arrival of AI in our homes presents new challenges for parents, not entirely unlike the arrival of the television, video games, cable TV or the home broadband Internet connection. At first, parents and child experts alike will struggle to put the benefits and risks of AI voice devices into context. Many children will work them out faster than either.

This story first appeared on My AI Brand (Medium).


February 6, 2020
[Image: google-meena-chatbot.jpg]

The tech giant’s new chatbot could make AI-powered communication more conversational and even more profitable

Since the launch of Apple’s Siri a decade ago, more than 1.5 billion virtual assistants have been installed on smartphones and other devices. There can be few electronics users who don’t recognise the enormous promise of conversational AI. However, our seemingly hard-of-hearing virtual assistants and awkward artificial intelligence chatbot conversations have also proven the technology’s limitations.

Anyone who uses AI assistants is sure to experience frequent misunderstandings, irrelevant answers and way too many ‘I don’t know’ responses, while many corporate chatbots simply serve up pre-defined bits of information whether you ask for them or not. So, while we have seen massive advances in natural language processing (NLP) in recent years, human-to-AI conversations remain far from ‘natural’.

But that may soon change.

Last week, a team from Google published an academic paper on ‘Meena’, an open-domain chatbot developed on top of a huge neural network and trained on about 40 billion words of real social media conversations. The result, Google says, is that Meena can chat with you about just about anything and hold a better conversation than any other AI agent created to date.

One of the things that Google’s development team has been working on is how to improve the chatbot’s ability to hold multi-turn conversations, where a user’s follow-up questions are considered by the AI in the context of the whole conversation so far. The team’s solution has been to build the chatbot on a neural network: a set of algorithms, modelled loosely on the way the human brain works, designed to recognise patterns in data. This neural network was then trained on large volumes of data, tuning some 2.6 billion parameters that inform those algorithms and so improve Meena’s conversation quality.
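To make the idea of multi-turn context concrete, here is a minimal sketch in Python. It is not Google’s code: the `generate` function is a hypothetical stand-in for any trained seq2seq model’s decoder, and the point is simply that each reply is conditioned on the whole dialogue so far, not just the latest message.

```python
# A minimal sketch of multi-turn context handling (illustrative only).
# `generate` is a hypothetical stand-in for a trained seq2seq model.

def generate(context: str) -> str:
    """Pretend decoder: a real model would generate a reply from `context`."""
    return "(reply conditioned on: ..." + context[-40:] + ")"

class MultiTurnChatbot:
    def __init__(self, max_turns=7):
        self.history = []           # alternating user/bot turns
        self.max_turns = max_turns  # cap context length for long dialogues

    def reply(self, user_message):
        self.history.append("User: " + user_message)
        # Join the most recent turns into one input sequence, so the model
        # can resolve references like "they" in a follow-up question.
        context = " <eot> ".join(self.history[-self.max_turns:])
        bot_message = generate(context)
        self.history.append("Bot: " + bot_message)
        return bot_message

bot = MultiTurnChatbot()
bot.reply("Any good cafes in Rome?")
print(bot.reply("How expensive are they?"))  # "they" resolves via context
```

A single-turn bot sees only “How expensive are they?” and has no idea what “they” refers to; a multi-turn bot carries the earlier question along as context.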

Creating conversational computer applications that can pass for human intelligence has been a core theme of both computer science and science fiction since the fifties. In 1950, Alan Turing, the famous British World War II codebreaker and one of the founding fathers of AI theory, devised a test to measure whether a computer system can exhibit intelligent behaviour indistinguishable from that of a human. Since then, the Turing Test has been something of a Holy Grail for computer scientists and technology developers.

However, Google’s quest to develop a superior chatbot is far from academic. The global AI chatbot market offers one of the best examples of how AI can drive revenue for businesses. Business and government organisations worldwide are investing in chatbots in an effort to enhance customer service levels, decrease costs and open up new revenue opportunities. According to research company Markets and Markets, the global market for conversational AI solutions is forecast to grow from $4.2 billion (Dh15.4bn) in 2019 to $15.7bn by 2024.
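As a back-of-envelope check, that forecast implies a compound annual growth rate of roughly 30 per cent. The snippet below uses only the two figures cited above:

```python
# Implied compound annual growth rate of the conversational AI market,
# from the two figures cited above ($4.2bn in 2019, $15.7bn in 2024).
start, end, years = 4.2, 15.7, 5
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR 2019-2024: {cagr:.1%}")  # roughly 30%
```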

Chatbot solutions built for large enterprises can carry on tens of thousands of conversations simultaneously, drawing on millions of data points. Global advisory firm Gartner Group has found that AI chatbots used for customer service can reduce customer calls, emails and other enquiries by up to 70 per cent.

All this industry growth and customer service success is taking place despite the innumerable issues that users encounter when trying to hold customer service conversations with AI chatbots. As consumers, we are now conditioned to dealing with technology that doesn’t quite work. If the benefits outweigh the frustration, we’re happy to work around the problem. When a chatbot can’t interpret our request, we rephrase the question or choose from the options offered, rather than expect the bot to solicit further information. Or, if we feel the conversation is just too much effort for the reward, we simply give up.

The latent opportunity for virtual customer assistants is that they could play an active role in defining needs and preferences in the moment, whilst in conversation with the customer, helping to create highly personalised services. Today, programmers have to limit the options that customer service chatbots offer, or too many conversations result in dead ends, unmet requests and frustrated customers. So the choices offered to customers by chatbots are often as simple as A, B or C.

If developers can increase a chatbot’s ability to hold a more natural human conversation, then chatbots may have the opportunity to solicit more actionable data from customer conversations, resolve a wider range of customer issues automatically and identify additional revenue opportunities in an instant.

Given how fast the chatbot technology market is growing, the payback from enabling AI chatbots to bring customer conversations to a more profitable conclusion could register in the billions of dollars.

This story was first published in The National.


February 4, 2020
[Image: davos-wef-2020.jpg]

The WEF worked with more than 100 companies and tech experts to develop a new framework for assessing AI risk.

Companies are implementing new technologies faster than ever in the race to remain competitive, often without understanding the inherent risks.

In response to a growing need to raise awareness about the risks associated with artificial intelligence, the World Economic Forum, together with the Centre for the Fourth Industrial Revolution Network Fellows from Accenture, BBVA, IBM and Suntory Holdings, worked with more than 100 companies and technology experts over the past year to create the ‘Empowering AI Toolkit’. Developed with the structure of a company board meeting in mind, the toolkit provides a framework for mapping AI policy to company objectives and priorities.

Any board director reading through the WEF’s Empowering AI Toolkit will find it valuable not because it delivers any silver bullets, but because it can provide much-needed context and direction to AI policy discussions – without the need to hire expensive consultants.

The new framework identifies seven priorities, such as brand strategy and cybersecurity, each to be considered from an ethics, risk, audit and governance point of view. The toolkit was designed to mimic how board committees and organisations typically approach ethics, policy and risk.

Artificial intelligence promises to solve some of the most pressing issues faced by society, from ensuring fairer trade and reducing consumer waste, to predicting natural disasters and providing early diagnosis for cancer patients. But scandals such as big data breaches, exposed bias in computer algorithms and new solutions that threaten jobs can destroy brands and stock prices and irreparably damage public trust.

Facebook’s 2018 Cambridge Analytica data crisis opened the world’s eyes to the risks of trusting the private sector with detailed personal data. The fact that an otherwise unknown London analytics company had harvested data on 50 million Facebook users without their permission not only drew a public backlash, it sent Facebook’s market value plunging by $50 billion within a week of the episode being reported.

In addition to Facebook’s Cambridge Analytica woes, there have been a number of high-profile revelations that artificial intelligence systems used by both government and business have applied hidden bias when informing decisions that affect people’s lives. These include a number of cases where recruitment algorithms used by big companies turned out to be biased on the grounds of job candidates’ race or gender.

There is some awareness that new technologies can wreak havoc if not used carefully – but there isn’t enough. And it can be hard for corporate boards to predict where a pitfall may present itself on a company’s path to becoming more tech-savvy.

Despite all the warning signs, there remains an “it can’t happen here” attitude. Customer experience company Genesys recently asked more than five thousand employers in six countries for their opinions on AI and found that 54 per cent were not concerned about the unethical use of AI in their companies.

Many corporations have established AI working groups, ethics boards and special committees to advise on policy, risks and strategy. A new KPMG survey found that 44 per cent of businesses surveyed claimed to have implemented an AI code of ethics and another 30 per cent said that they are working on one. Since AI is an emerging technology, new risks are emerging too, and any company could use a road map.

One of today’s biggest AI risks for corporations is the use of what the WEF calls ‘inscrutable black box algorithms’. Simply put, most algorithms work in a manner understood only by the programmers who developed them. These algorithms are often considered valuable intellectual property, further reinforcing the need to keep their inner workings secret and thus removed from scrutiny and governance.

There are already a number of collaborations, groups and institutes that are helping to address some of these issues. The non-profit coalition Partnership on AI, founded by tech giants Amazon, DeepMind, Facebook, Google, IBM and Microsoft, was established to research best practices to ensure that AI systems serve society. Last year, Harvard Kennedy School’s Belfer Center for Science and International Affairs convened the inaugural meeting of The Council on the Responsible Use of Artificial Intelligence, bringing together stakeholders from government, business, academia and society to examine policymaking for AI usage.

However, the speed and ubiquity of artificial intelligence mean that even accurately defining certain risks remains a challenge, and even the best policies must allow for change. The good news is that the WEF’s new AI toolkit is available free of charge, so it could prove to be of immediate value to commercial policymakers the world over.

This story was first published in The National.