February 2020 — Carrington Malin

Blogs, writing, published articles, media interviews and other news
February 19, 2020

Whether you’re starting to market your first venture or helping someone else promote theirs, you’re probably already using talent platforms to get things done by online freelancers at a lower price point.

If you’ve been doing this for a while, then you may also have found that remote freelancers are often great for solving short-term needs ‘in the moment’, but their work isn’t always that useful for the longer term. You may even write off disappointing projects as learning experiments or fast fails that help you set direction. The honest truth, though, is that the online freelancers you used probably did not have enough direction to begin with, and more time invested at the outset would have helped you achieve a better result.

Continue reading this story on SME10x.com.


February 10, 2020

As our world becomes AI First, we’ll soon see a new generation of AI natives – those that have never known a world without AI assistance – with their own set of needs, behaviours and preferences.

My daughter learned to recite the alphabet from YouTube when she was three and taught both her mother and grandmother how to use Netflix at age four. It was then that she discovered Google Voice Search and was delighted when she searched for the children’s rhyme There was an old woman who swallowed a fly and instantly found a video of the song. Since then, of course, she’s become a user of Amazon Alexa and Google Home and, now seven years old, has her own tablet, but she nevertheless still borrows mobile devices from anyone who will let her amuse herself with apps and voice queries. For parents these days, this is the new normal.

The unprecedented accessibility of today’s technology raises many questions for parents, educators, healthcare professionals and society as a whole. Until the arrival of the iPad’s tap-and-swipe interface, literacy served parental control very well. If your child couldn’t type, or at least read, then they could not do very much with the Internet, discover content, participate in digital messaging or, most importantly, use digital devices to get into any trouble.

In the 80s, access to computers was mostly limited to those who wanted to learn MS-DOS commands. With the proliferation of Microsoft Windows in the late 90s, users had to, at least, be able to read. In the 2000s, rich visual cues for point-and-click navigation on the Internet began to take over, but engaging still required a basic level of technical know-how. Fast forward to 2019 and many homes have multiple, always-on devices that can be activated by voice commands. The only requirement the system makes of the user is that they can speak a few words.

In the early 2000s, educational institutions, government departments and child welfare groups began campaigning in earnest for child safety on the Internet, raising awareness, for the most part, of dangers facing children aged nine and upwards who might be using the Internet unsupervised. Today, with the increasing popularity of artificial intelligence-powered virtual assistants and other smart devices, your child could be accessing the Internet at age three or four. At first, they won’t be able to do very much with that access, but they learn fast!

So, now that our globally-networked, AI-powered technology has become accessible even to tiny tots, what impact does this have on parenting, learning and a child’s cognitive development?

Throughout most of the past two decades, the American Academy of Pediatrics stood by its strict recommendation to parents of absolutely no screen time of any kind before the age of two. For parents with iPads and TV sets in the house, trying to enforce this increasingly controversial rule was both frustrating and perplexing. It was hard to understand what harm there was in a one-year-old watching an hour of TV or nursery rhymes on an iPad. In 2016, the AAP repealed its no-screen rule and instead introduced a more practical set of guidelines for parents raising children in a multi-media environment.

Unfortunately for the AAP, it is likely that their new set of technology guidelines for parents will be obsolete quite soon. AI voice technologies are being rapidly adopted around the world, with the likes of Alexa and Google Assistant being incorporated into a wider and wider range of devices and becoming commonplace in households globally.

As any family that has these devices at home will already know, children can turn out to be the biggest users of virtual assistants, both via mobile devices and via smart speakers. Whilst the language barrier prevents one- and two-year-olds from accessing the technology, today’s parents can expect that it won’t be too long after hearing baby’s first words that baby starts talking to AI.

Although circumstances obviously vary from child to child, according to their development and affinity for the technology, having always-on AI voice in the room raises its own set of questions.

For example, when does a child become aware that an AI voice device is not actually human? Is feeling empathy for a software programme a problem?

Should we, in the process of teaching our young children to be courteous, insist that they use pleases and thank yous when giving voice commands? If not, what are the implications of children growing up, from an early age, getting used to giving commands, while most parents are trying to teach them to be more polite?

Young children today are our first generation of AI natives. They will be the first generation to grow up never having known a world without artificial intelligence assistance. Like the digital-native generations before them, their needs and behaviours will be shaped by and in tune with prevailing technologies.

Whilst we can expect many false starts, artificial intelligence is going to be widely embraced by education systems to teach, tutor, test and grade school children and their work. In fact, it will prove to be pivotal to 21st century education.

Today, China is far in the lead in piloting new AI-powered school programmes. Some 60,000 schools in China, nearly a quarter of those in the country, are currently piloting an AI system that grades student papers, identifies errors and recommends improvements to students, such as to the writing style and the structure or theme of essays. Developed under a government programme led by scientists, the AI system is not intended to replace human teachers, but to improve efficiency and reduce the time spent reviewing and marking student papers. Teachers can then invest more time and energy in teaching itself.

Chinese after-school tutoring platform Knowbox has raised over $300 million in funding since its launch in 2014 to help school students learn via apps that provide highly personalised, curated lessons. It’s already working with 100,000 schools in China and has its sights set on the global education market.

Meanwhile, China is in the advanced stages of developing curricula on AI theory and coding for primary and secondary schools. Guangdong province, which borders Hong Kong and Macau, introduced courses on artificial intelligence to primary and middle school students from September 2019. The programme will be piloted in about 100 schools in the province, but by 2022 all primary and middle schools in the region’s capital Guangzhou will have AI courses incorporated into their regular curriculum.

Singapore launched its Code for Fun (CFF) schools programme in 2014 in selected schools, at first targeting about 93,000 students. Developed by the Ministry of Education and IMDA (the Infocomm Media Development Authority), the 10-hour programme teaches children core computing and coding concepts via simple visual programming-based lessons. All primary schools in Singapore will have adopted the programme by 2020.

Children growing up during the next decade will simply take AI for granted, as a pervasive new wave of AI-powered services supports their every want and need. However, just as this new generation will find it hard to understand what life was like before AI, older generations will find some of the new habits and behaviours of AI natives unfathomable.

For better or for worse, the drivers for AI development and deployment are economic and commercial, so we can expect brands and commercial services to continue to be at the forefront of innovation in AI. This means that, just as previous generations have been characterised as self-involved, beginning with the original ‘Me Generation’ of Baby Boomers, AI natives are likely to struggle to explain themselves in a world that seemingly revolves around them.

There’s been much public comment over the past ten years suggesting that Millennials, the age group born between 1981 and 1996, have developed to be more narcissistic than previous generations. The familiar argument credits social media and ecommerce with driving young people’s need for excessive attention and instant gratification. Then again, it is true that every generation of adults seems to view the population’s youth as narcissistic.

“The children now love luxury; they have bad manners, contempt for authority; they show disrespect for elders and love chatter in place of exercise. Children are now tyrants, not the servants of their households. They no longer rise when elders enter the room. They contradict their parents, chatter before company, gobble up dainties at the table, cross their legs, and tyrannise their teachers.”

– Socrates, 5th Century B.C. Greek philosopher.

University researchers in Europe and the U.S. have spent the past decade trying to ascertain whether there has been a clear increase in narcissism, but the truth has proved less straightforward than common prejudice suggests.

A study by a joint European-U.S. university research team, published in Psychological Science, suggested that there was a ‘small and continuous decline’ in narcissism among college students from 1992 to 2015. A recent study led by Eunike Wetzel, then a researcher at the University of Mannheim, and due to be published in the Journal of Language and Social Psychology found that, overall, narcissism seems to decline with age.

What is clear is that young people in our globally-connected and information-rich world do appear to be better educated and more worldly-wise than previous generations, often having more confidence and being far more concerned with climate change, the destruction of our environment and the future of our planet.

So, as our technology becomes AI first, we can hope that ubiquitous access to knowledge, education and tools to empower individual aspirations is going to be a positive thing.

On the home front, a big part of the problem with parental control is that, until very recently, computer systems were never developed with the under-tens in mind, let alone the under-fives. In the past, due to the technical knowledge required and the convenient literacy barrier, software developers rarely had to take children into account. This is now changing quite swiftly.

Amazon introduced a child-focused version of its Echo smart speaker a year or two ago, with a parental control dashboard that gives parents options to limit access, set a bedtime cut-off and choose which Alexa skills their children are permitted to use. It also released a ‘Say the Magic Words’ skill to help teach children good manners.

Meanwhile, Google is continuing to develop the capabilities of Family Link, a parental control hub for family Google accounts introduced in 2017. It boasts features such as setting screen time limits, approving Android Apps and even the ability to lock children’s devices remotely. Google also allows parents to set up Google Home voice profiles for their children.

Both Google and Amazon allow virtual assistant users to turn off payment features to avoid accidental Barbie doll or remote-controlled toy orders.

The arrival of AI in our homes presents new challenges for parents, not entirely unlike the arrival of the television, video games, cable TV or the home broadband Internet connection. At first, parents and child experts alike will struggle to put the benefits and risks of AI voice devices into context. Many children will master these devices faster than either group.

This story first appeared on My AI Brand (Medium)


February 6, 2020

A team from Google published an academic paper on ‘Meena’, a more human-like chatbot that can hold multi-turn conversations. The open-domain chatbot has been developed on top of a huge neural network and trained on about 40 billion words of real social media conversations. The result, Google says, is that Meena can chat with you about just about anything and hold a better conversation than any other AI agent created to date.

Anyone who uses AI assistants is sure to experience frequent misunderstandings, irrelevant answers and far too many ‘I don’t know’ responses, while many corporate chatbots simply serve up pre-defined bits of information whether you ask for them or not. So, while we have seen massive advances in natural language processing (NLP) in recent years, human-to-AI conversations remain far from ‘natural’. Google’s Meena project could prove to be a positive step forward.

Continue reading this story in The National.


February 4, 2020

New AI policy toolkits are under development by a variety of NGOs, think tanks, management consultancies and global technology firms. The underlying issue is that companies are implementing new technologies faster than ever in the race to remain competitive, often without understanding the inherent risks.

In response to a growing need to raise awareness about the risks associated with artificial intelligence, the World Economic Forum, together with the Centre for the Fourth Industrial Revolution Network Fellows from Accenture, BBVA, IBM and Suntory Holdings, worked with more than 100 companies and technology experts over the past year to create a new AI policy toolkit.

Developed with the structure of a company board meeting in mind, the toolkit provides a framework for mapping artificial intelligence policy to company objectives and priorities.

Continue reading this story in The National.