artificial intelligence Archives — Carrington Malin

July 10, 2021

The UAE Council for Digital Wellbeing and the UAE National Programme for Artificial Intelligence this week published a Deepfake Guide to help raise social awareness about the technology. But what are deepfakes and are they all bad? Here are my top 9 deepfake video predictions!

Deepfake videos first hit the headlines six years ago, when Stanford University researchers created a model that could change the facial expressions of famous people in video. In 2017, University of Washington researchers released a fake video of former President Barack Obama making a speech, generated using an artificial intelligence neural network model. It wasn’t long before consumers could create their own deepfake videos using a variety of tools, including deepfakesweb.com’s free online deepfake video application. Deepfake videos of presidents, actors, pop stars and many other famous people now circulate routinely on social media.

So, what’s all the fuss about? Simply, that fakes of any nature can be used to deceive people or organisations with malicious intent or for ill-gotten gains. This could include cyber-bullying, fraud, defamation, revenge-porn or simply misuse of video for profit. It perhaps comes as no surprise that a 2019 study found that 96 per cent of all deepfake videos online were pornographic.

However, using technology to produce fakes is not a new thing. The swift rise of the photocopier in the 1970s and 80s allowed office workers all over the world to alter and reproduce copies of documents, letters and certificates. The ease with which printed information could be copied and altered, prompted changes in laws, bank notes, business processes and the use of anti-counterfeit measures such as holograms and watermarks.

Like photocopying, deepfake video technology is getting better and better at what it does as time goes on, but at a much faster speed of development. This means that the cutting edge of deepfake technology is likely to remain ahead of AI systems developed to detect fake video for a long time to come.

Any technology can be used for good or evil. However, in a few short years deepfake technology has got itself a terrible reputation. So, what is it good for? My take is that deepfake video technology – or synthetic video for the commercially-minded – is just one aspect of artificial intelligence that is going to change the way that we use video, but it will be an important one. Here are my top nine deepfake video predictions.

1. Deepfake tech is going to get easier and easier to use

It’s now quite easy to create a deepfake video using the free and paid-for consumer apps that are already in the public domain. However, as the world learns to deal with deepfake video, the technology will eventually be embedded into more and more applications, such as your mobile device’s camera app.

2. Deepfake technology’s reputation is going to get worse

There’s an awful lot of potential left for deepfake scandal! The mere fact that developers are creating software that can detect deepfake video means that the small percentage of deepfake videos that cannot be identified as fake may be seen as having a virtual rubber stamp of approval! And believing that an influential deepfake video is authentic is where the problem starts.

3. Policymakers are going to struggle to regulate usage

Artificial intelligence is testing policymakers’ ability to develop and implement regulation like no other force before it. The issues are most obvious when deepfakes are used for criminal activity (the courts are already having to deal with deepfake video). In the near future, regulators are also going to have to legislate on the rise of ‘legitimate’ use, seamlessly altering video for education, business, government, politics and other spheres.

4. One:one messaging

One of the most exciting possibilities is how deepfake modelling might be used to create personalised one:one messaging. Today, it’s possible to create a video of yourself voicing a cute animation via an iPhone. Creating and sending a deepfake video of your real self will soon be as easy as sending a WhatsApp message. If that sounds too frivolous, imagine that you’re stuck in a meeting and want to send a message to your five-year-old.

5. Personalisation at scale

As the technology becomes easier to use and manipulate, and as the processing power becomes available to automate that process further, we’re going to be able to create extremely lifelike deepfake videos – or synthetic videos, if you prefer – at scale. London-based Synthesia is already testing personalised AI video messages. That will open the doors for marketers to personalise a new generation of video messages at scale and deliver a whole new experience to consumers. Imagine if every new Tesla owner received a personal video message from Elon Musk (well, ok, imagine something else then!).

6. Deepfakes on the campaign trail

As marketers get their hands on new tools to create personalised video messages for millions, there may be no stopping political parties from doing the same. Has your candidate been banned from social media? No problem! Send out a personalised appeal for support directly to your millions of supporters! In fact, this is one use that I could see being banned outright before it even gets started.

7. Video chatbots

There are already a number of developers creating lifelike synthetic video avatars for use as customer service chatbots, including Soul Machines and Synthesia. As AI generated avatars become more lifelike, the lines between different types of video avatars and AI altered deepfake videos are going to blur. The decisions on what platform, what AI technology, what video experience and what type of voice to add, are going to be based on creative preferences or brand goals, not technology.

8. Deepfake entertainment

Although some deepfake videos can be entertaining, their novelty value already seems to be fading. In the future, whether a deepfake is entertaining or not will depend on the idea and creativity behind it. We seem to be headed for some kind of extended reality music world, where music, musicians, voices, characters and context are all interchangeable, manipulated by increasingly sophisticated technology. The Korean music industry is already investing heavily in virtual pop stars and mixed reality concerts. Deepfake representations will not be far behind. After all, they’re already reading the news! The Chinese national news service (Xinhua) has been using an AI news anchor for the past two years.

9. Your personal AI avatar

In 2019, Biz Stone, co-founder of Twitter, and Lars Buttler, CEO of San Francisco-based The AI Foundation, announced that they were working on a new technology that would allow anyone to create an AI avatar of themselves. The AI avatar would look like them, talk like them and act like them, autonomously. In comparison, creating personal avatars using deepfake technology (i.e. manipulating already existing video) could be a lot easier to do. It remains to be seen how long it will take before we have the capability to have our own autonomous AI avatars, but creating our own personal AI video chatbot using deepfake tech is just around the corner!

I hope that you liked my top deepfake video predictions! But, what do you think? Will AI altered deepfakes and AI generated video avatars soon compete for our attention? Or will one negate the need for the other? And how long do you think it will be before consumers are over-targeted by personalised AI generated video? Be the first to comment below and win a tube of Pringles!

This article was first posted on Linkedin. If you’re interested in this sort of thing, I also wrote about deepfakes for The National a couple of years ago. You can find that article here.


April 14, 2021

Brazil’s national artificial intelligence strategy – a Estratégia Brasileira de Inteligência Artificial or EBIA in Portuguese – was published in the Diário Oficial da União, the government’s official gazette, last week. The publication of the Brazilian national AI strategy follows a year of public consultation (via an online platform) seeking recommendations from the technology industry and academic experts, benchmarking (2019-2020), and a year of planning and development. The strategy focuses on the policy, investment and initiatives necessary to stimulate innovation and promote the ethical development and application of AI technologies in Brazil, including education and research and development.

As a country that has struggled with both racial equality and gender equality, it’s no surprise that ethical concerns and policies are made a priority by EBIA. Therefore, core to the strategy is that AI should not create or reinforce prejudices, putting the onus on developers of artificial intelligence systems to follow ethical principles, meet regulatory requirements and ultimately take responsibility for how their systems function in the real world. Ethical principles will also be applied by the government in issuing tenders and contracts for solutions and services powered by AI. The strategy also embraces the OECD’s five principles for a human-centric approach to AI.

Brazil's national artificial intelligence strategy chart

It’s important when reviewing the new EBIA to take into account the Brazilian Digital Transformation Strategy (2018), or E-Digital, which puts in place some foundational policy relevant to AI. E-Digital defines five key goals: 1) promoting open government data availability; 2) promoting transparency through the use of ICT; 3) expanding and innovating the delivery of digital services; 4) sharing and integrating data, processes, systems, services and infrastructure; and 5) improving social participation in the lifecycle of public policies and services. This last goal was clearly embraced in the development of EBIA by including the year-long public consultation as part of the process.

More to follow on Brazil’s national artificial intelligence strategy…

Download A Estratégia Brasileira de Inteligência Artificial (EBIA) summary (PDF, Portuguese)

Also read about last year’s publication of the Indonesia National AI Strategy (Stranas KA) and Saudi Arabia’s National Strategy for Data & AI.


April 4, 2021

Budgets fueled by oil revenues and a relative lack of legacy systems offer distinct advantages to technology master planners. So, can the GCC leapfrog the West in AI adoption?

First-time visitors to Saudi Arabia, the United Arab Emirates or any of the other Gulf Cooperation Council (GCC) states cannot fail to be impressed by the pristine international airports, awe-inspiring highways and comprehensive digital government systems.

The region’s state-of-the-art infrastructure and ability to roll out advanced technology owe much, not only to oil revenues but also to the lack of legacy infrastructure and systems. This has allowed the Gulf states to leapfrog and embrace new technologies faster than many countries in the West. Now they’re hoping to do the same with artificial intelligence, by embracing AI faster than anyone else.

If the past month’s news is anything to go by, the GCC has recently switched its adoption of emerging technologies up a gear.

UAE reveals 4IR development strategy

Notably, amongst the many tech-related government announcements in March, the UAE last week revealed its new industrial development strategy, ‘Operation 300bn’.  The plan aims to create a new industrial ecosystem consisting primarily of high-tech and Fourth Industrial Revolution (4IR) ventures. The past five years have seen the Emirates push technological innovation to the top of the national agenda. The UAE was one of the first countries to announce a national AI strategy in 2017 and the primary motivation behind its widely publicised Mars Hope Probe is actually to help catalyse innovation at home.

‘Operation 300bn’, which aims to increase the industrial sector’s contribution to the UAE’s GDP from AED 133 billion ($36bn) to AED 300 billion ($82bn) by 2031, confirms the central position of an advanced technology agenda at the heart of the country’s policymaking.
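Taken at face value, those targets imply a compound annual growth rate of roughly 8.5 per cent for the industrial sector (a back-of-the-envelope sketch that assumes a ten-year run from 2021 to 2031 – the plan itself doesn't spell out a start year):

```python
# Implied compound annual growth rate (CAGR) for 'Operation 300bn':
# growing the industrial sector's GDP contribution from AED 133bn to AED 300bn.
# Assumes a ten-year horizon (2021-2031); the plan gives no explicit start year.
start_aed_bn = 133
target_aed_bn = 300
years = 10

cagr = (target_aed_bn / start_aed_bn) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # Implied CAGR: 8.5%
```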

Qatar and Saudi Arabia have also increased their 4IR focus during the past few years, with Saudi Arabia forming the Saudi Data and AI Authority (SDAIA) in 2019 and announcing its national AI strategy in October last year. This month Qatar signaled readiness to proceed with its own AI strategy, forming a new committee to help drive implementation.

Fast-tracking digital transformation

Meanwhile, we’ve seen both public and private sectors increase the rate of adoption of AI and other emerging technologies, further accelerated by the onset of Covid-19.

According to new results released from Dell Technologies’ 2020 Digital Transformation Index, Saudi Arabia and the UAE seem to be accelerating ahead of the rest of the world in implementing digital transformation and cutting-edge technologies. The research found that 90 percent of organisations in the two countries fast-tracked digital transformation programmes last year, ahead of the index’s global benchmark of 80 percent.

This fast adoption is evidenced by news of some massive technology projects that we’ve heard about during the past few weeks.

DP World, Dubai’s multinational logistics and container terminal operator, has now implemented a fully-automated Terminal Operating System for one of its key container terminals in Jebel Ali Port. The home-developed system includes autonomous systems and remote control functionality for all of the facilities in the terminal.

In the energy sector, Aramco Trading Company, or ATC, which is responsible for transporting Saudi Aramco’s oil supplies to worldwide markets, and developer Maana have implemented an AI maritime fleet optimisation application purpose-built for the oil and gas industry. The application runs a digital twin of ATC’s global maritime operations, using AI to automatically optimise schedules across the fleet with a single click and offer scenarios and insights to aid planning.

Desert smart cities

There was also no shortage of smart city news this month, with Kuwait, Saudi Arabia and the UAE, in particular, forging ahead with initiatives to improve the lives of city residents, boost competitiveness and develop urban sustainability. Dubai International Airport’s use of iris scanner ID systems for automated passport control made headlines in February. This month, a Dubai 2040 Urban Master Plan was announced to leverage city planning and new technologies to create greater urban sustainability.

In Kuwait, Hong Kong group Wai Hung and Investment Projects General Trading Company signed a deal to build one million smart parking spaces in nine countries across the Middle East. Meanwhile, in neighbouring Saudi Arabia, the holy city of Makkah (Mecca) is deploying solar-powered smart bins to collect and autonomously sort empty plastic bottles.

Abu Dhabi’s AI powerhouse Group 42 announced a new partnership with the UK’s Serco Group to develop AI and IoT solutions for facilities management and support the outsourcing company’s shift towards data-driven operations. We may well see the future impact of this partnership reach far beyond the Gulf.

In another Group 42-backed initiative announced this month, Abu Dhabi’s first public trial of driverless vehicle services will begin by the end of 2021. Initially, three autonomous vehicles will provide transport services to tourists and residents visiting the Yas Mall area, but the plan is to increase both the coverage and the number of AVs involved during 2022.

Building the Gulf’s first quantum computer

Quantum computing has already been identified as an area of opportunity by GCC states, with a number of quantum computing research groups being formed in universities in Qatar, Saudi Arabia and the United Arab Emirates.  This year, King Abdullah University of Science and Technology (KAUST) has been recruiting for a Professor of Devices for Quantum Technologies, who will ultimately lead the university’s efforts to build quantum devices.

However, in Abu Dhabi, the newly formed Technology Innovation Institute (TII) is already building its own quantum computer at its Quantum Research Centre, in collaboration with Barcelona-based deep-tech startup Qilimanjaro. TII is the applied research arm of the emirate’s Advanced Technology Research Council, which both formulates research policy and orchestrates projects, resources and funding.

It’s research and development ventures such as this that symbolise the latest dreams of Gulf policy-makers. Over the years, the Gulf states have proved to be astute buyers of advanced technology, while taking none of the risks necessary to develop innovation at home.

Today, along with ambitious policies to embrace emerging technologies, build smart cities and leverage AI, there is also now momentum behind policies that actively encourage home-grown technology development. The region’s nascent R&D sector has already become an early beneficiary of this policy shift and it’s a sector that the world can expect to hear much more from during the coming years.

This article was originally published as ‘A letter from the Gulf’ in The AI Journal.


March 31, 2021

The impact of AI in the Middle East special report is out from Middle East Economic Digest (MEED), which includes features covering innovation, digital transformation in the construction industry, an update on Qatar’s national artificial intelligence strategy and MEED’s own Digital Transformation Index.

I was name-checked in the ‘Creating an artificial intelligence ecosystem’ feature by Jennifer Aguinaldo, which explores the region’s quest to drive home-grown innovation and create an AI ecosystem that does more than simply buy technology from overseas. All the national AI strategies developed by countries around the region include plans to encourage innovation, incentivise startups and nurture local research and development. However, it is Saudi Arabia and the United Arab Emirates that have fast-tracked the most initiatives, policies and supporting government programmes over the past few years.

As is normally the case with Middle East Economic Digest, the impact of AI in the Middle East report is behind the paywall. If you are a MEED subscriber, you can read Jennifer’s full article here.


January 10, 2021

My New Year’s Linkedin poll about how people’s ethical concerns regarding AI have changed doesn’t prove much, but it does show that 2020 did little to ease those concerns.

Opinions and our level of understanding about artificial intelligence can vary a great deal from person to person. For example, I consider myself a bit of a technophile and an advocate of many technologies including AI, with a higher than average level of understanding. However, I harbour many concerns about the ethical application, usage and the lack of governance for some AI technologies. My knowledge doesn’t stop me having serious concerns, nor do those concerns stop me from seeing the benefits of technology applied well. I also expect my views on the solutions to ethical issues to differ from others. AI ethics is a complex subject.

So, my intention in running this limited Linkedin poll over the past week (96 people responded) was not to analyse the level of concern that people feel about AI, nor the reasons behind it, but simply to gauge whether the widespread media coverage of AI during the pandemic had heightened or alleviated people’s concerns.

The results of the poll show that few people (9%) felt that their ethical concerns about AI were alleviated during 2020. Meanwhile, a significant proportion (38%) felt that 2020’s media coverage had actually heightened their ethical concerns about AI. We can’t guess the level of concern among the third and largest group – the 53% that voted 2020 ‘hasn’t changed anything’ – however, it’s clear that 2020 media coverage about AI brought no news to alleviate any concerns they might have either.
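Given the reported percentages and 96 respondents, the underlying vote counts can be recovered as a quick sanity check (assuming the poll rounds each option to the nearest whole percentage):

```python
# Recover approximate vote counts from the reported poll percentages.
# Assumes the poll rounds each option to the nearest whole percentage.
respondents = 96
reported = {"alleviated": 9, "heightened": 38, "hasn't changed anything": 53}

counts = {option: round(pct / 100 * respondents) for option, pct in reported.items()}

# The recovered counts add back up to the number of respondents, and each
# count round-trips to the reported percentage.
assert sum(counts.values()) == respondents
for option, count in counts.items():
    assert round(count / respondents * 100) == reported[option]

print(counts)  # {'alleviated': 9, 'heightened': 36, "hasn't changed anything": 51}
```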

Artificial intelligence ethical concerns poll 2020

Media stories about the role of AI technologies in responding to the coronavirus pandemic began to appear early on in 2020, with governments, corporations and NGOs providing examples of where AI was being put to work and how it was benefiting citizens, customers, businesses, health systems, public services and society in general. Surely, this presented a golden opportunity for proponents of AI to build trust in its applications and technologies?

Automation and AI chatbots allowed private and public sector services, including healthcare systems, to handle customer needs as live person-to-person communications became more difficult to ensure. Meanwhile, credit was given to AI for helping to speed up data analysis, research and development to find new solutions, treatments and vaccines to protect society against the onslaught of Covid-19. Then there was the wave of digital adoption by retail companies (AI powered or not) in an effort to provide digital, contactless access to their services, boosting consumer confidence in services and increasing usage of online ordering and contactless payments.

On the whole, trust in the technology industry remains relatively high compared to other industries, but, nevertheless, trust is being eroded and it’s not a surprise that new, less understood and less regulated technologies such as AI are fueling concerns. Fear of AI-driven job losses is a popular concern, but so are privacy, security and data issues. However, many people around the world are broadly positive about AI, in particular those in Asia. According to Pew Research Center, two thirds or more of people surveyed in India, Japan, Singapore, South Korea and Taiwan say that AI has been a good thing for society.

Since the beginning of the pandemic, AI’s public image has had wins and losses. For example, research from Amadeus found that 56% of Indians believe new technologies will boost their confidence in travel. Meanwhile, a study of National Health Service (NHS) workers in London found that although 70% believed that AI could be useful, 80% of participants believed that there could be serious privacy issues associated with its use. However, despite a relatively high level of trust in the US for government usage of facial recognition, the Black Lives Matter protests of 2020 highlighted deep concerns, prompting Amazon, IBM and Microsoft to halt the sale of facial recognition to police forces.

Overall, I don’t think that AI has been seen as the widespread buffer to the spread of Covid-19 that it, perhaps, could have turned out to be. Renowned global AI expert Kai-Fu Lee commented in a webinar last month that AI wasn’t really prepared to make the decisive difference in combating the spread of the new coronavirus. With no grand victory over Covid-19 to attribute to AI’s role over the past year, it’s understandable that there was no grand victory for AI’s public image either. Meanwhile, all the inconvenient questions about AI’s future and the overall lack of clear policies that fuel concerns about AI remain, some even attracting greater attention during the pandemic.

This article was first posted on Linkedin.


September 5, 2020

The Saudi national artificial intelligence strategy is to be launched at the Global AI Summit, which will now take place virtually from 21-22 October*, according to a statement from the Saudi Data and Artificial Intelligence Authority (SDAIA) on Friday. It was disclosed in August that the national AI strategy presented by the authority (since named the National Strategy for Data & AI) had been approved by King Salman bin Abdulaziz Al Saud. PwC has forecast that AI could contribute $135 billion (or 12.4%) to Saudi Arabia’s GDP by the year 2030.
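Those two PwC figures let you back out the projected 2030 GDP they imply (a rough check, not a figure taken from the report itself):

```python
# Back out the implied 2030 GDP from PwC's forecast that AI could contribute
# $135bn, equal to 12.4% of Saudi Arabia's GDP, by 2030.
ai_contribution_bn = 135
share_of_gdp = 0.124

implied_gdp_bn = ai_contribution_bn / share_of_gdp
print(f"Implied 2030 GDP: ${implied_gdp_bn:,.0f}bn")  # Implied 2030 GDP: $1,089bn
```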

Established by royal decree in August 2019, the SDAIA was given the mandate to drive the national data and AI agenda for transforming the country into a leading data-driven economy, and has developed Saudi Arabia’s national AI strategy over the past year. Although the details of the plan have been kept under wraps, the new strategy is expected to contribute to 66 of the country’s strategic goals, which are directly or indirectly related to data and AI.

The SDAIA has already reached a number of milestones since its inception, establishing three specialised centres of expertise: the National Information Center, the National Data Management Office and the National Center for AI. It has also begun building one of the largest data clouds in the region by merging 83 data centres owned by over 40 Saudi government bodies. More than 80 percent of government datasets have so far been consolidated under a national data bank.

Meanwhile, the authority has been using AI to identify opportunities for improving the Kingdom’s government processes, which may result in some $10 billion in government savings and additional revenues.

Originally slated for March 2020, the Global AI Summit will discuss AI, its applications, impact on social and economic development, plus global challenges and opportunities. The event aims to connect key decision makers from government and public sector, academia, industry and enterprise, tech firms, investors, entrepreneurs and startups.

October’s virtual summit will be organised into four tracks:

    • Shaping the new normal;
    • AI and governments;
    • Governing AI; and
    • The future of AI.

The Global AI Summit aims to tackle the challenges faced by countries around the world, from technical to ethical. Details of the agenda and speaker platform for the Global AI Summit have yet to be announced, although the presentation of the Saudi national artificial intelligence strategy is bound to be a highlight.

*Updated 17 September 2020

Also read: Saudi national AI strategy announced with investment target of $20 billion – 21 October 2020


August 21, 2020

A great roundup about artificial intelligence in the Middle East by Damian Radcliffe, Carolyn S Chambers Professor in Journalism at the University of Oregon, which quotes me commenting on Saudi Arabia and the United Arab Emirates. With IT spending in the Middle East and Africa (MEA) forecast by IDC to reach $83 billion this year, AI is going to become an increasing focus.

IDC also predicts that investment in AI systems across MEA will hit $374.2 million this year, up from $261.8 million in 2018 and a projected expenditure of $310.3 million in 2019. However, with many AI technologies in high demand since the arrival of the Covid-19 pandemic, one has to wonder how this will affect IDC’s forecasts – not just in the MEA region, but globally too.
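IDC’s three data points translate into accelerating year-on-year growth, which is easy to verify from the figures quoted above:

```python
# Year-on-year growth in MEA AI systems spending, from IDC's figures
# ($ millions): 261.8 in 2018, a projected 310.3 in 2019, 374.2 in 2020.
spend = {2018: 261.8, 2019: 310.3, 2020: 374.2}

for year in (2019, 2020):
    growth = spend[year] / spend[year - 1] - 1
    print(f"{year - 1} -> {year}: {growth:.1%}")

# Compound annual growth rate across the whole two-year period.
cagr = (spend[2020] / spend[2018]) ** 0.5 - 1
print(f"CAGR 2018-2020: {cagr:.1%}")
```

Growth accelerates from roughly 18.5% to roughly 20.6% year on year, which is why the pandemic’s effect on these forecasts is worth watching.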

Saudi Arabia and the UAE had both begun investing in new AI technologies for government use, planning how to encourage AI-powered innovations and looking at regulatory requirements for their Fourth Industrial Revolution future. However, the advent of coronavirus has certainly fueled both interest and investment in artificial intelligence, with public and private sectors investing in automation, data analysis, robotics, health and safety systems, plus technologies to enhance contactless delivery of consumer services.

Despite forcing the cancellation of many high tech events around the region, the pandemic has also, arguably, fast tracked government plans and policies to harness AI and create a business environment conducive to driving successful digital economies. The UAE is reported to have improved plans for leveraging AI consistent with its national AI strategy, while Dubai announced a new comprehensive drone law in July. Meanwhile, Saudi Arabia approved its own national artificial intelligence strategy in August – and mooted that it would soon introduce a comprehensive law to govern commercial and recreational drone use in the Kingdom.

For more on artificial intelligence in the Middle East read Damian’s full article here.


August 16, 2020

The Indonesia National AI Strategy, now known as Stranas KA (Strategi Nasional Kecerdasan Artifisial), has been published. The new strategy was announced by the Minister of Research and Technology and head of BRIN (the National Research and Innovation Agency), Bambang PS Brodjonegoro, in a television address made last Monday to mark the country’s 25th National Technology Awakening Day. The minister also launched an electronic innovation catalogue, helping Indonesian technology developers to market their offerings and sell to government procurement offices.

Transforming Indonesia into a Fourth Industrial Revolution economy has become a focus for the government over the past few years and the necessity of creating a digital-savvy workforce has become a top priority. Stranas KA aims to tie together many of the country’s digital initiatives and maps closely to Visi Indonesia 2045, the country’s broad economic, social, governance and technology development strategy. The National Artificial Intelligence Strategy Framework provides an at-a-glance view of how these different goals are held in context.

Stranas KA aims to support five national priorities, where the government believes that artificial intelligence could have the biggest impact on national progress and outcomes.

Health services – With 268 million people living across 6,000 of Indonesia’s total 17,504 islands, delivering a consistent standard of healthcare is a national challenge. The archipelago also faces increased risks from global disease outbreaks such as SARS and, recently, Covid-19. The country’s response to the pandemic has already somewhat accelerated plans for smart hospitals and health security infrastructure.

Bureaucratic reform – With a civilian civil service of about 4 million, reforming the government’s highly centralised administration remains a significant challenge. Indonesia lags in the implementation of digital services, according to the United Nations E-Government Development Index (EGDI), ranking below Brunei, Malaysia, Singapore, Thailand and Vietnam. President Joko Widodo has promised to create a citizen-centric digitised service government (Pemerintahan Digital Melayani) within the next five years.

Education and research – Education is integral to Visi Indonesia 2045 and the move towards online schooling during the Covid-19 pandemic has laid bare the country’s digital divide. The pressures of the digital economy are also recognised by development plans. According to the government, Indonesia needs a digital workforce of 113 million by 2030-2035.

Food security – According to President Widodo, food security remains Indonesia’s top priority and the Food Security Agency focuses on three main areas: food availability, food accessibility and food utilisation. Food, agriculture and fisheries government departments and agencies have already begun using satellite technology, machine learning and smart farming to better plan, forecast and manage agricultural production and natural resources.

Mobility and smart cities – The number of people living in Indonesia’s urban areas is now close to 60 percent and is expected to rise to 70 percent of the total population by the year 2050. The government currently plans to develop 98 smart cities and 416 smart districts, under Indonesia’s 100 Smart Cities Plan.

Indonesia National AI Strategy, August 2020

Meanwhile, the Indonesia national AI strategy identifies four key focus areas:

    1. Ethics and Policy
    2. Talent Development
    3. Infrastructure and Data
    4. Industrial Research and Innovation

Indonesia is already one of South East Asia’s biggest investors in artificial intelligence, with IDC’s 2018 Asia-Pacific Enterprise Cognitive/AI survey finding that 25 percent of large organisations in the country have adopted AI systems (compared with 17% in Thailand, 10% in Singapore and 8% in Malaysia).

Smart cities, one of Stranas KA’s five top priority areas, have been identified as a fundamental building block for Indonesia’s Industry 4.0 future. Last year President Widodo announced plans to create a new futuristic smart city capital on the island of Borneo, to replace Jakarta. The new capital will rely heavily on sustainable smart city systems, cleantech and infrastructure run by emerging technologies such as 5G, AI and the Internet of Things (IoT). Originally slated for completion by 2024 (pre-pandemic) and estimated to cost $33 billion, the project reportedly received an offer from Japanese multinational investor SoftBank Group to invest up to $40 billion.

The Indonesia National AI Strategy details a programme roadmap for both its four key focus areas and the five national priorities, dividing plans into short-term (2020-2024) and longer-term (2025-2045) phases. In all, the strategy document identifies 186 programmes, including many that aim to develop the plans, pilot schemes, policies, regulations, and checks and balances necessary to drive the overall strategy.

Underpinning the acceleration of Indonesia’s artificial intelligence journey, Stranas KA includes plans for national standards, regulations and an ethics board to ensure that usage of AI is in accordance with the country’s Pancasila values system.

The development of the 194-page National Artificial Intelligence Strategy was coordinated by the Agency for the Assessment and Application of Technology or BPPT, a non-ministerial government agency under the coordination of the Ministry for Research and Technology, and was widely anticipated to be announced in July or August. A wide variety of public and private sector organisations contributed to the plan including government ministries, universities, industry associations and national telecom providers.

Although many of the programmes and initiatives detailed in the Indonesia National AI Strategy can be found in existing government strategies, plans and policy, Stranas KA is nevertheless highly ambitious. The success of the overall plan will likely rest heavily on how many of the foundation programmes it is able to get off the ground during the next 4-5 years.


February 10, 2020
what-will-ai-teach-our-children-1200.jpg

As our world becomes AI First, we’ll soon see a new generation of AI natives – those that have never known a world without AI assistance – with their own set of needs, behaviours and preferences.

My daughter learned to recite the alphabet from Youtube when she was three and taught both her mother and grandmother how to use Netflix at age four. It was then that she discovered Google Voice Search and was delighted when she searched for the children’s rhyme There was an old woman who swallowed a fly and instantly discovered a video of the song. Since then, of course, she’s become a user of Amazon Alexa and Google Home and — now seven years old — has her own tablet, but nevertheless still borrows mobile devices from anyone who will let her amuse herself with apps and voice queries. For parents these days, this is the new normal.

The unprecedented accessibility of today’s technology raises many questions for parents, educators, healthcare professionals and society as a whole. Until the arrival of the iPad’s tap-and-swipe interface, literacy served parental control very well. If your child couldn’t type — or at least read — then they could not do very much with the Internet, discover content, participate in digital messaging or, most importantly, use digital devices to get into any trouble.

In the 80s, access to computers was mostly limited to those who wanted to learn MS-DOS commands. With the proliferation of Microsoft Windows in the late 90s, users had to, at least, be able to read. In the 2000s, rich visual cues for point-and-click navigation on the Internet had begun to take over, but engaging still required a basic level of technical expertise. Fast forward to 2019 and many homes have multiple, always-on devices that can be activated by voice commands. The only requirement the system makes of the user is that they can speak a few words.

In the early 2000s, educational institutions, government departments and child welfare groups began campaigning in earnest for child safety on the Internet, raising awareness, for the most part, of dangers facing children from the age of 9 upwards who might be using the Internet unsupervised. Today, with the increasing popularity of artificial intelligence-powered virtual assistants and other smart devices, your child could be accessing the Internet at age three or four. At first, they won’t be able to do very much with that access, but they learn fast!

So, now our globally-networked, AI-powered technology has become accessible even to tiny tots, what impact does this have on parenting, learning and a child’s cognitive development?

Throughout most of the past two decades, the American Academy of Pediatrics stood by its strict recommendation to parents of absolutely no screen time of any kind before the age of 2. For parents with iPads and TV sets in the house trying to enforce this increasingly controversial rule, this was both frustrating and perplexing. It was hard to understand what the harm was in a one-year-old watching an hour of TV or nursery rhymes on an iPad. In 2016, the AAP repealed its no-screen rule and instead introduced a more practical set of guidelines for parents raising children in a multi-media environment.

Unfortunately for the AAP, it is likely that their new set of technology guidelines for parents will be obsolete quite soon. AI voice technologies are being rapidly adopted around the world, with the likes of Alexa and Google Assistant being incorporated into a wider and wider range of devices and becoming commonplace in households globally.

As any family that has these devices at home will already know, children can turn out to be the biggest users of virtual assistants, both via mobile devices and via smart speakers. Whilst the language barrier prevents one- and two-year-olds from accessing the technology, today’s parents can expect that it won’t be long after hearing baby’s first words that baby starts talking to AI.

Although circumstances obviously vary from child to child, according to their development and affinity to the technology, having always-on AI voice in the room raises its own set of questions.

For example, when does a child become aware that an AI voice device is not actually human? Is feeling empathy for a software programme a problem?

Should we, in the process of teaching our young children to be courteous, insist that they use pleases and thank yous when giving voice commands? If not, what are the implications of children growing up, from an early age, getting used to giving commands, while most parents are trying to teach them to be more polite?

Young children today are our first generation of AI natives. They will be the first generation to grow up never having known a world that wasn’t assisted by artificial intelligence. Like the digital-native generations before them, their needs and behaviours will be shaped by and in tune with prevailing technologies.

Whilst we can expect many false starts, artificial intelligence is going to be widely embraced by education systems to teach, tutor, test and grade school children and their work. In fact, it will prove to be pivotal to 21st century education.

Today, China is far in the lead in piloting new AI-powered school programmes. Some 60,000 schools in China — or nearly a quarter of those in the country — are currently piloting an AI system that grades student papers, identifies errors and makes recommendations to students on improvements such as writing style and the structure or theme of essays. A government programme led by scientists, the AI system is not intended to replace human teachers, but to improve efficiency and reduce time spent on reviewing and marking student papers. Teachers can then invest more time and energy in teaching itself.

Chinese after-school tutoring platform Knowbox has raised over $300 million in funding since its launch in 2014, to help school students learn via apps that provide highly personalised, curated lessons. It’s already working with 100,000 schools in China and has its sights set on the global education market.

Meanwhile, China is in the advanced stages of developing curricula on AI theory and coding for primary and secondary schools. Guangdong province, which borders Hong Kong and Macau, introduced courses on artificial intelligence to primary and middle school students from September 2019. The programme will be piloted in about 100 schools in the province, but by 2022 all primary and middle schools in the region’s capital Guangzhou will have AI courses incorporated into their regular curriculum.

Singapore launched its Code for Fun (CFF) schools programme in 2014 in selected schools, at first targeting about 93,000 students. Developed by the Ministry of Education and IMDA (the Infocomm Media Development Authority) the 10-hour programme teaches children core computing and coding concepts via simple visual programming-based lessons. All primary schools in Singapore will have adopted the programme by 2020.

Children growing up during the next decade will simply take AI for granted, as a pervasive new wave of AI-powered services supports their every want and need. However, just as this new generation will find it hard to understand what life was like before AI, older generations will find some of the new habits and behaviours of AI natives unfathomable.

For better or for worse, the drivers for AI development and deployment are economic and commercial. So, we can expect brands and commercial services to continue to be at the forefront of innovation in AI. Which means that, just as previous generations have been characterised as self-involved (beginning with the original ‘Me Generation’ of Baby Boomers), AI natives are likely to struggle to explain themselves in a world that seemingly revolves around them.

There’s been much public comment over the past ten years to suggest that Millennials — the age group born between 1981 and 1996 — have developed to be more narcissistic than previous generations. The familiar argument credits social media and ecommerce with driving young people’s excessive need for attention and instant gratification. Then again, it is true that every generation of adults seems to view the population’s youth as narcissistic.

“The children now love luxury; they have bad manners, contempt for authority; they show disrespect for elders and love chatter in place of exercise. Children are now tyrants, not the servants of their households. They no longer rise when elders enter the room. They contradict their parents, chatter before company, gobble up dainties at the table, cross their legs, and tyrannise their teachers.”

– Socrates, 5th Century B.C. Greek philosopher.

University researchers in Europe and the U.S. have spent the past decade trying to ascertain whether there has been a clear increase in narcissism, but the truth has turned out to be less straightforward than common prejudice suggests.

A study by a joint European-U.S. university research team published in Psychological Science suggested that there was a ‘small and continuous decline’ in narcissism among college students from 1992 to 2015. A recent study led by Eunike Wetzel, then a researcher at the University of Mannheim, due to be published in the Journal of Language and Social Psychology, found that, overall, narcissism seems to decline with age.

What is clear is that young people in our globally-connected and information-rich world do appear to be better educated and more worldly-wise than previous generations, often having more confidence and being far more concerned with climate change, the destruction of our environment and the future of our planet.

So, as our technology becomes AI first, we can hope that ubiquitous access to knowledge, education and tools to empower individual aspirations is going to be a positive thing.

On the home front, a big part of the problem with parental control is that, until very recently, computer systems have never been developed with the under-tens in mind, let alone the under-fives. In the past, due to the technical knowledge required and the convenient literacy barrier, software developers rarely had to take children into account. This is now changing quite swiftly.

Amazon introduced a child-focused version of its Echo smart speaker a year or two ago, with a parental control dashboard which gives parents the options to limit access, set a cut-off for bedtime and choose what Alexa skills their children are permitted to use. It also released a ‘Say the Magic Words’ skill to help teach children good manners.

Meanwhile, Google is continuing to develop the capabilities of Family Link, a parental control hub for family Google accounts introduced in 2017. It boasts features such as setting screen time limits, approving Android Apps and even the ability to lock children’s devices remotely. Google also allows parents to set up Google Home voice profiles for their children.

Both Google and Amazon allow virtual assistant users to turn off payment features to avoid accidental Barbie doll or remote-controlled toy orders.

The arrival of AI in our homes presents new challenges for parents, not entirely unlike the arrival of the television, video games, cable TV or the home broadband Internet connection. At first, parents and child experts alike will struggle to put the benefits and risks of AI voice devices into context. Many children will figure them out faster than either group.

This story first appeared on My AI Brand (Medium)


February 6, 2020
google-meena-chatbot.jpg

The tech giant’s new chatbot could make AI-powered communication more conversational and even more profitable

Since the launch of Apple’s Siri a decade ago, more than 1.5 billion virtual assistants have been installed on smartphones and other devices. There can be few electronics users who don’t recognise the enormous promise of conversational AI. However, our seemingly hard-of-hearing virtual assistants and awkward artificial intelligence chatbot conversations have also proven the technology’s limitations.

Anyone who uses AI assistants is sure to experience frequent misunderstandings, irrelevant answers and way too many ‘I don’t know’ responses, while many corporate chatbots simply serve up pre-defined bits of information whether you ask for them or not. So, while we have seen massive advances in natural language processing (NLP) during recent years, human-to-AI conversations remain far from ‘natural’.

But that may soon change.

Last week, a team from Google published an academic paper on ‘Meena’, an open-domain chatbot developed on top of a huge neural network and trained on about 40 billion words of real social media conversations. The result, Google says, is that Meena can chat with you about just about anything and hold a better conversation than any other AI agent created to date.

One of the things that Google’s development team has been working on is how to increase the chatbot’s ability to hold multi-turn conversations, where a user’s follow-up questions are considered by the AI in the context of the whole conversation so far. The team’s solution has been to build the chatbot on a neural network, a set of algorithms modeled loosely on the way the human brain works, designed to recognise patterns in data. This neural network was then trained on large volumes of data to create 2.6 billion parameters, which inform those algorithms and so improve Meena’s conversation quality.
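The multi-turn idea can be illustrated with a toy sketch: a context-free bot sees only the latest message, while a multi-turn bot feeds the whole conversation so far to its model as a single input, so follow-ups like “And tomorrow?” can be resolved. This is purely illustrative Python, not Meena’s actual implementation; the function name, the seven-turn window and the `[SEP]` separator are assumptions for the example.

```python
def build_model_input(turns, max_turns=7):
    """Concatenate the most recent conversation turns into one input
    string, so a model can interpret a follow-up question in context."""
    recent = turns[-max_turns:]  # keep only a fixed window of history
    return " [SEP] ".join(recent)

conversation = [
    "What's the weather like in Jakarta?",
    "It's hot and humid today.",
    "And tomorrow?",  # only makes sense given the earlier turns
]

# A context-free bot would receive just "And tomorrow?"; a multi-turn
# bot receives the whole exchange joined into a single model input.
print(build_model_input(conversation))
```

In practice, models like Meena learn this behaviour from training data rather than from hand-written rules, but the windowed-context input shown here is the basic shape of the problem.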

Creating conversational computer applications that can pass for human intelligence has been a core theme for both computer science and science fiction since the fifties. In 1950, Alan Turing, the famous British World War II codebreaker and one of the founding fathers of AI theory, devised a test to measure whether a computer system can exhibit intelligent behaviour indistinguishable from that of a human. Since then, the Turing Test has been somewhat of a Holy Grail for computer scientists and technology developers.

However, Google’s quest to develop a superior chatbot is far from academic. The global AI chatbot market offers one of the best examples for how AI can drive revenue for businesses. Business and government organisations worldwide are investing in chatbots, in an effort to enhance customer service levels, decrease costs and open up new revenue opportunities. According to research company Markets and Markets, the global market for conversational AI solutions is forecast to grow from $4.2 billion (Dh15.4bn) in 2019 to $15.7bn by the year 2024.

Chatbot solutions built for large enterprises have the ability to carry on tens of thousands of conversations simultaneously, drawing on millions of data points. Global advisory firm Gartner Group has found AI chatbots used for customer service can lead to reductions in customer calls, email and other enquiries by up to 70 per cent.

All this industry growth and customer service success is taking place despite the innumerable issues that users encounter when trying to have customer service conversations with AI chatbots. As consumers, we are now conditioned to dealing with technology that doesn’t quite work. If the benefits outweigh the frustration, we’re happy to work around the problem. When a chatbot can’t interpret our request, we rephrase our question or choose from the options offered, rather than expect it to solicit further information. Or, if we feel the conversation is just too much effort for the reward, we simply give up.

The latent opportunity for virtual customer assistants is that they could play an active role in defining needs and preferences in the moment, whilst in conversation with the customer, helping to create highly personalised services. Today, programmers have to limit the options that customer service chatbots offer, or too many conversations result in dead-ends, unmet requests and frustrated customers. So the choices chatbots offer customers are often as simple as A, B or C.

If developers can increase a chatbot’s ability to hold a more natural human conversation, then chatbots may have the opportunity to solicit more actionable data from customer conversations, resolve a wider range of customer issues automatically and identify additional revenue opportunities in an instant.

Given how fast the chatbot technology market is growing, the payback from enabling AI chatbots to bring customer conversations to a more profitable conclusion could register in the billions of dollars.

This story was first published in The National.