policy Archives — Carrington Malin

March 31, 2020
uae-voip-during-coronavirus.png

CIO Middle East recently asked me about recent moves by the UAE’s Telecommunications Regulatory Authority (TRA) to unblock certain VoIP and video call apps. In March 2020, the TRA lifted a long-held ban on a few popular apps such as Microsoft Teams and Zoom as it announced measures to promote remote working.

It’s an interesting subject since, although the UAE is one of the most technologically advanced nations on the planet, pressure from telecom service providers has kept most of the world’s most popular voice and video call apps, such as FaceTime, Skype and WhatsApp, effectively blocked. So, now that access to a number of video call apps has been allowed, what will happen when the coronavirus emergency ends? Will the TRA block those apps again, or does this mark the beginning of a new era of policy?

Read the full article here.



March 13, 2020
amazon-prime-drone-delivery.jpg

A pandemic could be a tipping point for a technology we’ve been promised for nearly a decade.

China’s use of drones and robots in its fight to contain Covid-19 is now well publicised, with medical authorities and security forces employing autonomous agents to limit human contact and so slow the spread of the virus.

However, e-commerce companies and other technology firms have been talking about the promise of commercial drone services for some years now, where drones can deliver prepared food, groceries, medicine and other online purchases directly to the consumer in a matter of minutes.

In reality, while the technology is largely ready, commercial drone delivery – a new market that, according to Markets and Markets, could be worth more than $27 billion (Dh99bn) by 2030 — has long been held up by lack of government policy and regulation. Could the demands of fighting Covid-19 actually fast-track the introduction of drones to the masses?

The benefits of drone delivery have already been clearly demonstrated by China’s response to the coronavirus emergency. In an effort to both increase the speed of medical deliveries and remove unnecessary human contact, drones have been used widely in the country’s virus-stricken provinces.

Chinese firms, such as e-commerce giant JD.com, drone delivery start-up Antwork, drone maker MicroMultiCopter and China’s largest private courier company SF Express, have all deployed drones to help deliver medical supplies and transport medical samples for analysis. By using drones, healthcare authorities are assured of faster, “contactless” delivery.

China has also used drones fitted with thermal cameras to scan crowds and identify those who may be in need of medical treatment; drones have been carrying sanitisers to spray densely populated communities; and police security drones have reminded city pedestrians to wear protective face masks. Chinese drone manufacturer DJI mounted loudspeakers on its drones to help police disperse crowds in public places.

Predictably, China’s use of drones has also sparked new concerns about electronic surveillance and infringements on human rights from private tech companies.

However, for those watching the development of drone delivery services, it appears the vast majority of drone usage during China’s health emergency has been state-sponsored and limited in scope. Although drones have been used for emergency food delivery by government authorities, commercial services haven’t moved beyond pilot projects.

Food and shopping delivery trials have been conducted by Antwork, Alibaba, JD.com and others, but, despite the new public demand for contactless delivery, no fully commercial services have yet been rolled out. Delivery app Meituan Dianping began delivering grocery orders in China using autonomous vehicles last month, as part of a contactless delivery initiative, but has not yet been able to launch its planned drone delivery service.

Elsewhere in the world, drone delivery trials have been taking place for years. Amazon began talking about plans for e-commerce deliveries using unmanned aerial vehicles (UAVs) in 2013 and launched its Prime Air aerial delivery system brand in 2016, making its first drone delivery to a customer in the city of Cambridge, England. However, despite announcements over the past year that Prime Air would be ready to begin commercial services in the UK within a matter of months, no service has been rolled out yet.

Irish drone delivery firm Manna plans to launch a new food delivery pilot this month, servicing a suburb of about 30,000 South Dublin residents. American ice cream brand Ben & Jerry’s, UK food delivery firm Just Eat, and Irish restaurant chain Camile Thai have all signed up to take part in the pilot project.

Meanwhile, Alphabet-owned Wing, which last year became the first drone operator to be certified as an air carrier by the US Federal Aviation Administration, has been testing drone delivery more extensively in Australia, Finland and the US, completing over 80,000 flights. Since then, logistics giant UPS has also received US government approval to operate a drone carrier in preparation for making commercial, medical and industrial deliveries.

As governments across the globe look for ways to prevent the spread of a new highly contagious virus, fast-tracking contactless delivery options seems to make enormous sense. Contactless deliveries of all kinds could prove to be a vital tool to limit exposure to infected individuals, deliver essential medicine and food to high-risk locations and allow people to self-isolate, while still being able to get their groceries.

No doubt the logistics and e-commerce industries are hoping the coronavirus crisis brings more government focus to clearing these regulatory obstacles.

This story was first published in The National.


February 4, 2020
davos-wef-2020.jpg

The WEF worked with more than 100 companies and tech experts to develop a new framework for assessing AI risk.

Companies are implementing new technologies faster than ever in the race to remain competitive, often without understanding the inherent risks.

In response to a growing need to raise awareness about the risks associated with artificial intelligence, the World Economic Forum, together with the Centre for the Fourth Industrial Revolution Network Fellows from Accenture, BBVA, IBM and Suntory Holdings, worked with more than 100 companies and technology experts over the past year to create the ‘Empowering AI Toolkit’. Developed with the structure of a company board meeting in mind, the toolkit provides a framework for mapping AI policy to company objectives and priorities.


Any board director reading through WEF’s Empowering AI Toolkit will find it valuable not because it delivers any silver bullets, but because it can provide much-needed context and direction to AI policy discussions – without having to hire expensive consultants.

The new framework identifies seven priorities, such as brand strategy and cybersecurity, to be considered from an ethics, risk, audit and governance point of view. The toolkit was designed to mimic how board committees and organisations typically approach ethics, policy and risk.

Artificial intelligence promises to solve some of the most pressing issues faced by society, from ensuring fairer trade and reducing consumer waste, to predicting natural disasters and providing early diagnosis for cancer patients. But scandals such as big data breaches, exposed bias in computer algorithms and new solutions that threaten jobs can destroy brands and stock prices and irreparably damage public trust.

Facebook’s 2018 Cambridge Analytica data crisis opened the world’s eyes to the risks of trusting the private sector with detailed personal data. The fact that an otherwise unknown London analytics company had harvested data on 50 million Facebook users without their permission not only drew a public backlash, it sent Facebook’s market value plunging by $50 billion within a week of the episode being reported.

In addition to Facebook’s Cambridge Analytica woes, there have been a number of high-profile revelations that artificial intelligence systems used by both government and business have applied hidden bias when informing decisions that affect people’s lives. These include a number of cases where algorithms used by big companies in recruitment have been biased based on the race or gender of job candidates.

There is some awareness that new technologies can wreak havoc if not used carefully – but not enough, and corporate boards can find it hard to predict where a pitfall may present itself on a company’s path to becoming more tech-savvy.

Despite all the warning signs, there remains an “it can’t happen here” attitude. Customer experience company Genesys recently asked more than 5,000 employers in six countries for their opinions on AI and found that 54 per cent were not concerned about the unethical use of AI in their companies.

Many corporations have established AI working groups, ethics boards and special committees to advise on policy, risks and strategy. A new KPMG survey found that 44 per cent of businesses surveyed claimed to have implemented an AI code of ethics and another 30 per cent said that they are working on one. Since AI is an emerging technology, new risks are emerging too. Any company could use a road map.

One of today’s biggest AI risks for corporations is the use of, as WEF calls them, ‘inscrutable black box algorithms’. Simply put, most algorithms work in a manner understood only by the programmers who developed them. These algorithms are often considered to be valuable intellectual property, further reinforcing the need to keep their inner workings secret and thus removed from scrutiny and governance.

There are already a number of collaborations, groups and institutes that are helping to address some of these issues. The non-profit coalition Partnership on AI, founded by tech giants Amazon, DeepMind, Facebook, Google, IBM and Microsoft, was established to research best practices to ensure that AI systems serve society. Last year, Harvard Kennedy School’s Belfer Center for Science and International Affairs convened the inaugural meeting of The Council on the Responsible Use of Artificial Intelligence, bringing together stakeholders from government, business, academia and society to examine policymaking for AI usage.

However, the speed and ubiquitous nature of artificial intelligence mean that even accurately defining certain risks remains a challenge. Even the best policies must allow for change. The good news is that WEF’s new AI toolkit is available free of charge and so could prove to be of immediate value to commercial policymakers the world over.

This story was first published in The National.