ATRC Archives — Carrington Malin

March 18, 2023

In a week full of big technology news, both globally and regionally, the Abu Dhabi government’s leading applied research centre has announced that it has developed one of the highest-performing large language models (LLMs) in the world. It’s a massive win for the research institute and proof positive of Abu Dhabi’s fast-growing R&D ecosystem.

Technology firms have gone to great lengths since ChatGPT’s public beta was introduced in November to make it clear that OpenAI’s GPT series of models are not the only highly advanced LLMs in development.

Google in particular has moved fast to demonstrate that its supremacy in search is not under threat from OpenAI’s models. On Tuesday, the search giant announced a host of AI features coming to its Gmail platform, apparently in answer to Microsoft’s plans to build both its own AI models and OpenAI’s into products across its portfolio. However, on the same day, OpenAI revealed that it had begun rolling out its most advanced and human-like AI model to date: GPT-4.

Whatever way you look at it, OpenAI’s GPT-3 was an incredible feat of R&D. It has many limitations, as users of its ChatGPT public beta can attest, but it also showcases powerful capabilities and hints at the future potential of advanced AI models. The triumph for OpenAI, though, and perhaps for the AI sector as a whole, was the enormous publicity and public recognition of AI’s potential. Now everyone thinks they understand what AI can do, even though they are sure to be further educated by GPT-4 and the new wave of applications built on advanced AI models heading their way.

So, what does this emerging wave of LLMs mean for the other research labs and R&D institutions developing their own AI models around the world? To begin with, the bar for entry into LLM development is set high, in terms of both the technology and the budget required.

Advanced AI models today are trained, not born. GPT-3 was trained on hundreds of billions of words, numbers and computer code. According to a blog from San Francisco-based Lambda Labs, training GPT-3 might take 355 ‘GPU-years’ at a cost of $4.6 million for a single training run. Meanwhile, running ChatGPT reportedly costs OpenAI more than $100,000 per day. R&D labs competing in the world of LLMs clearly need deep pockets.
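For a rough sense of where figures like these come from, here is a back-of-the-envelope sketch in Python. The per-GPU-hour price is simply the rate implied by the numbers quoted above, used for illustration only, not an official figure from Lambda Labs or OpenAI.

```python
# Back-of-the-envelope estimate of a single GPT-3 training run,
# following the Lambda Labs-style figures cited above.

GPU_YEARS = 355                 # estimated training compute for GPT-3
HOURS_PER_YEAR = 24 * 365       # one GPU-year = 8,760 GPU-hours
PRICE_PER_GPU_HOUR = 1.48       # assumed cloud price in USD (illustrative only)

gpu_hours = GPU_YEARS * HOURS_PER_YEAR
training_cost = gpu_hours * PRICE_PER_GPU_HOUR

print(f"{gpu_hours:,} GPU-hours -> about ${training_cost:,.0f} per training run")
# ~3.1 million GPU-hours -> roughly $4.6 million, in line with the estimate above
```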

Then, it perhaps goes without saying, but institutions planning to develop breakthrough LLMs must also have the right talent. And of course, global competition for the kind of top researchers needed to develop new AI models is fierce, to say the least!

Just as critical as having the right talent and the right budget is having the right vision. R&D institutions are often bogged down in bureaucracy, while most tech firms are, necessarily, focused on short-term rewards. To win this game, the players must have the vision to invest in developing AI models that are ‘ahead of the curve’ and the commitment to stick with it.

Therefore, for those following Abu Dhabi’s R&D story, it comes as no great surprise that the Technology Innovation Institute (TII) has been investing heavily in the development of LLMs.

Formed in 2020 as the applied research arm of the Abu Dhabi Government’s Advanced Technology Research Council (ATRC), TII was founded to deliver discovery science and breakthrough technologies with global impact. An AI research centre was created in 2021, now called the AI and Digital Science Research Centre (AIDRC), both to support AI plans across the institute’s domain-focused labs and to develop its own research. Overall, TII now employs more than 600 people, based at its Masdar City campus.

This week TII announced the launch of Falcon LLM, a foundational large language model with 40 billion parameters, developed by the AIDRC’s AI Cross-Centre Unit. The unit’s team previously built ‘NOOR’, the world’s largest Arabic natural language processing (NLP) model, announced less than one year ago.

However, Falcon is no copy of GPT, nor of the other LLMs recently announced by global research labs, and it has innovations of its own. Falcon uses only 75 percent of GPT-3’s training compute (i.e. the amount of computing resources needed), 80 percent of the compute required by Google’s PaLM-62B and 40 percent of that required by DeepMind’s Chinchilla AI model.

According to TII, Falcon’s superior performance is due to its state-of-the-art data pipeline: the model was kept relatively modest in size, while being trained on data of unprecedented quality.

Although no official third-party ranking has been published yet, it is thought that Falcon will rank among the world’s top five large language models in classical benchmark evaluations and may even rank number one on some specific benchmarks (not counting the newly arrived GPT-4).

Large language models have proved to be good at generating text, creating computer code and solving complex problems. The models can be used to power a wide range of applications, such as chatbots, virtual assistants, language translation and content generation. As demonstrated by OpenAI’s ChatGPT public beta testing, they can also be trained to process natural language commands from humans.

Now that the UAE has one of the best and highest-performing large language models in the world, what is the potential impact of TII’s Falcon LLM?

First, like all LLMs, Falcon could be used for a variety of applications. Although plans for the commercialisation of the new model have not been announced, Falcon could provide a platform for both TII and potential technology partners to develop new use cases across many industry sectors and functional areas. For development teams in the region, it’s a plus to have the core technology developer close at hand.

Second, Falcon also has technological advantages that businesses and government organisations can benefit from, which aren’t available via existing global platforms. The model’s economical use of compute means that it lends itself to use as an on-premises solution far more readily than models that demand greater system capacity. In addition, for a government organisation, implementing an on-premises solution means that no national data needs to be transferred outside the country for processing.

Finally, Falcon is intellectual property developed in the UAE and a huge milestone for a research institute that is less than three years old. The emirate is funding scientifically and commercially significant research and attracting some of the brightest minds from around the world to make it happen.

Of equal importance, if Falcon is anything to go by, Abu Dhabi has the vision and the drive, at both the government policy level and the institutional level, to develop breakthrough research.

I don’t think that yesterday’s announcement will be the last we will hear about Falcon LLM! Stay tuned!

This article first appeared in Middle East AI News.