UAE’s Falcon Mamba breaks new ground in artificial intelligence
In an interview with The Circuit, Dr. Hacid describes the new AI platform’s ability to handle enormous files without overloading memory capacity. He also explains how the research center decides where to devote its resources
At a time when Microsoft is investing $1.5 billion in the UAE's artificial intelligence firm G42 and Nvidia is consulting on new computer chip development, the Gulf state is turning into a regional research hub for commercial applications of AI technology.
Leading much of the UAE’s international collaboration is Dr. Hakim Hacid, Chief Researcher at the Technology Innovation Institute’s Artificial Intelligence and Digital Science Research Center. It was in his lab, part of Abu Dhabi’s Advanced Technology Research Center, that scientists developed Falcon Mamba, a new AI model architecture that can process massive amounts of data and was launched in August.
In an interview with The Circuit, Dr. Hacid describes Falcon Mamba’s ability to handle enormous files without overloading memory capacity. The unique design makes it faster and more reliable for tasks that involve heavy data, such as analyzing video content and large-scale scientific data, where existing models struggle. He also explains how the government-owned research center decides where to devote its resources.
What is the scope of the Technology Innovation Institute’s research activities?
We are targeting the different priority sectors of the UAE: transportation, healthcare, education, defense and security. Anything that relates to these priority sectors, or can be mapped to them, we go with it, definitely.
How do you decide that this is an important technology that you want to invest in?
Of course we do some research in the background to understand the potential of the technology. We are in an R&D context, so we also take risks from time to time. This is what happened, for example, with generative AI. We have a lot of researchers who are capable of understanding the future of such a technology and its potential. So it’s not something I would say is deterministic; it’s more about mixing the technical expertise with a business understanding of the environment and the ecosystem. That is what helps us decide.
What is Falcon Mamba and why did you decide to work on it?
You see, we have different architectures on the ground. The main architecture that everybody is following is transformer-based; all the models are built on a transformer-based architecture. We believe that we have not yet reached the full potential of these models, so we need to look into how we could open up this potential. We have the hypothesis that the quality and the performance of these models is related to the data, so we work a lot on the data. But on the architecture side, most of the things have been done; most of the model providers have more or less the same thing, where we can change a few things here and there.

We thought it would be interesting to look into different ways and explore completely different architectures. This is what we have done with Falcon Mamba, which is not built on the transformer side. It’s actually transformer-free; there is no transformer inside. It relies on what we call state space models, which allow you to control, or to learn, the changes of state in your architecture. That gives you more flexibility and better management of the resources that you have.
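The state space idea Dr. Hacid describes can be sketched as a simple linear recurrence: the model keeps a fixed-size hidden state that is updated at each step, instead of attending over the entire history. Below is a minimal toy sketch in pure Python; the scalar parameters A, B, C and the example inputs are illustrative placeholders, not Falcon Mamba’s actual (learned, input-dependent) parameters.

```python
# Toy linear state-space recurrence: h_t = A*h_{t-1} + B*x_t, y_t = C*h_t.
# Real SSMs such as Mamba use vector states and learned, input-dependent
# parameters; this sketch only shows the fixed-size-state principle.

def ssm_scan(xs, A=0.9, B=0.5, C=1.0, h0=0.0):
    """Run the recurrence over a sequence, carrying only one scalar state."""
    h = h0
    ys = []
    for x in xs:
        h = A * h + B * x   # state update: old state decays, new input mixes in
        ys.append(C * h)    # readout from the current state
    return ys

outputs = ssm_scan([1.0, 0.0, 0.0, 2.0])
```

However long the input sequence is, the recurrence carries the same fixed-size state forward, which is the property that keeps memory under control.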
What are the applications for Falcon Mamba?
When you have a large amount of data to handle, for example video and audio, a one-hour or two-hour audio or video file is much, much bigger than text. Mamba is good at managing memory: as the input grows large, the memory does not grow exponentially. So the target is time series, video and audio; genomes, for example, would also be a good application.
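The memory contrast behind this answer can be made concrete with a back-of-envelope count: a transformer’s attention scores form an n-by-n table over a sequence of n tokens, while a state-space model carries a constant-size state per step. This is a toy illustration only; `state_size` is an arbitrary placeholder, not Falcon Mamba’s actual state dimension.

```python
# Rough scaling comparison for sequence length n (entry counts, not bytes).

def attention_score_entries(n):
    # Every token attends to every token -> an n*n table of scores.
    return n * n

def ssm_state_entries(n, state_size=16):
    # One fixed-size state, regardless of how long the sequence is.
    return state_size

for n in (1_000, 10_000, 100_000):
    print(n, attention_score_entries(n), ssm_state_entries(n))
```

At n = 100,000 tokens the attention table has 10 billion entries, while the state-space model’s per-step state has not grown at all, which is why long audio, video and genomic sequences are the natural targets.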
You launched Falcon Mamba in August. What has happened with it since?
We have a lot of people who are using it, because it was the first big model actually launched using this architecture. A lot of people are taking it and fine-tuning it for several sorts of applications. From our side, we have been working on a bigger model than the one we had before, with a better architecture, because building the first one allowed us to learn a lot about the Mamba architecture. So there is a model that will be coming, hopefully soon.
What are the ongoing trends within the AI sphere that you have witnessed, and into which you’re putting resources?
Well, we have, of course, the reasoning part, which is very important. We want to have models that can actually think, that can reason about the questions and the prompts they receive before giving you an answer, so it’s not just a matter of probabilistic calculations on the next token; we want to integrate a way of thinking and reasoning to constrain the generation itself. You also have the stream of multi-modality that is following. We continue there: we have proposed a model that handles and understands images, and now we are working on things related to video, for example, which we should have soon. There is also model safety. We invest a lot in that. Our models are now much safer: they are able to understand when a prompt will lead to a risk for the user, for example, so we are able to let the model know that it shouldn’t answer questions that may result in harming the user or any human being.
How do you advise clients and potential clients on what to use when it comes to AI?
I think it’s a matter of trying and failing, so we learn from these things. We don’t have a deterministic approach, again, to consuming and using this AI. It’s a matter of getting the AI and trying different ways of making it usable in your context. And then, of course, give it time, and make sure that the people who control the data, for example, are part of the process, because the data is the key at the end of the day. If those people are not confident, or are not comfortable having this kind of AI, it will be complicated to put it in place.