As geopolitical events shape the world, it's no surprise that they affect technology too, specifically in how the current AI market is changing: its accepted methodology, how it's developed, and the ways it's put to use in the enterprise.
Expectations of what AI can deliver are at present balanced against real-world realities, and there remains a good deal of suspicion about the technology, weighed against those who are embracing it even at this nascent stage. The closed nature of the well-known LLMs is being challenged by open alternatives such as Llama, DeepSeek, and Baidu's recently released Ernie X1.
In contrast, open source development provides transparency and the ability to contribute back, which is more in tune with the desire for "responsible AI": a phrase that encompasses the environmental impact of large models, how AIs are used, what comprises their learning corpora, and issues around data sovereignty, language, and politics.
As the company that has demonstrated the viability of an economically sustainable open source development model for its business, Red Hat wants to extend its open, collaborative, and community-driven approach to AI. We spoke recently to Julio Guijarro, CTO for EMEA at Red Hat, about the organisation's efforts to unlock the undoubted power of generative AI models in ways that bring value to the enterprise, in a manner that's responsible, sustainable, and as transparent as possible.
Julio underlined how much education is still needed for us to understand AI more fully, stating: "Given the significant unknowns about AI's inner workings, which are rooted in complex science and mathematics, it remains a 'black box' for many. This lack of transparency is compounded where it has been developed in largely inaccessible, closed environments."
There are also issues with language (European and Middle Eastern languages are very much under-served), data sovereignty, and, fundamentally, trust. "Data is an organisation's most valuable asset, and businesses need to make sure they are aware of the risks of exposing sensitive data to public platforms with varying privacy policies."
The Red Hat response
Red Hat's response to global demand for AI has been to pursue what it feels will bring the most benefit to end users, and to remove many of the doubts and caveats that quickly become apparent when the de facto AI services are deployed.
One answer, Julio said, is small language models (SLMs), running locally or in hybrid clouds, on non-specialist hardware, and accessing local business information. SLMs are compact, efficient alternatives to LLMs, designed to deliver strong performance for specific tasks while requiring significantly fewer computational resources. Smaller cloud providers can be used to offload some compute, but the key is having the flexibility and freedom to keep business-critical information in-house, close to the model, if desired. That's important, because information in an organisation changes rapidly. "One challenge with large language models is they can get obsolete quickly, because the data generation is not happening in the big clouds. The data is happening next to you and your business processes," he said.
There's also the cost. "Your customer service querying an LLM can present a significant hidden cost. Before AI, you knew that when you made a data query, it had a limited and predictable scope, so you could calculate how much that transaction would cost you. LLMs, by contrast, work on an iterative model: the more you use one, the better its answers can get, and the more you like it, the more questions you may ask. And every interaction is costing you money. So the same query that was once a single transaction can become a hundred, depending on who is using the model and how. When you run a model on-premise, you have greater control, because the scope is limited by the cost of your own infrastructure, not by the cost of each query."
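Julio's point about hidden cost can be made concrete with some back-of-the-envelope arithmetic. The figures below are purely illustrative assumptions, not real vendor pricing, but they show how a fixed-scope query and an iterative LLM session scale very differently:

```python
# Back-of-the-envelope cost comparison. All figures are illustrative
# assumptions, not real vendor pricing.

def db_query_cost(queries: int, cost_per_query: float) -> float:
    """A traditional data query has a limited, predictable scope:
    total cost is known in advance and scales linearly."""
    return queries * cost_per_query

def llm_session_cost(sessions: int, turns_per_session: int,
                     tokens_per_turn: int, cost_per_1k_tokens: float) -> float:
    """An LLM conversation is iterative: every follow-up turn is a
    new billed call, so one 'query' fans out into many transactions."""
    total_tokens = sessions * turns_per_session * tokens_per_turn
    return total_tokens / 1000 * cost_per_1k_tokens

# One customer-service question answered by a classic lookup...
classic = db_query_cost(queries=1, cost_per_query=0.0001)

# ...versus the same question becoming a ten-turn chat with a hosted LLM.
chat = llm_session_cost(sessions=1, turns_per_session=10,
                        tokens_per_turn=1500, cost_per_1k_tokens=0.002)

print(f"classic lookup: ${classic:.4f}")  # classic lookup: $0.0001
print(f"ten-turn chat:  ${chat:.4f}")     # ten-turn chat:  $0.0300
```

Under these assumed numbers the same question costs 300 times more as an iterative chat, which is the unpredictability Julio describes; on-premise serving caps that at the cost of your own infrastructure.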
Organisations needn't brace themselves for a procurement round that involves writing a huge cheque for GPUs, however. Part of Red Hat's current work is optimising models (in the open, of course) to run on more standard hardware. It's possible because the specialist models that many businesses will use don't need the huge, general-purpose data corpus that has to be processed at high cost with every query.
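To see why smaller, optimised models change the hardware equation, consider the rough memory needed just to hold model weights. This is a simplification (it ignores activations, KV cache, and runtime overhead), and the model sizes and precisions below are illustrative assumptions:

```python
# Rough memory needed just to hold model weights in memory.
# Simplification: ignores activations, KV cache, and runtime overhead.

def weight_memory_gb(params_billions: float, bits_per_param: int) -> float:
    """Parameter count times bits per parameter, converted to gigabytes."""
    return params_billions * 1e9 * bits_per_param / 8 / 1e9

# A 70B-parameter general-purpose model at 16-bit precision...
big = weight_memory_gb(params_billions=70, bits_per_param=16)   # 140.0 GB

# ...versus an 8B-parameter task-specific model quantised to 4 bits.
small = weight_memory_gb(params_billions=8, bits_per_param=4)   # 4.0 GB
```

At roughly 140 GB, the larger model needs multiple specialist accelerators; a 4 GB footprint fits commodity workstation hardware. Quantisation and pruning of this kind is the sort of open optimisation work the article describes.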
"A lot of the work that is happening right now is people looking into large models and removing everything that is not needed for a particular use case. If we want to make AI ubiquitous, it has to be through smaller language models. We are also focused on supporting and improving vLLM (the inference engine project) to make sure people can interact with all these models in an efficient and standardised way wherever they want: locally, at the edge, or in the cloud," Julio said.
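vLLM serves models behind an OpenAI-compatible HTTP API, which is what makes the "wherever they want" portability possible: the same client code can target a local box, an edge node, or a cloud endpoint. Below is a minimal sketch, assuming a model is already being served locally with `vllm serve`; the URL and model name are placeholder assumptions, not real endpoints:

```python
import json
from urllib import request

# Placeholder assumptions: adjust to whatever your `vllm serve` instance
# actually reports. The model name below is hypothetical.
VLLM_URL = "http://localhost:8000/v1/chat/completions"
MODEL = "my-org/my-tuned-slm"

def build_chat_request(prompt: str, model: str = MODEL) -> dict:
    """Request body in the OpenAI chat-completions schema that vLLM's
    server accepts. Because the schema is endpoint-agnostic, the same
    payload works locally, at the edge, or in the cloud."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
        "temperature": 0.2,
    }

def ask(prompt: str) -> str:
    """POST the request to the locally served model and return its reply."""
    body = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = request.Request(VLLM_URL, data=body,
                         headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Swapping the model or moving it between environments changes only the two constants at the top, not the application code, which is the standardisation point Julio makes.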
Keeping it small
Using and referencing local data pertinent to the user means that outcomes can be crafted according to need. Julio cited projects in the Arab- and Portuguese-speaking worlds that wouldn't be viable using the English-centric, household-name LLMs.
There are a couple of other issues, too, that early-adopter organisations have found in practical, day-to-day use of LLMs. The first is latency, which can be problematic in time-sensitive or customer-facing contexts. Having focused resources and relevantly tailored results just a network hop or two away makes sense.
Secondly, there is the trust issue: an integral part of responsible AI. Red Hat advocates for open platforms, tools, and models so we can move towards greater transparency, understanding, and the ability for as many people as possible to contribute. "It is going to be critical for everybody," Julio said. "We are building capabilities to democratise AI, and that's not only publishing a model; it's giving users the tools to be able to replicate them, tune them, and serve them."
Red Hat recently acquired Neural Magic to help enterprises more easily scale AI, improve inference performance, and provide even greater choice and accessibility in how enterprises build and deploy AI workloads with the vLLM project for open model serving. Red Hat, together with IBM Research, also released InstructLab to open the door to would-be AI builders who aren't data scientists but who have the right business knowledge.
There's a great deal of speculation around if, or when, the AI bubble might burst, but such conversations tend to gravitate to the economic reality that the big LLM providers will soon have to face. Red Hat believes that AI has a future in a use-case-specific and inherently open source form: a technology that will make business sense and that will be available to all. To quote Julio's boss, Matt Hicks (CEO of Red Hat): "The future of AI is open."
Supporting Assets:
Tech Journey: Adopt and scale AI