Red Hat Enterprise Linux AI (RHEL AI) is a foundation model platform for developing, testing, and running the generative AI models that power enterprise applications.

It comes as a single bootable image — complete with IBM’s open source Granite LLMs and the InstructLab model alignment tools — that can be easily redeployed to servers across hybrid cloud environments, from on-premises data centers to edge environments to the public cloud.

According to Red Hat, the cost of procuring, training, and fine-tuning LLMs while also aligning the models to a company’s own data security requirements can be quite high. With RHEL AI, Red Hat is attempting to make these models more accessible, efficient, and flexible.

The inclusion of InstructLab will make it possible for different employees to contribute their specific domain expertise to the company’s generative AI initiative without everyone needing to be a data science expert, Red Hat explained.
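To give a sense of what such a contribution looks like: in InstructLab's community model, a domain expert adds knowledge or skills to a taxonomy as a small YAML file of seed question-and-answer examples, not code. The sketch below is purely illustrative; the contributor name and the policy answers are hypothetical, and exact field names depend on the taxonomy schema version in use.

```yaml
# Illustrative InstructLab taxonomy contribution (a qna.yaml file).
# Field names follow the community taxonomy schema and may vary by version;
# the contributor and answers here are hypothetical examples.
version: 2
task_description: Answer questions about the company's travel expense policy.
created_by: example-contributor   # hypothetical contributor ID
seed_examples:
  - question: What is the daily meal allowance for domestic travel?
    answer: The allowance is 50 USD per day, with receipts required above 25 USD.
  - question: Who approves expense reports over 1,000 USD?
    answer: A director or above must approve any report exceeding 1,000 USD.
```

InstructLab uses seed examples like these to synthesize a much larger training set, which is then used to align the base Granite model — the mechanism that lets subject-matter experts contribute without doing any data science themselves.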

“For gen AI applications to be truly successful in the enterprise, they need to be made more accessible to a broader set of organizations and users and more applicable to specific business use cases,” said Joe Fernandes, vice president and general manager of Foundation Model Platforms at Red Hat. “RHEL AI provides the ability for domain experts, not just data scientists, to contribute to a built-for-purpose gen AI model across the hybrid cloud, while also enabling IT organizations to scale these models for production through Red Hat OpenShift AI.”

Another benefit of RHEL AI is that it provides tools for tuning and deploying models to production servers, as well as an on-ramp to OpenShift AI for training, tuning, and serving the models at scale. 

RHEL AI is available today through the Red Hat Customer Portal, and it includes the benefits of other Red Hat subscriptions, such as product distribution, 24×7 support, extended model lifecycle support, and Open Source Assurance legal protections. It can be run on-premises or deployed to AWS or IBM Cloud as a “bring your own subscription” (BYOS) offering. Red Hat plans to offer BYOS on Azure and Google Cloud by Q4 2024, and to bring RHEL AI to IBM Cloud as a service later this year.

“IBM is committed to helping enterprises build and deploy effective AI models, and scale with speed,” said Hillery Hunter, CTO and general manager of innovation at IBM Infrastructure. “RHEL AI on IBM Cloud is bringing open source innovation to the forefront of gen AI adoption, allowing more organizations and individuals to access, scale and harness the power of AI. With RHEL AI bringing together the power of InstructLab and IBM’s family of Granite models, we are creating gen AI models that will help clients drive real business impact across the enterprise.”