IBM has released Granite 3.0, its third-generation flagship Granite language models, under the permissive Apache 2.0 license.

The Granite 3.0 models were trained on over 12 trillion tokens of data spanning 12 natural languages and 116 programming languages, using a two-stage training method that leverages results from several thousand experiments designed to optimize data quality, data selection, and training parameters.

IBM is also announcing an updated release of its pre-trained Granite Time Series models, the first versions of which were released earlier this year. These new models are trained on three times more data and deliver strong performance on major time series benchmarks.

IBM is also introducing a new family of Granite Guardian models that allow application developers to implement safety guardrails by checking user prompts and LLM responses for a variety of risks. The Granite Guardian 3.0 8B and 2B models provide the most comprehensive set of risk and harm detection capabilities available in the market today.

In addition to harm dimensions such as social bias, hate, toxicity, profanity, violence, jailbreaking, and more, these models also provide a range of unique RAG-specific checks such as groundedness, context relevance, and answer relevance.
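The guardrail pattern these models enable can be sketched as a set of named checks run against a prompt/response pair. The sketch below is illustrative only: the check functions are hypothetical stand-ins (a real deployment would route each check through a Granite Guardian model rather than the crude heuristics used here), and all names are assumptions, not IBM's API.

```python
from dataclasses import dataclass

# Hypothetical guardrail harness. In production, each check would call a
# Granite Guardian model; here we use toy heuristics to show the shape.

@dataclass
class GuardrailResult:
    risk: str      # name of the risk dimension checked
    flagged: bool  # True if the check detected the risk

def check_profanity(response: str) -> bool:
    # Toy stand-in for a harm-dimension check.
    blocklist = {"damn"}
    return any(word in blocklist for word in response.lower().split())

def check_groundedness(response: str, context: str) -> bool:
    # Toy stand-in for a RAG groundedness check: flag the response when
    # almost none of its words appear in the retrieved context.
    context_words = set(context.lower().split())
    response_words = set(response.lower().split())
    overlap = len(response_words & context_words) / max(len(response_words), 1)
    return overlap < 0.2

def run_guardrails(response: str, context: str) -> list[GuardrailResult]:
    return [
        GuardrailResult("profanity", check_profanity(response)),
        GuardrailResult("groundedness", check_groundedness(response, context)),
    ]

results = run_guardrails(
    response="The capital of France is Paris.",
    context="Paris is the capital and largest city of France.",
)
print([(r.risk, r.flagged) for r in results])
```

The design point is that each risk dimension is an independent check over the same inputs, so an application can gate its output on any subset of the results.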

IBM also unveiled the upcoming release of the next generation of watsonx Code Assistant, powered by Granite code models, to offer general-purpose coding assistance across languages like C, C++, Go, Java, and Python, with advanced application modernization capabilities for enterprise Java applications.