Generative artificial intelligence (AI), and specifically large language models (LLMs), will change how organisations design jobs, resource tasks and allocate responsibilities across the enterprise.
However, LLMs come with a unique set of risks when compared with other AI implementations.
Generative microapps are an emerging technology that can enable organisations to demonstrate the value of generative AI while minimising the business’s risk exposure. Generative microapps are applications that act as a proxy between a user and an LLM, such as ChatGPT or Bard.
Nader Henein, vice-president analyst at Gartner, discusses how enterprises can leverage microapps to augment knowledge workers using generative AI and boost employee productivity.
How could generative microapps augment the human workforce?
Gartner predicts that by 2026, 50% of office workers in Fortune 100 companies will be AI-augmented in one form or another, either to boost productivity or to raise the average quality of work.
For example, consider an LLM that is supplemented with a proprietary research database. As an author drafts a new piece of research, a microapp embedded in the word processing program would read each section and use its prebuilt prompt library to ask the LLM for examples of supporting research and data, as well as examples of contradicting research. Responses would be verified for accuracy by the microapp and then provided in the form of suggestions or comments in the word processor.
This augment would boost the author’s capabilities beyond what is humanly possible. No one person could be aware of every piece of published research in the database, but an LLM supplemented with enterprise data can provide that capability.
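The drafting workflow described above can be sketched as a small loop. This is a minimal illustration, not a real product API: the names `PROMPT_LIBRARY`, `query_llm` and `review_section` are hypothetical, and `query_llm` is a stand-in for the call to the enterprise LLM.

```python
# Hypothetical sketch of a drafting-assistant microapp. PROMPT_LIBRARY,
# query_llm and the citation format are illustrative assumptions.

PROMPT_LIBRARY = {
    "supporting": "List research from the database that supports: {section}",
    "contradicting": "List research from the database that contradicts: {section}",
}

def query_llm(prompt: str) -> list[dict]:
    """Stand-in for the enterprise LLM call; returns structured citations."""
    # A real microapp would call the model's API and parse its reply here.
    return [{"title": "Example study", "id": "R-101"}]

def is_valid_citation(item: dict, known_ids: set[str]) -> bool:
    """Accept only citations whose ids exist in the proprietary database."""
    return item.get("id") in known_ids

def review_section(section: str, known_ids: set[str]) -> list[str]:
    """Produce word-processor comments for one drafted section."""
    comments = []
    for kind, template in PROMPT_LIBRARY.items():
        for item in query_llm(template.format(section=section)):
            if is_valid_citation(item, known_ids):
                comments.append(f"{kind}: {item['title']} ({item['id']})")
    return comments
```

The key design point is that the model's output is never surfaced raw: each citation is checked against the database before it becomes a suggestion in the document.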
General-purpose microapps will become commonplace within the applications used every day in the workforce, such as word processors, email and conferencing tools. Organisations will develop specialised microapps, initially as augments for their high-value employees. These will become commoditised for all knowledge workers within a few years. A new industry focused on developing specialised generative microapps will grow and thrive.
Can you provide a bit more insight into generative microapps?
Rather than a user interfacing directly with an LLM, a microapp has a preprogrammed set of prompts that address a focused number of tasks on behalf of the user.
There is no conversational/chat interface. The prompts are used to query the model and receive responses in a predefined format. This makes it easier for the logic within the microapp to validate each response before passing it back to the user.
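This proxy pattern can be made concrete in a short sketch. The task names, prompt templates and `query_llm` stand-in below are all illustrative assumptions: the point is that the user can only invoke preprogrammed tasks, and every reply must parse into a predefined format before it is returned.

```python
import json

# Illustrative sketch of a generative microapp as a proxy: no chat
# interface, only named tasks bound to fixed prompts and an expected
# response schema. query_llm is a placeholder for the model call.

TASKS = {
    "summarise": {
        "prompt": 'Summarise the text below as JSON {{"summary": "..."}}:\n{text}',
        "required_keys": {"summary"},
    },
}

def query_llm(prompt: str) -> str:
    """Stand-in for the LLM API call."""
    return json.dumps({"summary": "placeholder"})

def run_task(task_name: str, text: str) -> dict:
    task = TASKS[task_name]  # only preprogrammed tasks are reachable
    raw = query_llm(task["prompt"].format(text=text))
    try:
        reply = json.loads(raw)
    except json.JSONDecodeError:
        raise ValueError("model reply was not valid JSON")
    if not task["required_keys"] <= reply.keys():
        raise ValueError("model reply missing required fields")
    return reply
```

Because every response must validate against the task's schema, malformed or off-task output is rejected inside the microapp rather than reaching the user.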
Generative microapps can be stand-alone, but in most instances, they will be embedded as extensions to productivity platforms commonly used by knowledge workers.
How do generative microapps mitigate key risks of LLMs?
There are three key risks that are unique to LLMs: access control, accuracy and devaluation. Microapps address each of these:
- Access control: Organisations have come to rely on deterministic access control: a rule is defined and enforced every time, and if the rule fails, the system simply denies access. However, when an LLM is supplemented with different types of enterprise data, there is no guarantee that access rules will be followed. Generative microapps act as a proxy for the enterprise LLM and do not allow the user to interact with the model directly through chat, so they cannot be coerced into exposing restricted data.
- Accuracy: “Hallucinations” is the term for models occasionally providing fictitious – yet confident and convincing – answers. Through rigorous prompt engineering, the preset prompts embedded in microapps can limit hallucinations. Furthermore, the microapp can require answers in a format that the app can validate before passing them on to the user.
- Devaluation: Organisations may not be willing to pay the same for products and services generated by an LLM as they would for the work of trained, seasoned professionals. Purpose-built microapps are developed to act as augments for knowledge workers, improving the average quality of work and boosting productivity, thereby helping mitigate skills shortages. Because the work is still carried out by professionals, the business model is shielded from devaluation risk.
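The access-control point above can be illustrated with a minimal screening step. This is a sketch under stated assumptions: `RESTRICTED_MARKERS` and `query_llm` are hypothetical, and a real deployment would use proper data-classification tooling rather than string matching.

```python
# Hedged sketch of proxy-side screening: the microapp owns every prompt,
# so users cannot inject instructions, and each reply is checked before
# release. RESTRICTED_MARKERS and query_llm are illustrative assumptions.

RESTRICTED_MARKERS = ("CONFIDENTIAL", "SALARY", "SSN")

def query_llm(task_prompt: str) -> str:
    """Stand-in for the enterprise model call behind the proxy."""
    return "Quarterly revenue grew 4% year over year."

def screen(reply: str) -> str:
    """Withhold any reply that touches restricted data."""
    if any(marker in reply.upper() for marker in RESTRICTED_MARKERS):
        return "[withheld: response referenced restricted data]"
    return reply

def answer(task_prompt: str) -> str:
    """The user sees only preset-prompt, screened output - never raw chat."""
    return screen(query_llm(task_prompt))
```

Screening on the way out complements the fixed prompts on the way in: even if the model does surface something it should not, the proxy withholds it before the user sees it.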