2024 – The Year Ahead in AI
By Anupam Datta, Chief Scientist and President, TruEra
In the past twelve months, AI, and especially generative AI, has captured the mainstream imagination. We have seen both great optimism and great anxiety about which doors to the future AI can open. Looking forward into 2024, this optimism and concern will continue to both intensify and broaden, as new uses of AI manifest in the ways that we work and live.
As both an academic and an AI Observability software executive, I see the following major trends taking shape:
1. RAG apps are going to cross the chasm from experimentation to production
Applications that leverage retrieval-augmented generation (RAG), an AI framework that improves responses by grounding a model in external knowledge sources, generated significant excitement last year and were the most popular type of app for experimentation. We saw many use cases in customer service, product support, marketing, and more. However, many of these use cases were only experiments, used by forward-thinking researchers and developers to build skills or test ideas. In 2024, we are already seeing signs of growing confidence in RAG app development, testing, and evaluation, and more of these apps are moving into production environments.
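To make the pattern concrete, here is a minimal sketch of a RAG flow in Python. The `embed` and `generate` functions are placeholders for whichever embedding model and LLM provider you use, and the retrieval step is a simple in-memory cosine-similarity lookup that a production system would replace with a vector database.

```python
import numpy as np

# Placeholders: swap in your embedding model and LLM provider of choice.
def embed(text: str) -> np.ndarray:
    """Return a vector representation of `text` (stub)."""
    raise NotImplementedError("call your embedding model here")

def generate(prompt: str) -> str:
    """Return the LLM's completion for `prompt` (stub)."""
    raise NotImplementedError("call your LLM provider here")

def build_index(documents: list[str]) -> list[tuple[str, np.ndarray]]:
    # Embed each knowledge-base document once, ahead of query time.
    return [(doc, embed(doc)) for doc in documents]

def retrieve(query: str, index, k: int = 3) -> list[str]:
    # Rank documents by cosine similarity to the query embedding.
    q = embed(query)
    scored = [(float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v))), doc)
              for doc, v in index]
    return [doc for _, doc in sorted(scored, reverse=True)[:k]]

def rag_answer(query: str, index) -> str:
    # Ground the model's answer in the retrieved context.
    context = "\n\n".join(retrieve(query, index))
    prompt = (f"Answer the question using only the context below.\n\n"
              f"Context:\n{context}\n\nQuestion: {query}")
    return generate(prompt)
```

Moving such an app to production is largely about evaluating and hardening each of these steps: the quality of retrieval, the grounding of the generated answer in the retrieved context, and the relevance of the final response.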
2. LLM agents are going to become more reliable
Another use case gaining rapid traction is LLM-powered agents. An agent is a system that uses an LLM to reason through a problem, devise a plan to address it, and then execute that plan with the help of tools. Agents can power question-answering apps that handle more complex questions, for example about the specific results in a public company's quarterly financial statement and how they compare to prior quarters. Because of their higher complexity, agent-based apps have often struggled with reliability. As tooling becomes more sophisticated at providing feedback to these apps, agents will become better able to answer complex questions consistently.
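For illustration, a bare-bones agent loop might look like the sketch below. The `llm` call and the `get_quarterly_revenue` tool are hypothetical stand-ins, not any particular framework's API; real agent frameworks add structured tool schemas, retries, and evaluation feedback, which is where much of the reliability improvement comes from.

```python
import json

# Placeholder for whichever LLM provider you use.
def llm(prompt: str) -> str:
    raise NotImplementedError("call your LLM provider here")

# Tools the agent may call; here a single (hypothetical) financial-data lookup.
def get_quarterly_revenue(ticker: str, quarter: str) -> str:
    raise NotImplementedError("query your financial data source here")

TOOLS = {"get_quarterly_revenue": get_quarterly_revenue}

def run_agent(question: str, max_steps: int = 5) -> str:
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        # Ask the model either to call a tool (as JSON) or to answer.
        reply = llm(
            transcript
            + '\nRespond with JSON: {"tool": "name", "args": {...}} '
              'to call a tool, or {"answer": "..."} when done.\n'
              f"Available tools: {list(TOOLS)}"
        )
        step = json.loads(reply)
        if "answer" in step:
            return step["answer"]
        # Execute the requested tool and feed the observation back in.
        result = TOOLS[step["tool"]](**step["args"])
        transcript += f"\nTool {step['tool']} returned: {result}\n"
    return "No answer within the step budget."
```

Each pass through the loop is a point where the model can mis-plan, pick the wrong tool, or return malformed output, which is why observability and feedback on intermediate steps matter so much for agent reliability.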
3. Multi-modal models and apps are going to become mainstream
ChatGPT works in the single mode of text. Midjourney works in the mode of images. Multi-modal AI systems train on and make use of a variety of inputs and outputs, including numerical data sets, video, audio, speech, images, and text. While multi-modal AI is rudimentary today and requires massive computational power, it holds great promise. In manufacturing, it could oversee and optimize production processes. In healthcare, it could improve diagnosis by taking into account a variety of patient data. As each mode of AI advances rapidly, multi-modal AI will become easier to implement.
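As a toy illustration of one common pattern (late fusion), the sketch below encodes each modality separately and concatenates the results for a downstream model to reason over. The encoder functions are hypothetical placeholders for pretrained per-modality models.

```python
import numpy as np

# Hypothetical placeholders for pretrained per-modality encoders.
def encode_text(clinical_notes: str) -> np.ndarray:
    raise NotImplementedError("e.g. a text embedding model")

def encode_image(scan: np.ndarray) -> np.ndarray:
    raise NotImplementedError("e.g. a vision encoder")

def encode_signals(vitals: np.ndarray) -> np.ndarray:
    raise NotImplementedError("e.g. a time-series encoder")

def fuse(clinical_notes: str, scan: np.ndarray, vitals: np.ndarray) -> np.ndarray:
    # Late fusion: encode each modality on its own, then concatenate the
    # embeddings into one vector for a downstream predictor to use.
    return np.concatenate([
        encode_text(clinical_notes),
        encode_image(scan),
        encode_signals(vitals),
    ])
```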
4. Enterprises will adopt a multi-provider strategy for GenAI
While foundation model providers such as OpenAI and Google Vertex AI are competing in hopes of becoming the sole provider of GenAI tools to organizations, we are already seeing companies pick and choose their providers carefully based on use case. Enterprises are weighing strengths and weaknesses to take a best-of-breed approach to the AI engine powering each app. This also provides appropriate risk management, ensuring that they are not overdependent on any one vendor. Tools that help enterprises execute this multi-provider strategy will likely see strong success in the coming year.
5. Governance will become a key consideration for GenAI adoption
President Biden’s signing of the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence on October 30th changed the game, putting forth a coordinated, federal government-wide approach to Responsible AI. The EO focuses heavily on testing, auditing, and reporting, with additional impact to come from an array of technical standard-setting and guidance to be determined over the next year. Such standards are intended to influence the development and use of AI in the private sector.
The United States is not the only source of Responsible AI momentum. The European Union reached agreement on its Artificial Intelligence Act in December. This will set in motion the evolution of processes, resources, and technology stacks to meet the need to demonstrate AI quality and transparency, such as demonstrating fairness in automated credit lending or hiring.
We are already seeing companies move quickly to set up appropriate testing and guardrails so that they can both develop effective AI apps and meet regulatory guidelines.
The coming year will likely be one of great growth – in AI use cases, in the AI toolset, and in the AI tech stack. With great change comes great opportunity, and we look forward to seeing a flourishing of creativity and innovation in the year to come.