2024 AI Predictions - Part 3
February 07, 2024

With a focus on GenAI, industry experts offer predictions on how AI will evolve and impact IT and business in 2024. Part 3 covers the technologies that will drive AI.

Start with: 2024 AI Predictions - Part 1

Start with: 2024 AI Predictions - Part 2

Go to: predictions about AIOps

Go to: predictions about AI in software development

QUANTUM COMPUTING

The promise of greater computing power is on businesses' doorsteps and, in 2024, has the potential to take digital investments to the next level. Case in point: the question on every leader's mind in 2023 was how soon they would see a return on their AI investment. The answer may lie in quantum computing, which enables computer-based applications to produce reliable results even with smaller data sets. In 2024, we will see greater progress made to allow these applications to perform far more complex operations and help solve previously "unsolvable" problems faster than conventional solutions.
Scott Likens
Global AI and US Innovation Technology Leader, PwC

EDGE COMPUTING

The only way to scale AI will be to distribute it, with the help of edge computing. I predict that the convergence of edge and cloud AI is the way to deliver AI at scale, with the cloud and the edge each offloading computational tasks to the other as needed. For instance, the edge can handle model inference while the cloud handles model training, or the edge may offload queries to the cloud depending on the length of a prompt, and so on. When it comes to a successful AI strategy, it's not practical to have a cloud-only approach. Companies need to consider an edge computing strategy — in tandem with the cloud — to enable low-latency, real-time AI predictions in a cost-effective way without compromising on data privacy and sovereignty.
Priya Rajagopal
Director of Product Management, Couchbase
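
To make the offloading idea concrete, here is a minimal sketch of prompt-length-based routing between edge and cloud. It is my own illustration of the pattern Rajagopal describes, not Couchbase's architecture; the threshold and model calls are placeholders.

```python
# Illustrative edge/cloud router: short requests run on-device for low
# latency and data locality; long requests offload to a larger cloud model.
EDGE_PROMPT_LIMIT = 512  # hypothetical capacity of the edge model, in tokens

def run_on_edge(prompt: str) -> str:
    # Placeholder for a call to a small model running on the device.
    return f"[edge] {prompt[:30]}..."

def run_on_cloud(prompt: str) -> str:
    # Placeholder for a call to a larger, cloud-hosted model.
    return f"[cloud] {prompt[:30]}..."

def route_inference(prompt: str) -> str:
    """Offload to the cloud only when the request exceeds edge capacity."""
    n_tokens = len(prompt.split())  # crude token count, for illustration only
    if n_tokens <= EDGE_PROMPT_LIMIT:
        return run_on_edge(prompt)
    return run_on_cloud(prompt)

print(route_inference("What is the current machine temperature?"))
```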

As generative AI models are trained and use cases expand, in 2024 we will enter the next generation of edge and scaled computing through the demands of inference (putting the generative AI models to work locally). AI-driven supercomputing environments and databases will not stop or get smaller, as they require 24/7 runtime, but they may seek to be closer to the end user, in contrast to the large model-training locations that can be asynchronously located. While this evolution is certainly something to watch, next-generation computing at the edge is still taking shape and likely another one to two years from materializing; it remains to be seen what it will actually look like, but we believe modern, state-of-the-art, power-dense data centers will be required to support it.
Tom Traugott
SVP of Strategy, EdgeCore Digital Infrastructure

NEUROMORPHIC COMPUTING

Neuromorphic computing has the potential to launch AI's capabilities outside the realm of traditional computing's binary principles. But before the technology can be actualized for business use, professionals and researchers must band together to design hardware and software standards for the technology. In 2024, we will see businesses upskilling key employees and connecting with researchers to align on enterprise applications. This foundation will be critical for businesses to uplevel AI applications in coming years.
Scott Likens
Global AI and US Innovation Technology Leader, PwC

RETRIEVAL AUGMENTED GENERATION

RAG (Retrieval Augmented Generation) will be the rage in 2024. Companies will take foundational LLMs (Large Language Models), train and tune them with their own data, and churn out chatbots, copilots and similar utilities for external and internal productivity gains.
Mark Van de Wiel
Field CTO, Fivetran
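
As a rough sketch of that "take a foundational LLM and tune it with your own data" workflow, the snippet below uses Hugging Face Transformers. The base model, data file, and hyperparameters are placeholder assumptions, not Fivetran's method.

```python
# Hedged sketch: fine-tune a small causal LM on a company's own text data.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "gpt2"  # stand-in for whatever foundation model a company licenses
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(base)

# "Your own data": e.g., internal docs, FAQs, support tickets in a text file.
data = load_dataset("text", data_files={"train": "company_docs.txt"})
train = data["train"].map(
    lambda b: tokenizer(b["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tuned-model", num_train_epochs=1),
    train_dataset=train,
    # mlm=False gives the next-token (causal) objective, with labels built for us.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # the tuned checkpoint can then back a chatbot or copilot
```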

The hallucination problem will be largely solved, removing a major impediment to the adoption of generative AI. In 2023, "hallucinations" by large language models were cited as a major barrier to adoption. If generative AI models can simply invent facts, how can they be trusted in enterprise settings? I predict, however, that several technical advances will all but eliminate hallucinations as an issue. One such innovation is Retrieval Augmented Generation (RAG), which primes the large language model with true, contextually relevant information right before prompting it with a user query. This technique, still in its infancy, has been shown to dramatically decrease hallucinations and is already making waves. According to a recent State of LLM Apps 2023 report, 20% of the apps use vector retrieval — a key ingredient of RAG. In the coming year, I predict that RAG, along with better-trained models, will begin to rapidly solve the hallucination problem — ultimately paving the way for widespread adoption of generative AI within enterprise workflows.
Adrien Treuille
Director of Product Management and Head of Streamlit, Snowflake
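
The mechanics Treuille describes (retrieve relevant text, then prepend it to the user's query) fit in a few lines. Below is a generic sketch using the sentence-transformers library for the vector-retrieval step; the documents and model name are illustrative.

```python
# Minimal RAG sketch: embed documents, retrieve the closest ones to the
# query, and prime the LLM prompt with that context before the question.
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support is available Monday through Friday, 9am to 5pm.",
    "Enterprise plans include a dedicated account manager.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = embedder.encode(docs, normalize_embeddings=True)

def retrieve(query: str, k: int = 1) -> list[str]:
    """Vector retrieval: return the k docs most similar to the query."""
    q = embedder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q  # cosine similarity, since vectors are normalized
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

query = "How long do I have to return an item?"
context = "\n".join(retrieve(query))

# The grounded prompt that would be sent to the LLM:
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)
```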

There is going to be a rapid shift from infrastructure-based GenAI to local GenAI, because the current model isn't sustainable: the average startup doesn't have thousands of dollars to throw at a cloud provider, and running the infrastructure yourself is almost impossible. That is changing quickly with the innovation around local generative AI. With GenAI going local, you will have a complete RAG stack under your control, with your own access controls, so you won't have to expose your proprietary data in any way. When we go from centralized, API-based LLMs to local LLMs, it will happen quickly; the ones that work will spread like wildfire. Just be mindful of the downside, as decentralized LLMs introduce the possibility of bad actors in the loop.
Patrick McFadin
VP of Developer Relations, DataStax
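
One way such a local stack can look today, sketched with llama-cpp-python as one option among several (the model file is a placeholder for any locally downloaded weights):

```python
# Hedged sketch of local inference: the model, the retrieved context, and
# the generated answer all stay on your own hardware.
from llama_cpp import Llama

llm = Llama(model_path="./models/local-model.gguf", n_ctx=2048)

# Proprietary context never leaves the machine; pair this with a local
# retrieval step (see the RAG sketch above) for a complete local RAG stack.
context = "Internal memo: Q3 revenue grew 12% year over year."
question = "How did revenue change in Q3?"

out = llm(
    f"Context: {context}\nQuestion: {question}\nAnswer:",
    max_tokens=64,
    stop=["\n"],
)
print(out["choices"][0]["text"].strip())
```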

LARGE GRAPHICAL MODELS

LGMs become the next household GenAI tech in the enterprise — Today, nearly every organization is experimenting with LLMs in some way. Next year, another major AI technology will emerge alongside LLMs: Large Graphical Models (LGMs). An LGM is a probabilistic model that uses a graph to represent the conditional dependence structure between a set of random variables, aiming to capture the entire joint distribution over all variables of interest. LGMs are particularly suitable for modeling tabular data, such as data found in spreadsheets or tables, and for analyzing time series data: by viewing time series through the novel lens of tabular data, LGMs can forecast critical business trends such as sales, inventory levels and supply chain performance. These insights help enterprises make better decisions.

This is game-changing because existing AI models have not adequately addressed the challenge of analyzing tabular, time-series data, which accounts for the majority of enterprise data. LLMs and other models were created to analyze text documents, which has limited the enterprise use cases they can really support: LLMs are great for building chatbots, but they're not designed for detailed predictions and forecasting. Those sorts of use cases offer organizations the most business value today — and LGMs are the only technology that enables them. Enterprises already have tons of time-series data, so it'll be easy for them to begin getting value from LGMs. As a result, in 2024, LGM adoption will take off, particularly in retail and healthcare.
Devavrat Shah
Co-CEO and Co-Founder, Ikigai Labs and AI Professor at MIT
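
For background (standard graphical-model theory, not anything specific to Ikigai's product): the graph determines how the joint distribution factorizes, with each variable conditioned only on its parents in the graph:

```latex
p(x_1, \dots, x_n) = \prod_{i=1}^{n} p\bigl(x_i \mid \mathrm{pa}(x_i)\bigr)
```

For a sales time series, for example, the graph might assert that today's sales depend directly only on yesterday's sales and current inventory, so forecasting reduces to evaluating p(sales_t | sales_{t-1}, inventory_t).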

EXPLAINABLE AI

Explainable AI will play a key role in the broader acceptance and trust of AI systems as adoption continues to increase. The next frontier in AI for physical operations lies in the synergy between AI, IoT, and real-time insights across diverse data. In 2024, we'll see substantial advancements in predictive maintenance, real-time monitoring, and workflow automation. We may also begin to see multimodal foundation models that combine not just text and images, but equipment diagnostics, sensor data, and other sources from the field. As leaders seek new ways to gain deeper insights into model predictions and modernize their tech stacks, I expect organizations to become more interested in explainable AI (XAI). XAI is essential for earning trust among AI users: it sheds light on the black-box nature of AI systems by providing deeper insights into model predictions, and it affords users a better understanding of how their AI systems are interacting with their data. Ultimately, this will foster a greater sense of reliability and predictability. In the context of AI Assistants, XAI will reveal more of the decision-making process and empower users to better steer the agent toward desired behaviors. In the new year, I anticipate XAI will advance both the functionality of AI Agents and trust in AI systems.
Evan Welbourne
Head of AI and Data, Samsara
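
As one concrete instance of XAI (my choice of technique for illustration, not necessarily Samsara's): feature attribution with the SHAP library assigns each input feature a contribution to each individual prediction, which is one common way to open the black box Welbourne mentions.

```python
# Hedged sketch: SHAP attributions for a toy predictive-maintenance model.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# Hypothetical sensor features: temperature, vibration, runtime hours.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = 0.5 * X[:, 0] + 2.0 * X[:, 1] + rng.normal(scale=0.1, size=500)

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# Per-sample, per-feature contributions to the model's output: here the
# attributions should show vibration (column 1) dominating predictions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:10])
print(shap_values.shape)  # (10, 3)
```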

OPEN SOURCE TOOLS

LLM providers will increase data transparency and embrace open-source tools. The term "open source" has evolved over time into something differentiated from its initial meaning. Traditionally, open-source tools gave insight into the code used to build something. Now, with the rapid emergence of LLMs, we're able to accelerate past the code and get right to the end result without needing full observability into how a model was produced. This is where we see fine-tuning of models with domain-specific data being critical, and I foresee companies making a concerted effort toward transparency around how open-source models are trained heading into 2024. In my opinion, LLMs are easier to build than one may think, given the availability of open source tools. That said, the element that cannot be overlooked when building generative AI models is understanding how to uplevel the model for each specific use case, and giving users visibility into where training data was derived and how it'll be used in the future. One way to ensure data transparency is to encourage your team to embrace data lineage best practices, which I think will become more of a norm heading into 2024. Incorporating data lineage into your data pipelines allows for heightened tracking and observability, catalyzing the potential for increased transparency.
Julian LaNeve
CTO, Astronomer
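
As a hypothetical sketch of what "incorporating data lineage into your data pipelines" can look like (illustrative names and structure, not Astronomer's implementation):

```python
# Hedged sketch: record which inputs produced which output at each step.
import hashlib
import json
import time
from functools import wraps

LINEAGE_LOG = []  # in practice this would be written to a metadata store

def track_lineage(step_name, inputs):
    """Decorator that logs a lineage record for each pipeline step."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            LINEAGE_LOG.append({
                "step": step_name,
                "inputs": inputs,  # upstream datasets this step consumed
                "output_fingerprint": hashlib.sha256(
                    json.dumps(result, sort_keys=True, default=str).encode()
                ).hexdigest()[:12],
                "timestamp": time.time(),
            })
            return result
        return wrapper
    return decorator

@track_lineage("clean_sales", inputs=["raw_sales.csv"])
def clean_sales(rows):
    return [r for r in rows if r.get("amount", 0) > 0]

clean_sales([{"amount": 10}, {"amount": -1}])
print(LINEAGE_LOG)  # every downstream consumer can now trace provenance
```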

Go to: 2024 AI Predictions - Part 4, covering AI challenges.
