In the final segment of APMdigest's 2025 Predictions Series, industry experts offer predictions on how AI will evolve and impact technology and business in 2025. Part 2 covers the challenges presented by AI, as well as solutions to those problems.
2025: CRACKS IN THE AI FOUNDATION
This year, enterprises saw AI move from promise to practice. In 2025, they will have their data foundations tested as AI usage rises, threats increase, and ROI comes into question. As experimentation gives way to reliance, only companies with an unshakable data strategy — prioritizing sovereignty, encryption, and resilience — can guarantee the success of their AI implementation.
Rajesh Ganesan
President, ManageEngine
AI AGENTS DON'T DELIVER
More Companies Will Claim "AI Agent" Offerings, But Few Will Deliver - In 2025, the number of companies claiming to offer AI agents will surge 100-fold, yet far fewer will have customers that capture real and measurable value from these solutions. As foundation models commoditize, enterprise interest in yet-another-LLM will decrease significantly. Focus will instead shift to deploying generative AI effectively through well-planned implementations with projected high value. Only those investing in cohesive, process-driven AI deployments will unlock the full potential of AI agents, setting a new standard that distinguishes genuine innovation from mere claims.
Prince Kohli
CTO, Automation Anywhere
GENAI DISILLUSIONMENT
GenAI Disillusionment Will Come to a Head: In 2025, we'll witness a growing disillusionment with hastily implemented GenAI solutions as organizations grapple with the complexities of integrating these powerful but often misunderstood technologies. This reality check will stem from challenges related to data quality, inadequate risk controls, and difficulties in demonstrating clear business value, leading many companies to reassess their AI strategies. Consequently, we'll see a shift towards more measured, security-first approaches to AI adoption, with a focus on cloud-native solutions that can provide the necessary visibility and control to effectively harness AI's potential while mitigating its associated risks.
Gil Geron
CEO, Orca Security
CHATBOT HYPE SLOWS
Chatbots, No Longer All the Hype: As soon as generative AI came about, everyone began building chatbots, and it became a major trend across industries, especially retail, healthcare, and travel. But the conversation around GenAI chatbots, and their rapid adoption, has already started to slow, and I expect that to continue. One reason is a lack of trust, on the part of both companies and their customers. Customers do not trust that chatbots will give them accurate information, and in some cases they do not feel comfortable sharing personal information with an AI-powered tool. Companies, for their part, do not trust AI to provide accurate, unbiased one-on-one interactions with their customers. GenAI does a great job of sorting, storing, and filtering through massive amounts of data, but we have not reached a point where businesses are willing to hand brand ownership over to AI. If we are building something that people have not yet adapted to, it simply is not going to work. In theory, the concept of chatbots sounds very interesting, but we have seen again and again that people just don't trust AI chatbots in many use cases.
Mohamed (Mo) Cherif
Senior Director of Generative AI, Sitecore
CHALLENGE: RESPONSIBILITY
To more responsibly address the impact of AI and automated systems, organizations will adopt balanced overarching AI policies that account for the evolving obligations they have to all stakeholders, including shareholders, employees, customers, and business partners.
Companies must come to grips with the scale at which AI works as well as the consequences (both positive and negative) of using it — particularly if AI tools are implemented haphazardly across the enterprise without the benefit of an overarching AI policy.
Introducing a new technology like AI doesn't change a company's basic responsibilities. However, organizations must be careful to continue living up to those obligations. Many people, for instance, believe that a corporation's sole responsibility is to maximize short-term shareholder value, with little or no concern for the long term. But in that scenario, everybody loses, including stockholders who don't realize that their short-term profits will have long-term consequences.
The problem that AI introduces is the scale at which automated systems can cause harm. AI magnifies issues that are easily rectified when they affect a single person. These are thorny challenges that can only be solved by remembering that corporate responsibility extends beyond your shareholders. Your employees are also stakeholders, as are your customers and business partners. Today, it's anyone participating in the AI economy — and we need a balanced approach to the entire ecosystem.
Laura Baldwin
President, O'Reilly Media
CHALLENGE: TRUST
Transparency will be the key to building trust in AI: Transparent workflows will be essential to making AI trustworthy over the next year, allowing people to look "under the hood" and see how decisions are made. When it comes to AI trust, transparency is absolutely essential. If users can't see how an AI solution came to a certain decision, they're going to be skeptical about letting it into critical parts of the business. That's why workflows have a huge role to play in giving users a transparent view of each step of the process. If you ask, "What's our annual recurring revenue (ARR)?" and the AI spits out a number, workflows should let you dig into how that number was arrived at. You'd be able to see which workflow ran, the query made in Salesforce, and the raw results that came back. Transparency builds trust, especially in complex environments. For companies investing in AI in 2025, it's this transparency that makes all the difference between a tool that's useful and one that's just a black box.
Eoin Hinchy
CEO and Co-Founder, Tines
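To make the idea of transparent, auditable workflows concrete, here is a minimal sketch in Python (not from Tines or any specific product; the class and function names, the Salesforce query, and the ARR field are all hypothetical). Each step that contributes to an answer is recorded alongside the answer itself, so a user can see which workflow ran, what query was issued, and what raw results came back.

```python
# Minimal sketch of a transparent AI workflow: every step that contributes to an
# answer is recorded so a user can audit how the result was produced.
# All names here (WorkflowStep, answer_arr_question, run_salesforce_query) are
# hypothetical and not taken from any specific product.
from dataclasses import dataclass, field
from typing import Any

@dataclass
class WorkflowStep:
    name: str          # which step ran (e.g., "salesforce_query")
    request: Any       # what was sent (e.g., the query text)
    result: Any        # the raw result that came back

@dataclass
class TracedAnswer:
    answer: Any                                   # the number the AI returns
    trace: list[WorkflowStep] = field(default_factory=list)

def run_salesforce_query(soql: str) -> list[dict]:
    # Placeholder for a real CRM call; returns stubbed rows for illustration.
    return [{"account": "Acme", "arr": 120_000}, {"account": "Globex", "arr": 80_000}]

def answer_arr_question() -> TracedAnswer:
    soql = "SELECT Account.Name, ARR__c FROM Opportunity WHERE IsWon = true"  # hypothetical field
    rows = run_salesforce_query(soql)
    total = sum(r["arr"] for r in rows)
    return TracedAnswer(
        answer=total,
        trace=[
            WorkflowStep("salesforce_query", soql, rows),
            WorkflowStep("aggregate_arr", "sum(arr)", total),
        ],
    )

if __name__ == "__main__":
    result = answer_arr_question()
    print("ARR:", result.answer)
    for step in result.trace:          # the "under the hood" view
        print(step.name, "->", step.result)
```

In a real deployment, such a trace would be persisted and surfaced in the user interface next to the AI's answer, which is what turns a black-box number into something a skeptical user can verify.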
CHALLENGE: SKEPTICISM
Skepticism about GenAI's impact will rise as we go past the peak of the hype cycle, and some businesses will struggle to quantify the benefits. A whopping 72% of consumers say they don't use GenAI services, according to a new survey from OpenAI, one of the standard-bearers of the movement. Enterprises are still grappling with implementation times, the skillsets required, proving ROI, data accuracy, and governance around who has access to which data types, all of which adds to the skepticism.
Saket Saurabh
CEO and Co-founder, Nexla
Businesses will grow increasingly skeptical of AI offerings. We'll see a shift towards industry-specific AI tools as businesses grow weary of generic solutions that fail to deliver on their promises. This skepticism will be fueled by the proliferation of rushed AI products that create more challenges than they solve, leading to stalled adoption and investment among wary buyers.
Jim Palmer
Chief AI Officer, Dialpad
Skepticism Around Current AI Deployments Will Foster More Thoughtful and Scalable Future AI Development: AI has become a crucial topic for executives and technology leaders over the past two years, especially following the surge of generative AI tools and models in late 2022 and early 2023. However, there is growing skepticism about whether some current AI deployments are truly production-ready solutions or just conceptual and document-level implementations driven by hype — especially in the healthcare, finance, legal, and manufacturing industries. In 2025, enterprises will increasingly evaluate their end-customer needs to accurately gauge AI's potential impact, assess the relevance of new AI trends for their businesses, and determine whether their development efforts will yield a return on investment. This evaluation process will lead to more enterprises focusing their AI efforts on delivering tangible and scalable AI that will solve a particular need or challenge for their customers and businesses.
Bhavani Vangala
VP of Engineering, Onymos
CHALLENGE: EMPLOYEE CONCERNS
CIOs need to prepare for agentic AI to flip the workplace on its head: As businesses integrate AI into everyday processes, organizations must prioritize communication and reskilling their workforce now, and continue that education throughout 2025. CIOs know technologies like AI agents are poised to change the workplace, but they need to get ahead of workers' fears that the technology is coming to replace them. AI's role is to augment their jobs, not take them. Businesses that fail to proactively address employee concerns, especially around introducing agentic AI, risk resistance and inefficiency in implementing these technologies. We've seen the data, and it's clear: early adopters of GenAI are the current winners, their employees are winners too, and early adopters of AI agents are sure to follow a similar course.
Carter Busse
CIO, Workato
CHALLENGE: DATA ACCESS
In 2025, I expect enterprises to shift their AI priorities toward better data infrastructure. The outcome of many 2024 AI PoCs was that, although good use cases for AI were found, much of companies' data is still siloed on-premises and needs to be cleaned up and moved into a modern cloud data platform first.
Gordon McKenna
VP of Alliances and Cloud Evangelist, Ensono
Overcoming Data Access Challenges Becomes Critical for AI Success: In 2025, organizations will face increasing pressure to solve data access challenges as AI workloads become more demanding and distributed. The explosion of data across multiple clouds, regions, and storage systems has created significant bottlenecks in data availability and movement, particularly for compute-intensive AI training. Organizations will need to efficiently manage data access across their distributed environments while minimizing data movement and duplication. We'll see an increased focus on technologies that can provide fast, concurrent access to data regardless of its location while maintaining data locality for performance. The ability to overcome these data access challenges will become a key differentiator for organizations scaling their AI initiatives.
Haoyuan Li
Founder and CEO, Alluxio
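As a rough illustration of the data-locality point above, the sketch below (hypothetical names and a stubbed fetch function; not Alluxio's API or any vendor's) caches remote objects on first read so that repeated reads, for example by AI training jobs, avoid re-fetching the same data across regions or clouds.

```python
# A minimal sketch of the data-locality idea: a local cache sits in front of
# remote object stores so repeated reads avoid cross-region data movement.
# The store URI and fetch function are hypothetical stubs.
import os
import hashlib

CACHE_DIR = "/tmp/ai_data_cache"

def _fetch_remote(uri: str) -> bytes:
    # Placeholder for a real client (S3, GCS, HDFS, ...); returns stub bytes here.
    return f"contents of {uri}".encode()

def read_with_cache(uri: str) -> bytes:
    """Serve from the local cache when possible; fetch and cache on a miss."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    key = hashlib.sha256(uri.encode()).hexdigest()
    path = os.path.join(CACHE_DIR, key)
    if os.path.exists(path):                 # cache hit: no data movement
        with open(path, "rb") as f:
            return f.read()
    data = _fetch_remote(uri)                # cache miss: fetch once, keep locally
    with open(path, "wb") as f:
        f.write(data)
    return data

if __name__ == "__main__":
    uri = "s3://training-data/shard-0001.parquet"
    first = read_with_cache(uri)    # fetched from the remote store
    second = read_with_cache(uri)   # served from the local cache
    print(first == second)
```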
CHALLENGE: DATA QUALITY
Clean Data Will Play A Critical Role In Utilizing AI To Its Fullest Potential: Good, clean data is imperative for enterprises to leverage AI to its fullest potential. In all cases, AI is inherently affected by the quality of data used. For example, if a company is leveraging AI in a chatbot feature, they must consider what data is being used to train the generative AI model and ask critical questions. Where did the model obtain its data? What kind of data is included? Has the data been evaluated and vetted to ensure its accuracy? Poor quality, inaccurate, or incomplete data can cause multiple issues in AI training and output, ultimately negating the benefits that the AI was initially meant to create.
Nitesh Bansal
CEO, R Systems
CHALLENGE: CONSISTENCY
2025 is the Year for Consistency in AI Apps: As AI apps become central to everyday interactions in 2025, consistency and reliability will take precedence. Engineers will face growing pressure to ensure their solutions provide accurate, user-friendly experiences that avoid misinformation or errors, safeguarding both user trust and brand reputation.
Avthar Sewrathan
AI Product Lead, Timescale
CHALLENGE: AI ASSET MANAGEMENT
AI Asset Management Challenges Emerge: As businesses scale AI models, challenges with size, portability, and discoverability arise, sparking innovations in compression and model management for greater interoperability.
Robert Elwell
VP of Engineering, MacStadium
CHALLENGE: VOLUME OF CONTENT
The ease of AI-driven content creation will create an unexpected challenge: an overwhelming volume of content. As people are inundated with more information than they can consume, we'll see AI increasingly deployed to filter and prioritize this content, helping individuals and businesses focus on what's essential.
Yoav Abrahami
Chief Architect and Head of Velo, Wix Code
CHALLENGE: REGULATORY UNCERTAINTY
Resisting the regulatory rush: Because of the prominence and popularity of AI-powered tools, the race for regulation has begun, though the finish line remains elusive. The EU has put forth a comprehensive AI Act, but it won't fully take effect for some time. However, meaningful global AI regulation faces a crucial geopolitical reality: without alignment between the US and China, arguably the world's current AI superpowers, any regulatory framework will struggle to gain traction. And, despite the many impressive strides that have been made in AI technology recently, this tech is still in its infancy. Regulating it too early could potentially hamper competitiveness and kill innovation before it has the chance to bloom. We're likely to see a period of regulatory uncertainty in the coming year, but one thing is clear: effective AI governance will require not just technical expertise, but adept diplomatic maneuvering between global powers.
Nikola Mrkšić
CEO and Co-Founder, PolyAI
AI GOVERNANCE
AI guardrails will go hand-in-hand with GenAI adoption: As more businesses increasingly integrate GenAI into their operations, AI guardrails, such as policies governing data collection and usage and compliance with regulatory standards, will play a pivotal role in ensuring its ethical and effective use in 2025. Designed to let GenAI systems operate within ethical, legal, and technical boundaries, AI guardrails are crucial for building trust in AI applications and ensuring it's used safely across various sectors. Organizations adopting GenAI must establish robust guardrails to navigate the technology's complexities. By doing so, they can maximize AI's transformative potential while mitigating risks.
Soumendra Mohanty
CSO, Tredence
Mad scramble for AI guidelines and frameworks: With GenAI tools now ubiquitous, 2025 will see a frantic scramble to rein in AI — just as we saw with social media. The focus will not only be on protecting users but also on having frameworks to safeguard AI from other AI.
Michael Adjei
Director, Systems Engineering, Illumio
AI ADOPTION BARRIERS FADE AWAY
The AI adoption hurdles of today will fade away, if not vanish entirely. By 2025, many of the current barriers to AI adoption, such as issues with trust, errors, synthetic data, and AI rights, will be largely resolved for numerous applications. For instance, we will see advancements in creating chatbots that consistently cite their sources, improving the reliability of how users talk to their data. Synthetic data might be an issue if you're talking about training future generations of models, but the current generation of models is actually quite good for a whole set of applications that we can build right now.
Benoit Dageville
President and Co-Founder, Snowflake
AI BACKLASH MITIGATED
A lot of AI "backlash" or negativity will be mitigated one successful use case at a time. AI hallucinations are the biggest blocker to getting generative AI tools in front of end users. Right now, a lot of generative AI is being deployed for internal use cases only because it's still challenging for organizations to control exactly what the model is going to say, and to ensure that the results are accurate. However, there will be improvements, especially in terms of keeping AI outputs within acceptable boundaries. For example, organizations can now run guardrails on the output of these models to constrain what generative AI can or can't say, what tone is or isn't allowed, etc. Models increasingly understand these guardrails, and they can be tuned to protect against things like bias. In addition to establishing guardrails, access to more data, to diverse data, and to more relevant sources will improve AI accuracy.
Baris Gultekin
Head of AI, Snowflake
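As a rough illustration of running guardrails on model output, the sketch below checks a draft response against simple policy rules for blocked topics, banned claims, and tone before it reaches the end user. The rule set and function names are illustrative assumptions, not any vendor's guardrail API.

```python
# A minimal sketch of output guardrails: the model's draft response is checked
# against simple policy rules before it is shown to the end user.
# The rules and fallback messages here are illustrative only.
import re

BLOCKED_TOPICS = ["medical diagnosis", "legal advice"]       # topics the bot may not address
BANNED_PHRASES = ["guaranteed returns", "cannot fail"]        # claims the bot may not make

def apply_guardrails(draft: str) -> tuple[bool, str]:
    """Return (allowed, message). If a rule trips, a safe fallback is returned instead."""
    lowered = draft.lower()
    for topic in BLOCKED_TOPICS:
        if topic in lowered:
            return False, "I can't help with that topic. Let me connect you with a specialist."
    for phrase in BANNED_PHRASES:
        if phrase in lowered:
            return False, "I can't make that claim, but here is what our published data shows."
    # Crude tone check: block shouting or excessive punctuation as off-brand.
    if re.search(r"[A-Z]{6,}|!{3,}", draft):
        return False, "Let me rephrase that more clearly."
    return True, draft

if __name__ == "__main__":
    ok, message = apply_guardrails("Our fund offers GUARANTEED RETURNS!!!")
    print(ok, message)   # False, plus the fallback text
```

Real guardrail systems layer model-based classifiers on top of rules like these, but the principle is the same: the output is constrained before the user ever sees it.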
Go to: 2025 AI Predictions - Part 3