AI was 2024’s hot topic, so how is it evolving? What are we seeing in AI today, and what do we expect to see in the next 12-18 months? We asked Andrew Brust, Chester Conforte, Chris Ray, Dana Hernandez, Howard Holton, Ivan McPhee, Seth Byrnes, Whit Walters, and William McKnight to weigh in. 

First off, what’s still hot? Where are AI use cases seeing success?

Chester: I see people leveraging AI beyond experimentation. They’ve had the opportunity to experiment, and now we’re getting to the point where true, vertical-specific use cases are being developed. I’ve been tracking healthcare closely and seeing more use-case-specific, fine-tuned models, such as AI that helps doctors be more present during patient conversations through tools that listen and take notes. 

I believe ‘small is the new big’ is the key trend: specialized models for individual subspecialties such as hematology, pathology, or pulmonology. AI in imaging technologies isn’t new, but it’s now coming to the forefront, with new models used to accelerate cancer detection. It has to be backed by a healthcare professional: AI can’t be the sole source of a diagnosis. A radiologist needs to validate, verify, and confirm the findings. 

Dana: In my reports, I see AI leveraged effectively from an industry-specific perspective. For instance, vendors focused on finance and insurance are using AI for tasks like preventing financial crime and automating processes, often with specialized, smaller language models. These industry-specific AI models are a significant trend I see continuing into next year.

William: We’re seeing cycles reduced in areas like pipeline development and master data management, which are becoming more autonomous. An area gaining traction is data observability—2025 might be its year. 

Andrew: Generative AI is working well in code generation—generating SQL queries and creating natural language interfaces for querying data. That’s been effective, though it’s a bit commoditized now. 
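
To make that pattern concrete, here’s a minimal text-to-SQL sketch, assuming the OpenAI Python client, an illustrative model name, and a hypothetical `orders` schema; a real deployment would validate the generated SQL (and keep it read-only) before executing anything.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical schema handed to the model as context.
SCHEMA = """
CREATE TABLE orders (
    id INT PRIMARY KEY,
    customer_id INT,
    total NUMERIC(10, 2),
    created_at TIMESTAMP
);
"""

def question_to_sql(question: str) -> str:
    """Translate a natural language question into a SQL query over SCHEMA."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Translate the user's question into a single SQL query "
                        "for this schema. Return only SQL.\n" + SCHEMA},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(question_to_sql("What was total revenue last month?"))
```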

More interesting are advancements in the data layer and architecture. For instance, Postgres has a vector database extension, pgvector, which is useful for retrieval-augmented generation (RAG) queries. I see a shift from the “wow” factor of demos to practical use: using the right models and data to reduce hallucinations and make data more accessible. Over the next two or three years, vendors will move from basic query intelligence to more sophisticated tools.
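
As a rough illustration of the retrieval step in a RAG query over pgvector, the sketch below embeds a question and pulls the nearest chunks from Postgres by cosine distance; the `docs` table, connection string, and embedding model are assumptions for the example.

```python
import psycopg2  # plus, on the database side: CREATE EXTENSION vector;
from openai import OpenAI

client = OpenAI()

def retrieve(question: str, k: int = 5) -> list[str]:
    """Embed the question, then return the k nearest chunks by cosine distance.
    Assumes: CREATE TABLE docs (id SERIAL, content TEXT, embedding VECTOR(1536));"""
    emb = client.embeddings.create(
        model="text-embedding-3-small", input=question
    ).data[0].embedding
    literal = "[" + ",".join(map(str, emb)) + "]"  # pgvector's text format
    with psycopg2.connect("dbname=rag") as conn, conn.cursor() as cur:
        # <=> is pgvector's cosine-distance operator
        cur.execute(
            "SELECT content FROM docs ORDER BY embedding <=> %s::vector LIMIT %s",
            (literal, k),
        )
        return [row[0] for row in cur.fetchall()]
```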

How are we likely to see large language models evolve? 

Whit: Globally, we’ll see AI models shaped by cultural and political values. It’s less about technical developments and more about what we want our AIs to do. Consider Elon Musk’s xAI, whose Grok model is built into X (formerly Twitter). It’s uncensored, in contrast to Google Gemini, which tends to lecture you if you ask the wrong question. 

Different providers, geographies, and governments will either move toward freer speech or seek to control AI’s outputs. The difference is noticeable. Next year, we’ll see a rise in models without guardrails, which will provide more direct answers.

Ivan: There’s also a lot of focus on structured prompts. A slight change in phrasing, like using “detailed” versus “comprehensive,” can yield vastly different responses. Users need to learn how to use these tools effectively.

Whit: Indeed, prompt engineering is crucial. Depending on how words are embedded in the model, you can get drastically different answers. If you ask the AI to explain what it wrote and why, that forces it to think more deeply. We’ll soon see domain-trained prompting tools: agentic models that can help optimize prompts for better outcomes.
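
As an illustrative sketch of both points, the snippet below asks the same question with two near-synonymous adjectives and then applies the self-explanation step Whit describes; the model name and phrasing are assumptions, not recommendations.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

question = "Summarize the tradeoffs of fine-tuning versus RAG."

for adjective in ("detailed", "comprehensive"):
    answer = ask(f"Give a {adjective} answer. {question}")
    # Whit's technique: asking the model to explain its own output
    # tends to surface the reasoning (and the gaps) behind the first pass.
    explanation = ask(f"Explain what you wrote and why:\n\n{answer}")
    print(f"--- {adjective} ---\n{answer}\n{explanation}\n")
```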

How is AI building on and advancing the use of data through analytics and business intelligence (BI)?

Andrew: Data is the foundation of AI. We’ve seen how generative AI over large amounts of unstructured data can lead to hallucinations, and projects are getting scrapped. We’re seeing a lot of disillusionment in the enterprise space, but progress is coming: we’re starting to see a marriage between AI and BI, beyond natural language querying. 

Semantic models already exist in BI to make structured data more understandable. When combined with generative AI, they can power useful chatbot-like experiences, pulling answers from structured and unstructured data sources. This approach creates business-useful outputs while reducing hallucinations through contextual enhancement. This is where AI will become more grounded and data democratization will become more effective.
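
A hypothetical sketch of that grounding pattern: pull a metric’s governed definition from a toy stand-in for a BI semantic model and inject it into the prompt alongside retrieved text, so the model answers from definitions rather than guessing. Every name here is illustrative.

```python
from openai import OpenAI

client = OpenAI()

# Toy stand-in for a BI semantic layer: governed, named metric definitions.
SEMANTIC_MODEL = {
    "churn_rate": {
        "description": "Share of customers who cancelled during the period.",
        "sql": "SELECT COUNT(*) FILTER (WHERE cancelled) * 1.0 / COUNT(*) FROM customers",
    },
}

def grounded_answer(question: str, metric: str, snippets: list[str]) -> str:
    """Answer from the governed definition plus retrieved unstructured context."""
    definition = SEMANTIC_MODEL[metric]
    context = "\n".join(snippets)
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer only from the provided definition and context. "
                        "Say so if the answer is not there."},
            {"role": "user",
             "content": f"Metric: {metric}\n"
                        f"Definition: {definition['description']}\n"
                        f"SQL: {definition['sql']}\n"
                        f"Context:\n{context}\n\n"
                        f"Question: {question}"},
        ],
    )
    return resp.choices[0].message.content
```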

Howard: Agreed. BI hasn’t worked perfectly for the last decade. Those producing BI often don’t understand the business, and the business doesn’t fully grasp the data, which leads to friction. This can’t be solved by GenAI alone; it requires mutual understanding between both groups. Forcing data-driven approaches without that understanding doesn’t get organizations very far.

What other challenges are you seeing that might hinder AI’s progress? 

Andrew: The euphoria over AI has diverted mindshare and budgets away from data projects, which is unfortunate. Enterprises need to see them as the same. 

Whit: There’s also the AI startup bubble—too many startups, too much funding, burning through cash without generating revenue. It feels like an unsustainable situation, and we’ll see it burst a bit next year. There’s so much churn, and keeping up has become ridiculous.

Chris: Relatedly, I’m seeing vendors build solutions to “secure” GenAI and LLMs. Penetration testing as a service (PTaaS) vendors are offering LLM-focused testing, and cloud-native application protection platform (CNAPP) vendors are offering controls for LLMs deployed in customer cloud accounts. I don’t think buyers have even begun to understand how to use LLMs effectively in the enterprise, yet vendors are pushing new products and services to “secure” them. This is ripe for popping, although some LLM security products and services will persist. 

Seth: On the supply chain security side, vendors are starting to offer AI model analysis to identify models used in environments. It feels a bit advanced, but it’s starting to happen. 
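
As a rough sketch of what identifying models in an environment can involve at the simplest level, the snippet below walks a directory tree for common model artifact extensions and hashes each file for an inventory; real products go much further (registries, dependencies, provenance), and the root path and extension list here are assumptions.

```python
import hashlib
from pathlib import Path

# File extensions commonly used for serialized ML models (illustrative list).
MODEL_EXTENSIONS = {".safetensors", ".pt", ".onnx", ".gguf", ".pb"}

def sha256_of(path: Path) -> str:
    """Hash in chunks, since model files are often multiple gigabytes."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def inventory_models(root: str) -> list[tuple[str, str]]:
    """Return (path, sha256) pairs for files that look like model artifacts."""
    return [
        (str(p), sha256_of(p))
        for p in Path(root).rglob("*")
        if p.is_file() and p.suffix.lower() in MODEL_EXTENSIONS
    ]

for path, digest in inventory_models("/srv/app"):  # hypothetical root
    print(digest[:12], path)
```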

William: Another looming factor for 2025 is the EU Data Act, which will require that AI systems can be shut off at the click of a button. This could have a big impact on AI’s ongoing development.
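
One hypothetical way to satisfy a shut-off requirement in an inference service is a kill-switch flag checked before every model call, sketched below; in production, the flag would live in shared storage or a feature-flag service so one action stops every replica. The service shape here is an assumption.

```python
import threading

class KillSwitch:
    """Process-wide off switch. In production this would read a shared
    store (a database row or feature-flag service) so one click stops all replicas."""

    def __init__(self) -> None:
        self._enabled = threading.Event()
        self._enabled.set()  # AI features start enabled

    def disable(self) -> None:
        self._enabled.clear()

    def allows_inference(self) -> bool:
        return self._enabled.is_set()

switch = KillSwitch()

def generate(prompt: str) -> str:
    if not switch.allows_inference():
        raise RuntimeError("AI features are disabled by the kill switch")
    return f"(model output for: {prompt})"  # placeholder for a real model call

print(generate("hello"))  # works while enabled
switch.disable()          # the "one click"
```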

The million-dollar question: how close are we to artificial general intelligence (AGI)?

Whit: AGI remains a pipe dream. We don’t understand consciousness well enough to recreate it, and simply throwing compute power at the problem won’t make something conscious—it’ll just be a simulation. 

Andrew: We can progress toward AGI, but we must stop thinking that predicting the next word is intelligence. It’s just statistical prediction—an impressive application, but not truly intelligent.

Whit: Exactly. Even when AI models “reason”, it’s not true reasoning or creativity. They’re just recombining what they’ve been trained on. It’s about how far you can push combinatorics on a given dataset.

Thanks all!
