
Generative AI and Kafka: Confluent’s Vision for Data Streaming

Straight talk from Confluent leaders on tackling cloud-native Kafka challenges, driving generative AI with data streaming, and revolutionizing real-time insights.

Manisha Sharma

Confluent has emerged as a game-changer in the data streaming world, driving the evolution of Apache Kafka from on-premises systems to fully managed cloud-native services. In this interview, we sat down with Will LaForest, Field CTO, and Rohit Vyas, Regional Vice President, to unpack the challenges of building cloud-native Kafka services, the innovations powering generative AI, and Confluent’s plans to redefine real-time data streaming.


What challenges did you face in building a fully managed, cloud-native Kafka service, and how did Confluent’s engineering team solve them?

Will LaForest: When Confluent began its journey to create a fully managed service, we were eager to get to market quickly, and in the process we went about it the wrong way and did exactly what one might expect: we deployed Kafka in the cloud, provided some minimal functionality around it, and, unsurprisingly, it just wasn’t that great. So we pivoted, scrapped everything, and rebuilt Confluent Cloud by fundamentally re-engineering its core.

From this experience, we learnt that you can’t simply take Apache Kafka and shift it to the cloud, just as you can’t equate S3 object storage with a hosted clustered file system; they’re entirely different things. We had to apply the same principle to Confluent, disaggregating all the layers and rebuilding everything to be cloud-native. This was our first big challenge and a major learning curve.


As a result, we developed the Kora engine. At this turning point, we understood that everything we build for the cloud has to be cloud-native and has to leverage the underlying cloud-native primitives.

Rohit Vyas: Confluent is the only data streaming platform company that offers a committed SLA on core Kafka. No one else does, and this is a significant value proposition, backed by a tremendous engineering effort. As Will mentioned, the Kora engine orchestrates everything behind the scenes—it’s our secret sauce.

We remain fully committed to expanding our work, with over 15% of our global workforce based in India and driving meaningful projects around the world. That contribution goes beyond revenue; it plays an important role in Confluent’s engineering efforts. We are constantly tackling new challenges as we push the boundaries with Kafka, improving the technology and bringing new use cases to market. For example, Confluent handles a significant amount of India’s UPI traffic as well as key sales events for top fashion e-retailers such as Meesho. On the newer-technology front, we’re augmenting AI, including generative AI, with streaming data. Engineers here thrive on solving challenges, defining new roadmaps, and pushing the limits of what’s possible. That’s where we are.


How are platforms like Confluent integrating with generative AI models, and what role do they play in powering AI initiatives?

Rohit Vyas: When we talk about AI, it’s clear that the concept isn’t new; analytical AI has been around since as early as 2013-2014. Generative AI, however, is a recent development in which responses to prompts are synthesized in real time, meaning that the input data, training data, model inference, and governance all need to be real time as well. This essentially demands that data engineering operates on a “prompt time” basis, because the prompt is processed the moment it arrives.

It’s crucial to recognize that while language models can be complex, their fundamental task is to break down the prompt into tokens, which are then inferred against existing data to generate a response. At the same time, the data that the model produces must be governed carefully to avoid unintended or inappropriate responses. Ensuring this level of governance is key to maintaining accuracy and trustworthiness in AI outputs.
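As a rough illustration of that first step, a prompt is broken into integer tokens before any inference happens. The short Python sketch below uses the open-source tiktoken library purely as an example; the encoding name and prompt are arbitrary and not something Confluent prescribes.

# Illustrative only: what "breaking a prompt into tokens" looks like in
# practice, using the open-source tiktoken library. Encoding and prompt
# are arbitrary examples.
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")
prompt = "What is the status of my flight to Delhi?"

tokens = encoding.encode(prompt)   # a list of integers the model infers against
print(len(tokens), tokens)
print(encoding.decode(tokens))     # decodes back to the original prompt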


At Confluent, our technology addresses these needs across four key pillars: stream, connect, govern, and process. Governance is embedded within the platform, with support for measures such as data lineage that track how data is ingested, processed, accessed, and modified. This level of oversight contributes significantly to a reliable generative AI architecture.
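In practice, a common entry point for the govern pillar is attaching a schema to every topic so that producers and consumers share an enforced data contract. The sketch below is a minimal illustration using the confluent-kafka Python client and Schema Registry; the topic name, schema, event fields, and endpoints are hypothetical examples, not a Confluent reference implementation.

# Minimal sketch: producing schema-governed events. Topic, schema, and
# endpoints are hypothetical; assumes confluent-kafka[avro] is installed.
from confluent_kafka import Producer
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.avro import AvroSerializer
from confluent_kafka.serialization import SerializationContext, MessageField

schema_str = """
{
  "type": "record",
  "name": "OrderEvent",
  "fields": [
    {"name": "order_id", "type": "string"},
    {"name": "amount",   "type": "double"},
    {"name": "status",   "type": "string"}
  ]
}
"""

schema_registry = SchemaRegistryClient({"url": "http://localhost:8081"})
serializer = AvroSerializer(schema_registry, schema_str)
producer = Producer({"bootstrap.servers": "localhost:9092"})

event = {"order_id": "o-1042", "amount": 1899.0, "status": "PLACED"}
producer.produce(
    "orders",
    value=serializer(event, SerializationContext("orders", MessageField.VALUE)),
)
producer.flush()  # events that do not match the registered schema fail to serialize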

Will LaForest: Data streaming has been used in AI for some time; one early and well-known example is Uber’s machine learning platform, Michelangelo, which relied on traditional machine learning and deep learning. GenAI, however, has transformed how we leverage models. Instead of everyone constantly training and retraining their own specific models, which is prohibitively expensive for large language models, a general-purpose model is trained on a massive dataset up front. Business-specific context, though, must still be frequently updated. For example, we are working with an airline on building a chat agent for customer interactions. General-purpose models such as OpenAI’s GPT models or Llama lack specific knowledge about the airline’s customers, flights, maintenance records, and loyalty programs. That contextual information has to be constantly fresh and used to augment the general-purpose model at inference time to produce accurate responses.
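The pattern LaForest describes, streaming fresh business context into a store that is consulted at prompt time, can be sketched roughly as follows. This is an illustration only: the topic, field names, and the generate() stub are hypothetical, and it assumes the confluent-kafka Python client with JSON-encoded events.

# Illustrative sketch: keep per-customer context fresh from a Kafka topic and
# attach it to the prompt at inference time. Topic, fields, and generate() are
# hypothetical; assumes the confluent-kafka Python client and JSON events.
import json
from confluent_kafka import Consumer

context_by_customer = {}  # toy in-memory store of the latest state per customer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "context-updater",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["customer-updates"])  # e.g. bookings, flight changes, loyalty tier

def refresh_context(timeout=1.0):
    # Drain any new events so the store reflects the latest known state.
    while True:
        msg = consumer.poll(timeout)
        if msg is None:
            return
        if msg.error():
            continue
        event = json.loads(msg.value())
        context_by_customer[event["customer_id"]] = event

def generate(augmented_prompt):
    # Stand-in for a call to any general-purpose model (e.g. an LLM API).
    raise NotImplementedError("plug in your model call here")

def answer(customer_id, prompt):
    refresh_context()
    context = context_by_customer.get(customer_id, {})
    return generate(f"Context: {json.dumps(context)}\n\nQuestion: {prompt}")

In a production system the context store would more likely be a vector database or key-value store kept up to date by a stream processor rather than polled inside the request path, but the principle is the same: the model stays general, the context stays fresh.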

In fact, I’d argue that if you’re not using data streaming for generative AI, you’re doing it wrong: it will either be incredibly expensive and time-consuming to implement, or it simply won’t deliver the desired outcomes. The industry is already moving in this direction.


How important is real-time data processing for training and deploying modern AI models?

Will LaForest: To this day, most AI models aren’t trained in real time, although we have seen examples where incremental model updates happen much more rapidly. Kafka and data streaming are still widely used here, as they’re the most effective way to gather and deliver data across the enterprise. However, while training typically happens in batches, invoking these models often needs to occur in real time. For example, when you submit a prompt, you expect a near-instant response, not a delay of several seconds. Even with traditional machine learning, real-time inferencing is becoming essential; fraud detection is a good example.
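A minimal sketch of that split, a model trained offline in batch but invoked per event in real time, might look like the following. The topics, field names, threshold, and fraud_model.pkl file are hypothetical, and a scikit-learn-style classifier plus the confluent-kafka Python client are assumed.

# Sketch: batch-trained model, real-time scoring of each transaction event.
# Topics, fields, threshold, and fraud_model.pkl are hypothetical examples.
import json
import pickle
from confluent_kafka import Consumer, Producer

with open("fraud_model.pkl", "rb") as f:  # trained offline, in batch
    model = pickle.load(f)                # assumed scikit-learn-style classifier

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "fraud-scorer",
    "auto.offset.reset": "latest",
})
consumer.subscribe(["transactions"])
producer = Producer({"bootstrap.servers": "localhost:9092"})

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    txn = json.loads(msg.value())
    features = [[txn["amount"], txn["merchant_risk"], txn["velocity"]]]
    score = model.predict_proba(features)[0][1]  # inference runs per event, not per batch
    if score > 0.9:
        producer.produce("fraud-alerts", json.dumps({**txn, "fraud_score": score}))
        producer.poll(0)  # serve delivery callbacks without blocking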

 


In Industry 4.0, we see predictive maintenance as a common use case. Time is money, so the faster a potential issue is detected, the sooner you can take action and prevent productivity losses. In these scenarios, real-time data plays a vital role in enabling fast, accurate insights.

Rohit Vyas: Another emerging focus in generative AI is Retrieval-Augmented Generation (RAG), which involves adding a layer of context, checks, and balances to a pre-trained model. This process relies heavily on data streaming, as batching isn’t practical for RAG.

Will LaForest: In the airline industry example I spoke of, real-time updates to flight statuses and customer interactions are essential, and in such cases, batch processing simply isn’t even an option.


Rohit Vyas: Data streaming has become a must-have technology. Life happens in real time, and the processing of real-time signals can’t be an afterthought; it must occur immediately.

Beyond AI, we can consider the two key states within any enterprise: the operational and the analytical. Operational functions “run the business,” while analytical insights “build the business.” If insights from the analytical side can’t quickly flow back into operational actions, opportunities for improvement are lost. For digital companies, whether in e-retail or food delivery, knowing which coupon codes are in high demand or when inventory is low must happen in real-time to avoid revenue impacts. Consider examples like Swiggy, Zomato, Instacart, and Uber—all of which depend on data streaming. Their services simply couldn’t function effectively without it. In my experience working with companies worldwide, data streaming has become absolutely essential to operations.

Will LaForest: Across virtually all sectors, including telecom, manufacturing, retail, travel, and digital-native startups, data streaming technology is driving AI innovation. It’s the most effective way to tap into the operational silos where critical business data resides, such as reservation systems or inventory management. Delivering this data to training tools and applying it to models in real time is essential. As we shift towards generative AI and architectures like RAG become critical, it’s important that companies can achieve more with fewer resources. Imagine needing a whole team just to manage Kafka; that’s where a streamlined, fully managed data streaming service becomes invaluable.

How cost-effective is it to self-manage data streaming?

Will LaForest: Not very. Self-managing Kafka is expensive both financially and in terms of human capital. Beyond infrastructure costs, it requires dedicated personnel who often become siloed in maintenance roles, limiting their career growth and preventing them from contributing to more strategic initiatives.

Rohit Vyas: That’s why we have Confluent—our fully managed cloud service offloads much of that work. We operate across Azure, GCP, and AWS equally, handling the heavy lifting so customers don’t have to. Our Kora engine enables significant offloading, allowing one or two engineers to accomplish what would otherwise require a team of 20 or 30 because much of the workload is managed in our service. This approach helps us achieve more with less.

Although Kafka is open source by design, and we intended it to be that way, there are various flavors of Kafka in the market. Others can repackage it and present it in different forms, but Confluent is the only company that can offer an SLA on Kafka. We built it, we understand it, and we’ve made it cloud-native. So, for companies of any size and in any industry segment that want to do more with less, stay at the forefront of data streaming, and benefit from enterprise-grade support and a strong roadmap, Confluent is the right choice.

 

What challenges have you faced, and are you still facing any?

Will LaForest: For the past 60 years, the approach to data has remained largely unchanged. Data has been stored in databases and queried in batches. While numerous advancements have been made with faster, more scalable databases, new query languages, and improved indexing, the fundamental model has stayed the same: store data, then query it.

Data streaming, however, is a fundamentally different paradigm. Instead of bringing questions and analysis to static data, we’re bringing data to the questions. This shift in mindset is what makes data streaming so special—it’s transformative. But for those who have worked with traditional data methods for a long time, this change can initially be difficult to grasp.
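One way to see that shift concretely: the same business question can be asked against data at rest on a schedule, or evaluated continuously as each event arrives. The sketch below is illustrative only; the topic, fields, and threshold are hypothetical, and it assumes the confluent-kafka Python client with JSON-encoded events.

# The same question, asked two ways. Hypothetical topic, fields, and threshold.
#
# Store-then-query: run periodically against data at rest and hope nothing
# important happened between runs.
#   SELECT item_id FROM inventory WHERE units_on_hand < 10;
#
# Bring the data to the question: evaluate every inventory event on arrival.
import json
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "low-stock-watch",
    "auto.offset.reset": "latest",
})
consumer.subscribe(["inventory-updates"])

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    update = json.loads(msg.value())
    if update["units_on_hand"] < 10:  # the "query" runs on arrival, not on a schedule
        print(f"Restock {update['item_id']} now")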

This is one reason I’m so excited about India. Among all our markets, data streaming adoption in India is exceptional. I see people entering the workforce here who inherently understand this new paradigm. To me, India is an engine for growth in data streaming.

Ultimately, the challenge is shifting from batch processing and point-to-point connections to continuous data processing. It takes a shift in mindset to adjust, but once you do, the benefits are tremendous.

What can we expect next from Confluent?

Rohit Vyas: We are on a mission to bring data in motion to industries across all sectors. There’s hardly any industry, company, or government agency that doesn’t need streaming data. The world is our oyster, and we’re here to make an impact. We believe that fast data beats slow data. Our goal is to reduce the gap between data and decision-making. The more informed our users are, the better they can perform in their business, which ultimately benefits the broader community. Whether it’s UPI, Digi Yatra, or other government services, scale is essential. The government is putting data to work, and the more real-time it is, the happier the citizens are. That’s what we’re striving for.

Will LaForest: One of the things I love about Confluent is our laser focus on one mission: building the best data streaming platform so our customers can succeed. This means not only continuing to develop our four pillars but also making it easier for all kinds of companies—from small startups to large enterprises like OpenAI and Citibank—to access these massive business benefits. We’re committed to continually improving the experience for them, regardless of size, to help them achieve success.
