DevConf.IN 2026

09:00
60min
Badge pickup and networking (VYAS - G)
(Workshop) VYAS - G - Room#VY004
09:00
60min
Badge pickup and networking (VYAS - G)
VYAS - G - Room#VY003
09:00
60min
Badge pickup and networking (VYAS - G)
VYAS - G - Room#VY015
09:00
60min
Badge pickup and networking (VYAS - G)
VYAS - G - Room#VY016
09:00
60min
Badge pickup and networking (VYAS - G)
VYAS - 1 - Room#VY124
09:00
60min
Badge pickup and networking (VYAS - G)
VYAS - 1 - Room#VY102
09:00
60min
Badge pickup and networking (VYAS - G)
VYAS - 1 - Room#VY103
09:00
60min
Badge pickup and networking (VYAS - G)
VYAS - 1 - Room#VY104
09:00
60min
Badge pickup and networking (VYAS - G)
(Booths) VYAS - G - Open area
09:00
60min
Morning Mixer: Badge Pickup (G Floor) & Networking Coffee (8th Floor)

09:00 AM onwards | Badge Pickup (VYAS-G): Early arrivals get a Surprise Memento!
09:00 - 09:45 AM | Networking Coffee (VYAS-8): Fuel up and meet the community before the rush
09:45 AM | Settling In: Wrap up networking and take your seats for the Inaugural Ceremony and Opening Keynote

Opening, Keynotes, Closing: Floor8 Terrace
10:00
60min
Opening Keynote: Floor8 Terrace
(Workshop) VYAS - G - Room#VY004
10:00
60min
Opening Keynote: Floor8 Terrace
VYAS - G - Room#VY003
10:00
60min
Opening Keynote: Floor8 Terrace
VYAS - G - Room#VY015
10:00
60min
Opening Keynote: Floor8 Terrace
VYAS - G - Room#VY016
10:00
60min
Opening Keynote: Floor8 Terrace
VYAS - 1 - Room#VY124
10:00
60min
Opening Keynote: Floor8 Terrace
VYAS - 1 - Room#VY102
10:00
60min
Opening Keynote: Floor8 Terrace
VYAS - 1 - Room#VY103
10:00
60min
Opening Keynote: Floor8 Terrace
VYAS - 1 - Room#VY104
10:00
60min
Opening Keynote: Floor8 Terrace
(Booths) VYAS - G - Open area
10:00
10min
Inaugural Ceremony: Light, Peace, and Purpose

10:00 AM | The Invocation and Honors: World Peace prayer, welcoming key leadership, ceremonial Diya lighting, and MIT-style felicitation.
10:07 AM | The Gratitude: Acknowledgments for MIT-RH, community partners, and the audience.
10:10 AM | The Launch: Official "Declaration of Opening" for DevConf.IN.

Opening, Keynotes, Closing: Floor8 Terrace
10:10
25min
Building India’s Sovereign AI Stack with Local Talent & Open Global Collaboration
Vincent Caldeira

India is redefining the global AI landscape by treating artificial intelligence not just as a technology, but as a sovereign Digital Public Infrastructure (DPI). This keynote explores India’s unique strategy: democratizing access to compute, data, and models to ensure AI serves the public good, mirroring the transformative success of UPI and Aadhaar. We will examine this "full-stack" approach through two critical lenses.

First, we discuss leveraging global open-source innovation. By adopting open standards and models, India is building a transparent, secure AI infrastructure that avoids vendor lock-in and fosters global collaboration while maintaining strategic autonomy.

Second, we focus on local talent. With the world's fastest-growing developer community, the mission is to pivot from consumption to creation—building "sovereign" Indian language models and high-impact applications for diverse sectors like healthcare and agriculture.

However, scaling this vision faces challenges in compute availability, high-quality datasets, and the need for deep-tech skills. As we approach the India AI Impact Summit, this session issues a direct call to action for engineers: help transition India from an AI adopter to an AI shaper. Join us to discover how open collaboration and local innovation can build a self-reliant, inclusive AI future.

AI, Data Science, and Emerging Tech
Opening, Keynotes, Closing: Floor8 Terrace
10:35
5min
Keynote switch
Opening, Keynotes, Closing: Floor8 Terrace
10:40
15min
Infrastructure to Intelligence: Building Scalable AI with Collaborative Ecosystems
Vijay Seshadri

Scaling AI in production requires more than powerful models - it demands thoughtful system design and architectural trade-offs. In this session, we explore how JioHotstar builds AI systems that serve millions by balancing three critical constraints: accuracy, cost, and latency. We'll examine how models fit within larger systems, why traditional validation approaches fail for stochastic AI components, and the engineering patterns needed to operationalize intelligence at scale. From data pipelines to personalized discovery engines, we'll share a practical blueprint for building scalable AI applications that work in the real world. Join us for an engineering-focused look at transforming infrastructure into intelligent systems through open-source ecosystems and pragmatic architectural choices.

AI, Data Science, and Emerging Tech
Opening, Keynotes, Closing: Floor8 Terrace
10:55
5min
Open up tracks and booths
Opening, Keynotes, Closing: Floor8 Terrace
11:00
390min
Workshops, sessions, booths on G and 1 Floors
Opening, Keynotes, Closing: Floor8 Terrace
11:00
15min
Break
(Workshop) VYAS - G - Room#VY004
11:00
15min
Break
VYAS - G - Room#VY003
11:00
15min
Break
VYAS - G - Room#VY015
11:00
15min
Break
VYAS - G - Room#VY016
11:00
15min
Break
VYAS - 1 - Room#VY124
11:00
15min
Break
VYAS - 1 - Room#VY102
11:00
15min
Break
VYAS - 1 - Room#VY103
11:00
15min
Break
VYAS - 1 - Room#VY104
11:00
30min
Break & Booth setup
(Booths) VYAS - G - Open area
11:15
15min
Cognitive A11y: Designing Neuroaccessible docs for Neurodivergent minds
Kalyani Desai, Yash Guddeti

Not all users read, process, or understand information the same way, and documentation must reflect that. Many users struggle with dense, complex documentation not because the content is wrong, but because their brains process information differently. Cognitive accessibility focuses on designing documentation that works for neurodivergent minds: readers with ADHD, dyslexia, autism, and varied processing styles.

This talk explores simple, high-impact techniques to create neuroaccessible docs: reducing cognitive load, improving structure, writing for clarity, and designing flows that support different thinking patterns. By understanding how brains read, skip, scan, and absorb information, writers can build documentation that is not only inclusive but significantly easier for everyone to use.

User Experience and Design Engineering
VYAS - G - Room#VY003
11:15
15min
Empowering the Future of ML: Scaling the Kubeflow Community in 2026
Amita Sharma, Gaurav Kamathe

As a cornerstone of Production AI, Kubeflow continues to redefine how enterprises orchestrate machine learning workflows. However, the project's true strength lies in its ecosystem. In this lightning talk, we’ll share Red Hat’s vision for Kubeflow community engagement in 2026. Join Amita Sharma (Kubeflow Trainer lead) and the team as we discuss our roadmap for fostering contributor growth, enhancing upstream collaboration, and what you can expect from the Kubeflow presence at this year's community booth. Whether you're a seasoned contributor or an ML enthusiast, learn how you can help shape the future of open-source AI.

AI, Data Science, and Emerging Tech
VYAS - 1 - Room#VY124
11:15
75min
Full-Stack Generative AI: A Hands-On, Persona-Based Lab on AI platform
Ritesh Shah, Mitesh Sharma

Enterprise adoption of Generative AI requires building a robust, secure, and scalable service. This demands a unified effort from platform engineering, development, and operations. How do you manage infrastructure, integrate models with apps, and maintain governance? In this unique, hands-on lab, you will experience the complete AI lifecycle by rotating through four critical personas on Red Hat OpenShift AI. As a Platform Engineer, you'll deploy model serving runtimes and prepare GPU infrastructure for hosting private, scalable LLMs. As an AI Application Developer, you'll connect a real-world app to a private LLM endpoint and use a code assistant to accelerate development. As a DevOps Practitioner, you'll use agentic AI to intelligently monitor the health and resource consumption of your AI platform/cluster. As a Technical Decision Maker, you'll use dashboards to analyze model usage, track costs, and make informed governance decisions. Participants will walk away with a holistic, practical understanding of how to build, integrate, and manage a complete, production-ready Generative AI service on a single platform.

AI, Data Science, and Emerging Tech
(Workshop) VYAS - G - Room#VY004
11:15
15min
Getting Started with Open-Source Contributions: A Beginner's Guide
Jayapriya Pai, S Ashwin

Open-source contributions can seem daunting, but they're a great way to learn, grow, and give back to the community. This talk aims to give ideas for beginners to get started with open-source contributions. The talk will cover the basics, best practices, and provide actionable tips to help newcomers navigate the world of open-source.

This talk is based on mentoring experience in GSoC 2024 and 2025, as well as experience working on open-source projects. With one speaker working as a maintainer and the other as a contributor, it brings a balanced perspective, shaped by their GSoC mentor-mentee experience, on how to start and build your own open-source journey.

Open Track
VYAS - 1 - Room#VY104
11:15
15min
MLOps is the New DevOps: How Sysadmins are the Next AI Operators
Hemant Wadhwani

Are you a Sysadmin, DBA, or seasoned DevOps Engineer worried about where your career fits in the age of generative AI? The truth is, your skills in production reliability, infrastructure automation, and data management are not obsolete—they are the most critical 80% needed for the hottest job in tech: MLOps Engineer.

This lightning talk is designed to empower you as an AI Operator without requiring you to become a Data Scientist. In 15 minutes, we will demonstrate how to leverage your existing expertise in containers (Docker), cloud infrastructure, and CI/CD pipelines to build, deploy, and scale machine learning models. Learn how to translate your current operations skills into a practical MLOps strategy.

AI, Data Science, and Emerging Tech
VYAS - 1 - Room#VY102
11:15
15min
Memory-Efficient AI: How PEFT and PyTorch Enable Accessible LLM Fine-Tuning
Parshant Sharma

The proliferation of large language models (LLMs) with billions of parameters has created a significant barrier to entry for fine-tuning: full fine-tuning of a 7B-parameter model requires over 80GB of GPU memory and produces multi-gigabyte checkpoints for each task. Parameter-Efficient Fine-Tuning (PEFT) addresses this challenge by training only 0.1-2% of model parameters while achieving performance comparable to full fine-tuning, reducing memory requirements by 3-4x and checkpoint sizes from gigabytes to megabytes.
This talk will explore how PyTorch's architectural features, including its module system and autograd engine, enable practical PEFT implementation.
It will demonstrate popular methods such as LoRA and Prefix Tuning, showing how PyTorch's nn.ModuleDict enables dynamic adapter management and how custom CUDA extensions optimize performance.
Attendees will gain knowledge of implementing PEFT methods and leveraging PyTorch's advanced features for efficient model adaptation, making large-scale AI accessible with limited computational resources.
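
The parameter arithmetic behind LoRA can be sketched without any framework at all. The toy below is plain Python, not the PEFT library's API: it shows the low-rank update W + (alpha/r)·BA and why the trainable fraction shrinks so sharply.

```python
# Toy illustration of the LoRA idea (plain Python, NOT the PEFT library API):
# instead of updating a full weight matrix W (d_out x d_in), train two small
# matrices B (d_out x r) and A (r x d_in) and use W + (alpha / r) * (B @ A).

def matmul(X, Y):
    rows, inner, cols = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def lora_effective_weight(W, A, B, alpha):
    r = len(A)                      # rank of the low-rank update
    scale = alpha / r
    delta = matmul(B, A)            # (d_out x r) @ (r x d_in) -> (d_out x d_in)
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# Parameter count for one 64x64 layer at rank 4 (toy numbers; the 0.1-2%
# figure in the abstract is for billion-parameter models with small ranks).
d_out, d_in, r = 64, 64, 4
full_params = d_out * d_in               # what full fine-tuning would train
lora_params = d_out * r + r * d_in       # what LoRA trains instead
print(f"trainable fraction: {lora_params / full_params:.1%}")  # -> 12.5%
```

In a real PyTorch model, one such (A, B) pair is attached per adapted layer, which is where a container like nn.ModuleDict comes in for swapping adapters per task.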

AI, Data Science, and Emerging Tech
VYAS - 1 - Room#VY103
11:15
15min
Sustaining Yesterday’s Linux: Delivering Security for Today’s Workloads
Suyash Nalawade

Across the Linux ecosystem, long-term sustaining engineering has become essential. Vendors like Red Hat, openSUSE, Ubuntu, Debian, and Oracle are increasingly investing in extended lifecycle maintenance so enterprises, governments, and regulated institutions can continue running older Linux releases without compromising on security or compliance.

In this talk, I’ll share how sustaining engineering teams support legacy distributions that range from decade-old releases to the latest major versions. I’ll walk through the real engineering work involved in identifying, triaging, and backporting modern CVE and bug fixes into aging codebases—while preserving ABI/API stability for mission-critical workloads. In urgent situations, teams have even delivered high-severity fixes within five business days, ensuring customers stay protected from breaches, downtime, and certification risks.

Drawing from my experience working on printing, networking, security, and cryptography components, I’ll highlight the unique challenges of patching outdated kernels, libraries, and dependency chains that look nothing like upstream. I’ll also show how this work helps governments and large institutions retain regulatory approvals and safeguard secure infrastructure, even when upgrading is operationally difficult or impossible.

Cybersecurity and Compliance
VYAS - G - Room#VY015
11:15
15min
Want to extend GitOps to Infrastructure and Workloads? Here's how.
Eeshaan Sawant

Most teams have mastered GitOps for Kubernetes, but infrastructure and non-Kubernetes workloads are still handled through disconnected delivery pipelines. In this talk, I will highlight the limitations of traditional GitOps implementations, and why just adding one more tool doesn't make sense. We will then see how PipeCD, an open-source CNCF project, brings these worlds together by establishing a "same interface for different platforms" principle that simplifies your entire delivery process.

Whether you’re trying to extend GitOps beyond Kubernetes or simplify delivery across a diverse stack, this session promises a clear path forward!

Cloud, Edge, and Sustainable Computing
VYAS - G - Room#VY016
11:30
15min
Break
VYAS - G - Room#VY003
11:30
15min
Break
VYAS - G - Room#VY015
11:30
15min
Break
VYAS - G - Room#VY016
11:30
15min
Break
VYAS - 1 - Room#VY124
11:30
15min
Break
VYAS - 1 - Room#VY102
11:30
15min
Break
VYAS - 1 - Room#VY103
11:30
15min
Break
VYAS - 1 - Room#VY104
11:30
360min
DevConf.IN 2026 Final Booths List with abstracts (Day#1)
Rajan Shah

DevConf.IN 2026 Final Booths List with abstracts

Consolidated List
1. Red Hat India driven open source initiatives (community projects / meetups)
2. Fedora Project Community Corner
3. LogOut Project: Privacy Garage
4. MongoDB User Group Pune (MUG Pune)
5. Login Without Limits: Passwordless Across Consoles and Clouds
6. unifAI: no code agent orchestrator
7. Empowering Developer Innovation: Experiencing Backstage
8. k0s Project Booth
9. Secure Flow Booth
10. OKD (Origin Kubernetes Distribution): Community and Hands-On Demos
11. FOSS United Pune: Open Source Onboarding & Community Showcase
12. Build Open Source Document Workflows with ONLYOFFICE

Find full abstract details at https://drive.google.com/file/d/1lmdB0D52KELjmjK24-LkRPoTzY5cMRHu/view?usp=sharing

(Booths) VYAS - G - Open area
11:45
45min
A Quantum Computing Talk Without Qubits
Ram Iyengar

Are you a Software Engineer who failed physics, but loves distributed systems? Then you're at the right talk!

Quantum Computing is often explained with dead cats and spinning coins. This talk is designed to demystify a high-hype topic by avoiding Bloch spheres and physics equations. The focus is entirely on the software supply chain and infrastructure implications of quantum computing. Our discussion will be limited to YAML files and container logs.

I specifically aim to highlight open source contributions from the Red Hat ecosystem, particularly the integration of PQC into the Linux userspace (Fedora/RHEL) and the operationalization of quantum workloads via Kubernetes Operators. This bridges the gap between "Quantum Scientist" and "Backend Developer."

The talk will highlight the following efforts:

  1. RHEL 10 & Fedora: How the OS layer is adopting NIST-approved algorithms (ML-KEM, ML-DSA) in core libraries like OpenSSL and GnuTLS. We'll show you how to switch your crypto-policies to "Quantum Ready" with a single command.
  2. OpenShift Quantum Operators: We will look at the OpenShift Qiskit Operator. We will spin up a Jupyter notebook environment on Kubernetes that connects to real quantum backends, effectively treating a Quantum Processing Unit (QPU) as just another accelerator, like a GPU.
  3. Liboqs (Open Quantum Safe): The open-source library powering much of this revolution, and how to use it to test whether your application crashes when keys suddenly get bigger.

As software engineers, we don't build the quantum hardware; we build the software that survives it. The arrival of fault-tolerant quantum computers threatens to break the encryption that glues the internet together (RSA, ECC). This isn't sci-fi; it’s a supply chain ticket waiting to happen.

AI, Data Science, and Emerging Tech
VYAS - 1 - Room#VY102
11:45
45min
Design Systems at Scale: Maintaining UI Consistency in Dynamic Plugin Architecture
Mitesh Kumar

Modern developer portals face a unique challenge: how do you maintain a consistent, cohesive user experience when your frontend is composed of dynamically loaded plugins from multiple teams and vendors? This talk explores real-world solutions from Red Hat Developer Hub (RHDH), an enterprise platform built on Backstage.

We'll dive into the architectural patterns and engineering practices that enable RHDH to deliver a unified design experience across a dynamic plugin ecosystem:

  • Design system integration: Leveraging PatternFly to create a shared design language that spans core platform and third-party plugins
  • Component contracts and APIs: Defining clear boundaries and interfaces that enforce consistency without sacrificing plugin flexibility
  • Runtime theming and customization: Enabling organizations to apply brand identity across all plugins without modifying source code
  • Developer experience: Building plugin scaffolding and documentation that guides contributors toward consistent UI patterns
  • Performance at scale: Strategies for code-splitting, shared dependencies, and efficient module federation

Attendees will learn practical strategies for building extensible frontend architectures that scale across teams while maintaining design coherence. Whether you're building developer platforms, plugin systems, or micro-frontends, these patterns apply to any scenario where UI consistency meets architectural flexibility.

User Experience and Design Engineering
VYAS - G - Room#VY003
11:45
45min
How to attack AI systems (and how to defend them)!
Huzaifa Sidhpurwala

AI systems today demonstrate impressive capabilities—but they also introduce a rapidly expanding attack surface. Modern machine learning pipelines, from data collection and training to inference, are vulnerable to a wide range of adversarial manipulations. This talk provides a practitioner-focused exploration of how attackers compromise AI systems, using real research and case studies. Equally important, the session outlines defensive strategies grounded in current academic and industry work.

Attendees will leave with a clear, realistic understanding of how adversarial attacks work, what defenses are actually effective today, and how to architect AI systems that remain trustworthy even under adversarial pressure.

AI, Data Science, and Emerging Tech
VYAS - G - Room#VY015
11:45
45min
Lightweight Observability with Performance Co-Pilot and Grafana
Ayushi Tiwari

Looking to monitor containers without deploying heavyweight observability stacks? This talk shows you how to build a fast, resource-efficient performance monitoring setup using Performance Co-Pilot (PCP) and Grafana - perfect for local development, testing, or lightweight production use cases.

With containers becoming the default unit of software delivery, visibility into their performance is more important than ever. Yet, most developers and sysadmins skip observability during early stages because traditional tools are too complex or resource-intensive for smaller environments.

What You’ll Learn

This talk introduces a minimal, practical observability pipeline that runs entirely on your laptop or dev machine. We’ll walk through:

  • Installing and configuring PCP to collect system metrics like CPU usage, memory pressure, disk I/O, and network activity.
  • Streaming and storing performance data locally for real-time and historical analysis.
  • Visualizing metrics in Grafana with intuitive dashboards that help answer real questions: Why is my container slow? Is my system under load? Where’s the bottleneck?
  • Observing container workloads directly from the host, without needing Kubernetes or Prometheus. We'll cover how to access per-container resource usage from a system-level view using standard Fedora/RHEL tools.
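
As a flavor of the first two bullets, here is a minimal pure-Python sketch of the sample-and-compute step that PCP automates for CPU metrics, reading the aggregate counters in /proc/stat directly (Linux-only; in the real pipeline pmcd collects these and pmlogger stores them for Grafana).

```python
# Sample /proc/stat twice and derive overall CPU utilization, the same kind
# of counter arithmetic PCP's kernel agent performs on your behalf.
import time

def read_cpu_counters():
    with open("/proc/stat") as f:
        fields = [int(v) for v in f.readline().split()[1:]]
    idle = fields[3] + fields[4]        # idle + iowait jiffies
    return idle, sum(fields)

def cpu_utilization(interval=0.5):
    idle1, total1 = read_cpu_counters()
    time.sleep(interval)
    idle2, total2 = read_cpu_counters()
    busy = (total2 - total1) - (idle2 - idle1)
    return 100.0 * busy / max(total2 - total1, 1)

print(f"CPU utilization over 0.5s: {cpu_utilization():.1f}%")
```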

Who Should Attend

This session is ideal for:

  • Developers running containers locally and looking to improve visibility into resource usage.
  • System administrators exploring alternatives to heavyweight monitoring stacks.
  • Anyone curious about observability fundamentals—sampling, storing, and visualizing performance metrics—with tools that are built into the Linux ecosystem.

Why It Matters

You’ll leave with a working observability stack you can take home—easy to install, easy to understand, and powerful enough to support real-world container debugging and performance monitoring. No prior experience with PCP or Grafana is required.

Whether you're building software, running test environments, or supporting edge workloads, this talk will help you see your system more clearly—without the noise.

Cloud, Edge, and Sustainable Computing
VYAS - G - Room#VY016
11:45
45min
MPI Meets Machine Learning: Unlocking PyTorch distributed for scaling AI workloads
Mansi Agarwal

The world of High-Performance Computing (HPC) and modern deep learning share a core DNA: the demand for near-linear scaling across hundreds of nodes. The core challenges remain the same, managing communication, balancing load, and coordinating resources, but the abstractions and tooling are now defined by PyTorch Distributed.

This talk bridges the gap between traditional HPC paradigms and PyTorch's distributed computing ecosystem, designed specifically for deep learning workloads. We'll explore how familiar HPC concepts like collective operations, point-to-point communication, and process groups manifest in PyTorch's distributed APIs. We'll discover how PyTorch builds upon battle-tested communication backends (NCCL, Gloo, MPI) while introducing novel primitives optimized for gradient synchronization and model parallelism. We then move beyond basic data parallelism to explore advanced memory-saving techniques like Fully Sharded Data Parallel (FSDP), PyTorch's native answer to memory scaling, and touch upon the nascent Tensor and Pipeline Parallelism APIs, demonstrating how these techniques compose to train massive models.

This session equips you with a comprehensive understanding of PyTorch's distributed architecture and reveals the inner workings of one of the most actively developed areas in modern ML infrastructure. By mapping distributed systems concepts to PyTorch's implementation, you'll see how familiar patterns from parallel computing manifest in PyTorch's ecosystem and where there is still room for innovation and improvement.
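
The collective operations mentioned above can be illustrated with a plain-Python simulation of the ring all-reduce pattern that backends such as NCCL, Gloo, and MPI implement beneath torch.distributed.all_reduce. This is a teaching sketch of the communication pattern, not the real transport.

```python
def ring_all_reduce(per_rank):
    """Simulate ring all-reduce: per_rank[r] is rank r's local vector,
    pre-split into len(per_rank) chunks (one scalar per chunk here)."""
    n = len(per_rank)
    chunks = [list(v) for v in per_rank]
    # Phase 1: reduce-scatter -- after n-1 steps, rank r owns the fully
    # reduced chunk (r + 1) % n.
    for step in range(n - 1):
        sends = [(r, (r - step) % n) for r in range(n)]
        snapshot = [(r, c, chunks[r][c]) for r, c in sends]
        for r, c, val in snapshot:
            chunks[(r + 1) % n][c] += val      # neighbor adds partial sum
    # Phase 2: all-gather -- circulate the reduced chunks so every rank
    # ends up holding the complete summed vector.
    for step in range(n - 1):
        sends = [(r, (r + 1 - step) % n) for r in range(n)]
        snapshot = [(r, c, chunks[r][c]) for r, c in sends]
        for r, c, val in snapshot:
            chunks[(r + 1) % n][c] = val       # neighbor copies final chunk
    return chunks

ranks = [[1, 2, 3], [10, 20, 30], [100, 200, 300]]
print(ring_all_reduce(ranks))  # every rank: [111, 222, 333]
```

Each rank exchanges data only with its neighbors, so per-step bandwidth stays constant as the cluster grows, which is why this pattern dominates gradient synchronization.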

AI, Data Science, and Emerging Tech
VYAS - 1 - Room#VY103
11:45
45min
Scaling Generative AI Inference with llm-d
Dasharath Masirkar

Generative AI models are rapidly changing the landscape of application development, but deploying and serving these large models in production at scale presents significant challenges. llm-d is an open-source, Kubernetes-native distributed inference serving stack designed to address these complexities. This session will introduce developers to llm-d, demonstrating how it provides "well-lit paths" to serve large generative AI models with the fastest time-to-value and competitive performance across diverse hardware accelerators. Attendees will learn about llm-d's architecture, key features, and how to leverage its tested and benchmarked recipes for production deployments, focusing on practical applications and best practices.

AI, Data Science, and Emerging Tech
VYAS - 1 - Room#VY124
11:45
45min
Understanding the Linux Block Layer-From Single Queue to Multi-Queue Architecture
Nitin U. Yewale

The Linux block layer plays a fundamental role in how data moves between user applications and storage devices. Over the years, storage hardware—SSDs, NVMe devices, high-throughput RAID systems—has evolved dramatically, exposing limitations in the traditional single-queue block layer. This talk will explore the architecture of the Linux block layer, highlight the performance bottlenecks of the legacy single-queue design, and explain how the modern multi-queue block layer (blk-mq) addresses these challenges.

In addition to the architectural deep dive, this session will walk the audience through what happens under the hood when a user performs a simple file copy operation using the cp command. We will trace the flow across various Linux subsystems—including the VFS layer, page cache, I/O scheduler, block layer, and storage drivers—giving attendees a holistic understanding of Linux I/O processing.
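
The single-queue versus multi-queue contrast can be sketched as a toy model: per-CPU software queues absorb submissions without a shared lock and are mapped many-to-one onto the hardware dispatch queues a device exposes. The names and mapping below are illustrative, not kernel APIs.

```python
# Toy model of blk-mq's two-level queueing. Submission touches only the
# submitting CPU's own queue, avoiding the global-lock contention that
# bottlenecked the legacy single-queue block layer.
from collections import deque

class BlkMq:
    def __init__(self, num_cpus, num_hw_queues):
        self.sw_queues = [deque() for _ in range(num_cpus)]
        self.hw_queues = [deque() for _ in range(num_hw_queues)]
        # Static many-to-one mapping of CPUs onto hardware queues.
        self.cpu_to_hw = [cpu % num_hw_queues for cpu in range(num_cpus)]

    def submit(self, cpu, request):
        self.sw_queues[cpu].append(request)    # lock-free per-CPU insert

    def dispatch(self):
        # Drain each software queue into its mapped hardware queue.
        for cpu, sq in enumerate(self.sw_queues):
            hw = self.hw_queues[self.cpu_to_hw[cpu]]
            while sq:
                hw.append(sq.popleft())
        return [list(q) for q in self.hw_queues]

mq = BlkMq(num_cpus=4, num_hw_queues=2)
for cpu in range(4):
    mq.submit(cpu, f"read-from-cpu{cpu}")
print(mq.dispatch())  # hw queue 0: cpu0 & cpu2; hw queue 1: cpu1 & cpu3
```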

Open Track
VYAS - 1 - Room#VY104
12:30
45min
Lunch break
(Workshop) VYAS - G - Room#VY004
12:30
45min
Lunch break
VYAS - G - Room#VY003
12:30
45min
Lunch break
VYAS - G - Room#VY015
12:30
45min
Lunch break
VYAS - G - Room#VY016
12:30
45min
Lunch break
VYAS - 1 - Room#VY124
12:30
45min
Lunch break
VYAS - 1 - Room#VY102
12:30
45min
Lunch break
VYAS - 1 - Room#VY103
12:30
45min
Lunch break
VYAS - 1 - Room#VY104
13:15
75min
Carry Your Cluster with You: Bootc OS with Pre-Baked Microshift and Workload
Hrushabh Sirsulwar, Andreas Spanner, Abhishek Tiwary

Running Kubernetes workloads in disconnected, remote, or bandwidth-restricted environments is difficult—especially when cluster components and application images must be pulled before anything can start. MicroShift, a lightweight and upstream-friendly Kubernetes distribution, is ideal for edge deployments, but it still depends on pulling images from a registry on first boot.

This hands-on workshop demonstrates a community-driven approach using bootc embedded containers to build offline-ready Linux OS images. By embedding MicroShift and required application container images directly into the bootc build, systems can start up fully functional without any network access or registry pulls.

You will learn how to:

  • Understand how bootc enables immutable and reproducible Linux OS images
  • Embed MicroShift community edition containers and app images inside the OS during build time
  • Boot the system and run MicroShift instantly, with no external registry required
  • Use preloaded images for real workloads on day one
  • Apply this workflow to any bootc-compatible Linux OS (Fedora, CentOS Stream, RHEL)
  • Design offline-first appliances for ships at sea, mines, rural deployments, air-gapped environments, and industrial edge systems
  • Maintain and update embedded-container images efficiently

Participants will walk away with clear, reproducible methods to build self-contained, offline-first MicroShift systems that can be deployed anywhere—from remote field devices to industrial edge nodes—using only upstream community tooling.

Cloud, Edge, and Sustainable Computing
(Workshop) VYAS - G - Room#VY004
13:15
15min
Nomad: Lightweight Orchestration That Complements Kubernetes
Shaheen Sayyed

When it comes to orchestration, Kubernetes tends to steal the spotlight — but it’s not the only way to run workloads at scale. HashiCorp Nomad offers a simpler, lighter approach to scheduling containers, VMs, and even raw binaries — without the operational overhead.

We’ll explore what makes Nomad a practical alternative (or complement) to Kubernetes. You’ll learn the key building blocks — jobs, groups, clients, and allocations — and see how Nomad’s minimalist architecture can run production-grade workloads on a single binary. We’ll end with a live demo of deploying and scaling a containerized web app, showing that “easy to run” doesn’t mean “less capable.”


Key Takeaways

  • Understand Nomad’s lightweight architecture and how it differs from Kubernetes.
  • See a live demo of deploying and scaling a containerized service in minutes.
  • Discover where Nomad fits — from small teams to hybrid and edge environments.
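
For a concrete flavor of the "single binary, simple job spec" point, a minimal Nomad job file might look like the following sketch; the job name, datacenter, and image are placeholders.

```hcl
job "web" {
  datacenters = ["dc1"]

  group "app" {
    count = 2                  # scale out by changing one number

    network {
      port "http" {
        to = 80
      }
    }

    task "server" {
      driver = "docker"

      config {
        image = "nginx:alpine"
        ports = ["http"]
      }

      resources {
        cpu    = 100           # MHz
        memory = 128           # MB
      }
    }
  }
}
```

`nomad job run` submits it; bumping `count` and re-running is the whole scaling story.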
Cloud, Edge, and Sustainable Computing
VYAS - G - Room#VY003
13:15
15min
Prompt Engineering 2.0
Vedant Kasbekar

This talk covers:

  • Making prompts short and simple
  • As AI models evolve rapidly and stronger versions become freely available, how to utilise their full context window
  • When to use a long context window and when to use a short one
  • Why Gemini and Claude are better than Cursor and other vibe-coding platforms

AI, Data Science, and Emerging Tech
VYAS - 1 - Room#VY103
13:15
15min
Sustainable Coding: Why Your "Hello World" Matters
Angad Tewari

When we toss a plastic bottle in the trash, we see the waste. But when we write an inefficient Python loop? It looks clean. It feels like magic.

As the saying goes, "there is no such thing as a free lunch": every line of code you write triggers a switch in a server, demanding electricity, generating heat, and consuming water for cooling. As new developers, you are often taught to "just make it work." In this talk, we will shift that mindset to "make it efficient."

We will expose the "invisible exhaust" of modern software and debunk the myth that the Cloud is just fluffy water vapour. We will learn how basic coding choices between data types also can drastically reduce your energy footprint. Finally, we will demo tools like CodeCarbon that act as a "Fitbit" for your emissions, empowering you to build a career as a future-friendly architect.

Open Track
VYAS - 1 - Room#VY104
13:15
15min
The Cybersecurity Trifecta: Privacy, Governance, and Data Sovereignty in Action
Dr. Dhanashri Wategaonkar

In a time of digital change, cybersecurity has expanded beyond network defense to include data sovereignty, strong governance, and privacy protection. "The Cybersecurity Trifecta: Privacy, Governance, and Data Sovereignty in Action" examines how these three pillars work together to create a safe and reliable online environment. The presentation focuses on useful tactics and tools that businesses can use to improve their security posture, comply with regulations, and maintain control over their data in an international digital environment. Sovereign cloud frameworks, risk-based governance models, encryption and privacy-enhancing technologies, and Zero Trust architecture are some of the major themes. In order to create systems that are not only safe but also compliant, transparent, and resilient, attendees will acquire a comprehensive grasp of how to match cybersecurity practices with operational, ethical, and legal needs.

Cybersecurity and Compliance
VYAS - G - Room#VY015
13:15
15min
Unpacking the Llama Stack: Architecting Next-Gen AI Applications
Aditya Patil

The rapid rise of open, modular AI architectures is reshaping how developers build intelligent systems. At the forefront of this movement is the Llama Stack — a flexible, production-ready ecosystem designed to help teams build, deploy, and scale LLM-powered applications with confidence.

This talk delivers a practical, in-depth exploration of the Llama Stack, breaking down its components, capabilities, and real-world performance. Attendees will get a clear understanding of how the stack simplifies model orchestration, enhances security, improves observability, and accelerates production deployment.

Attendees will walk away with actionable insights on:
• What the Llama Stack is, and how its modular architecture empowers teams to build scalable AI systems.
• How the Llama runtime compares to conventional inference pipelines in terms of performance, extensibility, and developer experience.
• Security and governance features that make the Llama Stack enterprise-ready by default.
• Real-world case studies: when the Llama Stack outperforms custom or closed-source solutions—and when it doesn’t.

Whether you're an AI engineer, backend developer, architect, or engineering leader, this talk will help you evaluate when and why the Llama Stack should be your foundation for building modern AI applications.

AI, Data Science, and Emerging Tech
VYAS - 1 - Room#VY102
13:15
15min
Writing, running, and testing awesome Ansible content with natural language and AI
Shatakshi Mishra

Ansible content developers lose hours each day to context switching, which kills productivity and increases the risk of human errors.

We've integrated an AI-powered Model Context Protocol (MCP) server directly into the Ansible VS Code extension to address this problem. The result is a single, unified development experience that goes beyond an ordinary AI code assistant. Adding MCP server capabilities to the Ansible VS Code extension gives you an intelligent development environment that allows you to work within the context of all your existing Ansible content, including playbooks, roles, and inventories. As a result, teams can reduce fragmentation in their workflows to gain productivity and standardise and accelerate Ansible content development.

AI, Data Science, and Emerging Tech
VYAS - 1 - Room#VY124
13:30
13:30
15min
Break
VYAS - G - Room#VY003
13:30
15min
Break
VYAS - G - Room#VY015
13:30
15min
Break
VYAS - G - Room#VY016
13:30
15min
Break
VYAS - 1 - Room#VY124
13:30
15min
Break
VYAS - 1 - Room#VY102
13:30
15min
Break
VYAS - 1 - Room#VY103
13:30
15min
Break
VYAS - 1 - Room#VY104
13:45
13:45
45min
Architecting Efficient Agents with Semantic Routing and Mixture-of-Tools
Vincent Caldeira

Let’s be honest: Agentic AI works beautifully in 30-second demos, but often falls apart the moment you push it into production. We’ve all seen it: agents that get confused by their own context windows, use a sledgehammer (Reasoning Models) to crack a nut (simple lookups), or hallucinate tool parameters because they were overwhelmed by JSON schemas. The problem isn't usually the model's IQ but the Agentic system architecture.

In this session, we will move beyond basic Agentic patterns to explore the system architecture required for robust enterprise agents. Drawing on recent, cutting-edge research, we will dissect why a single "God Model" fails and how a Mixture-of-Agents (MoA) approach succeeds.

We will explore:

  • The "When to Reason" Problem: How to use Semantic Routing to classify user intent, dynamically routing queries to the most cost-effective model (e.g., routing logic puzzles to reasoning models vs. factual lookups to standard LLMs) to slash latency and token costs.  

  • The "Junk Drawer" Problem: Why dumping 90+ tools into a context window breaks agent performance. This includes using a Mixture-of-Tools strategy, where we use diverse, expert agents with specialized tool access rather than one overloaded generalist.  

  • The "Inner Monologue": How to implement Metacognition and governance loops in integration with the Model Context Protocol (MCP). We will demonstrate how agents can "think about their thinking," self-correcting their plans and validating facts before presenting an answer to the user.  

Stop building black boxes. Come learn how to architect agents that are auditable, efficient, and actually know which tool to use.
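The semantic-routing idea described above can be sketched in a few lines. The model-tier names and keyword heuristics below are illustrative assumptions only; a production router would classify intent with an embedding model rather than keyword matching:

```python
# Minimal sketch of a semantic router (hypothetical tier names).
# A real router would embed the query and classify intent with a
# trained model; keyword scoring here only shows the control flow.

REASONING_HINTS = {"prove", "step by step", "puzzle", "derive", "why"}
LOOKUP_HINTS = {"what is", "define", "when was", "who is", "list"}

def route(query: str) -> str:
    """Return the model tier best suited to the query."""
    q = query.lower()
    reasoning = sum(h in q for h in REASONING_HINTS)
    lookup = sum(h in q for h in LOOKUP_HINTS)
    if reasoning > lookup:
        return "reasoning-model"  # expensive, slower, high accuracy
    return "standard-llm"         # cheap, fast tier for factual lookups

print(route("Derive the closed form step by step"))  # reasoning-model
print(route("What is the capital of France?"))       # standard-llm
```

The point of the pattern is that routing happens before any expensive model is invoked, which is where the latency and token-cost savings come from.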

AI, Data Science, and Emerging Tech
VYAS - 1 - Room#VY102
13:45
45min
Bridging DevOps and MLOps: Unifying Pipelines with KitOps and GitOps
Neel Shah

This session explores how KitOps integrates with GitOps tools such as Flux and ArgoCD to create unified delivery processes for cloud-native AI workloads. Learn how KitOps ModelKits unlock repeatable packaging of models, code, and datasets in any environment, while GitOps provides automated, auditable deployment processes. Attendees will see hands-on demonstrations of version-controlled machine learning artifacts, automatic rollbacks, and environment recovery. Learn how these integrations eliminate configuration drift, enforce consistent audit trails, and support compliance with enterprise requirements, all with minimal operational overhead. By connecting modern DevOps methods with the rigorous demands of MLOps, this talk will demonstrate how cloud-native AI teams can rapidly deliver reliable, scalable, and secure ML solutions.

AI, Data Science, and Emerging Tech
VYAS - 1 - Room#VY103
13:45
45min
Building Real-Time Control Software: Lessons from Hyperloop
Atharva Arbat

Building real-time control software sounds exciting until your WebSocket connection drops mid-run. As Navigation and Control Lead at Vegapod Hyperloop, I built the base station software that monitors and controls our pod during test runs. This talk covers the mistakes I made - choosing the wrong communication protocols, underestimating latency requirements, and learning why reliability matters more than fancy features. I'll walk through our architecture using Rust, Qt, React, and UART, and explain how we achieved 11ms latency between pod and base station. If you're building anything real-time, this talk will save you months of debugging.

Open Track
VYAS - 1 - Room#VY104
13:45
45min
Building Secure & Reliable Edge Images Using RHEL for Edge Blueprinting & OSTree
Rajesh Dulhani, Dipayan Dutta

Modern edge environments demand OS images that are small, secure, and reliable—even when deployed across thousands of geographically distributed devices. Traditional approaches that rely on manual configuration or mutable operating systems break quickly at scale. RHEL for Edge solves this by using Image Builder, Blueprinting, and OSTree to create immutable, version-controlled, and reproducible edge images.

This session provides a hands-on walkthrough of designing and building production-grade edge images. We will cover how to create minimal and secure blueprints, embed system configurations, and use OSTree for atomic updates and safe rollbacks. The talk also explains best practices for image signing, validating supply-chain integrity, and keeping fleets consistent even over weak or intermittent networks.

By the end of this session, attendees will understand the full workflow—from blueprint design to deployment—and walk away with patterns they can immediately apply in manufacturing, telco, retail, and remote-site infrastructures.

Cloud, Edge, and Sustainable Computing
VYAS - G - Room#VY016
13:45
45min
Democratized Deception: How Threat Actors Are Weaponizing AI to Scale Cybercrime
Deepak Koul, Deep Gandhi

In early 2024, a finance officer at the global engineering firm Arup wired $25 million to scammers after joining what appeared to be a legitimate video call with his CFO and colleagues. None of them were real. Every face and voice on that call was AI-generated, a deepfake ensemble so convincing it bypassed every human instinct for suspicion.

(Source: https://www.theguardian.com/world/2024/feb/05/hong-kong-company-deepfake-video-conference-call-scam)

The same AI tools that write your code and documentation and summarise your meetings and Slack threads are now being used to deceive, clone, and exploit at industrial scale. This isn’t just another evolution of hacking, it’s the democratization of deception.

Generative AI has reshaped the threat landscape - automating phishing, creating synthetic identities, generating polymorphic malware, and scaling disinformation campaigns.
Reports from Google’s Threat Intelligence Group, Microsoft’s Digital Defense Report, and Mandiant all confirm this trend: AI has lowered the barrier to entry for sophisticated deception, enabling faster, smarter, and more targeted attacks.

This session exposes the anatomy of AI-enabled threat activity, tracing how criminal and state actors are using LLMs and diffusion models in the wild. It also outlines a defense playbook: behavioral anomaly detection, AI-aware phishing simulations, and governance models for responsible internal AI use.

Because the future of cybersecurity won’t be won by whoever builds the bigger model, it’ll be won by those who recognise deception faster than machines can fabricate it.

Key Takeaways

Understand how AI is transforming the tactics of modern cyber adversaries.
Learn to detect linguistic, behavioural, and media-based AI deception.
Apply a practical mitigation framework combining AI governance, behavioral analytics, and security awareness.

Cybersecurity and Compliance
VYAS - G - Room#VY015
13:45
45min
Lost in Silence: Understanding When Screen Readers Don’t Speak Up
Ayushi Midha, Aishwarya Urne

In the world of digital accessibility, silence is seldom neutral. For users of screen readers, a missing announcement can mean confusion, distrust, or exclusion—yet the issue often remains invisible to most developers and designers. In this talk, we’ll explore how the absence of spoken feedback impacts accessibility, trust, and user experience, particularly for people navigating dynamic web interfaces. Drawing on real-world examples and practical scenarios, we’ll examine the root causes of “silent failures” (such as missing ARIA live regions, improper roles, stale live-region content) and discuss how seemingly minor markup decisions ripple into major accessibility barriers. Attendees will walk away with a clear framework for diagnosing and remedying these silent gaps: from audit strategies and end-user testing through to implementable code patterns, best practices for dynamic announcements, and how to integrate these into your UI/component library workflow. Whether you’re working in React, Web Components, or template-driven apps, you’ll gain actionable insights to ensure your components announce properly, your users feel heard, and your silent UI becomes truly inclusive.

Key take-aways:

Why missing announcements matter: the user-experience impact beyond visual cues

Common pitfalls in dynamic content, live regions, and state changes

A developer-friendly approach to audit, test, and fix announcement gaps

How to bake these practices into your component library or design system (for example, your team’s work with PatternFly and a custom theme)

Tips for collaboration between developers, UX designers, and accessibility testers to maintain accessibility as your product evolves

User Experience and Design Engineering
VYAS - G - Room#VY003
13:45
45min
ModelPack: Packaging ML models as OCI artifacts made easy
Avinash Singh, Andrew Block

AI models have become the next wave of cloud native applications. And, in a cloud native world, containers have become the de facto method of delivery. However, instead of bundling the model inside a container, what if there was a way to publish models directly while reusing many of the same technologies?

OCI artifacts have emerged as this solution and an increasing number of technologies have adopted this approach for packaging and distributing content. When considering OCI artifacts for AI models, several questions still remain open.
Packaging machine learning models is complex, often requiring teams to use proprietary package types or cobble together open source tools. These inconsistent environments, manual processes, and proprietary formats lead to deployment failures, delays, increased operational costs, and vendor lock-in.

ModelPack, an emerging Open Source Project, solves these challenges by providing a standardized, consistent, reproducible, portable, and vendor-neutral packaging format for AI/ML models. The result simplifies deployment, reduces errors, and ensures models work seamlessly across a variety of environments.

In this session, attendees will learn how to package, distribute, and run AI/ML projects as OCI artifacts like a pro. By exploring the end-to-end lifecycle of an AI/ML model, including the resources provided by the ModelPack project, attendees will not only see the benefits but also take away a repeatable process that they can reuse in their own environments.

AI, Data Science, and Emerging Tech
VYAS - 1 - Room#VY124
14:30
14:30
15min
Break
(Workshop) VYAS - G - Room#VY004
14:30
15min
Break
VYAS - G - Room#VY003
14:30
15min
Break
VYAS - G - Room#VY015
14:30
15min
Break
VYAS - G - Room#VY016
14:30
15min
Break
VYAS - 1 - Room#VY124
14:30
15min
Break
VYAS - 1 - Room#VY102
14:30
15min
Break
VYAS - 1 - Room#VY103
14:30
15min
Break
VYAS - 1 - Room#VY104
14:45
14:45
45min
Agentic AI and Model Context Protocol's Security Vulnerabilities
Mohit Sewak, Ph.D.

We have moved from the era of Chatbots that "speak" to AI Agents that "do." By giving LLMs access to tools via the Model Context Protocol (MCP), we have unlocked incredible power—but also a catastrophic new attack surface. Recent benchmarks like InjecAgent reveal that over 50% of agentic tasks are vulnerable to injection, allowing attackers to hijack your agent to delete files, exfiltrate data, or execute code.

This talk moves beyond simple "jailbreaking" to explore the advanced vulnerabilities threatening the Agentic ecosystem. We will demonstrate how Indirect Prompt Injection (IPI) turns innocent data into malicious code, how Tool Poisoning compromises the supply chain, and how the "Confused Deputy" problem turns your helpful assistant into an insider threat. We will dissect the "Agentic Gap"—where cognitive load degrades safety training—and conclude by defining the critical shift from model safety to system-level security.

Outline:
1. The Paradigm Shift: From Informational Harm to Instrumental Harm
● The Evolution: We are shifting from Chatbots (Input/Output) to Agents (Observation/Thought/Action).
● The Threat Shift: Moving beyond "mean tweets" (reputational risk) to "operational compromise" (Instrumental Harm). We will discuss how an agent can be tricked into wiring funds or bricking a server.
● The "Buffer Overflow" of AI: How LLMs acting as Von Neumann machines fail to distinguish between user instructions (Code) and retrieved content (Data).

  2. The Mechanism of Failure: The "Agentic Gap"
    ● Cognitive Load: Drawing on recent research, we will explain the "Agentic Gap"—the phenomenon where a model's refusal training degrades significantly when it is under the "cognitive load" of tool execution.
    ● The "Artie" Persona: Using the "Artie the Intern" analogy to explain why models prioritize functional success (completing the task) over safety constraints when processing complex workflows.

  3. Advanced Taxonomy of MCP Vulnerabilities
    We will move beyond basic prompt injection to explore sophisticated attack vectors specific to the Model Context Protocol:
    ● Context Manipulation:
    ○ TopicAttack: How attackers use natural language transitions to "smooth talk" the agent into accepting malicious contexts.
    ○ WebInject: The use of steganography in images or metadata to hide commands that the agent's vision system interprets as instructions.
    ● Supply Chain & Tool Poisoning:
    ○ Schema Poisoning: Hiding malicious instructions inside the API "instruction manual" (tool definitions) that the agent reads.
    ○ Output-Based Poisoning: When a legitimate tool returns data (e.g., a weather report) containing a hidden payload that executes in the next step of the chain.
    ○ The "Evil Twin" Attack: Tool impersonation risks in the MCP ecosystem.

  4. Why Traditional Defenses Fail
    ● The Futility of "Better Prompts": Why defensive prompting and standard RLHF are mathematically insufficient against adversarial suffixes and automated red-teaming tools.
    ● The Detection Paradox: How large context windows and "Chain of Thought" reasoning can actually increase vulnerability to logic-based injection attacks.

  5. Conclusion: The Security Imperatives
    ● A brief overview of the necessary shift toward "Defense-in-Depth."
    ● Moving from "Chatbot Safety" to "Systems Security" (Architecture over Alignment).

AI, Data Science, and Emerging Tech
VYAS - 1 - Room#VY124
14:45
75min
Building Autonomous Intelligent Data Agents: A Hands-On Workshop on Agentic AI
Kamesh Sampath, Nikhil Kulkarni, Yash Dixit, Aditya Kulkarni, Shubhankar Gayal

The next wave of AI goes beyond chat — it’s about autonomous agents that can reason, plan, and act on data. In this hands-on workshop, participants will learn how to design and implement intelligent data agents that combine large language models with real-world data and tools.

We’ll explore key concepts like orchestration, tool use, memory, and grounding, then progressively build a working agent using Python and common data-access patterns. You’ll see how to make your agent query structured data, call APIs, and respond dynamically to user intent.

By the end, you’ll understand how to design, extend, and deploy autonomous agents that bring intelligence directly to your data workflows — ready to adapt for analytics, automation, and real-time applications.


🎯 Key Takeaways

  • Understand the architecture and principles of autonomous, agentic AI systems.
  • Learn how to enable reasoning, planning, and tool use within an agent.
  • Gain hands-on experience building an intelligent data agent using Python.
  • Discover how to connect agents with APIs and structured data sources.
  • Walk away with a working prototype and patterns to extend for enterprise use.

📚 What You’ll Learn

  • The foundations of agentic AI and autonomous workflows.
  • How to structure prompts, manage memory, and orchestrate tool use.
  • How to integrate large language models with data and APIs.
  • How to evaluate, extend, and operationalize AI agents safely and effectively.

AI, Data Science, and Emerging Tech
(Workshop) VYAS - G - Room#VY004
14:45
45min
Corporate Success: The Unwritten Curriculum
Nimisha Mukherjee

The transition from academia to the corporate world often reveals a significant skill gap. This session is designed for technical developers and early-tenure employees, offering the high-impact wisdom they didn't receive in school, a standard onboarding, or "typical corporate gyan."

Drawing upon 29+ years of leadership and mentorship experience, this talk delivers a curated, practical set of tips and traps crucial for accelerating professional growth. We move beyond theory to uncover the unwritten rules of corporate success, covering essential hard and soft skills, including:

Hard Skills: Prioritizing business value, minimizing technical debt, and navigating effective Agile collaboration.

Soft Skills: Mastering the art of professional communication, managing up, influencing stakeholders, and decoding organizational dynamics.

Crucially, as we rely more on AI, we will explore the "what you don’t know" mindset—the meta-skill essential for becoming a highly effective prompt engineer and avoiding the pitfalls of misplaced AI dependence. This is the accelerated, high-ROI learning that bypasses years of costly trial and error.

Open Track
VYAS - 1 - Room#VY104
14:45
45min
Governing Global Vulnerabilities: Standards for Open Source Resilience
Yogesh Mittal

Effective Vulnerability Management is the bedrock of modern Cybersecurity and Compliance. This session provides an essential overview of the global vulnerability ecosystem, focusing on the strategic and process-driven work required for sustainable Open Source Resilience. We will explore the critical need for adopting standardized governance and data formats to help the vast Open Source Software (OSS) community manage the entire lifecycle of security vulnerabilities efficiently.

Attendees will learn about:

  • The Global Governance Structure: Understanding the federated design of key vulnerability programs and the leadership roles that enable a functional, scalable software ecosystem.
  • Community Mentorship Models: Best practices for establishing governance and mentorship programs that successfully integrate more open source projects into coordinated vulnerability disclosure frameworks.
  • Data Standards for Automation: The strategies for transforming vulnerability findings into actionable, machine-readable security data (e.g., VEX, CSAF) to fuel consumer automation and tooling.
  • Resilience through Standardisation: How adherence to open standards ensures a transparent and trusted flow of vulnerability information, drastically strengthening the global OSS supply chain.

Join us to explore how strategic governance and unified standards are key to moving the entire ecosystem toward proactive security resilience.

Cybersecurity and Compliance
VYAS - G - Room#VY015
14:45
45min
Kubernetes Confessions: Real Mistakes We’ve All Made (and What They Teach Us)
Shashank Pai

Kubernetes has a reputation for being complex, but the failures that bring real production clusters down are often shockingly simple and uncomfortably familiar. This session is a collection of real-world “Kubernetes confessions” gathered from years of helping teams troubleshoot outages: the app that appeared healthy but was invisible to monitoring, the cluster that couldn’t drain a node for half a day, the container that quietly slipped out of its sandbox, and the on-prem registry so slow it brought an entire release pipeline to a standstill.

Each story uncovers the small decisions and overlooked fundamentals that led to major incidents and, more importantly, the lessons each failure teaches us about how Kubernetes really behaves in production. If you’ve ever wondered how something tiny in your YAML, probes, drains, or privilege settings turned into something huge, this talk offers clarity, practical insights, and the playbook needed to avoid these mistakes in your own environment.

Cloud, Edge, and Sustainable Computing
VYAS - G - Room#VY016
14:45
45min
Kubernetes for Machine Learning: Managing and Scaling AI Workloads
Drishti Jain

Kubernetes and machine learning intersect in ways that enable building applications that harness the power of data. In this talk, I will explore that intersection and discuss best practices for managing and scaling AI workloads. We will walk through the end-to-end ML lifecycle using Kubeflow, dive into specific use cases for Kubernetes in machine learning, highlight challenges and solutions, and explore advanced features such as horizontal and vertical scaling, pod affinity and anti-affinity, and specialized resource types like GPUs and TPUs.

AI, Data Science, and Emerging Tech
VYAS - 1 - Room#VY103
14:45
45min
Platform Wars: The Battle Between Golden Paths and Spaghetti Pipelines
Prithvi Raj, Bharath N R

Internal developer platforms promised golden paths, streamlined reliable routes for delivering software. Yet too often, they lead platform engineers into a tangled web of YAML, Bash scripts, and tool sprawl. The true crisis today isn't implementing the idea of platform engineering but navigating through platform democracy challenges. The true Darth Vader moments in platform engineering appear when good intentions turn dark: developer experience suffers, and open-source tools quietly become villains to velocity.

The idea behind this session is to dive deeper into the open source tooling and cloud-native strategies that mitigate the platform crisis. We’ll examine how teams either maintain the Jedi discipline of paved paths or fall into spaghetti-pipeline chaos while chasing the perfect delivery workflow.

Through real-world stories and anti-patterns, we’ll explore the light and dark sides of OSS-powered platform engineering and help you uncover who the true Darth Vader of your stack might be.

Cloud, Edge, and Sustainable Computing
VYAS - G - Room#VY003
14:45
45min
Running Multi-Arch Builds & Tests in OpenShift CI/CD
Nick Moraitis

Our team is responsible for the entire CI/CD infrastructure powering OpenShift development, a large-scale system running across multiple OpenShift clusters.
Thousands of jobs and builds are executed daily, supporting both internal OpenShift teams and external organizations relying on our infrastructure.

One of the biggest challenges we tackled was enabling multi-architecture builds and test execution at this scale, something not natively supported by OpenShift builds.

In this talk, I'll walk through the challenges we encountered, from handling multi-arch base images to ensuring consistent test execution across architectures, and share the technical solutions we developed and the lessons we learned.

Cloud, Edge, and Sustainable Computing
VYAS - 1 - Room#VY102
15:30
15:30
15min
Break
VYAS - G - Room#VY003
15:30
15min
Break
VYAS - G - Room#VY015
15:30
15min
Break
VYAS - G - Room#VY016
15:30
15min
Break
VYAS - 1 - Room#VY124
15:30
15min
Break
VYAS - 1 - Room#VY102
15:30
15min
Break
VYAS - 1 - Room#VY103
15:30
15min
Break
VYAS - 1 - Room#VY104
15:45
15:45
15min
AI-Powered Issue Triage: From Chaos to Clarity in Seconds
Tanwi Geetika


Abstract

Tired of manual issue triage? Watch AI transform it from hours to seconds with an open-source system that auto-analyzes issues (GitHub, with Jira integration on the way), identifies root causes, suggests code fixes, and detects duplicates — all using your actual codebase as context.

Learn how we built it using Repomix, Google Gemini, and more. See it running on GitHub Actions, and explore our vision for a multi-platform bot service.
Real code, real analysis, real impact.


Description

Maintainers and engineering teams across GitHub and Jira spend a huge chunk of their time — often 40% or more — manually triaging issues. Reading unclear bug reports, searching the codebase, asking follow-up questions, hunting for duplicates… all of this consumes hours every week.

In the Ansible ecosystem alone, this leads to hundreds of hours of repetitive triage work every month.

Our AI system functions like a skilled triage engineer, currently integrated with GitHub (with Jira and other platforms coming soon).


What It Can Do

  • Understands issues in context using the entire codebase
  • Finds likely root causes and affected components
  • Suggests concrete code fixes (file + line references)
  • Auto-labels issues with type + severity
  • Detects duplicate issues using semantic similarity
  • Protects against prompt injection
  • Runs only when triggered (labels / conditions)
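For illustration, the duplicate-detection step listed above can be approximated with a bag-of-words cosine similarity. The real system uses semantic embeddings, so treat this stdlib sketch, its function names, and the 0.8 threshold as assumptions:

```python
import math
from collections import Counter

# Sketch of duplicate detection: compare two issue texts by cosine
# similarity over word-count vectors. Embedding-based similarity
# would catch paraphrases that this bag-of-words version misses.

def cosine(a: Counter, b: Counter) -> float:
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def is_duplicate(issue: str, existing: str, threshold: float = 0.8) -> bool:
    va = Counter(issue.lower().split())
    vb = Counter(existing.lower().split())
    return cosine(va, vb) >= threshold

print(is_duplicate("crash on startup with empty config",
                   "crash on startup with empty config file"))  # True
print(is_duplicate("login button misaligned",
                   "crash on startup with empty config"))       # False
```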

Supported Platforms

  • Today: GitHub Actions
  • 2026: Jira, GitLab via a Bot-as-a-Service model

Key Components

  • Repomix for comprehensive codebase snapshots
  • Gemini 2.0 Flash (Pro/Exp configurable)
  • GitHub Actions integration layer
  • Security engine for prompt-injection detection
  • Duplicate finder + retry logic for stable outputs

Why This Matters

  • This is a working system
  • Teams can adopt the workflow immediately
  • Saves time and reduces repetitive triage
  • Improves consistency and reduces burnout
  • Transparent — runs entirely inside GitHub Actions

This talk explains how our AI Issue Triage solution brings context-aware intelligence into developer workflows by combining code analysis, classification, root-cause detection, and automated suggestions — all inside GitHub.

I’ll conclude with a live demo on a real GitHub project showing an end-to-end automated triage.


Audience Takeaways

  • How to integrate AI into existing dev workflows
  • Practical prompt engineering for code analysis
  • How to build secure AI systems (prompt injection defense)
  • How to scale from a tool → service architecture
  • Real-world benefits of AI automation

Target Audience

  • Primary: Open-source maintainers, DevOps engineers, platform teams
  • Secondary: Developers interested in AI/ML
  • Experience Level: Intermediate (GitHub, Python, CI/CD familiarity)

Additional Information

Session Materials

  • GitHub Repository: https://github.com/tanwigeetika1618/AI-Issue-Triage
  • Live demo environment.
  • Slides will include architecture diagrams, code snippets, and results

Engagement Plan

  • High-engagement live demo
  • Real-time issue analysis
  • Invite audience to try it afterward
  • Provide setup guide + documentation
AI, Data Science, and Emerging Tech
VYAS - 1 - Room#VY103
15:45
15min
Battery Range Prediction using Federated Learning on Edge
Vinod Pathangay, Sagar Sundaray

Accurate prediction of the battery range of electric vehicles requires periodic updates to the prediction model, as battery parameters change over time and driving dynamics vary. Federated Learning (FL) offers two advantages for model updates: (1) it aggregates learnings from the data patterns of a fleet of vehicles, producing a sophisticated model trained on a wide range of scenarios; (2) it protects the privacy of the vehicle user, since raw data is never sent to a central repository. Using simulated vehicle data and the Flower FL framework, we developed a range prediction solution designed to port easily to an embedded Texas Instruments edge platform. The edge component can run as a quality-managed (QM) component, whereas the central model aggregation can run as a containerized application on-prem or in the cloud, with communication over gRPC.
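As a rough illustration of the aggregation step the central component performs, here is the federated averaging (FedAvg) weighted average in plain Python. Frameworks like Flower implement this (and much more) for you; the function name and sample numbers below are assumptions for the sketch:

```python
# Sketch of the FedAvg aggregation step: weight each vehicle's model
# parameters by its local sample count. Only parameter vectors reach
# the aggregator -- raw driving data never leaves the vehicle.

def fed_avg(updates: list[tuple[list[float], int]]) -> list[float]:
    """Weighted-average client parameters by number of local samples.

    updates: (parameters, num_samples) per vehicle.
    """
    total = sum(n for _, n in updates)
    dim = len(updates[0][0])
    return [
        sum(params[i] * n for params, n in updates) / total
        for i in range(dim)
    ]

# Two vehicles: one contributing 100 trips, one contributing 300.
agg = fed_avg([([0.25, 1.0], 100), ([0.75, 2.0], 300)])
print(agg)  # [0.625, 1.75]
```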

Cloud, Edge, and Sustainable Computing
VYAS - G - Room#VY016
15:45
15min
Brain Tumor Detection and Classification Using Deep Learning Models
Swaraj Sontakke

Brain tumors are life-threatening and require early, accurate diagnosis. MRI scans are widely used for detection, but tumors can look very different from one another in MRI images, so identifying them correctly is critical. To address this, our project develops an automated deep learning system that detects and classifies brain tumors directly from MRI images into four classes: Glioma, Meningioma, Pituitary tumor, and No tumor.
We trained and evaluated three powerful deep learning models using transfer learning — VGG-16, VGG-19, and EfficientNet-B1 — on a dataset of 7023 MRI images. The system applies resizing, normalization, and data augmentation to improve learning and reduce overfitting. During evaluation, VGG-16 achieved ~88% accuracy, VGG-19 achieved ~90% accuracy, and EfficientNet-B1 achieved the highest accuracy of ~94%, making it the most efficient model for multi-class tumor classification.
This work shows that deep learning can greatly support radiologists by speeding up diagnosis and increasing reliability. The results prove that EfficientNet-B1 is a strong option for real-world clinical applications because it provides high accuracy, low loss, and fast training with fewer parameters.
Key Takeaways:
• It reduces the need to manually check every scan and supports doctors in making decisions with more confidence.
• The project shows how deep learning can be used to solve a real medical problem using MRI image data.
• High accuracy means lower chances of misclassification, giving patients peace of mind about their results.

AI, Data Science, and Emerging Tech
VYAS - 1 - Room#VY124
15:45
15min
Centralized CI Monitoring with GitHub Actions
Anushka Shukla

In large open-source ecosystems, CI failures often go unnoticed across multiple repositories — leading to broken main branches, delayed fixes, and developer frustration.
The Ansible DevTools team at Red Hat manages over 15 active repositories (Python, TypeScript, Ansible Collections), and we faced exactly this problem.
We built a centralized CI observability system using GitHub Actions reusable workflows and organization-level Slack notifications, enabling instant visibility of failures across all critical repositories — with zero manual setup per repo.
This talk explains how we evolved a simple Slack alert into a scalable, maintainable, and low-noise CI health monitoring system that any DevOps team can adopt — without paid tools or heavy infrastructure.

Open Track
VYAS - 1 - Room#VY102
15:45
15min
Pod-to-Pod Ping: The Hidden Wiring of Kubernetes
Vibhor Chinda

Ever wondered what really happens when one pod pings another pod inside your Kubernetes cluster?

In this lightning talk, we’ll peel back the layers of the Kubernetes networking model to reveal the hidden wiring that makes pod-to-pod communication possible.

We’ll trace a simple ping command from one pod to another and unpack what happens under the hood, from CNI plugins and veth pairs to routing tables and bridge interfaces. By the end, you’ll walk away with a clear mental map of how Kubernetes networking actually works — no buzzwords, just packets, pipes, and pure clarity.

Perfect for anyone who’s ever typed kubectl exec and wondered, “Wait… how does this packet know where to go?”

Open Track
VYAS - 1 - Room#VY104
15:45
15min
Shift-Left Security in Practice with GitLab
Rashi Chaubal

Security is no longer something to “add at the end.” In modern DevOps, teams must embed security checks early and automatically — the essence of “shift-left security.”
In this session, we’ll explore how to implement practical, automated security testing in CI/CD pipelines using open tools that are natively integrated in GitLab, with GitLab CI/CD as the example platform.

We’ll demonstrate how integrated open-source scanners like Semgrep and OWASP ZAP work under the hood — all without needing enterprise licenses. The focus will be on principles and workflow design: where to start, how to keep pipelines fast, and how to give developers actionable feedback.

Attendees will leave with a ready-to-use blueprint to implement shift-left security in their own environments.

This talk is for developers, DevOps engineers, and security practitioners who want to make security a seamless, automated part of delivery — not a late-stage blocker.

Cybersecurity and Compliance
VYAS - G - Room#VY015
16:00
16:00
15min
Break
(Workshop) VYAS - G - Room#VY004
16:00
15min
Break
VYAS - G - Room#VY003
16:00
15min
Break
VYAS - G - Room#VY015
16:00
15min
Break
VYAS - G - Room#VY016
16:00
15min
Break
VYAS - 1 - Room#VY124
16:00
15min
Break
VYAS - 1 - Room#VY102
16:00
15min
Break
VYAS - 1 - Room#VY103
16:00
15min
Break
VYAS - 1 - Room#VY104
16:15
16:15
45min
AI for Legal: The story of hype, adoption, and what comes next
Ashwin Vinay Phadke

Elevator Pitch:
We work with the world’s top legal firms, and although machine learning has always been central to the automated review process, the recent shift to generative AI has definitely brought people back to conference rooms to understand and react to what they did not see coming so fast.

Abstract:
The legal industry has long wrestled with Artificial Intelligence. This talk charts the evolution of legal AI, beginning with the "Hype 1.0" era of early machine learning, characterized by tools like Technology Assisted Review (TAR). Adoption was slow, met with resistance, and limited to niche, high-cost applications.

We will analyze what was learned from this phase: namely, that AI solutions must be domain-specific, prove a clear ROI, and address deep-seated issues of data security and ethical compliance. This foundation was critical, but it was the arrival of Generative AI (GenAI) that triggered "Hype 2.0," fundamentally shifting the focus, thanks to unprecedented accessibility and natural language capabilities, from incremental efficiency to truly transformative power in drafting, research, and analysis.

The discussion will pivot to the current state of adoption, examining how firms and in-house teams are responsibly navigating the risks (hallucinations, client confidentiality) while leveraging tools for high-value tasks. Finally, we will look ahead at the upcoming changes: how GenAI is fundamentally reshaping the legal value chain, demanding new skills (prompt engineering), changing billing models (flat fees over billable hours), and setting the stage for the "AI Lawyer" of the future—one who is augmented, not replaced, but strategically elevated by technology. We will explore why data and data security become important, how we create contexts that contain highly sensitive information (known only to certain parties), and how performance is measured given that there is no room for hallucinations in the legal domain. We will go through how retrieval works for legal, how data security and integrity work, how to design prompts for legal AI systems, and where it is making strides versus where we see a lot of friction. Do markets and languages matter?

Attendees will leave with a clear roadmap to mitigate current risks, capitalize on GenAI's strategic advantages, and ensure they remain at the forefront of the transforming legal profession.

AI, Data Science, and Emerging Tech
VYAS - 1 - Room#VY102
16:15
45min
Beyond ImagePullBackOff: A Stateless, Secret-less Distributed Registry for Edge
Prasanth Baskar

We have all seen it. Your GitOps tool reports Synced in seconds, but your edge node pods are stuck in ContainerCreating/ImagePullBackOff for minutes... hours. This is not just a low-bandwidth problem; it is a fundamental design flaw. We are trying to apply a centralized, cloud-native architecture to a decentralized, distributed-systems problem.

This leads to slow startups and failed updates; even worse, pullSecrets on every node create a massive security risk. In this session, using Harbor Satellite (a subproject of goharbor), we will run a stateless distributed registry alongside your container workloads.

https://github.com/container-registry/harbor-satellite

Cloud, Edge, and Sustainable Computing
VYAS - G - Room#VY016
16:15
75min
Build Your First AI Voicebot in 75 Minutes: From LLM to Live Phone Calls
Shivaa Tripathi, Prashanth Dixit KS, Narayanababu Laveti D V

AI voicebots are transforming how businesses interact with customers, from 24/7 support agents to intelligent appointment schedulers. With LLMs like ChatGPT, Claude, Gemini, and Mixtral, building conversational AI has never been more accessible. What if your bot could actually answer phone calls? This hands-on workshop takes you from zero to a working AI voicebot in 75 minutes. We'll start by building a conversational agent powered by your choice of LLM (GPT-4.1 mini or Gemini 2.5 Flash). You'll design conversation flows, handle user intents, and create natural voice responses. Then comes the magic: we'll connect your voicebot to real telephone infrastructure. By the end of the session, you'll be able to call your own bot from your phone, engage in a conversation, and walk away with production-ready code that you can deploy immediately. No prior experience with AI or telephony is required. Just bring your laptop and curiosity.

What Participants Will Build
By the end of this workshop, each participant will have:
1. A conversational AI voicebot powered by LLMs (GPT-4.1 mini or Gemini 2.5 Flash)
2. Natural voice interaction with speech recognition and text-to-speech synthesis
3. Smart conversation handling with context memory and intent recognition
4. Live phone integration: call your bot from any phone and talk to it
5. Deployable codebase ready for production use cases

Detailed Agenda

Part 1: Building Your AI Voicebot (30 minutes)
1. Introduction to conversational AI and voicebot architecture
2. Setting up your LLM backend (GPT-4.1 mini or Gemini 2.5 Flash)
3. Designing conversation flows and system prompts
4. Adding speech-to-text and text-to-speech for voice interaction
5. Testing your voicebot locally

Part 2: Making It Real with Phone Integration (30 minutes)
- How phone calls connect to your code (webhooks & APIs)
- Connecting your voicebot to telephony infrastructure
- Handling inbound calls and real-time voice processing
- Call flow management: greetings, transfers, and fallbacks
- Live demo: Call your voicebot from your phone!

Part 3: Production Tips & Q&A (15 minutes)
- Error handling and graceful degradation
- Scaling your voicebot for real-world traffic
- Use cases: customer support, appointments, notifications
- Open Q&A and next steps

Prerequisites
• Basic Python knowledge (functions, APIs, JSON)
• Laptop with Python 3.12+ installed
• Code editor (VS Code recommended)
• Mobile phone for live testing
• All API keys, phone numbers, and infrastructure are provided; no signups required

Key Takeaways
• A working AI voicebot you can call from your phone
• Hands-on experience with LLM integration for voice applications
• Complete source code ready to customise and deploy
• Understanding of how to connect AI to real phone infrastructure
• Sandbox access for continued experimentation after the workshop

Target Audience: Developers curious about AI voice applications. Whether you're building customer service automation, exploring conversational AI, or just want to create something cool with LLMs, this workshop is for you. We assume basic knowledge of Python but no prior experience with AI or telephony.

AI, Data Science, and Emerging Tech
(Workshop) VYAS - G - Room#VY004
16:15
45min
DevOps at LLM Speed: Using an AI Copilot for Kubernetes and Jenkins
Karan Jagtiani

This session is a practical, demo-driven walkthrough of how I, as a cloud architect, use an open source AI copilot called Skyflo.ai to speed up day-to-day DevOps and SRE workflows without sacrificing safety or compliance. We'll dive into a real incident in a live Kubernetes environment with a Jenkins deployment, where the audience will see the AI agent in action and how I prompt the agent to triage the incident, decide on the next steps, and act on them. Attendees will see how teams are cutting incident response time by utilizing Skyflo and its human-in-the-loop approach.

Key Takeaways:

  • Learn how I cut down incident response time by 50% at Storylane by using an AI copilot
  • See how to use an AI copilot to speed up your Kubernetes, Argo, Helm, and Jenkins workflows
  • Understand the human-in-the-loop architecture of Skyflo and behind the scenes of how it works

Live Demo Plan (70% time):

1) Trigger a deployment on Jenkins using Skyflo
2) Wait for the agent to report back with the status of the deployment
3) The pipeline will succeed, but the deployment in Kubernetes will fail
4) The agent will go through a process of discovery to find the root cause of the issue
5) The agent will decide on the next steps and act on them

Under-the-Hood (30% time):

  • Brief Architecture Tour: How LangGraph and MCP are used to power the Skyflo agent
  • Custom MCP Server: kubectl/helm/argo/jenkins with typed parameters and validation
  • Streaming over SSE: Live events over Redis pub/sub and server-sent events over MCP
AI, Data Science, and Emerging Tech
VYAS - 1 - Room#VY103
16:15
45min
Don’t Trust Opaque Clouds, Cryptographically Verify Instead!
Chris Butler, Pradipta Banerjee

The principle of "Don't trust, verify" is fundamental, yet cloud computing often forces users to place implicit trust in opaque infrastructure and the organizations that audit the clouds.
Now you have options to follow that principle.
Confidential computing fundamentally changes this paradigm: hardware-based trusted execution environments (TEEs) allow the cryptographic isolation of a user's workload from the underlying infrastructure provider, and this isolation can be verified on demand using remote attestation.
This talk will explain the fundamentals of confidential computing, including TEEs and how remote attestation can be used to verify the integrity of a TEE. After laying this foundation, the talk will explore overlapping projects in the ecosystem such as Trustee, Keylime, fs-verity, and Confidential Containers, and what is required to assemble these projects in a way that allows you to cryptographically verify your security posture.

Cybersecurity and Compliance
VYAS - G - Room#VY015
16:15
45min
Talking Chaos: An AI Co-Pilot for Resilience With LitmusChaos
Pritesh Kiri

Imagine telling your system, “Simulate a pod failure on the payments service,” or asking, “What chaos experiments have we run on service X in the last 30 days?” In this talk, we explore how natural language interfaces powered by AI and the Model Context Protocol (MCP) are reshaping the way we design, execute, and analyze chaos experiments using LitmusChaos. The audience will be introduced to a more intuitive experience, one where you no longer need to write YAML or memorize CRDs to validate resilience.

This session isn’t just about technology, it’s about accessibility. By replacing YAML and dashboards with human language, we’re lowering the barrier to entry for chaos engineering. Resilience testing becomes accessible to QA, product, and on-call teams, not just platform experts. Whether or not you’re currently using Litmus, this talk will provide a roadmap for building your own chaos copilot and a glimpse into a future where resilience is everyone’s responsibility.

AI, Data Science, and Emerging Tech
VYAS - G - Room#VY003
16:15
45min
Tech for Good: Open Source Solutions for Climate and Healthcare
Rahul Belokar, Priya R Belokar

Technology has the power to make a real difference — especially when it’s open and collaborative. In this talk, we’ll explore how open source tools and communities are tackling some of the world’s biggest challenges in climate change and healthcare. We’ll look at inspiring projects like OpenClimateFix, OpenMRS, and Open Data Cube, and see how they use data, AI, and open infrastructure to create real social impact. We’ll also share how tools like PyArrow, Snowflake, Starburst, and OpenShift can help build scalable, secure data pipelines for sustainable solutions. This session is all about how engineers and open source contributors can use technology not just to build systems — but to build a better future.

Open Track
VYAS - 1 - Room#VY104
16:15
45min
Why Your RAG Is Failing, and How Open Source Can Fix It
Anindita Sinha Banerjee, Nitish Singh

Most Retrieval-Augmented Generation (RAG) systems fail long before the LLM even comes into play. The real issue is not the model, but the documents feeding it. Enterprise PDFs often have broken reading order, distorted tables, inconsistent formatting, embedded images, and scattered metadata. When this messy content enters the retrieval pipeline, even the strongest language model will struggle, leading to irrelevant answers or subtle hallucinations. This talk breaks down the reasons why RAG often collapses in real-world conditions, and shows how open-source tools can turn a fragile workflow into something reliable.

The first part of the session introduces Docling, an open-source document processing toolkit that converts complex PDFs, Word files, presentations, images, and audio into clean and structured content. It preserves layout, hierarchy, tables, and multimodal elements so that your RAG pipeline finally receives high-quality input. The second part covers OpenSearch, a fully open and scalable engine for vector indexing, hybrid retrieval, and metadata-driven search. Together, these tools offer a practical foundation for building RAG systems that are accurate, explainable, and robust at enterprise scale.

We will walk through the overall architecture, key design patterns, and lessons learned from real implementations. To make the concepts concrete, the session will end with a short demo that takes a messy PDF, processes it through Docling, indexes it in OpenSearch, and queries it within a RAG workflow that consistently returns the right context. Attendees will leave with a clear understanding of why many RAG systems fail today and a practical roadmap for building reliable RAG applications using open-source technology.

AI, Data Science, and Emerging Tech
VYAS - 1 - Room#VY124
17:00
17:00
15min
Break
VYAS - G - Room#VY003
17:00
15min
Break
VYAS - G - Room#VY015
17:00
15min
Break
VYAS - G - Room#VY016
17:00
15min
Break
VYAS - 1 - Room#VY124
17:00
15min
Break
VYAS - 1 - Room#VY102
17:00
15min
Break
VYAS - 1 - Room#VY103
17:00
15min
Break
VYAS - 1 - Room#VY104
17:15
17:15
15min
Anubis Will Judge You: AI-Powered Firewall Defending Web Traffic
Shane Cardoz M

Tired of clicking through endless CAPTCHAs that break your user experience? I know a solution: Anubis, an AI-powered web firewall designed to eliminate the need for frustrating human verification tests. Anubis defends websites and APIs from AI-driven scrapers and bots by leveraging cryptographic proof-of-work challenges alongside heuristic traffic analysis, allowing legitimate users seamless access without interruption.

This talk is for web developers, security engineers, and system administrators seeking effective, privacy-conscious bot defense without the usability drawbacks of CAPTCHAs. After attending, you will understand how to deploy a lightweight firewall at the web edge that filters AI-powered automation, preserves user experience, and hardens your applications against modern scraping threats using open-source technology.

Cybersecurity and Compliance
VYAS - G - Room#VY015
17:15
15min
Fix More Than Code: How Code Reviews Improve Teams, Skills, and Products.
Manish Bainsla

In an era of AI-generated PR descriptions and automated linting, the "Human API" of code reviews is under threat. We often treat reviews as a technical gate to pass, but for a software engineer, they are the highest-bandwidth channel for growth, mentorship, and domain mastery.

Drawing on my journey from Intern to Senior Engineer, I’ll share how code reviews served as my "fast-track" to technical knowledge and team leadership mindset. We’ll discuss why AI can check your syntax but can't understand your business logic—and why a simple suggestion like adding a specific logger can save hours of production downtime. This talk shares why we should move beyond "LGTM" and transform code reviews into a powerful tool for building cohesive teams and scalable products in the age of AI.

Open Track
VYAS - 1 - Room#VY104
17:15
15min
Image Mode: The Supported Way to Extend RHEL CoreOS
Prachiti Prakash Talgulkar

OpenShift relies on Red Hat Enterprise Linux CoreOS (RHCOS) as its foundation. RHCOS is tightly integrated with the rest of the platform and designed to be consistent, secure, and predictable during upgrades. But in real clusters, teams often need just a little more flexibility: maybe a custom driver, a small troubleshooting tool, or a monitoring agent that isn’t part of the default OS. Because RHCOS is immutable, these needs have historically been difficult to support.

Image mode, also known as On-Cluster Layering (OCL), changes this. Image mode brings a cloud-native approach to OS management by treating the OS just like a container image: you define your configuration as code, build a unified OS image inside the cluster, and roll it out across nodes with the same safety and consistency OpenShift is known for. Need to add an agent? A driver? A tool? Apply a hotfix? Image mode makes these customizations fully supported, upgrade-safe, and declarative, without external pipelines or custom OS builds. Even with limitations, such as how /var is handled, image mode gives clear guidance on where and how to add custom content.

This talk introduces image mode from the ground up: what it is, how it works, and why it matters. We’ll walk through its architecture, the MachineOSConfig workflow, the in-cluster build process, and what the experience looks like for administrators and customers. Whether you’re enabling a critical operational tool or supporting a specialized workload, image mode provides a reliable, modern way to customize RHCOS while keeping your cluster stable and easy to manage.

Cloud, Edge, and Sustainable Computing
VYAS - G - Room#VY016
17:15
15min
Quantization at the Edge: Making a 4GB Model Run on 1GB RAM
Sneha Singh

Running generative AI on edge hardware is challenging because LLMs require large memory footprints far beyond what affordable ARM boards offer. Developers often give up or rely on cloud inference, which introduces latency, privacy concerns, and connectivity issues. This problem exists because most quantization tutorials target server-class GPUs and ignore memory-constrained devices where every megabyte matters. Traditional quantization (8-bit or 4-bit) still leaves models too large for sub-2GB RAM environments, and no practical guidance exists for pushing the boundaries on real edge hardware.

This talk walks through a practical method for shrinking a 4GB LLM to run comfortably on a 1GB device through aggressive quantization, operator fusion, KV-cache trimming, and runtime memory pooling. The approach uses open-weight models, offline quantization, and lightweight inference runtimes optimized for ARM CPUs. A demo shows how to load and run a quantized model on a basic board while maintaining usable accuracy. This session will benefit embedded engineers, makers, AI practitioners, and cloud-edge architects exploring low-cost, privacy-friendly AI deployments.
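As a toy illustration of the core idea (this is not the speaker's pipeline, just a minimal numpy sketch): symmetric 8-bit quantization stores each float32 weight as an int8 plus one shared scale, cutting the matrix's memory footprint by roughly 4x at the cost of a bounded rounding error.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Map float32 weights to int8 with a single symmetric scale."""
    scale = np.abs(w).max() / 127.0   # largest weight maps to +/-127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights for inference."""
    return q.astype(np.float32) * scale

w = np.random.randn(1024, 1024).astype(np.float32)
q, scale = quantize_int8(w)

print(w.nbytes // q.nbytes)  # int8 storage is 4x smaller than float32
# Rounding error is bounded by one quantization step:
print(float(np.abs(w - dequantize(q, scale)).max()) <= scale)
```

Real edge deployments (4-bit schemes, per-channel scales, KV-cache trimming) build on exactly this trade-off between bytes per weight and reconstruction error.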

AI, Data Science, and Emerging Tech
VYAS - 1 - Room#VY102
17:15
15min
Reasoning Operators: Bringing LLM Logic Into Kubernetes Control Loops
Pratik Kumar Panda

Kubernetes Operators automate many workflows, but they’re limited to deterministic, rule-based logic. Modern clusters generate ambiguous signals: logs, events, and partial failures that often require interpretation rather than fixed rules. This talk introduces AI Operators: Kubernetes controllers enhanced with LLMs to summarize cluster state, interpret anomalies, and assist in reconciliation. The need arises because traditional operators struggle with human-like tasks such as identifying patterns across resources or explaining misconfigurations.

We’ll break down a simple but safe architecture for integrating LLMs into reconcilers: CRDs that request AI insights, guardrails to prevent unsafe actions, and workflows where the operator remains authoritative while the model provides interpretation. We’ll discuss practical use cases like summarizing failing deployments, classifying noisy events, validating config changes, and offering remediation suggestions without letting the LLM execute decisions directly.

The talk includes a small demo of an Operator that listens to cluster events and produces human-readable insights. Attendees will learn when AI-augmented controllers make sense, how to build them with Kubebuilder or Kopf, and how to add LLM reasoning safely to existing automation. This session is ideal for platform engineers and SREs exploring intelligent automation on Kubernetes.

Cloud, Edge, and Sustainable Computing
VYAS - 1 - Room#VY103
17:15
15min
Vaani: Architecting Conversational AI, the End-to-End Pipeline of Modern Voice Agents
Abhishek Jha

An existing conversational agent, powered by an LLM (e.g., ChatGPT, Ask Red Hat) or a retrieval system, provides the core intelligence. However, true utility and accessibility in modern applications require a high-fidelity, real-time voice interface. This talk provides a comprehensive architectural blueprint for converting a text-in, text-out agent into a fluid voice-in, voice-out platform.

AI, Data Science, and Emerging Tech
VYAS - G - Room#VY003
17:15
15min
What Did My Cluster Just Do? Event Intelligence with LightRAG
Mitali Bhalla

Kubernetes is constantly talking—your cluster fires off thousands of tiny clues about what it’s doing, what’s breaking, and what’s about to break. But most of that insight disappears before anyone sees it. This talk shows how to turn those fleeting events into actionable intelligence using LightRAG, a lightweight retrieval engine that gives your cluster a fast, durable “memory.” We’ll explore how embedding-based retrieval can reveal hidden patterns, connect related failures, and answer questions like “What just changed?” or “Has this meltdown happened before?” Attendees will walk away with a simple, low-cost architecture for bringing event intelligence to Kubernetes—no GPUs, no heavy AI stack, just clearer visibility into how their clusters really behave.

AI, Data Science, and Emerging Tech
VYAS - 1 - Room#VY124
17:30
17:30
5min
Day#1 ends
Opening, Keynotes, Closing: Floor8 Terrace
17:30
5min
Day#1 ends
(Workshop) VYAS - G - Room#VY004
17:30
5min
Day#1 ends
VYAS - G - Room#VY003
17:30
5min
Day#1 ends
VYAS - G - Room#VY015
17:30
5min
Day#1 ends
VYAS - G - Room#VY016
17:30
5min
Day#1 ends
VYAS - 1 - Room#VY124
17:30
5min
Day#1 ends
VYAS - 1 - Room#VY102
17:30
5min
Day#1 ends
VYAS - 1 - Room#VY103
17:30
5min
Day#1 ends
VYAS - 1 - Room#VY104
17:30
5min
Day#1 ends
(Booths) VYAS - G - Open area
18:30
18:30
150min
DevConf.IN Exclusive Speaker Dinner (Invite Only) - Location shared directly (TBD)

To celebrate the incredible lineup for DevConf.IN 2026, we are hosting an invite-only dinner for our speakers. This is a private opportunity to connect with external community leaders, industry professionals and leaders, MIT WPU - SoCSE leaders, Red Hat senior leadership, and the core organizing team in a relaxed setting before the main event.

Entry Requirements: Speaker Badge and Govt-issued ID (address & age proof) as per local Govt. guidelines

Offsite Location
09:00
09:00
60min
Badge pickup and networking (VYAS - G)
Opening, Keynotes, Closing: Floor8 Terrace
09:00
60min
Badge pickup and networking (VYAS - G)
(Workshop) VYAS - G - Room#VY004
09:00
60min
Badge pickup and networking (VYAS - G)
VYAS - G - Room#VY003
09:00
60min
Badge pickup and networking (VYAS - G)
VYAS - G - Room#VY015
09:00
60min
Badge pickup and networking (VYAS - G)
VYAS - G - Room#VY016
09:00
60min
Badge pickup and networking (VYAS - G)
VYAS - 1 - Room#VY124
09:00
60min
Badge pickup and networking (VYAS - G)
VYAS - 1 - Room#VY102
09:00
60min
Badge pickup and networking (VYAS - G)
VYAS - 1 - Room#VY103
09:00
60min
Badge pickup and networking (VYAS - G)
VYAS - 1 - Room#VY104
09:00
60min
Badge pickup and networking (VYAS - G)
(Booths) VYAS - G - Open area
09:00
60min
Morning Mixer: Badge Pickup (G Floor) & Networking Coffee (8th Floor)

09:00 - 09.45 AM | Networking Coffee (VYAS-8): Fuel up and meet the community before the rush
09:45 AM | Settling In: Wrap up networking and take your seats for the Opening Panel Keynote

Opening, Keynotes, Closing: Floor8 Terrace
10:00
10:00
60min
Panel Keynote: Floor8 Terrace
(Workshop) VYAS - G - Room#VY004
10:00
60min
Panel Keynote: Floor8 Terrace
VYAS - G - Room#VY003
10:00
60min
Panel Keynote: Floor8 Terrace
VYAS - G - Room#VY015
10:00
60min
Panel Keynote: Floor8 Terrace
VYAS - G - Room#VY016
10:00
60min
Panel Keynote: Floor8 Terrace
VYAS - 1 - Room#VY124
10:00
60min
Panel Keynote: Floor8 Terrace
VYAS - 1 - Room#VY102
10:00
60min
Panel Keynote: Floor8 Terrace
VYAS - 1 - Room#VY103
10:00
60min
Panel Keynote: Floor8 Terrace
VYAS - 1 - Room#VY104
10:00
60min
Panel Keynote: Floor8 Terrace
(Booths) VYAS - G - Open area
10:00
60min
Collaboration Synthesis: Engineering India’s Open AI Future from Classroom to Cloud
Gaurav Hirani, Ameeta Roy, Brian Proffitt, Balaji Patil, Chris Bredesen, Rajan Shah

Panel discussion: The Collaboration Synthesis: Engineering India’s Open AI Future from Classroom to Cloud

Opening, Keynotes, Closing: Floor8 Terrace
11:00
11:00
300min
Workshops, sessions, booths on G and 1 Floors
Opening, Keynotes, Closing: Floor8 Terrace
11:00
15min
Break
(Workshop) VYAS - G - Room#VY004
11:00
15min
Break
VYAS - G - Room#VY003
11:00
15min
Break
VYAS - G - Room#VY015
11:00
15min
Break
VYAS - G - Room#VY016
11:00
15min
Break
VYAS - 1 - Room#VY124
11:00
15min
Break
VYAS - 1 - Room#VY102
11:00
15min
Break
VYAS - 1 - Room#VY103
11:00
15min
Break
VYAS - 1 - Room#VY104
11:00
30min
Break & Booth setup
(Booths) VYAS - G - Open area
11:15
11:15
15min
AI Is Not Magic: The Physics, Math, and Hardware Behind It
Adarsh Dubey

AI systems often look magical from the outside: they can write code, generate images, and understand language. But behind the illusion, AI is powered by surprisingly simple foundations: basic linear algebra, optimization, high-dimensional geometry, and the physics-driven constraints of modern hardware. In this talk, we break down how AI models actually operate at a fundamental level: how vectors and matrices describe meaning, how gradient descent lets neural networks learn, why high-dimensional geometry makes embeddings work, and how GPUs and tensor cores accelerate the math. By the end, participants will understand that AI systems are powerful but not mystical, and see the clear connections between math, physics, and computing that enable modern models like transformers and LLMs.
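To make the "gradient descent lets neural networks learn" point concrete, here is a minimal illustrative sketch (plain numpy, not tied to any framework): learning the slope of y = 2x by repeatedly nudging a parameter against the gradient of a mean-squared-error loss.

```python
import numpy as np

# Data generated by the "true" function y = 2x.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * x

w = 0.0    # initial guess for the slope
lr = 0.01  # learning rate (step size)

for _ in range(500):
    # Gradient of mean((w*x - y)^2) with respect to w.
    grad = np.mean(2 * (w * x - y) * x)
    w -= lr * grad  # step downhill on the loss surface

print(round(w, 3))  # converges to ~2.0, the true slope
```

Training an LLM is this same loop scaled up: billions of parameters, a loss over predicted tokens, and hardware tuned to compute the gradients fast.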

AI, Data Science, and Emerging Tech
VYAS - 1 - Room#VY102
11:15
15min
Apples, Oranges, and ML Models: Model Validation vs Benchmarking
Gaurav Kamathe

In the rush to operationalize machine learning, teams often celebrate “great benchmark results” while overlooking whether their model has truly been validated for its intended purpose. The result? Impressive numbers that crumble in real-world deployment — models that outperform baselines but underperform expectations.

This talk explores the subtle — yet crucial — difference between model validation and model benchmarking. While both rely on similar metrics, they answer fundamentally different questions.

We’ll unpack how these two processes differ in goal, methodology, and risk management, using simple mental models and relatable real-world analogies. You’ll learn how to design evaluation workflows that distinguish between proving correctness and proving competitiveness — and why this distinction is essential for reproducibility, transparency, and trust, especially in open-source and collaborative ML environments.

AI, Data Science, and Emerging Tech
VYAS - G - Room#VY015
11:15
15min
Art of Innovation: The Artist’s Guide to Product Management
Yogendra Joshi

As we shift from building interfaces to orchestrating AI Agents, the standard PM playbook of metrics and wireframes is no longer enough. Let me share how a Google PM operates in reality.

In this 15-minute session, I’ll share how embracing the 'Artist' mindset of prioritizing intuition, ambiguity, and empathy can prevent us from building sterile AI experiences.

Learn how to stop just 'prompt engineering' your 'Innovation' and start designing the soul of the machine and human interaction.

AI, Data Science, and Emerging Tech
VYAS - 1 - Room#VY103
11:15
15min
Chandrayaan Mindset for Your AI Career: Build Systems That Survive India
Monika Chahal

India doesn’t need bigger models as much as it needs AI systems that survive reality: multiple languages and scripts, noisy inputs, uneven connectivity, diverse devices, cost constraints, and high-stakes use cases. In this 15-minute lightning talk, I’ll translate the “Chandrayaan mindset” into a practical career playbook for AI builders: how to move from shipping impressive demos to shipping trustworthy systems that teams can adopt and maintain.

I’ll share a simple framework shaped by my engineering journey across five countries—speed of execution, rigor and repeatability, creativity with sovereignty, product communication, and India’s adaptability—and map it to what matters in real deployments: data readiness, evaluation as “unit tests for behavior,” safety/access controls, monitoring, and rollback-ready releases.

You’ll leave with a one-page checklist and a 30-day challenge: pick one artifact (eval set, test harness, monitoring dashboard, docs, deployment blueprint) you can contribute at work or in open source to strengthen India’s AI ecosystem—and your own growth.

What attendees will learn:
- A practical framework to grow from model-chasing to systems thinking.
- A concrete checklist + 30-day action to build trustworthy AI for India-scale variance.

AI, Data Science, and Emerging Tech
VYAS - 1 - Room#VY124
11:15
15min
FIREWATCH: Automated JIRA Bug Creation Based on CI Job Configuration
Saumya Vishwakarma

Tired of manually creating JIRA bugs after each CI run? Firewatch is here to save the day! This innovative, open-source tool automates bug creation based on pre-defined rules within your CI configuration, saving you time and effort.
Firewatch lets users define which JIRA issues get created based on a set of predetermined rules for a CI run, or report success on a successful run. The automation is driven by the Firewatch configuration: a list of rules, each defined under failure_rules or successful_rules with a set of required and optional values.

Key Features:
1. Easy-to-use JSON configuration: Define your rules without complex coding, so users can track issues in their CI runs efficiently.
2. Flexible rule system: Specify criteria for both successful and failed builds, capturing all relevant issues based on labels.
3. Enhanced data insights: Leverage the labels Firewatch generates to create dynamic JIRA dashboards for clear visualization of key metrics.
4. Open-source collaboration: Contribute to Firewatch's development and benefit from the Red Hat QE community's expertise.
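To make the JSON configuration idea concrete, here is an illustrative sketch built as a Python dict and serialized to JSON. Only the top-level failure_rules / successful_rules keys come from the abstract; the inner field names (step, jira_project, jira_labels) are hypothetical placeholders, not Firewatch's exact schema.

```python
import json

# Hypothetical firewatch-style configuration (field names are illustrative).
config = {
    "failure_rules": [
        {
            # Hypothetical rule: file a bug when an install step fails.
            "step": "install-cluster",
            "jira_project": "MYPROJ",
            "jira_labels": ["ci-failure", "install"],
        },
    ],
    "successful_rules": [
        {
            # Hypothetical rule: report on a green run.
            "jira_project": "MYPROJ",
            "jira_labels": ["ci-success"],
        },
    ],
}

serialized = json.dumps(config, indent=2)
print(serialized)
```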

In this presentation, you'll learn:
1. How Firewatch automation works, with a deep dive into its configuration.
2. Usage examples of JIRA issues created and reported in OpenShift CI.
3. How you can leverage and contribute to Firewatch in your own CI system.

Open Track
VYAS - 1 - Room#VY104
11:15
15min
Sealing Kubernetes with Confidential Clusters
Nitesh Narayan Lal

Think of a confidential Kubernetes cluster as a high-security bank vault. To get inside, a node needs verified attestation—think of it as requiring both a physical key and a biometric scan.
We'll show how Trustee acts as the vault's automated security system, validating every node's credentials. The Confidential Cluster Operator is the Bank Manager, setting access policies, continuously updating the master access list (reference values), and ensuring only trusted nodes can get in.
Attendees will learn practical insights into building and operating confidential clusters and how attestation enforces a "vault-grade" Kubernetes experience, where no untrusted node can breach the system.

Cloud, Edge, and Sustainable Computing
VYAS - G - Room#VY016
11:15
75min
Supercharging Kubernetes with AI
Ashutosh Bhakare

Observability and troubleshooting are crucial aspects of managing Kubernetes clusters, presenting challenges for both beginners and experienced users. This session will introduce k8sgpt, a CNCF sandbox tool that simplifies this process by leveraging LLMs to explain cluster issues and suggest solutions in plain English. We will also explore integrating k8sgpt with image scanning tools like Trivy. Furthermore, we will delve into kubectl-ai, a powerful tool that dramatically eases interaction with Kubernetes.
Kubectl-ai can help users understand application behavior, generate commands, and even create or patch resources using simple English instructions, making complex Kubernetes operations more accessible.

AI, Data Science, and Emerging Tech
(Workshop) VYAS - G - Room#VY004
11:15
15min
The Open Source Community Playbook: What Works, What Doesn’t, and Why
Pritesh Kiri

In this talk, I’ll share a practical playbook for creating and sustaining a healthy open source community, drawn from my experience building projects like LitmusChaos, ToolJet, and ReactPlay. We’ll explore what really drives engagement, how to convert users into contributors, and the key practices that help maintainers avoid burnout while scaling their projects.
We’ll also discuss what you should and shouldn’t expect from a community as an open source organization. Setting the right expectations is critical for guiding the community’s growth in a healthy and sustainable direction.
Expect honest stories, actionable frameworks, and a look at what actually works (and what absolutely doesn’t) when you’re growing an open source community in the real world.
Whether you’re a maintainer, community manager, or just starting your open source journey, this session will give you tools and patterns you can apply immediately to grow your project and its community.

Open Track
VYAS - G - Room#VY003
11:30
11:30
15min
Break
VYAS - G - Room#VY003
11:30
15min
Break
VYAS - G - Room#VY015
11:30
15min
Break
VYAS - G - Room#VY016
11:30
15min
Break
VYAS - 1 - Room#VY124
11:30
15min
Break
VYAS - 1 - Room#VY102
11:30
15min
Break
VYAS - 1 - Room#VY103
11:30
15min
Break
VYAS - 1 - Room#VY104
11:30
240min
DevConf.IN 2026 Final Booths List with abstracts (Day#2)

Consolidated List
1. Red Hat India driven open source initiatives (community projects / meetups)
2. Fedora Project Community Corner
3. LogOut Project : Privacy Garage
4. MongoDB User Group Pune (MUG Pune)
5. Login Without Limits: Passwordless Across Consoles and Clouds
6. unifAI: no code agent orchestrator
7. Empowering Developer Innovation: Experiencing Backstage
8. k0s Project Booth
9. Secure Flow Booth
10. OKD (Origin Kubernetes Distribution): Community and Hands-On Demos
11. FOSS United Pune: Open Source Onboarding & Community Showcase
12. Build Open Source Document Workflows with ONLYOFFICE

Find full abstract details at https://drive.google.com/file/d/1lmdB0D52KELjmjK24-LkRPoTzY5cMRHu/view?usp=sharing

(Booths) VYAS - G - Open area
11:45
11:45
45min
Breaking the Build Before It Breaks You: The Magic of E2E Tests
Alka Kumari, Rizwana Naaz

The growing complexity of modern software systems, spanning distributed architectures, microservices, and sophisticated user interfaces, demands rigorous and comprehensive quality assurance strategies. End-to-End (E2E) testing has emerged as a cornerstone of reliability in this environment, providing a holistic safeguard that complements unit and integration testing. By simulating complete user journeys across frontend, backend, integrations, and infrastructure, E2E tests validate that all system components function seamlessly as a unified whole under real-world conditions.

E2E testing delivers high confidence in both the development and deployment processes by uncovering defects that typically surface only when components interact, such as data flow inconsistencies, configuration drift, and API contract mismatches. Acting as a critical gatekeeper within Continuous Integration and Continuous Delivery (CI/CD) pipelines, E2E tests accelerate feedback loops, strengthen DevOps and GitOps practices, and mitigate production risks that could lead to financial or reputational loss. Furthermore, modern E2E frameworks enhance efficiency through maintainable automation, parallel execution, and cross-environment consistency, serving as living documentation of system behaviour.
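As a minimal sketch of the kind of check the abstract describes (not the speakers' code): stand up a stub HTTP service in-process and drive it the way a user-facing client would, asserting on the full request/response path rather than a single unit. In a real pipeline the target URL would point at a deployed environment.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Stub endpoint standing in for a deployed service.
        body = json.dumps({"status": "ok", "path": self.path}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "user journey": a client request through the whole HTTP stack.
url = f"http://127.0.0.1:{server.server_port}/health"
with urllib.request.urlopen(url, timeout=5) as resp:
    payload = json.loads(resp.read())

server.shutdown()
print(payload)
```

The same shape scales up: swap the stub for a staging URL and the single request for a scripted user journey, and the assertion becomes a CI/CD gate.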

Investing in scalable and resilient E2E automation is not merely a technical choice but a strategic imperative. It enables organizations to deliver high-quality features faster, ensure operational stability, and maintain a seamless user experience in an increasingly complex and dynamic digital landscape.

Open Track
VYAS - 1 - Room#VY104
11:45
45min
Reproducible, Immutable, Bootable: Exploring bootc with Podman Desktop
Praveen Kumar

Creating custom Linux distributions has traditionally required specialized tooling, deep OS knowledge, and platform-specific build environments. With bootc, we now have a modern, container-native approach that turns OCI images into bootable, updatable Linux systems — and with Podman Desktop, this workflow becomes accessible on Linux, macOS, and Windows.

In this session, we’ll walk through how bootc leverages familiar container-building techniques to define an entire OS, enabling reproducible, declarative, and automated system images. We will explore how Podman Desktop simplifies this process with its cross-platform UI and built-in bootc extensions, allowing developers to build, test, and publish custom Linux OS artifacts without leaving their workstation.

In this session, we will cover:

  • What bootc is and how it transforms OCI images into bootable distros
  • How to use Podman Desktop as a cross-platform environment for bootc workflows
  • How to build a custom Linux OS image from scratch
  • How to test bootable images locally with the bootc extension in Podman Desktop
  • Best practices for OS versioning, updates, and reproducibility

Cloud, Edge, and Sustainable Computing
VYAS - G - Room#VY016
11:45
45min
Securing Agentic Platforms with Zero-Trust Workload Identities
Trilok Geer, Andrew Block, Anjali Telang

Agentic architectures introduce new security challenges like dynamic policies, autonomous decision loops, continuous model execution, and cross-service actions. In this talk, we unpack the full identity flow for securing these systems from attesting compute, verifying workload lineage, enabling cryptographic identity with SPIFFE/SPIRE, integrating OIDC federation, and enforcing fine-grained authorization using purpose-built control loops. We explore patterns for securing AI agents, vector databases, model-serving pipelines, and GPU/Confidential Compute workloads. The session includes design patterns, identity lifetime management, trust-domain boundaries, workload attestation using hardware-backed roots, and how to build a platform where every component, from the operator to the model pipeline, authenticates and authorizes seamlessly.

AI, Data Science, and Emerging Tech
VYAS - 1 - Room#VY102
11:45
45min
Shift-Left for LLMs: Securing the AI Model Supply Chain
Nagesh Rathod

In today’s rapidly evolving AI landscape, Large Language Models (LLMs) are becoming more capable and efficient, yet this advancement introduces new security challenges that traditional SDLC and shift-left approaches do not address. As organizations rush to adopt LLMs, they often overlook critical risks such as model tampering, prompt-based attacks, data leakage, hallucinations, and unsecured inference pipelines. These gaps create an alarming and largely uncharted attack surface. Without proper processes and controls, both the model and sensitive data become vulnerable, making LLM security a critical need rather than an optional consideration.

Addressing LLM security requires a holistic, end-to-end strategy rather than reliance on a single tool. The first step is securing the model itself through signing and verification using Sigstore and Cosign, ensuring integrity and provenance, followed by vulnerability scanning with NVIDIA Garak. Guardrails around model interactions—such as moderation filters, PII detection, hallucination checks, and pre/post prompt screening—help prevent unsafe prompts, malicious injections, and harmful model outputs. Beyond safeguarding the model, securing inference traffic is equally important. Envoy can serve as the controlled API gateway to enforce authentication, rate-limiting, and protection against external threats, while Istio adds a zero-trust layer within the cluster through its service mesh, providing secure service-to-service communication and enhanced observability. Completing the security posture, LLM red teaming introduces structured adversarial testing with attack corpora including prompt injections, jailbreak attempts, and data-exfiltration prompts, which can be continuously executed as regression tests to ensure ongoing robustness.

Attendees will gain practical, comprehensive knowledge of how to secure LLM systems in real-world production environments. They will learn about the unique risks introduced by modern LLMs, how to build a secure LLM supply chain, implement effective guardrails, protect API and cluster-level communication, and incorporate red teaming techniques tailored for LLMs. By exploring the processes, tools, and best practices essential for production-grade LLM security, attendees will leave with a clear roadmap for deploying and operating LLMs safely, reliably, and at scale.

Cybersecurity and Compliance
VYAS - G - Room#VY015
11:45
45min
Supercharge Your GitOps with ArgoCD Agent
Anand Kumar Singh, Akhil Nittala

GitOps, championed by tools like ArgoCD, has become the de facto standard for modern application deployment. While ArgoCD excels in managing applications within a single Kubernetes cluster, deploying and managing workloads across a fleet of clusters can introduce complexity. This session introduces the ArgoCD Agent, a powerful component designed to simplify and secure multi-cluster GitOps workflows. Modern enterprises run multiple clusters to balance compliance, resilience, and team autonomy across global operations. Attendees will learn what the ArgoCD Agent is, how it addresses challenges in a distributed environment, why it was developed, how it solves the scaling problem and see a live demonstration of it in action. If you manage more than one Kubernetes cluster and use ArgoCD, this talk is for you.

Cloud, Edge, and Sustainable Computing
VYAS - 1 - Room#VY103
11:45
45min
The Compute Revolution You’re Ignoring: JavaScript in Science
Gunj Joshi

What if anyone, anywhere, could run scientific code - instantly, from a browser tab? No setup, no downloads, just pure computation. The web is evolving from a platform for apps to a platform for science. In this talk, Gunj Joshi shows how modern JavaScript and stdlib are bringing high-performance numerical computing to billions of devices. From AI models to linear algebra, the browser is becoming the next great compute runtime - open, local, and accessible to all.

Open Track
VYAS - G - Room#VY003
11:45
45min
The GPU Utilization Problem: What’s Going Wrong and How to Solve It
Shamsher Ansari, Amita Sharma

GPUs are the backbone of modern AI and cloud workloads. But in reality, many GPUs sit idle most of the time. Even in well-run data centers, a large part of GPU capacity goes unused, which increases costs and slows teams down.

In this talk, we’ll break down why GPU utilization is so low and what you can do about it.

We’ll start with the basics: how GPUs are used today and where things go wrong. You’ll learn about common problems like uneven workloads, inefficient scheduling, limited visibility into GPU usage, and mismatches between hardware and software.

Next, we’ll walk through practical solutions. This includes GPU sharing, right-sizing workloads, better scheduling, and using the right monitoring tools. The focus will be on approaches you can actually apply in real systems.

We’ll also share real-world lessons from building a GPU-as-a-Service (GPUaaS) platform, covering features like model checkpointing, job preemption and resume, and queue-based scheduling with open-source tools such as Kueue to improve GPU efficiency.

By the end of the session, you’ll have a clear understanding of how to use GPUs more efficiently in AI, ML, and cloud environments, without unnecessary complexity.

AI, Data Science, and Emerging Tech
VYAS - 1 - Room#VY124
12:30
12:30
45min
Lunch break
(Workshop) VYAS - G - Room#VY004
12:30
45min
Lunch break
VYAS - G - Room#VY003
12:30
45min
Lunch break
VYAS - G - Room#VY015
12:30
45min
Lunch break
VYAS - G - Room#VY016
12:30
45min
Lunch break
VYAS - 1 - Room#VY124
12:30
45min
Lunch break
VYAS - 1 - Room#VY102
12:30
45min
Lunch break
VYAS - 1 - Room#VY103
12:30
45min
Lunch break
VYAS - 1 - Room#VY104
13:15
13:15
45min
30 Fedora releases in 30 minutes: a look back with lessons for any project
Matthew Miller

Former Fedora Project Leader Matthew Miller leads a whirlwind tour through the first 35 Fedora releases, drawn from the memories and the mailing list posts of many different Fedora contributors and users.

The talk covers both technical direction and community growth over the years. While those particularly interested in Fedora Linux will enjoy the details, the Project's missteps and successes have lessons for everyone. No prior technical or community experience is needed.

Time will be reserved for audience questions at the end.

Open Track
VYAS - 1 - Room#VY104
13:15
45min
Enabling Data Sovereignty in AI Workflows through Data Mesh Architecture
Jeyaramachandran Paulraj, PRATHEEBA RAVINDRAN

In the era of globalized AI adoption, data sovereignty has emerged as a critical challenge for both government and industry stakeholders. Regulatory mandates and emerging national data residency laws require organizations to ensure that data remains within prescribed jurisdictions while still enabling innovation and analytics. Traditional centralized architectures conflict with these mandates, forcing organizations to choose between compliance and AI performance. This talk presents an AI-enabled Data Mesh architecture that enables domain-oriented ownership, federated governance, and localized model training to meet sovereignty requirements without sacrificing scalability.

AI, Data Science, and Emerging Tech
VYAS - 1 - Room#VY124
13:15
45min
FusionStack: The Cross-Platform Blueprint
Samson Dmello, Mandar Dixit

This demonstration showcases a powerful, replicable model for managing and protecting traditional Virtual Machine (VM) workloads across a modern hybrid cloud environment. By unifying OpenShift Virtualization (KubeVirt) on-premise and on ROSA (Red Hat OpenShift Service on AWS) with the declarative automation power of Ansible Automation Platform, we eliminate manual complexity in key cloud mobility and resilience operations. We also leverage the power of the broader ecosystem by integrating Veeam to meet enterprise data-protection requirements.

Cloud, Edge, and Sustainable Computing
VYAS - 1 - Room#VY103
13:15
45min
Resolving Performance and Security Issues with eBPF
Kashyap Vasant Ekbote, Yogesh Bhalchandra Babar

eBPF (extended Berkeley Packet Filter) is revolutionizing how we approach Linux kernel tooling, offering unprecedented access to kernel functions and data while maintaining safety and high performance. This session provides a practical deep dive into how modern applications can leverage eBPF to solve critical, long-standing challenges in both system security and application performance. We will begin with a clear explanation of the eBPF paradigm—the kernel's safe, sandboxed virtual machine—and its key components.
We will then explore two major use cases:
Security Enhancement: Demonstrating how eBPF can enforce granular, real-time security policies by implementing custom system call blocking and filtering mechanisms, effectively sandboxing processes directly within the kernel.
Performance Optimization: Analyzing a common system bottleneck (e.g., memory latency, inefficient I/O, or custom tracing) and showing how eBPF programs can be attached to kernel probes to provide deep, low-overhead observability and optimization opportunities that are impossible with traditional user-space tooling.
Attendees will leave with a solid understanding of eBPF's capabilities, its minimal performance footprint, and a framework for applying this powerful technology to their own performance and security requirements.

Open Track
VYAS - G - Room#VY015
13:15
45min
Solving the ML Pain You Forgot to Mention
Aniket Paluskar

In a world drowning in data, most teams still struggle with one thing: getting the right features to the right models at the right time. In this talk, I introduce Feast, the open-source feature store that acts like the chopsticks of data—lightweight, elegant, precise, and surprisingly powerful.

This will be a beginner-friendly talk explaining what an ML pipeline is, what its key components are, and why the feature store is one of its most underrated pieces!

I’ll break down how Feast streamlines feature management across real-time and batch pipelines, why it outperforms ad-hoc feature engineering, and how teams can use it to ship reliable ML systems faster. Whether you're building your first ML pipeline or scaling to production, come learn how Feast can turn messy data workflows into something clean, reusable, and production-ready.
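The core idea Feast implements can be sketched in plain Python (this is a conceptual, stdlib-only illustration of a feature store, not Feast's actual API): one definition of a feature, served consistently to offline training (historical values as of a timestamp, with no future leakage) and to online inference (latest value).

```python
from bisect import bisect_right

# (timestamp, value) rows per entity, sorted by timestamp.
FEATURE_LOG = {
    "user_1": [(1, 0.2), (5, 0.7), (9, 0.9)],
}

def get_historical(entity: str, as_of: int):
    """Point-in-time correct lookup for training (no future leakage)."""
    rows = FEATURE_LOG.get(entity, [])
    idx = bisect_right([ts for ts, _ in rows], as_of)
    return rows[idx - 1][1] if idx else None

def get_online(entity: str):
    """Latest value for low-latency inference."""
    rows = FEATURE_LOG.get(entity, [])
    return rows[-1][1] if rows else None

print(get_historical("user_1", as_of=6))  # value known at t=6, not t=9
print(get_online("user_1"))               # freshest value for serving
```

A feature store's job is to keep these two read paths consistent with a single feature definition, which is exactly the training/serving-skew problem ad-hoc pipelines tend to get wrong.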

AI, Data Science, and Emerging Tech
VYAS - 1 - Room#VY102
13:15
75min
The 75-Minute challenge: Building your first MCP server, Live
Aarti Jha, Kranti Prasad, Abhishek Kumar, Manish Bainsla

Workshop Outline:

Phase 1: The Foundation
The "Why" & The Architecture: Brief overview of why standardized engineering (via FastAPI) is the secret sauce for moving AI from "cool demo" to "production tool."
Environment Check: Rapid-fire validation of Git, Python, VSCode, and Postman environments.
The Blueprint: Introduction to the template-mcp-server repository and its core components.

Phase 2: From Zero to Local
Cloning & Configuration: Initializing your local environment and understanding the configuration files.
The "Hello World" Run: Getting the base template running on your machine.
Walkthrough: Navigating the repository structure—where the logic lives and how the server communicates.

Phase 3: Building Your Custom Tool
The Code-Along: Step-by-step implementation of a custom "tool" within the MCP framework.
Logic Injection: Defining the tool's purpose, input parameters, and execution logic.
Live Testing: Running the server locally and verifying that the tool is discoverable and functional.
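The "custom tool" pattern from Phase 3 can be sketched in stdlib-only Python (illustrative only; the real template-mcp-server builds on FastAPI and the names here are hypothetical): tools self-register with a name, a description, and a callable, so the server can list them for discovery and invoke them by name.

```python
import inspect

TOOLS = {}

def tool(name: str, description: str):
    """Decorator that registers a function as a discoverable tool."""
    def register(fn):
        TOOLS[name] = {
            "description": description,
            "parameters": list(inspect.signature(fn).parameters),
            "handler": fn,
        }
        return fn
    return register

@tool("add_numbers", "Add two integers and return the sum.")
def add_numbers(a: int, b: int) -> int:
    return a + b

def list_tools():
    """What an agent sees when it asks the server for its tools."""
    return sorted(TOOLS)

def invoke(name: str, **kwargs):
    """What happens when the agent calls a tool by name."""
    return TOOLS[name]["handler"](**kwargs)

print(list_tools())
print(invoke("add_numbers", a=2, b=3))
```

Discovery plus named invocation is the contract that lets an agent like Cursor find and call your tool without any hard-coded knowledge of it.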

Phase 4: Integration & Orchestration
Connecting to the Agent: Integrating your local MCP server with an agent (like Cursor) or a remote orchestrator.
The Real-World Test: Watching the agent invoke your custom tool to solve a live task.
Containerisation: A quick look at the "deployment-ready" state—wrapping your server for enterprise scale.

Outcomes:
Development: Move from a blank slate to a FastAPI-based MCP server.
Extensibility: Learn exactly how to add new tools to existing templates.
Integration: Connect your server to real-world agents like Cursor.
Scalability: Leave with a containerised solution ready for production deployment.

Note: This is a high-speed lab. We prioritize doing over discussing. Ensure your Python/Git/VSCode/Postman is warmed up and ready to pull images!

AI, Data Science, and Emerging Tech
(Workshop) VYAS - G - Room#VY004
13:15
45min
The Power of Edge AI: Revolutionizing Future of Real-Time, Autonomous Processing
Deepak Das

Edge AI brings machine learning inference directly to devices, enabling real-time processing without cloud dependency. This architecture reduces latency from hundreds of milliseconds to microseconds, enhances privacy by keeping data local, cuts bandwidth costs by 60-80%, and enables offline operation. Applications include smart traffic management with dynamic signal timing, industrial predictive maintenance, autonomous vehicles, healthcare wearables with real-time monitoring, and smart home energy optimization. The combination of TinyML (machine learning on microcontrollers), neuromorphic chips, and 5G connectivity is making edge AI increasingly practical.

  1. Real-Time Processing at the Edge
  2. Latency Reduction: From Milliseconds to Microseconds
  3. Enhanced Privacy and Security
  4. Cost Savings: Reduced Bandwidth Usage by 60-80%
  5. Offline Capabilities for Seamless Operation

The talk will show that Edge AI is not just a technological trend: it is transforming industries by providing faster, cheaper, and more secure ways of processing data. Combined with emerging technologies like TinyML, neuromorphic chips, and 5G, it is reshaping everything from healthcare to smart cities and laying the groundwork for an autonomous future.

AI, Data Science, and Emerging Tech
VYAS - G - Room#VY003
13:15
45min
eSim: Building an Open-Source EDA Ecosystem for Edu, R&D, and Community Innovation
Sumanto Kar

Electronic Design Automation (EDA) tools are foundational to hardware innovation, yet access to professional-grade tools remains limited due to cost, licensing restrictions, and steep learning curves. eSim is a fully open-source EDA toolchain developed under the FOSSEE project at IIT Bombay, aimed at democratizing circuit design and simulation for students, educators, researchers, and makers.

This talk introduces eSim as a community-driven open-source alternative for analog, digital, and mixed-signal circuit simulation, built on top of established FOSS components such as ngspice, KiCad, GHDL, and Verilator, and Python-based tooling like PyQt5. The talk also covers how the tool is becoming portable across different environments and is now incorporating AI-assisted capabilities to enhance usability, learning, and debugging. Beyond features, the session focuses on how open EDA ecosystems are built, sustained, and scaled—both technically and socially.

Target Audience
- Open-source developers and contributors
- Students and researchers interested in hardware, EDA, and simulation
- Educators building open laboratory workflows
- Community organizers and maintainers of FOSS projects
- Developers curious about open hardware and open EDA ecosystems
- People with backgrounds in Electrical, Computer Science, AI/ML and related fields.

What to Expect from the Session
Participants can expect:
- A technical overview of eSim’s architecture and workflow
- Demonstrations of circuit simulation pipelines
- Discussion on integration with other open-source tools
- Insights into challenges of maintaining large academic FOSS projects
- Ways developers can contribute—code, documentation, testing, or outreach

Key Outcomes
After the session, attendees will:
- Understand how eSim fits into the global open-source EDA landscape
- Learn how open tools can replace proprietary software in education
- See how government-backed initiatives can accelerate open ecosystems
- Be equipped to adopt or advocate for eSim in labs, courses, or communities
- See how an open-source EDA tool like eSim can help everyone fabricate chips at very low cost

Related projects and government initiatives:
eSim aligns strongly with India’s push toward open digital public infrastructure and self-reliant technology ecosystems. eSim is part of the FOSSEE (Free/Libre and Open Source Software for Education) project. The eSim project aligns with the vision of the National Education Policy (NEP 2020), Digital India and Atmanirbhar Bharat, and the India Semiconductor Mission. The talk emphasizes how open-source communities, academia, and policy can work together to build sustainable engineering infrastructure.

Open Track
VYAS - G - Room#VY016
14:00
14:00
10min
Break
VYAS - G - Room#VY003
14:00
10min
Break
VYAS - G - Room#VY015
14:00
10min
Break
VYAS - G - Room#VY016
14:00
10min
Break
VYAS - 1 - Room#VY124
14:00
10min
Break
VYAS - 1 - Room#VY102
14:00
10min
Break
VYAS - 1 - Room#VY103
14:00
10min
Break
VYAS - 1 - Room#VY104
14:10
14:10
45min
AIOps for Distributed Environments - Deep Dive
Andreas Spanner, PRATHEEBA RAVINDRAN, Vasanthalakshmi

This session will go in depth on the challenges of, and approaches to, AIOps. Distributed environments entail everything from microservices running on the same server to physically distributed far-edge compute. This session will cover different components such as anomaly detection, root-cause analysis (RCA) and remediation, possible maturity classifications, and predictive as well as generative AI approaches. Finally, this session is an invitation to join and contribute to the Linux Foundation-hosted AIOps project.

AI, Data Science, and Emerging Tech
VYAS - 1 - Room#VY124
14:10
45min
From IaC to InfraOps: Automating Day-2 Operations with Terraform Actions & Ansible
Dr. Rahul Gaikwad

Infrastructure as Code has standardized Day-0 provisioning, but most enterprises still handle Day-2 operations patching, configuration changes, drift remediation, and incident response through manual processes and fragmented automation. This session shows how Terraform Actions, combined with the Red Hat Ansible Automation Platform (AAP), transforms Terraform from a provisioning tool into an operational control plane. With Terraform managing infrastructure state and Ansible executing configuration and remediation workflows, teams can unify provisioning and operations into a single, governed workflow. Using the new Terraform action for AAP, a single terraform apply can trigger Event-Driven Ansible (EDA) to execute dynamic automation across Red Hat environments. The result is a repeatable, policy-driven model for Day-2 operations that reduces operational friction, eliminates ad-hoc access, and improves reliability at scale.

Target Audience: Developers, Architects, DevOps, Security, SRE

Cloud, Edge, and Sustainable Computing
VYAS - G - Room#VY016
14:10
45min
From Side Project to Industry Standard: Story of AsyncAPI
Aayush, Shriya Chauhan

We will discuss how AsyncAPI started and grew to a valuation of more than $33M, sponsored by companies including Red Hat, Solace, IBM, IQVIA Technologies, GraviteeSource Inc, Postman, Bump.sh, Svix, TIBCO, Aklivity, Kong, Route4Me, and HDI Global.

AsyncAPI helps teams design, document, and manage event-driven systems—but what does that actually look like in practice? In this talk, we’ll explore how companies like LEGO, eBay, and major banks use AsyncAPI to tackle real-world challenges: improving developer handoffs, managing Kafka topics, and generating code directly from message formats.

We’ll skip the marketing hype and focus on practical lessons—how AsyncAPI fits into existing pipelines, which tools teams rely on, and what’s worked (and what hasn’t). If you’re curious about bringing AsyncAPI into your own projects, this session will show you what’s possible—and what to watch out for.

Open Track
VYAS - 1 - Room#VY104
14:10
45min
Scaling ML Pipelines with Feast, Ray and Kubeflow
Nikhil Kathole, Abhijeet Dhumal

Feature engineering is eating your training time. Data loading is your bottleneck. Sound familiar?
If your training jobs crawl, your features take forever to compute, or your pipeline breaks every time you scale, this talk is for you.

In this session, we’ll show how to turn a slow, file-based ML pipeline into a distributed, production-ready architecture using modern open-source tooling:
- Feast for feature management
- Ray for distributed data processing
- Kubeflow Training Operator for orchestrating distributed training on Kubernetes

We’ll demonstrate an end-to-end pipeline powering a Temporal Fusion Transformer trained on 421K rows of Walmart sales data, and show how PyTorch DDP across multiple GPUs cuts training time while hitting 10.5% MAPE (compared to the typical 15–20% industry baseline).

You’ll see:
- Faster feature loading using Ray + Feast
- Raw data flowing through a fully managed feature platform
- Distributed PyTorch jobs launched and scaled with Kubeflow Training Operator
- Production inference path powered by Feast’s hybrid storage & compute
- How Ray transforms feature engineering performance at scale
- How Feast standardizes feature computation across training & inference

You’ll leave with a repeatable blueprint for building ML pipelines that scale as your models, data, and teams grow, along with the confidence to adopt these tools in your own production environment.

AI, Data Science, and Emerging Tech
VYAS - 1 - Room#VY103
14:10
45min
Transforming SBOMs from Compliance Burden to Security Asset
Martin Sikora, Ales Raszka

Managing Software Bill of Materials (SBOMs) has shifted from a security recommendation to a legal requirement. However, for large-scale projects, the primary challenge is ensuring these records are accurate and verifiable without adding friction to the build process.

In this talk, we share how we built an automated SBOM lifecycle into Konflux, a Kubernetes-native software factory system. We will provide a technical look at Mobster, the tool we use to automatically generate, enrich, and store SBOMs for every production build. We will demonstrate how this integration ensures that every container image is accompanied by a transparent record of its dependencies.

Beyond the build, we explore how this data becomes a strategic asset for Product Security. By integrating with the Trusted Profile Analyzer, we move from per-build compliance to portfolio-wide visibility. We will discuss the theoretical framework for using this data to map vulnerabilities across thousands of components, allowing security teams to pinpoint exactly where a high-risk dependency exists and orchestrate rapid, large-scale remediation.
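The "where-used" query described above can be sketched in a few lines (the SBOM documents below are simplified, hypothetical SPDX-like fragments, not real Mobster output): given one SBOM per product, find every product that ships an affected version of a package.

```python
# Hypothetical, simplified SBOMs keyed by product.
SBOMS = {
    "product-a": {"packages": [{"name": "openssl", "version": "3.0.1"},
                               {"name": "zlib", "version": "1.2.13"}]},
    "product-b": {"packages": [{"name": "openssl", "version": "3.0.9"}]},
}

def where_used(package: str, bad_versions: set) -> list:
    """Return the products shipping an affected version of `package`."""
    hits = []
    for product, sbom in SBOMS.items():
        for pkg in sbom["packages"]:
            if pkg["name"] == package and pkg["version"] in bad_versions:
                hits.append(product)
    return sorted(hits)

print(where_used("openssl", {"3.0.1"}))  # → only the product pinned to 3.0.1
```

At portfolio scale the same query runs against a central SBOM store, which is what turns a new CVE from a scramble into a lookup.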

What we will cover:
- The SBOM Requirement: A brief look at the necessity of supply chain transparency and why manual manifests fail at scale.
- Architectural Deep Dive: How we integrated Mobster into the Konflux pipeline to capture metadata and dependencies during the build.
- Standardization and Interoperability: How using industry standards (SPDX/CycloneDX) ensures data portability across security platforms.
- Empowering Product Security: How centralized SBOM data gives portfolio-wide visibility, letting security teams query an entire software catalog for specific vulnerable packages.
- Accelerated Remediation: The theory of using "where-used" data to reduce the time between vulnerability discovery and patches across multiple products.
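For orientation, this is roughly what a minimal SBOM record looks like in CycloneDX JSON (an illustrative, hand-trimmed fragment; the component shown is hypothetical, and field names follow the CycloneDX JSON schema):

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {
      "type": "library",
      "name": "openssl",
      "version": "3.0.7",
      "purl": "pkg:rpm/redhat/openssl@3.0.7"
    }
  ]
}
```

The `purl` (package URL) is what makes portfolio-wide "where-used" queries practical: a single identifier format across ecosystems.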

Cybersecurity and Compliance
VYAS - 1 - Room#VY102
14:10
45min
Unifying AI experience for customers: Building multi-agent systems with Google ADK
Anuj Singla, Omkar Pavale, Deepak Koul

The AI revolution is creating a new problem: AI silos. Your Account team has a chatbot. Your Product team has a chatbot. Support has its own. While departments develop their own specialized agents, customers are left with a fragmented and confusing experience. They don't want to hunt for the right interface. They demand a single, unified conversation.
This is the next step for developers: moving from simple prompt engineering to complex agent orchestration. How do you build a "super-agent" that understands a user's intent and seamlessly routes queries to the right specialized sub-agent?
In this hands-on workshop, using Google's Agent Development Kit (ADK), you will architect and build a multi-agent system that provides a unified customer experience. You will also learn how we overcame the following challenges in building a multi-agent system:
Intent & Routing: How does the main agent know which sub-agent to talk to?
Context Sharing: How do you pass information and state between agents without losing the thread?
Safety & Evaluation: How do you ensure the entire system is reliable and safe?

Key Takeaways for Attendees:
Understand why multi-agent systems are the future of AI.
Learn how to architect a multi-agent AI that can execute tasks reliably.
Get hands-on experience with the Google ADK to create and orchestrate a team of specialized agents.
Learn effective context sharing and state management between agents.
Hear findings from our multi-agent POC.

AI, Data Science, and Emerging Tech
VYAS - G - Room#VY003
14:10
45min
eBPF-Driven Security for Kubernetes: The Tetragon Story
Aftab S

Imagine you’re running a high-security building. You have cameras (logs), alarm systems (monitoring), and guards (security policies). But what if these guards could see everything happening in real-time and prevent threats before they escalate? That’s exactly what Tetragon does for Kubernetes security.
Traditional security tools often act only after an incident has happened, or are too slow for today’s cloud-native threats. Tetragon, powered by eBPF, changes the game by providing deep, real-time visibility into Kubernetes workloads and enforcing security policies instantly.
In this session, we’ll start with the basics of Kubernetes security, explore the limitations of traditional runtime security tools, and then dive into how Tetragon detects and mitigates threats without slowing down your workloads.
If you’ve ever wondered how eBPF helps detect unauthorized access, process executions, and network anomalies in real-time, this talk is for you.

Kubernetes workloads are constantly changing, making security a continuous challenge. Traditional security tools struggle to provide real-time visibility, leaving gaps in detection and enforcement.
This talk will explore how Tetragon, an eBPF-powered runtime security tool, enhances Kubernetes security by:
- Providing deep observability into process and network activity
- Enforcing real-time security policies without performance trade-offs
- Detecting threats instantly to prevent breaches before they spread
- Reducing complexity in securing cloud-native workloads
Attendees will gain a clear understanding of modern Kubernetes security challenges and how Tetragon helps build scalable, proactive security strategies in cloud-native environments.
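To make Tetragon's policy model concrete, here is a TracingPolicy sketch in the style of the upstream file-monitoring examples (the hook point and path values are illustrative, not from this talk):

```yaml
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: monitor-etc-passwd
spec:
  kprobes:
  - call: "security_file_open"   # in-kernel hook, fires on every file open
    syscall: false
    args:
    - index: 0
      type: "file"
    selectors:
    - matchArgs:
      - index: 0
        operator: "Prefix"
        values:
        - "/etc/passwd"          # only report opens of this path
```

Because the filter runs in the kernel via eBPF, only matching events ever reach user space, which is how Tetragon keeps overhead low at scale.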

Cybersecurity and Compliance
VYAS - G - Room#VY015
14:30
14:30
5min
Break
(Workshop) VYAS - G - Room#VY004
14:35
14:35
75min
Securing Your ML Model Supply Chain with OpenSSF Model Signing and Sigstore
Abhishek Ghosh, Shubham Bhardwaj

Machine learning models are shared and deployed at massive scale, yet most organizations have no way to verify whether a model is safe, authentic, or tampered with. This creates a growing risk surface. Model backdoors, malicious deserialization, and compromised model files are already appearing in the wild.
This hands-on workshop introduces participants to model supply chain security using the new OpenSSF Model Signing Standard together with Sigstore’s cryptographic signing and verification tools. We will show how to sign an ML model, verify its integrity before loading, and integrate these steps into a simple ML workflow.
The session is designed for beginner- and intermediate-level practitioners. No deep cryptography background is required; basic ML familiarity is sufficient. By the end, attendees will be able to apply signing and verification to their own models and understand how these techniques protect against real-world supply chain attacks.
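The core idea of verify-before-load can be sketched without any signing infrastructure. The snippet below is our simplified stand-in, not the OpenSSF model_signing API: it builds a digest manifest over a model directory and refuses to "load" if any file changed (a real signer additionally binds this manifest to a Sigstore-backed identity):

```python
import hashlib
import pathlib

def build_manifest(model_dir: str) -> dict:
    """Map each file under a model directory to its SHA-256 digest."""
    root = pathlib.Path(model_dir)
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*")) if p.is_file()
    }

def verify(model_dir: str, manifest: dict) -> bool:
    """Check model integrity before loading: every file must match."""
    return build_manifest(model_dir) == manifest

# Demo with a throwaway "model" consisting of one weights file.
tmp = pathlib.Path("demo_model")
tmp.mkdir(exist_ok=True)
(tmp / "weights.bin").write_bytes(b"\x00\x01\x02")

manifest = build_manifest("demo_model")
print(verify("demo_model", manifest))        # untampered model passes

(tmp / "weights.bin").write_bytes(b"evil")   # simulate a tampered file
print(verify("demo_model", manifest))        # tampering is detected
```

The workshop replaces the bare hash comparison with cryptographic signatures, so the manifest itself cannot be swapped along with the files.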

Cybersecurity and Compliance
(Workshop) VYAS - G - Room#VY004
14:55
14:55
10min
Break
VYAS - G - Room#VY003
14:55
10min
Break
VYAS - G - Room#VY015
14:55
10min
Break
VYAS - G - Room#VY016
14:55
10min
Break
VYAS - 1 - Room#VY124
14:55
10min
Break
VYAS - 1 - Room#VY102
14:55
10min
Break
VYAS - 1 - Room#VY103
14:55
10min
Break
VYAS - 1 - Room#VY104
15:05
15:05
45min
LogAn: Large-scale Log Analysis with Small Language Models
Rahul Shetty, Pranjal Gupta, Siddardh R A, Harshit Kumar

Do you spend endless hours troubleshooting issues in your application? Does your issue diagnosis process involve browsing through a huge volume of logs? We invite you to join us on an exciting journey as we unveil Log Analyzer (LogAn) — a powerful tool built to revolutionize how IT log analysis is handled, from small to large enterprise-level applications.

LogAn leverages Small Language Models (SLMs) to uncover hidden insights within logs—insights often missed by traditional analysis methods. Designed to empower Site Reliability Engineers (SREs) and Support Engineers, LogAn accelerates issue diagnosis like never before. The tool has been in production since March 2024, scaled across 70 software products, processing over 2000 tickets for issue diagnosis and saving more than 300 person-hours.

This presentation will cover:
1. Challenges in IT Log Analysis Today: The current landscape and obstacles in traditional support methods.
2. Introducing LogAn: A deep dive into how the Log Analyzer tool works and what it offers.
3. Optimizing LLM Inference on CPU for Large-Scale Logs: Techniques to ensure efficient processing of vast log data.
4. Insight Extraction and Causal Analysis: How LogAn summarizes and identifies root cause(s) from high volumes of log data.

Finally, we’ll conclude with a live demo of LogAn, showcasing how it can drastically reduce incident detection and resolution time. Join this session to discover how you can leverage small language models through LogAn to analyze large-scale log data and speed up root cause analysis.

AI, Data Science, and Emerging Tech
VYAS - G - Room#VY003
15:05
45min
Self-Healing Pipelines? Resilient Deployments with Hardened Containers
Rajani Ekunde

In this session, we will discuss how DevOps teams can design self-healing CI/CD pipelines using hardened Docker images and automated recovery checks. We’ll cover integrating Trivy scans, Cosign signatures, and health-probe triggers into GitOps workflows. You’ll learn how these guardrails prevent misconfigurations, block risky images, and enable reliable rollbacks before incidents escalate. Combining SRE principles with container hardening, we’ll show how automation can make resilience measurable — not mythical.

Cloud, Edge, and Sustainable Computing
VYAS - 1 - Room#VY103
15:05
45min
Simplifying Storage in Kubernetes with Rook-Ceph
Subham Rai, Nikhil Ladha, Rakshith R

Kubernetes has transformed how we run applications—but storage at scale remains one of its hardest challenges. Rook, a CNCF Graduated project, brings Ceph's battle-tested storage engine into the cloud-native world, delivering Block, File, and Object storage as a fully self-managed Kubernetes-native experience.

This talk uncovers the real-world truths behind running Rook-Ceph in production: what actually works, and the patterns that make enterprise storage reliable inside Kubernetes. We'll explore fast, repeatable cluster bring-up, proven Day-2 operational tactics, multi-tenant and isolation models, and strategies to survive failures, upgrades, and disaster scenarios.

You'll also get insights directly from upstream maintainers—covering lessons learned, common pitfalls, and how the community is shaping the next generation of cloud-native storage.

If you're evaluating, operating, or scaling stateful workloads on Kubernetes, this session will give you the clarity and confidence to run Rook-Ceph like a pro.

Cloud, Edge, and Sustainable Computing
VYAS - G - Room#VY016
15:05
45min
Sovereign AI: India’s Next Big Leap
Pranjal Bathia, Renu Jhamtani

Artificial Intelligence is rapidly becoming a strategic asset for nations, influencing everything from defence and finance to education and public services. For a country as large, diverse, and digitally connected as India, relying solely on external AI technologies comes with risks. This is where the idea of Sovereign AI becomes important—not as a move toward isolation, but as a way to build and control India’s own AI capabilities, data ecosystems, and technological future.
India has already demonstrated its ability to build systems at a global scale. Platforms like Aadhaar, UPI, and ONDC have reshaped digital infrastructure; the Bhashini initiative is enabling AI to understand Indian languages; and the IndiaAI Mission is driving national ambition. With one of the world’s largest young technical workforces, India is uniquely positioned to shape the next wave of AI innovation.
This session will highlight why Sovereign AI is essential for India’s long-term growth and security, and what steps can help us move in that direction.

Key takeaways include:
The four pillars of Sovereign AI—local models, open-source collaboration, data sovereignty, and hardware capability.

Why national security, economic growth, and cultural representation depend on building AI locally.

A viewpoint on how India can build AI systems that reflect its languages, values, and ambitions using open source.

AI, Data Science, and Emerging Tech
VYAS - 1 - Room#VY102
15:05
45min
Think Like an Attacker: Why Your npm install Is a Hacker's Dream
Sudhanshu Dasgupta

You run npm install. Three seconds later, your AWS keys are gone. Sound dramatic?
It happened to 500+ developers in September 2025.
You have probably already heard what went wrong. This talk shows you how attackers think, so you can think one step ahead. We'll walk through real attacks on open source supply chains (the stuff you install every day) and show you the exact moment where things go sideways. No jargon. No assuming you're a security expert. Just honest explanations of how modern attacks work and what you can actually do about it.
You'll learn how attackers pick their targets (hint: dormant packages nobody's watching), what they automate (everything), and why traditional security tools keep missing obvious threats. We'll demo simple, open source tools you can run right now to check if your projects are already compromised and show you how to catch malicious packages before they hit your codebase.

This isn't theory. These attacks are happening today. Let's stop making it easy for them.
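One concrete, low-effort guardrail worth knowing before the session (our suggestion, not necessarily part of the speaker's toolkit): install-time lifecycle scripts such as `postinstall` are the usual execution vector in npm supply chain attacks, and npm can be told to skip them project-wide via `.npmrc`:

```ini
; .npmrc -- refuse to run install-time lifecycle scripts
; (postinstall etc.), the common execution vector in npm
; supply chain compromises
ignore-scripts=true
```

Pair this with `npm ci` against a committed lockfile and `npm audit` in CI, and re-enable scripts only for the specific packages that genuinely need them.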

Cybersecurity and Compliance
VYAS - G - Room#VY015
15:05
45min
Why Your RAG System Hallucinates: Fixing the Content Segmentation Problem
Sharan Harsoor

Enterprise RAG systems fail not because of LLM limitations, but due to a critical overlooked foundation: content segmentation. Organizations invest heavily in sophisticated retrieval architectures while using naive character-count splitting that destroys semantic coherence. Contract clauses severed mid-sentence, code functions fragmented, medical narratives broken apart; these segmentation failures cause hallucinations, inconsistent responses, and lost user trust.

This session demonstrates why intelligent content segmentation has emerged as a critical engineering discipline for production AI systems. Through live demonstrations, we compare the same enterprise knowledge base processed with naive splitting versus semantic-aware segmentation, measuring the impact on retrieval accuracy (40-60% improvement), hallucination rates, and query success.

We present production-ready architectural patterns that attendees can implement immediately: semantic-aware splitting that preserves document structure and domain logic, streaming pipelines for processing large files that exceed RAM capacity, adaptive optimization through retrieval feedback loops, and multimodal handling across text, code, and structured documents.

To prove these patterns work at scale, we've released an open-source implementation available on GitHub and PyPI (pip install chunking-strategy). The codebase demonstrates thread-safe parallel processing, comprehensive error handling, and clean abstractions teams can customize for their domains: no vendor lock-in, just production-quality code you can own and extend.
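The core contrast can be shown in a few lines. This is our simplified sketch, not the `chunking-strategy` package's API: fixed character-count splitting versus a sentence-aware splitter that only ever emits whole sentences:

```python
import re

def naive_chunks(text: str, size: int = 50) -> list[str]:
    """Fixed character-count splitting: happily severs words and clauses."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def semantic_chunks(text: str, max_size: int = 120) -> list[str]:
    """Sentence-aware splitting: pack whole sentences into each chunk,
    so no retrieval unit starts or ends mid-thought."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for s in sentences:
        if current and len(current) + len(s) + 1 > max_size:
            chunks.append(current)   # flush before exceeding the budget
            current = s
        else:
            current = f"{current} {s}".strip()
    if current:
        chunks.append(current)
    return chunks

doc = ("Clause 4.2 limits liability to fees paid. "
       "Clause 4.3 excludes indirect damages. "
       "Clause 5 governs termination notice periods.")

print(naive_chunks(doc, 50))     # clauses cut mid-word
print(semantic_chunks(doc, 80))  # one intact clause per chunk
```

Production systems layer structure awareness (headings, code blocks, tables) and retrieval feedback on top of this, but the failure mode being fixed is exactly the one the naive splitter exhibits here.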

AI, Data Science, and Emerging Tech
VYAS - 1 - Room#VY104
15:05
45min
Will AI Replace Writers? No, But It Will Replace Writers Who Don’t Evolve & Adapt
Kalyani Desai, Yash Guddeti

AI is transforming the writing world at a faster pace than any previous technological shift. But instead of replacing writers, AI is reshaping what writers do. This talk examines how generative AI is automating low-level tasks such as drafting, restructuring, and summarizing while elevating the writer's role to encompass strategy, judgment, UX thinking, and information design. We’ll break down what AI can and cannot do, real examples of AI failures, and the new hybrid workflow where humans + machines create better content together. The future belongs not to writers who resist AI, but to those who evolve with it.

AI, Data Science, and Emerging Tech
VYAS - 1 - Room#VY124
15:30
15:30
30min
Booth showcase - Wrap Up & Break
(Booths) VYAS - G - Open area
15:50
15:50
10min
Break
(Workshop) VYAS - G - Room#VY004
15:50
10min
Break
VYAS - G - Room#VY003
15:50
10min
Break
VYAS - G - Room#VY015
15:50
10min
Break
VYAS - G - Room#VY016
15:50
10min
Break
VYAS - 1 - Room#VY124
15:50
10min
Break
VYAS - 1 - Room#VY102
15:50
10min
Break
VYAS - 1 - Room#VY103
15:50
10min
Break
VYAS - 1 - Room#VY104
16:00
16:00
60min
Closing Keynote: Floor8 Terrace
(Workshop) VYAS - G - Room#VY004
16:00
60min
Closing Keynote: Floor8 Terrace
VYAS - G - Room#VY003
16:00
60min
Closing Keynote: Floor8 Terrace
VYAS - G - Room#VY015
16:00
60min
Closing Keynote: Floor8 Terrace
VYAS - G - Room#VY016
16:00
60min
Closing Keynote: Floor8 Terrace
VYAS - 1 - Room#VY124
16:00
60min
Closing Keynote: Floor8 Terrace
VYAS - 1 - Room#VY102
16:00
60min
Closing Keynote: Floor8 Terrace
VYAS - 1 - Room#VY103
16:00
60min
Closing Keynote: Floor8 Terrace
VYAS - 1 - Room#VY104
16:00
60min
Closing Keynote: Floor8 Terrace
(Booths) VYAS - G - Open area
16:00
60min
Build Native. Build Open. Build the Future. (In HINDI only)
Dr. Vijay D. Gokhale

Theme: Native Development × Engineering Excellence × India’s open source future

Goal: To strengthen and grow the Open Source ecosystem in India by empowering the developer community and raising engineering standards across academia and industry, encouraging deeper participation, upstream contribution, and long-term capability building.

Keynote Outcome (What Attendees Should Leave With)
By the end of the keynote closing, the community should be inspired and enabled to:
- Adopt a Native Development Mindset
- Strengthen Grassroots Engineering Foundations
- Embrace Cross-Domain Learning
- Leverage AI as an Engineering Multiplier
- Commit to Open Contribution and Community Ownership
- Build for Global Impact from India

Opening, Keynotes, Closing: Floor8 Terrace
17:00
17:00
10min
Break
Opening, Keynotes, Closing: Floor8 Terrace
17:00
10min
Break
(Workshop) VYAS - G - Room#VY004
17:00
10min
Break
VYAS - G - Room#VY003
17:00
10min
Break
VYAS - G - Room#VY015
17:00
10min
Break
VYAS - G - Room#VY016
17:00
10min
Break
VYAS - 1 - Room#VY124
17:00
10min
Break
VYAS - 1 - Room#VY102
17:00
10min
Break
VYAS - 1 - Room#VY103
17:00
10min
Break
VYAS - 1 - Room#VY104
17:00
10min
Break
(Booths) VYAS - G - Open area
17:10
17:10
20min
Conference Conclusion: Floor8 Terrace
(Workshop) VYAS - G - Room#VY004
17:10
20min
Conference Conclusion: Floor8 Terrace
VYAS - G - Room#VY003
17:10
20min
Conference Conclusion: Floor8 Terrace
VYAS - G - Room#VY015
17:10
20min
Conference Conclusion: Floor8 Terrace
VYAS - G - Room#VY016
17:10
20min
Conference Conclusion: Floor8 Terrace
VYAS - 1 - Room#VY124
17:10
20min
Conference Conclusion: Floor8 Terrace
VYAS - 1 - Room#VY102
17:10
20min
Conference Conclusion: Floor8 Terrace
VYAS - 1 - Room#VY103
17:10
20min
Conference Conclusion: Floor8 Terrace
VYAS - 1 - Room#VY104
17:10
20min
Conference Conclusion: Floor8 Terrace
(Booths) VYAS - G - Open area
17:10
20min
Closing and Conference Trivia - Win DevConf Swag!

Join us for the conference closing and take one last chance to win some great swag in our conference trivia! We will also be sharing updates on what’s coming next for DevConf events.

05:10 PM The Interactive Wrap-up: Rapid-fire Trivia Quiz, with swag for the winners
05:20 PM The Gratitude & Vibe: Final Thank You to MIT-RH, speakers, volunteers, attendees, and the community
05:23 PM Teasers for Global DevConf chapters, conference dates, and building the "Next Year 2027 Vibe"
05:25 PM Social Media Guidelines
05:30 PM The Final Farewell: Official Closing of DevConf.IN 2026.

Opening, Keynotes, Closing: Floor8 Terrace