Generative AI is revolutionizing how we build modern applications, but for developers it can be daunting, particularly when it comes to evaluating models, building with Gen AI, and finding a path to production. It doesn't have to be! Join us in this session to get ahead of the curve in AI-enabled cloud-native application development.
Using container technology and open source models from Hugging Face, we'll show how to practically integrate Gen AI into an existing application, from your local development environment all the way to deployment on Kubernetes. Why work with local, open-source models? From reducing cloud computing costs to keeping control of your sensitive data and avoiding vendor lock-in, it's an increasingly popular way for developers to prototype AI applications quickly. We'll demonstrate the whole AI journey: assessing models, building applications with LLMs, and deploying and serving AI applications.
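For a flavor of how simple local prototyping can be, here is a minimal sketch using the Hugging Face transformers library; the model ID is an illustrative choice of a small open model, not a session requirement.

```python
# Minimal local-inference sketch with Hugging Face transformers.
# The model ID is an illustrative small open model; any local
# text-generation model from the Hub works the same way.
from transformers import pipeline

generator = pipeline("text-generation", model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")
result = generator("Suggest a name for a cloud-native AI demo app:", max_new_tokens=40)
print(result[0]["generated_text"])
```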
Discover how Code for GovTech (C4GT) leverages open-source collaboration to build innovative solutions for real-world challenges in Digital Public Goods and Infrastructure. Join us to learn about our initiatives, connect with like-minded individuals, and explore opportunities to contribute to impactful projects driving public good.
APIs have transformed the automotive and mobility industries, enabling seamless connectivity between vehicles, consumers, and enterprise systems. This innovation has unlocked new revenue streams and data-driven features for automakers. However, it has also introduced significant cybersecurity risks, as API vulnerabilities become a growing target for attackers.
From a lone vehicle hack in 2015 to a staggering 308% rise in API-related attacks by 2024, the story of automotive cyber threats reveals a shocking truth. APIs, now the backbone of connected cars, have become a goldmine for hackers, making API security more critical than ever. (Source: Upstream reports)
API vulnerabilities in connected cars aren't just about data leaks; they're about lives and business integrity. Imagine a hacker accessing a fleet's real-time location data, manipulating engine diagnostics, or remotely controlling functions of the car such as opening the doors, honking the horn, or flashing the lights. These are not science fiction scenarios; they've already happened. Yet most organizations still treat API security as an afterthought, patching issues instead of addressing systemic flaws.
The session will be divided into three sections, focusing on Connected Cars API and Fleet Management:
Introduction and Importance of API Security (10 minutes)
Understand the role of APIs in connected cars and fleet management.
Learn why securing these APIs is critical for safety, privacy, and business operations.
Real-World Examples (10 minutes)
Explore incidents where insecure APIs led to breaches in connected car systems and fleet management platforms.
Discuss the impact on vehicle safety, data integrity, and fleet operations.
API Security Basics for Connected Cars and Fleet Management (10 minutes)
Identify common attack surfaces in connected vehicle APIs.
Learn how to recognize and address vulnerabilities specific to these systems.
We will conclude the session with 5 minutes for Q&A.
FOSS United is a non-profit foundation that aims to promote and strengthen the Free and Open Source Software (FOSS) ecosystem in India. This booth will showcase some of its activities, particularly the FOSS United project grants program and flagship events.
Discover how FreeIPA simplifies and strengthens identity management and security in enterprise environments. FreeIPA provides an integrated solution combining LDAP, Kerberos, DNS, and certificate management to deliver centralized authentication, authorization, and identity services. Designed to scale and easy to use, it ensures secure access controls and straightforward management of user identities within distributed systems while integrating smoothly into hybrid and cloud-native ecosystems.
Visit our booth for real-world use cases, live demonstrations, and the latest FreeIPA innovations, along with knowledge of how it has equipped organizations to embrace effective, robust security practices in highly complex IT landscapes. Everyone will find something to their liking!
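For a taste of what centralized identity management looks like day to day, a few illustrative FreeIPA CLI calls (all user, group, and host names are placeholders):

```bash
# Illustrative FreeIPA CLI calls; names are placeholders.
kinit admin                                   # authenticate via Kerberos
ipa user-add jdoe --first=Jane --last=Doe     # create a centrally managed user
ipa group-add-member developers --users=jdoe  # grant group membership
ipa host-add web01.example.com                # enroll a host record
```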
This Bootable Containers (Bootc) workshop offers an engaging mix of theory and hands-on practice, providing participants with a comprehensive understanding of the concept and practical applications.
It highlights the fundamentals, core advantages such as portability and isolation, and real-world use cases that require purpose-built operating systems, for example:
-> Resource-constrained environments in edge computing
-> Workloads that require specific GPU acceleration configuration
-> Workloads that require specific network performance
-> Deployments that require specific performance profiles or real-time kernels
-> Device deployments that require specific security configurations and secure onboarding to minimize attack surface
**Key Activities:**
-> Build a Bootable Container: Create a minimal Linux image, configure services, and boot on hardware or VMs.
-> Deploy Real-World Scenarios: Test and validate in practical setups.
-> Interactive Learning: Hands-on tasks, group deployments, and guided troubleshooting.
**Takeaway:**
Discuss future possibilities, share challenges, and access additional resources (workflow guides, GitHub repos)
**Detailed Agenda**
-> Intro to Bootc: [15 min]
-> Demo: Building a Bootable Container [15 min]
-> Hands-On Lab: Deploying Bootc [45 min]
-> Q&A and Wrap-Up [15 min]
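To make the "Build a Bootable Container" step concrete before the workshop, here is a minimal, illustrative Containerfile; the base image tag and package choice are examples rather than the workshop's exact material.

```dockerfile
# Illustrative Containerfile for a bootable container image.
# Base image tag and package choice are examples only.
FROM quay.io/fedora/fedora-bootc:41
RUN dnf -y install nginx && systemctl enable nginx
```

Built with podman build like any other container image, the result can then be booted on hardware or a VM, or applied to a running bootc system with, for example, `bootc switch quay.io/example/my-bootc-os:latest` (the registry path is a placeholder).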
OpenTelemetry is revolutionizing the observability space, rapidly becoming the go-to standard for instrumentation and data collection. As modern applications grow in complexity, having robust observability practices is no longer optional. OpenTelemetry simplifies this by offering a unified framework for collecting and processing metrics, logs, and traces.
This talk will introduce the core components of OpenTelemetry, explore its current capabilities, and provide practical guidance on integrating it into your application stack. Whether you're just beginning your observability journey or looking to standardize across distributed systems, this talk will equip you with the knowledge to get started confidently.
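As a preview of how little code is needed to get started, here is a minimal tracing sketch with the OpenTelemetry Python SDK; a console exporter stands in for the OTLP exporter you would use in production, and the service and attribute names are illustrative.

```python
# Minimal tracing sketch with the OpenTelemetry Python SDK.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")  # illustrative service name
with tracer.start_as_current_span("process-order") as span:
    span.set_attribute("order.id", 1234)  # attach queryable metadata
```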
Explore the next generation of Kubernetes cluster management with Hypershift and Red Hat OpenShift Service on AWS (ROSA) with Hosted Control Planes (HCP). As Kubernetes adoption accelerates across industries, managing clusters efficiently across multiple cloud providers remains a critical challenge. Hypershift revolutionizes this process by offering a centralized control plane, enabling seamless scalability, enhanced resilience, and cost efficiency.
Discover how ROSA HCP streamlines operations by separating control plane pods from worker nodes, significantly reducing provisioning time, lowering infrastructure costs, and simplifying scaling. With a fully managed cloud-native environment, organizations can offload infrastructure complexities while benefiting from OpenShift’s robust security, compliance, and monitoring capabilities.
Visit our booth for a live demonstration showcasing cluster creation, application deployment, and real-time scaling, and see firsthand how Hypershift and ROSA HCP can transform your Kubernetes strategy. Revolutionize your cloud-native journey today!
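As an indicative example of how simple cluster creation becomes, a ROSA CLI invocation looks roughly like the following; this is an abridged sketch, since real clusters also need account roles and networking options, and all names are placeholders.

```bash
# Abridged, indicative ROSA invocation; names and region are placeholders.
rosa create cluster --cluster-name demo-hcp --sts --hosted-cp \
  --region ap-south-1 --replicas 2
```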
Sustainability is a growing concern in IT, and Kepler (Kubernetes-based Efficient Power Level Exporter) is a groundbreaking project addressing energy efficiency in cloud-native environments. This session dives into how Kepler uses eBPF to monitor energy consumption metrics for containers and Pods, providing actionable insights through Prometheus APIs.
Learn how Kepler supports sustainability reporting and optimizes workload scheduling in platforms like OpenShift. Explore use cases, benefits, and strategies to leverage Kepler for greener cloud operations. Join us to lead the charge in sustainable IT practices!
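As a hint of what those Prometheus-facing insights look like, an illustrative PromQL query over Kepler's exported energy counters; the metric name follows Kepler's documentation but may change between releases.

```
# Per-pod energy consumption rate, as an illustrative PromQL query.
sum by (pod_name, container_namespace) (
  rate(kepler_container_joules_total[5m])
)
```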
Red Hat teams will demonstrate the latest features in Tekton, Argo CD, and Shipwright, introduce the community to building and deploying pipelines securely using cloud-native CI/CD products, and be available for deep dives and clarifications from the audience.
In this workshop, application developers will learn how to ideate, prototype, build, and refine AI applications directly within their local development environment. Using Podman AI Lab, participants will explore how to run and connect AI models seamlessly, enabling hands-on experimentation. Topics include selecting the right large language model (LLM), crafting and testing prompts, working with custom data, and benchmarking performance. Additionally, we'll dive into InstructLab to demonstrate how it can be used to fine-tune LLMs for specific use cases. This session is ideal for developers eager to harness the power of AI in their applications.
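As a taste of the hands-on portion, here is a sketch of calling a model served locally by Podman AI Lab through its OpenAI-compatible endpoint; the port below is a placeholder, since AI Lab shows the real one when you start a model service.

```python
# Sketch of querying a Podman AI Lab model service (OpenAI-compatible API).
import requests

resp = requests.post(
    "http://localhost:35000/v1/chat/completions",  # placeholder port
    json={"messages": [{"role": "user", "content": "Summarize what Podman does."}]},
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```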
Join us at our booth to discover the power of digital accessibility.
Learn how small, mindful changes can enhance the digital experience for all users—regardless of their abilities. Whether you're a developer, designer, student, or simply an advocate for inclusive technology, our booth invites you to explore and understand the significance of accessibility.
Key Highlights:
* What is Accessibility:
Discover the basics of web accessibility and its importance for an inclusive digital experience. Understand how accessibility benefits all users—improving website navigation, speed, and usability.
* WCAG Guidelines:
Explore the core principles of accessibility—Perceivable, Operable, Understandable, and Robust (WCAG)—and learn how these guidelines apply to website design and development, without needing a technical background.
* Live Accessibility Testing:
Interact with live demonstrations and see real-time accessibility testing using tools like Lighthouse and axe DevTools. Witness firsthand how issues such as missing text descriptions or poor color contrasts can affect user experience.
* Screen Reader Demonstration:
Engage with our demonstration to experience how visually impaired users navigate websites with screen readers. Learn about the challenges they face and how simple design changes can significantly improve accessibility.
* Making Websites Easy to Navigate:
Learn how simple design choices—like clear headings, easy navigation, and keyboard-friendly features—can make websites more accessible to all users, even those who cannot use a mouse.
* Semantic HTML & Accessible Design:
Discover the importance of using the right HTML elements and thoughtful design choices that enhance accessibility for everyone. See how small adjustments can make a big difference.
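A tiny illustration of the point above:

```html
<!-- A native <button> is focusable, keyboard-operable, and announced
     correctly by screen readers out of the box. -->
<button type="button">Save changes</button>

<!-- A clickable <div> needs a role, tabindex, and key handlers just to
     approximate the same behavior - easy to get wrong. -->
<div class="btn" role="button" tabindex="0">Save changes</div>
```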
Why Attend?
* For Everyone: Whether you’re a tech enthusiast, designer, student, or simply curious about making the web more inclusive, our booth offers easy-to-understand insights and hands-on learning opportunities.
* Interactive Learning: Engage with practical demonstrations and gain valuable knowledge you can apply in your work or daily life to improve digital accessibility.
* Real-World Impact: See how simple changes can create lasting, positive effects on users’ experiences and how you can contribute to building a more inclusive digital future.
Join us to interact, learn, and understand how small adjustments can build a more accessible, inclusive, and user-friendly web for all. Together, let's shape a digital world that works for everyone, regardless of their abilities!
Join us at our booth to discover the full potential of the Ansible Automation Platform (AAP), the ultimate solution for scaling and streamlining enterprise automation. Designed to empower teams, AAP integrates seamlessly into hybrid cloud environments, enabling efficient and consistent automation across diverse infrastructure.
We will explore two transformative components of the AAP ecosystem:
- Ansible Event-Driven Automation (EDA)
- Ansible Lightspeed
Event-Driven Ansible simplifies IT automation by responding to events in real time. It processes event data, determines the right actions, and automates tasks to address issues quickly. By leveraging observability data from existing tools, it enhances operational efficiency throughout the IT lifecycle.
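As an illustrative sketch of the idea, an Event-Driven Ansible rulebook that reacts to a webhook event; the playbook path and payload fields are placeholders.

```yaml
# Illustrative Event-Driven Ansible rulebook; paths and fields are placeholders.
- name: Respond to service alerts
  hosts: all
  sources:
    - ansible.eda.webhook:
        host: 0.0.0.0
        port: 5000
  rules:
    - name: Restart the service when it is reported down
      condition: event.payload.status == "down"
      action:
        run_playbook:
          name: playbooks/restart_service.yml
```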
Ansible Lightspeed with watsonx Code Assistant is a generative AI service designed by and for Ansible platform engineers and developers. It accepts natural-language prompts entered by a user and then interacts with IBM watsonx foundation models to produce code recommendations built on Ansible best practices. Ansible Lightspeed can help you convert subject matter expertise into trusted, reliable Ansible code that scales across teams and domains.
Engage with hands-on demos and interactive discussions to understand how AAP can simplify operations, increase productivity, and future-proof your automation strategy. Whether you're starting your automation journey or looking to scale, we have something for everyone.
Let’s transform the way you automate!
We still face challenges in attracting, retaining, and growing diverse talent in the technology industry. This session can be delivered as a lightning talk or a full talk, depending on how the program structure evolves. Its purpose is to delve into the problem space briefly, but to focus on the disruptive ideas we can adopt to address this challenge; these ideas are drawn from what companies in this sector are already doing. Having been recognized as a "Woman in Tech" by Zinnov, and of late struggling to hire senior diverse technical talent, it pains me to see that not enough has been done in this space, even though everyone understands the business benefits of increasing diversity. I hope to shake off some prejudices and inspire young talent to grow in technology.
"Congratulations! You are going to be parents!!!"
It's been a year since we heard this, and OUR life has changed ever since. A good change, a great change! We never knew our little bundle of joy could bring us so much happiness. This year has been a roller coaster ride so far, and we are loving every part of it.
Month by month as she grew, I realized there is so much to learn from her. Yes, you heard it right… my one-year-old is my QA teacher! How? Let's evaluate.
In today's cloud-native ecosystem, Kubernetes has emerged as the de facto standard for orchestrating containerized applications. However, as organizations adopt Kubernetes for critical workloads, the need for robust backup, disaster recovery, and migration solutions becomes paramount. This session will explore Velero, an open-source tool that provides comprehensive backup and restore capabilities for Kubernetes clusters and their persistent storage.
We will dive into the key features of Velero, including backup of Kubernetes resources and persistent volumes, cluster migration, and disaster recovery. Participants will learn how Velero integrates seamlessly into existing Kubernetes environments, offering a straightforward, scalable solution to protect against data loss, maintain business continuity, and manage multi-cluster workloads.
Whether you're managing production-grade OpenShift/Kubernetes environments or exploring advanced Kubernetes-native solutions, this talk will equip you with the knowledge to build and maintain resilient systems.
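For a sense of how approachable the workflow is, a typical Velero command sequence (backup and namespace names are illustrative):

```bash
# Typical Velero workflow; names are illustrative.
velero backup create app-backup --include-namespaces shop
velero backup get                               # list existing backups
velero restore create --from-backup app-backup  # restore into the cluster
```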
Managing enormous data streams while preserving smooth scalability is a critical challenge in the age of distributed systems. Businesses can function effectively in the face of constant data flow thanks to event-driven architectures, which offer the framework for processing billions of events in real time. We will examine how Apache Kafka and Ansible work together to create and administer reliable event-driven systems in this presentation.
As the foundation of streaming infrastructures, Apache Kafka enables enterprises to manage low-latency, high-throughput event processing. We'll see how Kafka's distributed architecture supports real-time data pipelines for use cases like analytics, monitoring, and microservices communication. Ansible, on the other hand, acts as the automation wizard, maintaining configurations, coordinating Kafka deployments, and making sure systems stay robust and fault-tolerant even as workloads scale.
This session will emphasize the following, using real-world examples and useful code snippets:
How Kafka maintains fault tolerance and controls scalable data streams.
How Kafka cluster deployments and maintenance are automated and made simpler with Ansible.
Best practices that combine automation and real-time data streaming for a smooth DevOps process.
By the end of this session, you’ll walk away with a clear blueprint for leveraging Kafka and Ansible together to build dynamic, event-driven systems capable of scaling with your organization’s needs.
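In the spirit of those code snippets, a minimal produce/consume sketch with the kafka-python client; the broker address and topic name are placeholders for a local cluster.

```python
# Minimal produce/consume sketch with the kafka-python client.
from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("events", b'{"type": "page_view", "user": 42}')
producer.flush()  # block until the broker acknowledges the event

consumer = KafkaConsumer(
    "events",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    consumer_timeout_ms=5000,  # stop iterating when the topic goes quiet
)
for message in consumer:
    print(message.value)  # process each event in arrival order
```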
This paper introduces an innovative and intelligent road navigation system designed to bridge the gap in autonomous capabilities for vehicles lacking Advanced Driver Assistance Systems (ADAS). The proposed method integrates machine learning techniques with real-time decision-making to provide a comprehensive solution for safer and smarter transportation. At its core is a YOLOv8-based object detection module capable of identifying potholes, traffic signs, symbols, vehicles, and other road objects in real-time.
The system uses CANBUS communication to ensure seamless integration between software modules and hardware components. CANBUS, a robust vehicle networking standard, enables efficient data exchange between the object detection module and Electronic Control Units (ECUs). These ECUs interpret detection signals to dynamically adjust the vehicle's speed, trajectory, and navigation path. The implementation employs STM32 microcontrollers coupled with CAN shields, where the STM32 handles real-time signal processing, and the CAN shield ensures high-speed, fault-tolerant communication across subsystems. This setup enables precise coordination of vehicle control modules.
The hardware design emphasizes modularity, allowing the system to be retrofitted into non-ADAS vehicles with minimal modifications. Additionally, sensor fusion techniques enhance the reliability and robustness of detections and decision-making under various environmental conditions. By extending autonomous navigation capabilities to non-ADAS vehicles, the system offers a cost-effective solution for improving road safety, accessibility, and vehicle intelligence.
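As a purely illustrative sketch of the software side of such CAN communication (the channel, arbitration IDs, and payload bytes are toy values, not the paper's actual implementation):

```python
# Illustrative python-can sketch; all values are toy placeholders.
import can

bus = can.interface.Bus(channel="can0", interface="socketcan")
alert = can.Message(arbitration_id=0x123, data=[0x01], is_extended_id=False)
bus.send(alert)                # e.g. signal "pothole detected" to an ECU

frame = bus.recv(timeout=1.0)  # read an ECU response, if any
if frame is not None:
    print(hex(frame.arbitration_id), frame.data)
bus.shutdown()
```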
As artificial intelligence (AI) and machine learning (ML) continue to drive innovation across industries, organizations face challenges in managing and scaling AI workloads effectively. Traditional infrastructures often struggle to meet the demands of modern AI applications, which require massive computational power, flexibility, and streamlined collaboration. This is where Red Hat OpenShift AI steps in, providing a Kubernetes-based platform designed to optimize, scale, and manage AI/ML workflows in cloud-native environments.
This talk will explore how OpenShift AI empowers organizations to accelerate AI model development and deployment at scale, integrating seamlessly with popular AI frameworks like TensorFlow and PyTorch. We will dive into its key features, including support for GPU and hardware acceleration, hybrid cloud flexibility, and powerful MLOps tools that enable the automation of the machine learning lifecycle. Additionally, we will discuss real-world use cases and demonstrate how OpenShift AI can streamline AI workflows, foster collaboration, and ensure secure, compliant deployment.
By the end of the session, attendees will gain a deeper understanding of how OpenShift AI simplifies the complexities of managing AI models, allowing teams to focus on innovation and delivering impactful AI solutions faster and more efficiently.
In this talk, we will discuss a unique architectural design that we used to create Virtual Machines on bare-metal servers with minimal latency of just a few milliseconds. The VMs created are fully isolated, ensuring high security and reliability. This method has enabled us to achieve 99.99% availability for our product offerings hosted on our infrastructure. The talk will also focus on the optimal usage of open-source tools like Nomad, Firecracker, and Ignite, which are key to this architecture and have allowed us to create over 2 million ephemeral VMs in the first few months of its implementation.
A key takeaway from this session will be learning how engineering teams can independently create VMs on bare metal with minimal code, using the open-source tools Nomad, Firecracker, and Ignite, reducing reliance on managed services from the cloud provider. Attendees will also discover innovative ways to achieve VM creation with latencies of a few milliseconds.
We'll discuss strategies for minimizing cloud costs while maintaining high availability, along with how we address the security challenges involved.
Participants will gain practical insights into how we achieved 99.99% availability, helping them understand the tangible business value this approach can bring to their own platforms.
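For context on why Firecracker keeps this fast: its VMs are configured through a REST API over a Unix socket. The abridged sketch below (socket path and sizes are placeholders, and a kernel and rootfs must be configured first) illustrates the mechanism; it is not our production code.

```bash
# Firecracker is driven via a REST API on a Unix socket.
curl --unix-socket /tmp/firecracker.sock -X PUT 'http://localhost/machine-config' \
  -d '{"vcpu_count": 1, "mem_size_mib": 128}'
curl --unix-socket /tmp/firecracker.sock -X PUT 'http://localhost/actions' \
  -d '{"action_type": "InstanceStart"}'
```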
Performance analysis is crucial for understanding and optimizing the behavior of complex computing systems. As systems grow in scale and complexity, especially in the contexts of cloud computing, distributed systems, and big data, performance bottlenecks and inefficiencies become harder to identify and address without a systematic approach.
This meetup will help participants get started with understanding and analyzing standalone and distributed systems with open-source tools.
We would like to cover the following topics in our meetup:
1. Understanding the basics of system performance and its significance.
2. Running performance benchmarking on distributed systems / Kubernetes clouds with kube-burner.
3. Practical analysis of live system performance using tools and dashboards (Grafana).
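As an indicative example of topic 2, a kube-burner run looks roughly like this; the config file name is a placeholder and flags vary by version.

```bash
# Indicative kube-burner run; config file name is a placeholder.
kube-burner init -c workload-config.yaml --uuid perf-run-001
```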
During this session we will explore various code vulnerabilities that attackers can exploit to trigger denial-of-service (DoS) conditions in widely used software components. Through a series of real-world examples, we will demonstrate the mechanics behind these vulnerabilities and their potential impact on system stability and availability.
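One classic member of this vulnerability family, shown purely as an illustration (not necessarily one of the session's examples), is catastrophic regex backtracking, or ReDoS:

```python
# Illustration only: catastrophic regex backtracking (ReDoS).
# Nested quantifiers give the engine exponentially many ways to fail,
# so a short crafted input can pin a CPU core for a very long time.
import re

EVIL = re.compile(r"^(a+)+$")
payload = "a" * 28 + "b"   # never matches, but takes ages to prove it
EVIL.match(payload)        # warning: this call will hang for a while
```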
Forms are the crucial bridges between users and systems, yet they remain one of the biggest pain points in user experience. This talk dives deep into modern form design patterns that boost completion rates while reducing user frustration. Through live examples and case studies, we'll explore how seemingly simple decisions in form design can dramatically impact user success rates. From input field best practices to validation patterns, learn how to transform your forms from conversion killers into smooth user experiences.
Step into the world of cutting-edge AI innovation in this session, where you’ll experience the powerful synergy between InstructLab and Red Hat OpenShift AI (RHOAI). Together, we’ll explore how to fine-tune open-source models using InstructLab and seamlessly bring them to life in an OpenShift (k8s) environment on RHOAI.
We will help you discover the magic of ilab combined with distributed workloads on RHOAI, enabling faster and more efficient training for even the most demanding AI models. You'll dive deep into an end-to-end training journey: generating synthetic data (SDG) to enhance your model's capabilities with ilab, fine-tuning IBM's open-source Granite model in a phased approach on OpenShift AI, and culminating in a rigorous evaluation process using Mistral-Eval as the judge with MTBench.
This session isn't just about tools and workflows; it's about empowering you to tackle real-world AI challenges with confidence and creativity. Start small with RHEL AI (ilab) and then expand your MLOps journey on RHOAI with seamless integration. Join us to unlock the full potential of OpenShift AI and take your AI training and experimentation to the next level.
The rise of large language models (LLMs) has opened up exciting possibilities for developers looking to build intelligent applications. However, the process of adapting these models to specific use cases can be difficult, requiring deep expertise and substantial resources. In this talk, we'll introduce you to InstructLab, an open-source project that aims to make LLM tuning accessible to developers and engineers of all skill levels, on consumer-grade hardware.
We'll explore how InstructLab's innovative approach combines collaborative knowledge curation, efficient data generation, and instruction training to enable developers to refine foundation models for specific use cases. In this workshop, you’ll be provided a RHEL VM and learn how to enhance an LLM with new knowledge and capabilities for targeted applications, without needing data science expertise. Join us to explore how LLM tuning can be more accessible and democratized, empowering developers to build on the power of AI in their projects.
The Products and Technology Learning (PTL) group within Red Hat is responsible for creating technical enablement and training materials for employees and partners. With the rapid increase of new products, and the need to create corresponding training material quickly, an “open training” initiative is being launched to “crowdsource” the creation of content by employees and partners.
This talk demonstrates how we leveraged open source tools like Backstage, Eclipse Che, and Antora to provide a self-service authoring environment to simplify and automate the process of content creation.
In today’s cloud-native landscape, resource optimization is critical to maintaining operational efficiency, controlling costs, and improving cluster performance. The latest enhancements to the Right Sizing feature in Red Hat Advanced Cluster Management for OpenShift (RHACM) provide actionable insights and tools to address these needs.
This session will explore how the new capabilities enable cluster administrators to make smarter decisions about namespace- and VM-level resource requests and limits. Attendees will learn how to leverage improved feedback loops, enforce best practices, and avoid resource wastage.
Through live demonstrations and real-world use cases, this talk will highlight how RHACM’s new right-sizing experience enhances visibility, encourages proactive resource adjustments, and promotes cost savings. Whether you're managing OpenShift workloads at scale or seeking to fine-tune resource allocation, this session will equip you with the strategies, techniques, and tools to achieve optimal performance.
Attendees will leave with the knowledge to improve their OpenShift cluster efficiency, reduce operational costs, and ensure the right resources are being used in the right places, at the right time.
Data breaches have become a major concern in today’s interconnected world, highlighting the urgent need to protect sensitive information. Logs, often overlooked, are treasure troves of critical data that can pose significant security risks if exposed. They frequently contain personal information, financial details, authentication credentials, and other sensitive data. When mishandled or left unprotected, these logs can become entry points for data breaches, unauthorized access, and identity theft. Moreover, failure to safeguard logs can lead to violations of legal and regulatory requirements, potentially resulting in hefty fines and reputational damage.
While logging is indispensable for debugging and monitoring applications, it must be approached cautiously to avoid exposing sensitive data. Careless logging practices, such as recording passwords, personal user details, or financial information, can lead to severe security vulnerabilities. To mitigate these risks, developers must adopt secure logging practices. This includes avoiding the logging of sensitive information altogether or using techniques like masking or redaction when necessary. Logging levels should be configured thoughtfully, ensuring detailed logs are used only in development or debugging stages while minimizing exposure in production environments. Logs should also be securely stored, encrypted, and access-controlled to prevent unauthorized access.
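As a minimal sketch of the masking idea in Python (the regex is deliberately simplistic and illustrative only):

```python
# Sketch of a redaction filter: mask card-like digit runs before any
# handler writes them out. The pattern is deliberately simplistic.
import logging
import re

CARD = re.compile(r"\b\d{13,16}\b")

class RedactingFilter(logging.Filter):
    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = CARD.sub("[REDACTED]", str(record.msg))
        return True  # keep the record, just sanitized

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("payments")
logger.addFilter(RedactingFilter())
logger.info("charge failed for card 4111111111111111")  # logs [REDACTED]
```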
In this talk, we will explore and learn about effective methods and strategies for securing information while maintaining the efficiency and reliability of log management systems. By implementing these best practices, organizations can strengthen their security defenses and protect sensitive data from potential threats. We will also look at some real-life data breaches from history, and how they could have been prevented.
Passwords have existed since the 1960s, when Fernando Corbato invented them for the Compatible Time-Sharing System (CTSS). As the first mode of authentication, they have been instrumental in protecting access rights to digital services for decades. However, the twenty-first century has seen a sharp rise in cyberattacks and financial fraud, especially in developing countries such as those in south and southeast Asia; these attacks use social engineering and phishing to make legitimate users give up their credentials. In the past few years, hardware-backed cryptography and cryptographic authentication technology have risen and are now being adopted as a strong alternative mode of authentication. The efforts of the FIDO Alliance have made passkeys a strong form of authentication, now being adopted by various web services, including banking services, to provide phishing-resistant authentication and fight these cyberattacks.
As environments, workloads, and AI training become more and more distributed, so does the need for tracing, logging, and monitoring to support safety, security, availability, and resilience. This session introduces a Validated Pattern approach to Edge Observability, where CTOs, product teams, engineering, system integrators, ISVs, and customers come together to ease the deployment of a common observability infrastructure. Apart from infrastructure monitoring of compute and memory utilization, a leading practice also supports metrics around power consumption to support sustainability efforts, as well as enabling workload-specific observability. This session is also an open invitation for interested parties to join and evolve our open source approach to making observability accessible to everyone interested in edge, distributed, and federated environments. We will walk through the associated git repo, the components, and a roadmap of planned features.
InstructLab is an open-source AI community project with a mission to democratize the future of generative AI. By removing technical barriers and fostering collaboration, it empowers professionals across all skill levels to create customized AI models tailored to their domain-specific needs. InstructLab leverages tools like vLLM for scalable and efficient model deployment and supports a transparent, community-driven approach to improving open-source licensed large language models (LLMs).
This session will explore:
- How InstructLab enables seamless model customization and deployment
- Crafting high-quality seed Q&A pairs for robust synthetic data generation (SDG)
- The cascading effects of seed quality on model performance
- Optimizing SDG workflows for maximum accuracy and relevance
- Advancing the AI ecosystem through open-source collaboration
Whether you're a business analyst, domain expert, or developer, this session provides practical insights to build impactful AI workflows while contributing to a growing ecosystem of accessible and innovative generative AI.
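To make the seed Q&A discussion concrete, an abbreviated sketch of an InstructLab seed file (qna.yaml); the real taxonomy schema has more required fields than shown here.

```yaml
# Abbreviated qna.yaml sketch; the real schema has more required fields.
version: 2
task_description: Answer questions about our product's release process.
seed_examples:
  - question: How often are minor releases published?
    answer: Minor releases are published every six weeks.
  - question: Where are release notes hosted?
    answer: Release notes are published on the documentation portal.
```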
Web Components are transforming the way we build user interfaces, offering developers the ability to create reusable, modular, and framework-agnostic components. This session will explore the importance of Web Components in modern web development, their core features, and the benefits they provide for maintainable, scalable applications. Attendees will learn how to leverage Web Components for cleaner code, improved performance, and seamless integration across different frameworks. Additionally, we'll discuss the critical topic of accessibility within Web Components, ensuring that these components are usable by all users. By the end of the talk, developers will have a solid understanding of Web Components, their practical applications, and best practices for adoption in their own projects.
Key Points:
Introduction to Web Components
Overview of Web Components and their role in modern front-end development.
Why they are a game-changer in creating reusable, framework-independent UI elements.
Core Concepts of Web Components
Custom Elements: Defining new HTML elements with custom behavior.
Shadow DOM: Encapsulation for styling and DOM manipulation.
HTML Templates: Defining reusable HTML structures and content.
Importance and Benefits of Web Components
Reusability: Build once, use anywhere—across different projects or frameworks.
Encapsulation: Isolate component styles and behavior, preventing conflicts with global styles and scripts.
Interoperability: Framework-agnostic, making it easy to use in React, Angular, Vue, or vanilla JavaScript applications.
Performance: Lightweight and optimized components that reduce page load time and improve rendering speed.
Maintainability: Cleaner, more modular code that's easier to maintain, scale, and test.
Real-World Use Cases
Case studies from companies and projects using Web Components.
Examples of large-scale applications benefiting from modular and reusable components.
Demonstrating cross-framework compatibility.
Challenges and Best Practices
Potential challenges with adopting Web Components (e.g., browser compatibility, tooling support).
Best practices for implementing Web Components effectively in existing projects.
Tips for integrating Web Components with modern JavaScript frameworks (React, Angular, Vue).
Ensuring Accessibility in Web Components
Why accessibility should be a priority when building Web Components.
Key accessibility considerations: focus management, keyboard navigation, and screen reader compatibility.
Techniques for making Web Components accessible out of the box (ARIA roles, proper semantic HTML, and custom accessibility attributes).
Addressing common pitfalls in accessibility and how to overcome them.
The Future of Web Components
How Web Components align with the evolution of the web and modern UI development.
Their role in the growing trend of component-driven development and design systems.
Predictions on how Web Components will shape front-end ecosystems.
Key Takeaways:
- A strong understanding of Web Components and their practical applications.
- Knowledge of how to create reusable, modular, and accessible components.
- Insight into how Web Components can help solve common challenges in front-end development, such as component reuse, performance optimization, and framework interoperability.
- Awareness of best practices for ensuring Web Components are accessible and usable for all users, including those with disabilities.
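As a minimal illustration of the core concepts above, a custom element with Shadow DOM encapsulation, usable from any framework or plain HTML:

```js
// Minimal custom element, usable as <user-card username="Asha"></user-card>.
class UserCard extends HTMLElement {
  connectedCallback() {
    if (this.shadowRoot) return; // guard against re-connection
    const shadow = this.attachShadow({ mode: "open" });
    shadow.innerHTML = `
      <style>p { font-family: sans-serif; }</style>
      <p>Hello, ${this.getAttribute("username") ?? "guest"}!</p>
    `;
  }
}
customElements.define("user-card", UserCard);
```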
Modern software development demands more than just functional code—it requires systems that are intuitive to maintain, debug, and support. This session introduces a transformative approach: embedding the needs of customer-facing roles like Support Engineers, SysAdmins, DevOps, and SREs directly into the software development lifecycle. By prioritizing supportability and debuggability from the ground up, organizations can deliver tools and systems that not only enhance usability but also simplify long-term maintenance.
We’ll explore the tangible benefits of this mindset across Customer Experience (CX), Product Experience (PX), and Associate Experience (AX), showcasing how this approach improves satisfaction, efficiency, and collaboration.
Attendees will gain actionable insights into incorporating support-first thinking into every stage of development—from design to deployment. Through real-world examples and proven strategies, this session will equip developers to create products that thrive in the hands of users and support teams alike, setting a new standard for seamless software operations.
In product design, even the smallest details can significantly impact hygiene, usability, and maintenance, especially in industries where cleanliness is prioritized. This research explores the potential of seamless design principles, focusing on the creation of smooth, edge-free surfaces to eliminate dirt accumulation, enhance functionality, and improve user satisfaction. By investigating how eliminating sharp edges and unnecessary transitions can lead to practical, elegant, and user-friendly products, the research demonstrates how these principles can prevent dirt build-up, reduce cleaning effort, and improve overall usability.
The study will explore practical strategies for applying these principles across diverse product categories, such as kitchen tools, home appliances, and healthcare devices, showcasing their broader relevance and innovation potential.
The expected outcomes of this research include:
- An improved understanding of how seamless design enhances hygiene and functionality.
- Techniques for incorporating flow-driven, edge-free transitions into product development.
- Insights into how these principles can address design challenges in industrial and everyday contexts.
This research aims to provide valuable insights for product designers, UX professionals, and innovators looking to integrate aesthetics, practicality, and hygiene into their designs.
Imagine a DevOps team managing a Kubernetes cluster. On a Friday, Alex, an intern, deploys a new app to the Kubernetes cluster but forgets essential labels, resource limits, and annotations. By Monday, the cluster is chaotic: misbehaving workloads, scattered resources, and a flood of alerts. After troubleshooting, the team finds Alex's oversight and realizes the urgent need for proper rules and checks. This is where policy engines step in. They act as the 'responsible adults,' enforcing rules to prevent such chaos and streamline operations. In this session, we'll explore how policy engines enforce Kubernetes security and governance. We'll compare Open Policy Agent (OPA), Kyverno, and jsPolicy, focusing on their features like validation, mutation, and compliance enforcement. Attendees will also learn how to get started with these tools and select the best policy engine to meet specific needs and enhance their Kubernetes environment.
This talk explores the importance of policy engines in Kubernetes security and compares three popular options: OPA, Kyverno, and jsPolicy (with more focus on Kyverno and jsPolicy). The talk highlights:
The role of policy engines in enforcing rules and best practices within Kubernetes clusters
Key functionalities like validation, mutation, and compliance enforcement.
Brief descriptions of OPA, Kyverno, and jsPolicy, emphasizing their unique strengths.
Factors to consider when selecting the right policy engine for your needs.
This presentation will benefit developers and operations professionals seeking to enhance the security and governance of their Kubernetes environments.
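As a flavor of what such enforcement looks like, an illustrative Kyverno policy that would have caught Alex's missing labels (names and messages are placeholders):

```yaml
# Illustrative Kyverno policy: reject any Pod missing a "team" label.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-team-label
spec:
  validationFailureAction: Enforce
  rules:
    - name: check-team-label
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Pods must carry a 'team' label."
        pattern:
          metadata:
            labels:
              team: "?*"
```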
Unlock the full potential of your Flutter app by diving into Remote Config, Localization, and Vertex AI with a focus on 'why' and 'how.' This session will not only explain the importance of these tools for dynamic, personalized, and intelligent app experiences but will also provide hands-on demonstrations. You'll see code examples in action, from configuring Remote Config for real-time feature updates to implementing Localization for seamless multilingual support. Additionally, explore Vertex AI integration through practical code snippets and output demos, showcasing how machine learning can be used to create smart, data-driven features. Get ready to learn by doing, with live coding and real-world outputs that bring these powerful concepts to life.
Contributing to open-source documentation is a powerful way to help users and a great entry point for new contributors. This talk provides an overview of how to create clear, inclusive, and accessible documentation, from setting up tools and following style guides to submitting pull requests and collaborating with maintainers. Learn how your writing can bridge the gap between developers and end users, broaden community engagement, and improve the project experience for everyone.
Problem
While GraphQL APIs provide great flexibility in querying data, their dynamic nature can lead to performance issues, including excessive database queries and API calls, which can sometimes lead to throttling by the data source. These challenges can significantly impact the response time and scalability of applications, especially as they grow in complexity.
Approach
This talk focuses on caching strategies to improve GraphQL API performance:
- Entity-Level Caching: Efficiently cache individual records to minimize database fetches.
- Response Caching: Cache entire query results to reduce redundant data processing and API calls.
- Redis Caching: Implement Redis for high-speed in-memory caching, optimizing latency and throughput.
- DataLoader Utility: Leverage DataLoader to batch and cache database requests, eliminating the N+1 query problem.
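As a sketch of that last point, here is the DataLoader pattern; the batched lookup function is a hypothetical stand-in for a single SQL "WHERE id IN (...)" query.

```ts
// Sketch of DataLoader batching; fetchUsersByIds is a hypothetical stub.
import DataLoader from "dataloader";

async function fetchUsersByIds(ids: readonly string[]) {
  return ids.map((id) => ({ id, name: `user-${id}` })); // fake rows
}

const userLoader = new DataLoader(async (ids: readonly string[]) => {
  const rows = await fetchUsersByIds(ids);          // one call per batch
  const byId = new Map(rows.map((u) => [u.id, u]));
  return ids.map((id) => byId.get(id) ?? null);     // preserve key order
});

// In a resolver, repeated loads within one tick become a single batch:
// const author = await userLoader.load(post.authorId);
```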
Benefits
By incorporating these caching techniques, developers can achieve:
- Enhanced performance through reduced database queries and fewer network calls.
- Scalability to handle more requests without performance degradation.
- Improved efficiency in handling complex, data-intensive queries.
Implementation Strategy
We’ll dive into practical implementation using tools like Redis and DataLoader, along with strategies for cache expiration, ensuring long-term efficiency and scalability of GraphQL APIs.
Why This Is Effective
Unlike traditional REST APIs, GraphQL APIs benefit from a flexible query system that, when paired with the right caching strategies, can significantly reduce overhead, improve load times, and enhance overall system performance. This session provides a blueprint to efficiently scale and optimize GraphQL APIs in real-world applications.
OpenAPI is transforming the way APIs are designed, documented, and consumed across industries. This session will showcase how OpenAPI simplifies API development, ensures consistency, and enhances collaboration across teams. We will dive into its powerful features, including automated documentation, testing, and version control. Learn how OpenAPI is enabling organizations to scale faster, improve quality, and streamline integrations. Join us to explore the future of API design and discover how OpenAPI is revolutionizing API ecosystems!
Container image size is crucial in cloud-native app development. Bloated images slow down deployment pipelines, increase attack surfaces, and inflate costs. This is a major problem across most tech organizations. This session dives deep into the art of container image reduction, exploring various techniques and tools that help achieve this. We'll begin with an overview of overlay filesystems, explaining how they work and understanding their role in building container images. This will be followed by a deep dive into inspecting container images layer by layer and analyzing Dockerfiles, with a focus on best practices for writing lean and efficient Dockerfiles and common pitfalls to avoid.
We’ll then move on to leveraging open-source tools and techniques to reduce container image sizes, demonstrating the use of tools like dive, docker-squash, and stargz-snapshotter, as well as techniques such as multi-stage builds, to significantly reduce container image sizes by over 80%, which is critical in production scenarios. The workshop/talk will be highly interactive, featuring audience quizzes throughout to reinforce learning and ensure engagement. By the end of this session, attendees will have a solid understanding of how to reduce their container image sizes, making their cloud-native applications faster, more secure, and more cost-effective. The session will conclude with a Q&A segment to address any remaining questions.
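As a preview of the multi-stage technique, an illustrative Dockerfile; image names and paths are examples.

```dockerfile
# Multi-stage build: compile in a full toolchain image, ship only the
# static binary on a minimal base. Image names and paths are examples.
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /out/app ./cmd/app

FROM gcr.io/distroless/static-debian12
COPY --from=build /out/app /app
ENTRYPOINT ["/app"]
```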
In today's digital landscape, design has transcended mere visual aesthetics to become a powerful storytelling medium that connects brands with audiences on a profound emotional level. This presentation decodes the intricate art of transforming pixels into compelling narratives that resonate, engage, and inspire.
We'll explore how modern design bridges psychology, technology, and creativity to create experiences that speak directly to the human experience. Participants will discover how strategic design goes beyond color palettes and typography, becoming a sophisticated language of communication that can transform perceptions, drive engagement, and build meaningful connections.
Key insights include:
Decoding the psychological principles behind impactful design
Translating brand values into visual storytelling
Creating user experiences that feel intuitive and emotionally intelligent
Leveraging emerging technologies to enhance narrative design
Understanding design as a strategic business tool
Generative AI has revolutionized text processing, but quantitative datasets pose unique challenges, such as high variability, limited historical data, and complex relationships. Traditional models, including LLMs, often fail to deliver precise and actionable insights for such data types.
Enter Large Quantitative Models (LQMs)
LQMs combine the strengths of Variational AutoEncoders (VAEs) and Generative Adversarial Networks (GANs) to address these challenges. By learning latent structures and generating synthetic data, LQMs enhance predictive accuracy and robustness, bridging critical gaps in data-limited environments.
Beyond Specification: A Universal Tool
While originally designed for financial forecasting, LQMs have versatile applications. From improving IoT sensor predictions to simulating patient outcomes in healthcare, these models bring reliability and adaptability across domains.
A Glimpse Into the Future
This talk briefly explores the architecture and applications of LQMs. The audience will leave with a new perspective on generative AI's potential for quantitative analysis and actionable steps to experiment with LQMs using open-source tools.
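As a generic, illustrative sketch of the VAE-plus-GAN pairing the talk describes (toy shapes throughout, and explicitly not the speaker's LQM implementation):

```python
# Generic VAE + discriminator pairing with toy shapes; structural idea only.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 4))
decoder = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 16))
discriminator = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 1), nn.Sigmoid())

x = torch.randn(32, 16)                   # a batch of quantitative features
mu, logvar = encoder(x).chunk(2, dim=-1)  # latent mean / log-variance
z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
synthetic = decoder(z)                    # generated samples
realism = discriminator(synthetic)        # adversarial signal during training
```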
As cybersecurity threats continue to evolve, ethical hackers are leveraging cutting-edge technologies to stay ahead of malicious actors. One of the most promising innovations in this field is AI-driven ethical hacking, which uses artificial intelligence to enhance penetration testing, threat detection, and vulnerability assessment. This session will explore how AI is transforming the ethical hacking landscape, focusing on the tools and techniques that empower ethical hackers to work smarter and more efficiently.
We will begin by examining how AI-driven ethical hacking helps ethical hackers by automating time-consuming tasks such as vulnerability scanning and exploit development, allowing them to focus on more complex, high-level analysis. We will also address a common myth that AI will replace ethical hackers, discussing how AI acts as a complementary tool that enhances human expertise rather than replacing it.
Finally, we will dive into ChatGPT-powered AI tools for ethical hackers, highlighting the role of advanced language models and AI-driven platforms in assisting ethical hackers with reconnaissance, research, and even generating exploit code. Attendees will gain a deeper understanding of how AI tools can support their ethical hacking workflows, improve efficiency, and address challenges in cybersecurity.
By the end of the session, participants will have a clear view of the synergy between AI and ethical hacking.
In today's data-driven landscape, organizations in finance and healthcare face the dual challenges of managing sensitive data, such as revenue and patient information, and dealing with data fragmentation across different systems and formats. Protecting sensitive data from unauthorized access while ensuring its accessibility for operations is crucial, as is integrating fragmented data for effective governance. This session will explore how the OSS Unity Catalog addresses these issues by centralizing data management, enforcing fine-grained access controls, and ensuring robust governance. Attendees will learn how Unity Catalog's multimodal interface and support for various data formats and engines enable secure, efficient data sharing across tables, files, functions, and AI models. The demo will include the integration of Apache Iceberg, Apache Spark, and x-table, all deployed on Kubernetes to showcase a scalable data management solution.
In rural India, the lack of awareness about unconventional career paths and limited exposure to industry norms and culture create significant barriers for students aspiring to step beyond traditional or local occupations. Societal influences, coupled with insufficient career guidance, often prevent these students from envisioning a future in dynamic fields such as technology, design, or digital marketing. Raaha, a career development platform, aims to address these challenges by equipping students with the knowledge, skills, and confidence required to become career-ready in an increasingly globalized world.
A user-centered design approach was employed, involving foundational research with educators and students, 50+ survey participants, 20+ interviews, and field studies across 8 schools. Data collection included empathy mapping, journey mapping, and usability testing, analyzed through qualitative and quantitative methods to refine the platform design.
The findings reveal that rural students often lack access to career resources and mentorship, resulting in limited awareness of diverse opportunities. The platform, Raaha, demonstrated significant potential in improving career awareness, with a usability score (SUS) of 88% and positive feedback from initial users, indicating its accessibility and effectiveness.
This research contributes to education technology and career development by providing insights into the challenges rural students face and designing a scalable solution to address them. Raaha fosters career readiness by bridging the gap between education and global opportunities, empowering students to explore unconventional paths and thrive in modern industries.
Discover how we improved a web application with over 800,000 pages by switching from client-side rendering (CSR) to server-side rendering (SSR) using Nuxt.js and GraphQL.
This migration achieved transformative results, including a 17-18% boost in SEO scores (now 100), significant performance gains, and accessibility improvements across key pages. These changes drove remarkable outcomes, with 1.5 million clicks on Google Search in just 1.5 months. By addressing challenges like scalability and content delivery, we revolutionized the user experience while managing large-scale content efficiently.
In this talk, we’ll share our journey, challenges, strategies, and key steps to implement SSR effectively. Perfect for anyone wanting to understand the benefits of SSR and modern web development.
Problems Addressed:
1. Slow Performance: Pages initially scored 90 in performance but now reach more than 95, ensuring faster load times.
2. Poor SEO Rankings: SEO scores improved by 17-18% (now 100), enhancing visibility and driving 1.5 million clicks on Google Search in just 1.5 months.
3. Inefficient Data Management: Streamlined content management for over 800,000 pages, improving efficiency.
4. Scalability Issues: Transitioning to SSR provided a scalable and robust framework for high-performance delivery.
5. User Experience Gaps: Accessibility scores increased from 94 to 100, ensuring consistent and inclusive experiences.
Key Takeaways:
1. Benefits of SSR: Learn how server-side rendering enhanced performance, improved SEO by 17-18% (now 100), and provided a seamless user experience.
2. Nuxt.js and GraphQL: Discover how these tools streamlined data fetching for thousands of pages while optimizing performance.
3. Overcoming Challenges: Practical strategies for addressing issues like URL rewriting and maintaining smooth transitions, ensuring improved SEO and scalability.
4. Measuring Impact: Using Lighthouse metrics, track improvements such as SEO gains of 17-18% (now 100) and 1.5 million clicks in Google Search within 1.5 months.
5. Handling Large-Scale Content: Insights into managing 800,000+ URLs efficiently with SSR for long-term sustainability.
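As an indicative sketch of what SSR data fetching looks like in Nuxt 3 style (the GraphQL endpoint and query are placeholders, not our production setup):

```js
// pages/products/[id].vue (<script setup>), Nuxt 3 style.
const route = useRoute();
const { data } = await useAsyncData("product", () =>
  $fetch("https://api.example.com/graphql", {
    method: "POST",
    body: {
      query: "query ($id: ID!) { product(id: $id) { name description } }",
      variables: { id: route.params.id },
    },
  })
);
// Rendered on the server, so crawlers receive complete HTML.
```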
Target Audience:
1. Web developers and engineers
2. Technical decision-makers
3. Anyone interested in modern web development practices
The connected 5G and IoT world is leading to exponential growth of time-series data, resulting in observability challenges of scale and stability. Prometheus and VictoriaMetrics are both prevalent open-source time-series databases (TSDBs) that share many similarities but also possess distinct characteristics, leaving designers to spend considerable effort picking one over the other for telemetry use cases.
This talk will help demystify that selection, based on real-world experience from a 5G telco high-cardinality use case. It compares the commonalities and differences of Prometheus and VictoriaMetrics on key performance metrics such as dimensioning, query language, latency, IOPS, memory footprint, ingestion scalability, and retention policies.
Our experience sharing and cost-benefit analysis of these two observability solutions will help the audience pick the best choice for their solution and consider ways to implement newer use cases.
If you are a software engineer or a quality engineer, you might have heard a lot about digital accessibility and accessibility testing. But there are few straightforward learning materials available to assist with implementation.
Do you want to know how, as a software engineer or quality engineer, one can ensure accessibility is maintained throughout the development process?
I will share my experience of how I automated accessibility checks using the Cypress framework and incorporated them into my regression suite.
During the discussion, I will also run a live coding session to address your questions about accessibility testing in real time. Why wait? Developers, quality engineers, and designers, let's start building a more inclusive digital world.
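As a preview of the live coding, an illustrative Cypress spec using the cypress-axe plugin; the URL is a placeholder, and cypress-axe/axe-core must be installed first.

```js
// Illustrative Cypress accessibility spec using cypress-axe.
import "cypress-axe";

describe("home page accessibility", () => {
  it("has no detectable a11y violations", () => {
    cy.visit("https://example.com");
    cy.injectAxe();   // inject the axe-core runtime into the page
    cy.checkA11y();   // fail the test if violations are found
  });
});
```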
OpenShift Container Platform (OCP) is a robust Kubernetes-based solution designed to simplify application and infrastructure management. At the heart of OCP are operators, automated controllers that handle complex operational tasks like scaling, updates, and configuration management. Among these, the Machine Config Operator (MCO) plays a pivotal role in maintaining consistent node configurations, handling updates, and enabling smooth system management.
But what happens when MCO isn’t stable? Instabilities can lead to downtime, misconfigurations, or interruptions during updates—challenges that can significantly impact the reliability of an OpenShift cluster. This talk delves into why MCO stability is essential for ensuring a resilient OpenShift environment and the critical role testers play in identifying and mitigating potential risks.
This session sheds light on the "why" behind MCO testing from a tester’s perspective, focusing on its role in safeguarding cluster health. It highlights the risks of overlooking MCO stability and how robust testing practices help minimize these risks. By ensuring that configurations and updates are applied seamlessly, testers help keep OpenShift environments stable, resilient, and ready to meet production demands.
Encryption has played a vital role in safeguarding communication throughout history, adapting to the growing complexity of threats and the ever-increasing need for data privacy in modern times. Early methods, such as the Caesar cipher and substitution techniques, introduced fundamental ideas for securing information. While these traditional approaches were innovative for their era, they were limited in resilience, often susceptible to frequency analysis and basic decryption attempts.
With the advent of modern computing, encryption methods advanced significantly. The digital age brought revolutionary advancements in encryption, including the development of symmetric-key algorithms like the Data Encryption Standard (DES). Introduced in the 1970s, DES became a widely adopted standard for data protection, relying on a single shared key for both encryption and decryption. While DES marked a significant milestone, its 56-bit key length eventually proved vulnerable to brute-force attacks, prompting the adoption of more robust successors like Triple DES (3DES) and the Advanced Encryption Standard (AES), sophisticated algorithms capable of withstanding more complex attacks. AES is optimized for speed and efficiency, making it ideal for protecting large volumes of data. Meanwhile, asymmetric encryption systems such as RSA and Diffie-Hellman key exchange facilitate secure communication and digital verification through their reliance on intricate mathematical operations.
Recent developments have pushed the boundaries further with quantum-resistant encryption to counteract the looming risks posed by quantum computing and homomorphic encryption, which enables encrypted data to be processed without exposing its contents. These breakthroughs address critical challenges in domains such as cloud computing, financial systems, and data-driven industries. This talk traces the trajectory of encryption technologies from their traditional origins to contemporary advancements, exploring their impact, limitations, and potential to secure future communications in a rapidly evolving technological landscape.
The talk will also discuss my own algorithm, a "Symmetric Encryption Algorithm using ASCII Values", and walk through its implementation.
Managing multiple Kubernetes clusters can be complex, but Submariner makes it easier by enabling seamless communication between Pods and Services across different environments. As one of the earliest and fully open-source implementations of the Kubernetes MCS API, Submariner operates at layer 3 of the OSI model, supporting communication for any type of application data or protocol. In this hands-on tutorial, we'll take a deep dive into Submariner's architecture, configuration, and practical applications. You'll learn how to set up secure connections between clusters, discover and access services across cluster boundaries, and troubleshoot common issues. Whether you're working with hybrid or multi-cloud deployments, this tutorial will provide you with the necessary skills to effectively use Submariner. Participants will have the chance to experiment with Submariner in the lab environment or use pre-deployed clusters on any cloud providers.
Join us for the first-ever DevConf.IN PyTorch Meetup! This gathering is a perfect opportunity for Python / PyTorch developers, AI/ML enthusiasts, and the local tech community to connect, collaborate, and share insights. Whether you're an experienced researcher, a data scientist, or a curious beginner, this meetup will have something for everyone.
Discover the latest state of the Fedora PyTorch SIG, explore real-world applications of AI and ML, and engage in hands-on discussions with fellow enthusiasts. Let's come together to build a stronger PyTorch community, exchange ideas, and spark innovation.
RHEL AI is a Red Hat product, based on the upstream InstructLab project, that allows users to customize Large Language Models (LLMs) using private data.
The accuracy of the custom LLM is critical to achieving optimal AI performance.
Best practices and optimization are required at every stage of LLM customization to achieve an accurate custom LLM for inferencing:
- Data Seed stage
- Synthetic Data Generation stage
- Training stage
- Evaluation and Re-Training stage
- Prompt Engineering
The session will cover best practices and optimization techniques that can be applied at each stage to achieve optimal and accurate LLMs.
The information shared in this session can also be applied to upstream InstructLab or to third-party LLMs in general.
In today's digital landscape, speed and search engine visibility are non-negotiable for any successful web application. With Google's Core Web Vitals now a direct ranking factor, developers need tools that make building SEO-friendly, high-performance web pages easier.
Enter Next.js, a powerful framework that simplifies server-side rendering (SSR), static site generation (SSG), and advanced optimizations for images, fonts, and scripts - all crucial for achieving a 95+ score on PageSpeed Insights (https://pagespeed.web.dev/).
In this talk, we’ll dive deep into:
- The must-have SEO elements for modern web apps.
- How Next.js helps you improve Core Web Vitals like LCP, FID, and CLS.
- Practical tips to optimize your pages for lightning-fast performance and search engine dominance.
Live Demo: We'll take a sample Next.js app, audit its performance, and implement optimizations step-by-step. Watch in real-time as we transform a slow-loading app into one that scores 95+ on PageSpeed Insights!
Whether you're building from scratch or improving an existing project, this session will give you actionable strategies to enhance SEO and user experience using Next.js. Don’t miss this opportunity to unlock your app’s true potential!
Standing out in the ever-growing tech community can feel like an uphill battle. Open source contributions not only amplify your skills and network but also unlock incredible opportunities—ranging from recognition to tangible rewards like bounties, cash prizes, internships, and more!
Did you know that there are over 9,000 active bounties on GitHub offering rewards from $10 to $1,000? Most people are unaware of how to discover and seize these opportunities.
Personal Success Stories:
- I've contributed to renowned organizations like Jenkins, Checkstyle, and AsyncAPI.
- My journey led me to an internship at Red Hat, showcasing how open source can directly influence career growth.
- Through my contributions, I earned a $1,500 reward from the AsyncAPI organization (IBM and Red Hat are both sponsors of this organization).
- I became a maintainer and TSC member of the AsyncAPI CLI, was selected as an AsyncAPI Mentee 2024, and am eligible for a $1,500 reward.
Beyond the rewards, the exponential learning and connections I've gained have been priceless.
Join this session to learn how to discover high-impact issues, strategies to contribute effectively, and how to turn your open-source journey into a launchpad for personal and professional growth!
👉 Check out my GitHub for some inspiration: github.com/AayushSaini101.
Open-source software development thrives on collaboration, but the strength of any open-source project lies in the vibrancy and health of its community. In this session, I will share strategies for fostering inclusive, diverse, and sustainable open-source communities. We will explore proven methods for creating equitable opportunities, attracting contributors from underrepresented groups, and empowering them to grow as leaders within the ecosystem.
Additionally, I will discuss practical approaches to community governance, handling conflict resolution effectively, and using tools to measure and improve community health. Drawing from personal experiences and industry best practices, this talk will provide actionable insights for maintainers, contributors, and community leaders to build thriving, welcoming, and impactful communities.
This session introduces a new approach to video search, enabling users to query video content using natural language, much like querying a database. It improves the search process by eliminating manual scrubbing and providing precise results quickly.
The session will cover:
- Problem Overview: Challenges with traditional video search and the need for a smarter solution.
- Live Demonstration: A real-time showcase of querying videos using natural language and retrieving relevant timestamps and clips.
- Technical Insights:
  - How vision models generate descriptive captions for video frames.
  - The use of embeddings to represent both video content and user queries in a shared vector space (see the sketch after this list).
  - Storing and querying this data quickly using a vector database.
- Applications: Real-world use cases, including surveillance footage analysis, media curation, and content-aware video search platforms.
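To make the retrieval step concrete, here is a small, hypothetical sketch (not the presenter's code) of matching a natural-language query against frame captions in a shared embedding space, using the sentence-transformers library with brute-force cosine similarity standing in for a real vector database:

```python
# Minimal sketch: embed frame captions and a query, then rank by cosine
# similarity. Captions and timestamps are made-up stand-ins for the output
# of a vision captioning model.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

captions = [
    (12.0, "a red car enters the parking lot"),
    (47.5, "a person walks a dog past the gate"),
    (93.2, "a delivery truck stops at the entrance"),
]
cap_vecs = model.encode([c for _, c in captions], normalize_embeddings=True)

query_vec = model.encode(["when does the truck arrive?"],
                         normalize_embeddings=True)[0]
scores = cap_vecs @ query_vec            # cosine similarity (unit vectors)
t, caption = captions[int(np.argmax(scores))]
print(f"best match at {t}s: {caption}")  # a vector DB does this step at scale
```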
This talk is ideal for developers, AI practitioners, and tech enthusiasts interested in understanding the intersection of vision models, natural language processing, and media.
Documentation that meets accessibility standards does more than check boxes; it empowers every user to engage with your content fully. This talk demystifies the role of guidelines like WCAG (Web Content Accessibility Guidelines) in shaping inclusive documentation. Through real-world examples, we’ll explore practical techniques for structuring information, writing clear instructions, and optimizing digital assets (like images and links) so that users of all abilities can benefit. Attendees will leave with a straightforward approach to building accessibility into their writing process, ensuring no one is left behind.
Omniscient is a cutting-edge cybersecurity tool leveraging AI to revolutionize the digital forensics process. Designed to address the growing challenges of cybercrime investigations, it enables real-time extraction and analysis of social media data across multiple platforms like Instagram, Facebook, and WhatsApp. Its unique chat-based interface empowers investigators to query and retrieve critical evidence efficiently, drastically reducing manual effort and human error. By integrating cross-platform support and AI-driven insights, Omniscient enhances the speed and accuracy of investigations, aiming to resolve twice as many cases in one-fifth of the time compared to traditional methods. This scalable and user-centric tool positions itself as a vital solution for combating the rising tide of AI-powered cyber threats.
Cloud workloads must comply with your organization's security policies, and joining them to an identity management domain can play a crucial role. Automating this process takes it a step further. Learn how the Podengo project enables the automatic and secure enrollment of VMs into a FreeIPA domain, with live demonstrations!
FreeIPA is an open-source identity management solution offering authentication, access control, and other security features for Linux systems. It helps organizations meet their security and compliance objectives, even when running workloads on public clouds. However, traditional workflows, such as using SSH keys to access machines, often fall short of meeting modern security standards.
Enter Podengo. The Podengo service registers your FreeIPA deployment (which could be on-premises), authenticates cloud VMs, and enables automatic and secure domain enrollment. This talk will explain how the protocol works, what is required, and how we leverage the Podengo service to provide the Domain Join feature in the Red Hat Hybrid Cloud Console.
After covering the fundamentals and showcasing current use cases, we will explore existing feature gaps, how to address them, and potential support for additional identity management solutions.
This presentation is particularly relevant for system and cloud administrators, infosec professionals, and those curious about cryptography and secure identity management.
References:
FreeIPA Project
Podengo Project on GitHub
Public cloud usage is increasing daily, with many organizations adopting public clouds for their workloads. This trend often results in the creation of numerous resources that go unused or are forgotten and never deleted, leading to cost leakage and resource quota issues. This presentation will focus on identifying and pruning unused resources, ensuring they remain within the resource quota, and mitigating cost leakage.
We have implemented several pruning policies in the cloud-governance automation framework. During resource monitoring, we found that most of the cost leakage comes from available volumes, unused NAT gateways, and unattached public IPv4 addresses (starting from February 2024, public IPv4 addresses are chargeable whether they are used or not). Without automation, it is unreliable and impractical to control these unused resources effectively.
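For a flavor of what such a pruning policy inspects (an illustrative sketch, not the cloud-governance framework's actual code), the AWS cases called out above can be surfaced with a few boto3 calls:

```python
# Minimal sketch: report the unused AWS resources named above. The boto3
# calls are standard; the region and print-only reporting are assumptions.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# 'available' volumes are allocated but attached to no instance.
for vol in ec2.describe_volumes(
        Filters=[{"Name": "status", "Values": ["available"]}])["Volumes"]:
    print("unattached volume:", vol["VolumeId"], vol["Size"], "GiB")

# Elastic IPs with no association are billed even while idle.
for addr in ec2.describe_addresses()["Addresses"]:
    if "AssociationId" not in addr:
        print("unattached public IPv4:", addr.get("PublicIp"))
```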
The Power of Community: Building Movements, Not Just Platforms
- A Community-First Mindset
- Technology as an Enabler
- Trust, Contribution, and Shared Purpose
- Open Collaboration
Open source changed the world. Over the last 30 years, how we approach technology has completely turned around, to the point where it's hard today to imagine the closed inner loops of the past. But it was not easy, and it did not happen by default: it took a lot of effort to bring this change, and while we see open source everywhere, it is still not the accepted default for execution.
And we won't get past the next set of challenges without a determined effort from the communities, the vendors, and the commercial players. In this talk I would like to take a few minutes to discuss what these challenges are and how we can start approaching them. Having been involved in the open source ecosystem for over 30 years, I aim to bring both a historical perspective and a forward-looking view.
And let's be clear: open source changed the world, but today some are keen to change the open source world so they can go back to asserted proxy control. We must not let that happen!
In the fast-paced world of hackathons, turning your ideas into real-world solutions requires speed, innovation, and the right tools. Open Source is your secret weapon—empowering you to quickly prototype, test, and scale your project while keeping costs low. Whether you're diving into cloud computing, machine learning, or rapid prototyping, open-source tools allow you to focus on innovation, not infrastructure.
In this talk, I’ll share:
1. What Hackathons Are Really About – How hackathons spark creativity and rapid innovation.
2. Key Open Source Tools to Leverage – From cloud platforms to ML frameworks, the tools that power fast prototyping.
3. Winning Insights – My experience at the UNESCO India Africa Hackathon and as a Smart India Hackathon (SIH) 2022 winner, and how Open Source led to success.
4. Tips for Rapid Prototyping – How to move fast and optimize your workflow in time-sensitive environments.
5. Resources for Lifelong Innovation – Tools, communities, and platforms to keep you moving forward post-hackathon.
Join me to learn how Open Source can take your hackathon journey from idea to impact—fast, efficient, and scalable.
Explore how a fictional insurance company uses OpenShift AI to improve its claims processing. In this immersive experience, you will have the opportunity to deploy and work with different AI models while utilizing various features of OpenShift AI.
Key highlights of this workshop include:
1. Exposure to Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG).
2. Image detection models to analyze and process claims data.
3. Hands-on deployment of an application that integrates these AI technologies for a cohesive business solution.
This workshop provides a glimpse into how AI/ML technologies can be applied to real-world business problems like insurance claim processing. Please note, while the models and techniques used in this lab are illustrative of a prototype, they are not designed for a production environment.
Compliance-as-code encompasses many activities, such as automation of system configuration and general DevSecOps approaches. One area examined less is how to manage the documentary artifacts associated with compliance 'as code', replacing Word documents and Excel spreadsheets with Markdown, YAML, and JSON.
Emerging data standards such as NIST’s OSCAL facilitate this approach. The OSCAL standard has been adopted by FedRAMP, Australian Cyber Security Centre, Center for Internet Security, Singapore’s GovTech, among others.
OSCAL-Compass is a project by IBM Research and Red Hat that has recently become a CNCF sandbox project. It leverages NIST's OSCAL, a set of data and process standards for compliance, and provides an opinionated compliance-as-code approach to OSCAL adoption.
Today OSCAL-Compass has three key projects that work together: compliance-trestle, compliance-to-policy (c2p), and agile authoring. This workshop will demonstrate how to use these tools together to document and measure compliance controls on a Kubernetes cluster using Open Cluster Management.
This workshop will be hands-on: attendees will need a github.com account, Python installed, and a Kubernetes CLI (either kubectl or oc).
Attendees will be securing their Kubernetes cluster against their ACME Corp corporate standards, and using compliance-trestle to generate a report for their internal auditors.
Throughout the session the speakers will also discuss the adoption of OSCAL globally, including how Red Hat and IBM have been adopting the standard internally.
FOSS is a cornerstone in the success story of the Indian industry. The DPG-related initiatives prove that the Government of India too recognizes the value that FOSS provides to India and the World. But publicly talking about FOSS adoption continues to be a problem in India, let alone supporting and creating FOSS. In this talk, we will briefly look at the prevailing Tech Policy in India and how it impacts the adoption, support, and creation of FOSS.
Visit our booth for real-world use cases, live demonstrations, and the latest FreeIPA innovations, and learn how FreeIPA has equipped organizations to embrace effective, robust security practices in highly complex IT landscapes to drive identity management. Everyone will find something to their liking!
As AI systems transition from standalone models to fully integrated, agentic systems, the need for trust, transparency, and safety has never been greater. This session will explore how to design and operationalize trustworthy, open AI systems that balance autonomy with accountability, drawing from recent research on risk alignment, model openness, and agentic AI governance.
The session will use an example agentic AI system that combines multi-agent orchestration with function calling to achieve dynamic, goal-driven behavior. It will showcase how to implement core safety practices such as risk alignment, user alignment, and real-time intervention controls, using open source tools and frameworks, so that AI agents can be safely deployed in real-world environments with high variability and uncertainty. Through this, participants will learn the key principles of designing AI systems that can act independently while maintaining rigorous safety standards.
Automating virtual machine provisioning across multiple clouds can be a challenge, especially in CI workflows or testing environments. mrack (https://github.com/neoave/mrack) is a multi-cloud provisioning tool designed to make this process easier by translating infrastructure requirements across providers.
The development of mrack started when limitations were found in tools like Linchpin. It offers faster provisioning using asynchronous operations, supports multi-architecture environments (Windows, Linux, and various OS flavors), and simplifies multi-host setups. These features make mrack a strong choice for diverse CI use cases.
In this talk, I’ll present how mrack works with providers like AWS, OpenStack, Podman, and Beaker. I’ll showcase real-world examples, including its use in IdM-CI (Red Hat Identity Management CI framework), tmt (test management tool), and an automated deployment example for Walmart.
Explore the next generation of Kubernetes cluster management with Hypershift and Red Hat OpenShift Service on AWS (ROSA) with Hosted Control Planes (HCP). As Kubernetes adoption accelerates across industries, managing clusters efficiently across multiple cloud providers remains a critical challenge. Hypershift revolutionizes this process by offering a centralized control plane, enabling seamless scalability, enhanced resilience, and cost efficiency.
Discover how ROSA HCP streamlines operations by separating control plane pods from worker nodes, significantly reducing provisioning time, lowering infrastructure costs, and simplifying scaling. With a fully managed cloud-native environment, organizations can offload infrastructure complexities while benefiting from OpenShift’s robust security, compliance, and monitoring capabilities.
Visit our booth for a live demonstration showcasing cluster creation, application deployment, and real-time scaling, and see firsthand how Hypershift and ROSA HCP can transform your Kubernetes strategy. Revolutionize your cloud-native journey today!
Red Hat teams will demonstrate the latest features in Tekton, Argo CD, and Shipwright, introduce the community to building and deploying pipelines securely with cloud-native CI/CD products, and be available for deep dives and clarifications from the audience.
Join us at our booth to discover the power of digital accessibility.
Learn how small, mindful changes can enhance the digital experience for all users—regardless of their abilities. Whether you're a developer, designer, student, or simply an advocate for inclusive technology, our booth invites you to explore and understand the significance of accessibility.
Key Highlights:
* What is Accessibility:
Discover the basics of web accessibility and its importance for an inclusive digital experience. Understand how accessibility benefits all users—improving website navigation, speed, and usability.
* WCAG Guidelines:
Explore the core principles of accessibility—Perceivable, Operable, Understandable, and Robust (WCAG)—and learn how these guidelines apply to website design and development, without needing a technical background.
* Live Accessibility Testing:
Interact with live demonstrations and see real-time accessibility testing using tools like Lighthouse and axe DevTools. Witness firsthand how issues such as missing text descriptions or poor color contrasts can affect user experience.
* Screen Reader Demonstration:
Engage with our demonstration to experience how visually impaired users navigate websites with screen readers. Learn about the challenges they face and how simple design changes can significantly improve accessibility.
* Making Websites Easy to Navigate:
Learn how simple design choices—like clear headings, easy navigation, and keyboard-friendly features—can make websites more accessible to all users, even those who cannot use a mouse.
* Semantic HTML & Accessible Design:
Discover the importance of using the right HTML elements and thoughtful design choices that enhance accessibility for everyone. See how small adjustments can make a big difference.
Why Attend?
* For Everyone: Whether you’re a tech enthusiast, designer, student, or simply curious about making the web more inclusive, our booth offers easy-to-understand insights and hands-on learning opportunities.
* Interactive Learning: Engage with practical demonstrations and gain valuable knowledge you can apply in your work or daily life to improve digital accessibility.
* Real-World Impact: See how simple changes can create lasting, positive effects on users’ experiences and how you can contribute to building a more inclusive digital future.
Join us to interact, learn, and understand how small adjustments can build a more accessible, inclusive, and user-friendly web for all. Together, let's shape a digital world that works for everyone, regardless of their abilities!
Join us at our booth to discover the full potential of the Ansible Automation Platform (AAP), the ultimate solution for scaling and streamlining enterprise automation. Designed to empower teams, AAP integrates seamlessly into hybrid cloud environments, enabling efficient and consistent automation across diverse infrastructure.
We will explore two transformative components of the AAP ecosystem:
- Ansible Event-Driven Automation (EDA)
- Ansible Lightspeed.
Event-Driven Ansible simplifies IT automation by responding to events in real time. It processes event data, determines the right actions, and automates tasks to address issues quickly. By leveraging observability data from existing tools, it enhances operational efficiency throughout the IT lifecycle.
Ansible Lightspeed with watsonx Code Assistant is a generative AI service designed by and for Ansible platform engineers and developers. It accepts natural-language prompts entered by a user and then interacts with IBM watsonx foundation models to produce code recommendations built on Ansible best practices. Ansible Lightspeed can help you convert subject matter expertise into trusted, reliable Ansible code that scales across teams and domains.
Engage with hands-on demos and interactive discussions to understand how AAP can simplify operations, increase productivity, and future-proof your automation strategy. Whether you're starting your automation journey or looking to scale, we have something for everyone.
Let’s transform the way you automate!
As cloud infrastructure continues to evolve, so do the technologies that power it.
Virtual machines (VMs), the de facto standard for cloud-based computing, are increasingly being challenged by newer approaches, including containerization and unikernels. While containers have gained significant traction, unikernels have not yet had their moment.
The technology is promising but struggles to find prominence. The unikernel architecture is a more radical departure from traditional virtualization, promising benefits such as improved performance, security, and resource efficiency.
This talk will delve into the concept of unikernels, exploring their architecture, advantages, and limitations. We will discuss how unikernels differ from both VMs and containers, focusing on their ability to leverage specialized hardware and operating system kernels tailored to specific applications. The talk will also include a taxonomy of adjacent approaches such as Firecracker VMs from AWS and Hyperlight from Microsoft.
We will also explore the active research being conducted in the unikernel space, including Red Hat's Unikernel Linux (UKL) project. UKL aims to leverage the flexibility and maturity of the Linux kernel to create a unikernel environment, promising to further enhance performance, security, and resource efficiency.
Finally, we will critically assess the challenges and obstacles that may hinder the widespread adoption of unikernels. These challenges include the complexity of development, the need for specialized tooling, and the potential limitations in terms of flexibility and portability. Ultimately, this talk aims to provide a comprehensive overview of unikernels, enabling attendees to make informed judgments about their future role in the evolving cloud computing ecosystem.
Security is not just about dealing with current threats; it is about making sure that software stays secure throughout its lifecycle. That requires best practices for software maintenance, identifying and mitigating security risks in legacy systems, and efficiently backporting fixes. Using automation to streamline security patching and minimize disruption can result in sustained reliability.
eBPF is a revolutionary technology that extends the Linux kernel and makes it programmable. bpftrace is a high-level front end for eBPF. Learning bpftrace allows a user to write simple yet powerful one-liners or small scripts to fulfill their networking, observability, and security needs. This talk aims to introduce the user to writing bpftrace programs to harness the potential of eBPF technology.
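For a taste of the format, the classic opensnoop-style one-liner from the bpftrace documentation prints every file opened by any process, with the process name alongside (exact syntax can vary slightly between bpftrace versions):

```
bpftrace -e 'tracepoint:syscalls:sys_enter_openat { printf("%s %s\n", comm, str(args->filename)); }'
```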
Background:
Cloud platforms like AWS, Azure, and Google Cloud provide organizations with scalability and flexibility but also introduce challenges in managing resources. Developers/DevOps often forget to clean up resources after running tests or performing deployments, resulting in escalating costs, especially in large-scale enterprise environments. Manual clean-up processes are error-prone, time-consuming, and difficult to scale.
Cloudwash, a tool developed by Red Hat QE, offers an automated, efficient solution for identifying and cleaning up these resources, ensuring that cloud expenditures remain under control.
Key Takeaways:
- Understanding Cloud Resource Cleanup Challenges: Attendees will gain insights into the common pitfalls and inefficiencies in managing cloud resources.
- How Cloudwash Works: An in-depth explanation of how Cloudwash identifies orphaned resources across multiple cloud providers.
- Integration and Usage: Learn how to integrate Cloudwash into existing workflows to automate resource cleanup.
- Cost Savings: Real-world examples of how organizations have reduced their cloud bills using Cloudwash.
- Call for Contribution: An invitation to contribute to the many RFEs opened by the community.
Target Audience:
This session is aimed at cloud infrastructure managers, DevOps engineers, QA teams, and anyone responsible for cloud cost optimization in their organization.
Why this talk?
With the proliferation of cloud services, cost management has become a critical concern for organizations. Cloudwash not only addresses an immediate pain point but also exemplifies how open-source tools can drive operational efficiency. By sharing the journey of Cloudwash, this talk will inspire attendees to rethink their cloud management practices and embrace automation for cost optimization.
In this modern era of containerization and AI, it can be complex and overwhelming for a developer to learn these technologies, stay up to date on them, and include them in their development workflow. Podman Desktop equips developers with these capabilities and lowers the barrier to learning, developing, and testing their cloud-native and AI applications, all from the ease of a local environment.
In this talk, we will explore how to unlock the power of containers and Kubernetes from your local workstation. We will also discuss how to use Podman AI Lab to work with LLMs (Large Language Models) and use playground environments to experiment and test AI models.
OpenShift is best known for running container-based applications at scale. However, not every application can or should be containerized. Resource-constrained corporate IT shops now contend with supporting modern cloud-native applications, monolithic databases, and virtual machine-based solutions. OpenShift Virtualization closes the gap between modern cloud-native architectures and those virtual machine-based solutions. Learn from our experts about OpenShift Virtualization and experience a modern application platform for your virtual machines.
Sustainability in an open source project means ensuring that the project continues to thrive, grow, and remain useful over the long term. This involves having a strong and active community of contributors who regularly update the project, fix bugs, and add new features. It also means having clear guidelines and documentation so new contributors can easily get involved. Financial support, whether through donations, sponsorships, corporate participation, or grants, helps cover costs and support ongoing development.
In this talk, Red Hat OSPO's Brian Proffitt will outline what sustainability means for open source projects and communities, and the best practices in achieving that sustainability.
Connecting Volunteers with Meaningful Opportunities
Individuals eager to volunteer can easily find opportunities that align with their skills and passions through an effective volunteer matching platform.
In this talk, I will delve into the systemic issues plaguing volunteer matching processes in India, where only 22% of firms offer volunteering time-off compared to 66% in the U.S. We will explore how inefficient matching leads to a staggering potential loss of ₹7,500 crore in socio-economic contributions, as millions of employees remain disengaged from meaningful volunteer work.
- Current Landscape: An overview of volunteerism in India, highlighting barriers such as limited awareness, mismatched expectations, and geographic constraints.
- Technological Solutions: An introduction to data-driven matching algorithms and online management systems that can enhance volunteer engagement.
- Case Studies: Insights from successful models like VolunteerMatch and best practices derived from comprehensive literature reviews on volunteer governance.
- Interactive Demonstration: A live demonstration of a prototype volunteer matching platform that showcases personalized recommendations and flexible options for volunteers.
Takeaways
Attendees will leave with a clear understanding of:
- The critical gaps in current volunteer matching processes and their implications for community engagement.
- Practical strategies for organizations to improve their volunteer management systems.
- Insights into how technology can bridge the gap between volunteers and opportunities, ultimately fostering a culture of active citizenship.
This talk is designed for corporate leaders, nonprofit managers, and anyone interested in enhancing community engagement through effective volunteerism. Join us to discover how we can collectively harness the power of volunteering to create meaningful change in society.
This lightning talk reveals psychological principles and design patterns that make users perceive faster loading times, even when technical optimization isn't possible. Through live examples, we'll explore skeleton screens, progressive loading, and animation patterns that keep users engaged during wait states. Perfect for designers and developers wanting to improve user satisfaction without complex backend changes.
The Istio project is an open-source service mesh platform designed to manage and secure communication between microservices in distributed systems. It provides a transparent, standardised layer that abstracts the complexities of service-to-service communication, allowing developers to focus on business logic while enhancing observability, security, and reliability.
This talk will provide an overview of the different features of Istio and demonstrate a few use-cases of how the service mesh can be used to control, observe and secure network traffic between microservices without changing source code or re-deploying the applications.
In today’s rapidly evolving tech landscape, sustainability is no longer just an environmental concern but a fundamental aspect of software development. The growing demand for energy-efficient, scalable, and secure systems calls for a shift in how we design, build, and maintain code. This talk explores how to achieve "Green by Design" in software development—where sustainability and security go hand in hand.
In this talk, we will dive into key strategies for creating Green by Design software, emphasizing energy efficiency, resource optimization, and security. We will cover practical approaches, backed by code snippets and real-world examples, to help you build more sustainable and efficient applications.
The demo will cover:
1. Optimizing Algorithms: The power of choosing O(n) instead of O(n^2) (see the sketch after this list)
2. Efficient Resource Management: Avoiding memory leaks and deadlocks, caching strategies, lazy loading
3. Energy-Efficient Security Algorithms: Lightweight cryptography, energy-efficient monitoring tools
4. The Art of Balancing Cost and Efficiency: Low-carbon-footprint strategies
5. Sustainable Architecture: Promoting sustainable coding practices and a sustainable development lifecycle
6. Leveraging Microservices, Distributed, Serverless, and Event-Driven Architectures
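To ground item 1, here is a small illustrative sketch (not from the talk) of the same task done in O(n^2) and in O(n): checking whether a list contains a duplicate. Less work per answer directly translates into less energy:

```python
# Minimal sketch: duplicate detection in O(n^2) versus O(n).
def has_duplicate_quadratic(items):
    # Compares every pair: roughly n^2 / 2 comparisons.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicate_linear(items):
    # One pass with a set: roughly n hash lookups.
    seen = set()
    for x in items:
        if x in seen:
            return True
        seen.add(x)
    return False

data = list(range(100_000)) + [0]   # duplicate hiding at the very end
assert has_duplicate_linear(data)   # fast
# has_duplicate_quadratic(data) needs ~5 billion comparisons for the same answer.
```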
By promoting a "Green by Design" mindset in software development, we ensure that sustainability isn't just an afterthought but an integral part of the development process.
Join me to build cleaner, greener, sustainable code that scales efficiently while being environmentally and economically mindful.
Deploying and managing large language models (LLMs) in production often presents significant challenges. By self-hosting LLMs on Kubernetes, organizations gain enhanced data privacy, flexibility in model training, and potential cost savings. This talk aims to enable beginners by demystifying the process of self-hosting LLMs within the robust Kubernetes ecosystem. We will place a special emphasis on harnessing the capabilities of the Podman Desktop AI Lab extension to accelerate and simplify the development, deployment, and management of LLM workloads on Kubernetes.
Key topics will include:
- Strategically selecting and containerizing suitable open-source LLM models for optimal performance
- Crafting Kubernetes deployment manifests tailored for LLM workloads
- Provisioning and managing Kubernetes resources to meet the computational demands of LLMs
- Deep dive into leveraging the Podman Desktop AI Lab extension for streamlined LLM workflows on Kubernetes
Red Hat OpenStack Services on OpenShift (RHOSO) provides a new architecture for OpenStack that takes advantage of the benefits of Kubernetes for improved resource management and scalability, greater flexibility across the hybrid cloud, simplified development and DevOps practices, and more. With the combination of Kubernetes and OpenStack, we can innovate quickly and manage all types of applications - bare metal, virtualized, and containerized - together.
In this talk, we will cover:
* RHOSO architecture in comparison with the legacy architecture
* What's new for operators and admins
* How to migrate from an existing Red Hat OpenStack deployment to RHOSO
* A live demo showcasing:
  - How to get a RHOSO installation up in minutes
  - Using OpenStack as an OpenShift user
  - Scaling out and updating the cloud
Finally, OpenStack and OpenShift users will learn how they can extend OpenShift to manage OpenStack, gaining the capabilities of OpenStack on the same platform.
Title: Driving Open Source Innovation & Effective Leadership
Abstract: This session will delve into the critical role of leadership in fostering innovation within the open source world. We will explore strategies for effective community building, collaboration, and governance that drive technological advancements and sustainable growth.
Outline:
1. Introduction to Open Source Leadership:
o Importance of leadership in open source
o Key qualities of effective open source leaders
2. Building and Nurturing Communities:
o Strategies for community engagement
o Encouraging diverse contributions and inclusivity
3. Collaborative Development:
o Best practices for collaborative coding and project management
o Tools and platforms that facilitate open source collaboration
4. Governance and Sustainability:
o Establishing governance structures
o Ensuring long-term sustainability of open source projects
5. Case Studies:
o Case Study 1: Google Play Store
o How leadership and community collaboration have driven the Play Store to become the de facto standard for application distribution.
o Case Study 2: Microsoft Azure
o Governance and sustainability practices that have supported the growth of numerous successful projects under the Azure umbrella.
o Lessons learned and best practices
Target Audience: This talk is aimed at open source contributors, project maintainers, and anyone interested in learning how leadership can drive innovation and success in open source communities.
Takeaways:
• Understanding the impact of leadership on open source innovation
• Practical strategies for building and leading successful open source communities
• Insights from real-world examples of effective open source leadership
In today's digital landscape, businesses must integrate various systems, applications, and processes across on-premises and cloud environments. WebMethods, a powerful middleware platform by Software AG, simplifies this challenge by enabling seamless, scalable, and secure integration. This session will provide an overview of WebMethods' core capabilities, including Enterprise Application Integration (EAI), B2B communication, API management, and Business Process Management (BPM). Attendees will learn how WebMethods facilitates efficient data exchange, automates workflows, and connects disparate systems, ensuring enhanced interoperability and reduced complexity in enterprise IT ecosystems. Through real-world use cases, we will demonstrate how WebMethods can streamline integrations, drive business efficiency, and support future growth.
A module is a reusable, standalone script that Ansible runs on your behalf, either locally or remotely. Modules interact with your local machine, an API, or a remote system to perform specific tasks like changing a database password or spinning up a cloud instance. Each module can be used by the Ansible API, or by the ansible or ansible-playbook programs. A module provides a defined interface, accepting arguments and returning information to Ansible by printing a JSON string to stdout before exiting. Ansible ships with thousands of modules, and you can easily write your own. If you’re writing a module for local use, you can choose any programming language and follow your own rules. This workshop illustrates how to get started developing an Ansible module in Python.
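As a preview of the workshop's starting point, here is a minimal sketch of a custom module in Python; the module name, file path, and its single argument are hypothetical:

```python
#!/usr/bin/python
# Minimal sketch of a custom Ansible module (e.g. library/hello.py, a
# hypothetical path): it accepts arguments and returns JSON via exit_json.
from ansible.module_utils.basic import AnsibleModule

def main():
    module = AnsibleModule(
        argument_spec=dict(
            name=dict(type='str', required=True),  # one illustrative argument
        ),
        supports_check_mode=True,
    )
    result = dict(changed=False, greeting=f"Hello, {module.params['name']}!")
    module.exit_json(**result)  # prints a JSON string to stdout and exits

if __name__ == '__main__':
    main()
```

In a playbook, such a module is then invoked like any other task, e.g. `- hello: name=DevConf`.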
Agenda
- Environment setup
- Starting a new module
- Exercising your module code
- Exercising module code locally
- Exercising module code in a playbook
- Testing basics
- Sanity tests
- Unit tests
- Contributing back to Ansible
- Communication and development support
The convergence of Generative AI and Data Mesh architecture is redefining enterprise observability. While Generative AI provides intelligent insights and predictive capabilities, Data Mesh empowers organizations with a decentralized, domain-driven approach to data management. Together, they unlock the full potential of edge-to-cloud systems, driving faster decision-making and proactive operations.
This presentation explores how combining Generative AI with Data Mesh architecture can:
- Deliver advanced anomaly detection and predictive analytics at scale.
- Automate root-cause analysis across decentralized data domains.
- Synthesize complex telemetry data into actionable insights for diverse teams.
- Enable domain-focused observability with real-time, AI-powered insights.
Through practical examples and strategies, attendees will discover how Generative AI and Data Mesh work synergistically to enhance observability, streamline incident response, and foster a resilient, intelligent enterprise ecosystem.
Join us to explore the next frontier of observability, powered by Generative AI and Data Mesh architecture.
With the advent of containers, Kubernetes became the de facto tool for managing millions of containers. While IP addresses are a scarce resource, every last pod in Kubernetes demands its own IP address, which presents a challenge from an infrastructure design perspective. A key aspect of container networking is IP address management (IPAM), which often gets overlooked. How Kubernetes assigns IP addresses to pods is determined by the IPAM plugin being used, and different IPAM plugins provide different feature sets. This talk throws some light on how some open source CNI plugins tackle the challenge of IP exhaustion and approach IPAM.
This talk explores chain-of-thought reasoning, a fine-tuning technique to enhance the logical reasoning capabilities of SLMs like LLaMA (1B/3B parameters). Attendees will learn the full process—from dataset preparation to fine-tuning and evaluation—demonstrating how smaller models can deliver interpretable, step-by-step responses with minimal resources.
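As a hedged sketch of the data-preparation step (the model and dataset names are illustrative assumptions, not necessarily what the talk uses), chain-of-thought fine-tuning supervises the model on an explicit rationale before the final answer:

```python
# Minimal sketch: turn GSM8K-style examples into chain-of-thought training
# text for supervised fine-tuning of a small causal LM (names are assumed).
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B")  # assumed 1B SLM

def format_cot(example):
    # GSM8K answers hold the worked rationale, then '#### <final answer>'.
    rationale, _, final = example["answer"].partition("####")
    text = (f"Question: {example['question']}\n"
            f"Let's think step by step.\n{rationale.strip()}\n"
            f"Answer: {final.strip()}{tokenizer.eos_token}")
    return tokenizer(text, truncation=True, max_length=1024)

train = load_dataset("gsm8k", "main", split="train")
tokenized = train.map(format_cot, remove_columns=train.column_names)
# `tokenized` then feeds a standard causal-LM Trainer for fine-tuning.
```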
Key Takeaways
- Fine-tune small language model for better reasoning and interpretability.
- Practical insights on datasets, training, and hardware.
- Apply scalable techniques to open-source SLMs.
Target Audience
AI/ML engineers, data scientists, and researchers seeking to enhance reasoning in small-scale models with practical, resource-efficient methods.
Why Attend
Learn actionable techniques to make SLMs smarter, more interpretable, and accessible for real-world applications.
Introduction to https://www.linkedin.com/groups/9899111/
No one wants to be responsible for breaking the build. But what can you do as a developer to avoid being the bad guy? How can project leads enable their teams to reduce the occurrence of broken builds?
In talking within our own teams, we discovered that many developers weren’t running sufficient integration and End to End tests in their local environments because it’s too difficult to set up and administer test environments in an efficient way.
That’s why we decided to rethink our entire local testing process in hopes of cutting down on the headaches, heartaches, and valuable time wasted. Enter Kuttl. Connecting Kuttl to CI builds has empowered our developers to easily configure a development environment locally that accurately matches the final test environment — without needing to become an expert CI admin themselves.
These days, we hear, “Who broke the build?” far less often — and you can too!
The purpose of this proposal is to discuss the importance of high-quality datasets and corpora in natural language processing, and how they can accelerate advancements in LLMs and AI in general in India. The performance of many natural language processing applications relies more on the occurrence and frequency of tokens than on their lexical arrangement, based on the intuition that similar words naturally appear together. This pushes us to generalize the language when preparing datasets or corpora: whether a computational method performs differently on Indian languages than on English can be contrasted and disentangled only once the domain distribution, structure, and generalization of an Indian corpus match those of a standard Western one. Until then, every research comparison faces the ambiguous question of whether a method would behave differently if a relevant corpus existed. In fact, even to refute a theory about how a particular language behaves based on statistical occurrence, a good dataset is required. The sheer number of NLP applications directly involved in the development of LLMs and conversational systems similar to OpenAI's ChatGPT obliges us to plan and prepare datasets with a larger level of collaboration and contribution.
Kubernetes adoption is surging, with 96% of organizations using it in some capacity, according to the CNCF. Companies like Spotify, Airbnb, and Shopify operate dozens, if not hundreds, of Kubernetes clusters to support their global applications. But managing multiple clusters isn’t just a technical feat—it’s a logistical challenge. Consider this: A large enterprise managing 100 clusters could have tens of thousands of nodes and millions of pods. Each cluster generates a flood of metrics, logs, and alerts that must be coordinated to ensure high availability and performance. Managing multiple clusters introduces new levels of complexity that traditional tools like Terraform and Ansible weren’t designed to handle. While these tools are effective for provisioning infrastructure, they fall short in addressing day-2 operations such as policy enforcement, cluster upgrades, and unified monitoring across multiple environments. Similarly, GitOps pipelines streamline application deployment but provide limited visibility into the overall health and governance of multiple clusters. Teams are often left without a single-pane-of-glass solution for managing configuration drift, enforcing security policies, or gaining visibility into workloads across clusters.
Why does this problem persist? Because multi-cluster Kubernetes, while powerful, introduces inherent complexities. Networking between clusters can suffer from latency, causing out-of-sync application instances. Kubernetes’ built-in security tools only apply within single clusters, requiring manual replication to ensure uniform enforcement. Monitoring tools must be deployed individually in each cluster, often resulting in fragmented observability and disjointed data correlation.
While solutions like Cluster API, ArgoCD, and KCP offer partial relief, they lack the holistic approach needed for full multi-cluster lifecycle management. This is where Open Cluster Management (OCM) shines. OCM provides a unified framework for managing multiple Kubernetes clusters efficiently. The talk will feature a live demo showcasing how OCM Hub can seamlessly manage two Kubernetes clusters. We’ll demonstrate how OCM automates lifecycle tasks, such as policy enforcement, while providing a centralized platform for monitoring, governance, and workload distribution. By intelligently correlating data from multiple clusters, OCM simplifies troubleshooting, minimizes latency issues, and ensures consistency across environments.
In this session, we’ll demonstrate OCM’s ability to manage two Kubernetes clusters seamlessly through a live demo. You’ll see how it automates critical tasks such as upgrades and policy enforcement, ensuring smooth operation even across dozens of clusters. OCM’s centralized monitoring provides correlated insights that drastically reduce downtime and troubleshooting complexity.
Whether someone is operating in hybrid, multi-cloud, or edge environments, this session would help gain practical insights into leveraging OCM to reduce operational complexity, enhance resilience, and streamline Kubernetes operations at scale.
Many users already have a Ceph cluster they would like to use with Kubernetes. For administrators who are new to Kubernetes, it can be challenging to know where to start to make that happen, and it can be even harder to make sure it gets done securely and following best-practices.
The focus of this session is both instructional and practical. We will describe why Rook is a good fit and how it helps make the process easier. We will also cover important best practices you need to know before going to production. A major focus will be helping administrators ensure security by showing how to set up Rook as a tenant of the Ceph cluster with limited permissions.
In the high-stakes world of software deployment, traditional verification methods fall short of ensuring robust, reliable, and safe releases. Harness's Continuous Verification (CV) leverages Machine Learning and represents a paradigm shift in approaching deployment reliability. This talk will unveil how Harness CV leverages advanced ML algorithms to transform deployment strategies, providing unprecedented insights into service performance, error detection, and risk mitigation.
By integrating seamlessly with APM and logging tools, Harness CV goes beyond simple threshold monitoring. It creates an intelligent, adaptive verification framework that learns from each deployment, automatically identifies anomalies, and can trigger immediate rollbacks when potential issues are detected. From canary and blue-green deployments to rolling updates, we'll explore how machine learning is revolutionizing the way organizations ensure software quality and minimize deployment risks.
Talk Overview
- Duration: 35 minutes
- Target Audience: DevOps engineers, SREs, Software Architects, Technology Leaders
- Technical Level: Intermediate to Advanced
Analytical Progression
- The Deployment Verification Challenge (5 minutes)
- Current limitations of traditional deployment verification
- The cost of deployment failures in modern distributed systems
- Why manual verification is no longer sustainable
- Introduction to Continuous Verification (10 minutes)
- Defining Continuous Verification
- Core principles of ML-driven deployment verification
- Harness CV architecture and design philosophy
Key Technical Highlights:
- Machine learning techniques for time series analysis
- Symbolic Aggregate Approximation (SAA) for metric comparison
- Log clustering and anomaly detection algorithms
- ML Techniques in Continuous Verification (5 minutes)
  - Metric Analysis Techniques
    - Comparing time series data using standard deviation (see the sketch after this outline)
    - Early trend detection before threshold breaches
    - Automated performance deviation identification
  - Log Analysis Strategies
    - Error clustering mechanisms
    - Automatic detection of:
      - New error types
      - Frequency changes in existing errors
      - Similar log pattern identification
- Deployment Strategy Variations (3 minutes)
  - Canary Deployments
  - Blue-Green Deployments
  - Rolling Updates
  - Auto Deployments
- Practical Implementation and Best Practices (2 minutes)
  - Configuring sensitivity levels
  - Metric and log feedback mechanisms
  - Integrating CV into existing CI/CD pipelines
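To illustrate the standard-deviation comparison named in the outline (a generic sketch of the statistical idea, not Harness's proprietary algorithm), a canary's latency samples can be flagged once they drift beyond a few standard deviations of the baseline:

```python
# Minimal sketch: flag canary metrics deviating from the baseline by more
# than k standard deviations; the data and threshold are illustrative.
import statistics

baseline = [102, 98, 105, 99, 101, 97, 103, 100]    # primary latency (ms)
canary   = [104, 99, 180, 175, 168, 101, 190, 185]  # new version (ms)

mu = statistics.mean(baseline)
sigma = statistics.stdev(baseline)
k = 3  # sensitivity: deviations beyond k sigma count as anomalous

anomalies = [x for x in canary if abs(x - mu) > k * sigma]
if anomalies:
    print(f"{len(anomalies)} anomalous samples {anomalies} -> roll back")
else:
    print("canary within expected range -> promote")
```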
Key Takeaways
- Practical strategies for implementing intelligent deployment checks
- How to reduce deployment risks and improve software reliability
- Understanding the transformative potential of ML in deployment verification
Technical Demonstration (10 minutes)
- Real-world CV scenario with metric and log analysis for Datadog as an example APM tool
- Automatic anomaly detection
- Rollback trigger mechanism
Building modern, distributed, and highly scalable microservices with Kubernetes is hard, and it is even harder for large teams of developers. Traditional development environments often face several challenges, including lengthy development cycles, local setups, inconsistent dependencies, and manual configuration, all of which hinder collaboration and innovation. Built on the open source Eclipse Che project, Red Hat OpenShift Dev Spaces uses Kubernetes and containers to provide developers and other IT team members with a consistent, secure, and zero-configuration development environment. The experience is as fast and familiar as an integrated development environment (IDE) on your laptop.
In this session, we’ll see how OpenShift Dev Spaces simplifies the development of containerized applications. We’ll also look at how it transforms the developer experience by delivering consistent, pre-configured environments customized for specific team and project requirements. Additionally, we'll demonstrate how to take advantage of its powerful features, such as integrated IDEs, automated dependency management, and seamless CI/CD pipeline integration, to minimize setup time and eliminate common "it works on my machine" issues.
This session is ideal for:
- Developers seeking efficient, consistent environments for coding and collaboration.
- DevOps professionals interested in integrating development workflows into CI/CD.
- Team leaders looking to enhance collaboration, reduce onboarding time, and improve overall productivity.
- Organizations aiming to modernize their development practices to stay competitive.
By attending this session, participants will:
- Understand the benefits of using OpenShift Dev Spaces for cloud-native development.
- Learn practical implementation techniques, including environment setup, integration with tools, and best practices.
- Walk away equipped with actionable knowledge to modernize their development workflows and achieve faster innovation cycles.
Abstract:
The rise of artificial intelligence (AI) in education has primarily been dominated by Large Language Models (LLMs), which can handle complex tasks and generate detailed responses. However, these models require significant resources, making them difficult to use in resource-constrained educational settings. This proposal explores how Small Language Models (SLMs) — simpler, open-source and more affordable AI systems — can transform education by offering personalized learning and scalable solutions, even in low-resource environments.
Introduction:
SLMs are smaller, simpler versions of AI models that are easier to use and cost less. Unlike Large Language Models (LLMs), which need powerful computers, SLMs can work on basic devices, making them ideal for schools with limited budgets. They maintain efficiency and accessibility while offering targeted solutions tailored to specific educational needs. By focusing on domain-specific knowledge, local languages, and contextual customization, SLMs address the equity and inclusivity gaps often exacerbated by larger models.
Methodology:
This approach leverages insights from recent implementations of Small Language Models (SLMs) in education. Key methodologies include:
1. Fine-tuning on Localized Data: Customizing SLMs to address specific learner needs and proficiency levels.
2. Using Lightweight AI Frameworks: Deploying open-source, resource-efficient tools for underserved communities and low-resource settings.
3. Integrating SLMs into Systems: Adding SLMs to existing learning platforms to create personalized plans and adjust lessons based on students' progress.
4. Real-World Success Stories: Demonstrating how SLMs enhance teaching through automation, support student progress, and operate efficiently without high-end infrastructure.
Demo Proposal:
We will showcase a live demonstration to highlight how Small Language Models (SLMs) can be effectively deployed in real-world educational settings. The demonstration will feature:
Personalized Learning Assistant:
A fine-tuned open-source Small Language Model (SLM) will be used to create personalized learning plans tailored to students' progress and feedback. The key benefits include:
1. Adaptability to Individual Needs: The system adjusts the complexity of learning materials based on each student's progress. For slower learners, subjects are presented gradually, with increasing difficulty, so they are never overwhelmed. This step-by-step progression builds confidence, enhances engagement, and promotes a deeper understanding of concepts, contributing to long-term success and mental well-being.
2. Affordability in Resource-Constrained Settings: SLMs are optimized for low-resource hardware, making them a cost-effective solution for schools and communities with limited budgets.
This demo will illustrate how SLMs can empower educators and improve learning outcomes, making quality education accessible to all.
Strengths of SLMs:
Small Language Models offer several key advantages for education:
1. Low Resource Requirements: They run on basic devices like low-cost laptops or mobile phones.
2. Cost-Effective: Affordable for schools with limited budgets or in developing regions.
3. Ease of Updates: Easy to adapt to new curricula or local content.
4. Privacy-Friendly: On-device processing ensures student data security.
5. Personalization: Tailors learning to individual student needs and paces.
6. Support for Local Languages: Adaptable to regional languages and cultural contexts.
7. Energy Efficient: Uses less energy, supporting sustainable AI use.
8. Scalable: Easy to implement across schools and districts.
Limitations of SLMs:
1. Less Powerful: SLMs may struggle with complex tasks like handling multi-disciplinary contexts or generating nuanced responses.
2. Performance Risks: Aggressively scaling down models can lead to degraded accuracy, especially for tasks requiring deep understanding.
3. Requires Careful Design: Achieving balance between simplicity and performance demands thoughtful optimization and regular evaluation.
Final thoughts:
Small Language Models hold transformative potential for democratizing AI in education, offering a pathway to equitable, scalable, and sustainable educational innovation. By strategically scaling down, we can scale up educational impact, addressing the diverse needs of learners worldwide. This approach reinforces the importance of context-aware AI design in fostering an inclusive digital future for education.
Bridging the Gap: A Gamified Approach to Mastering UX Patterns:
This project introduces a novel approach to learning and applying UX patterns through a multifaceted system.
UX Pattern Card Game: This engaging card game incorporates five distinct card types: Constraints, Personas, Pattern Scenarios, Power Cards, and Pattern Cards. Players collaborate to solve design challenges by strategically combining these cards, fostering critical thinking and creative problem-solving within a collaborative environment.
UX Pattern Identification Chrome Extension: This innovative extension empowers users to directly observe and learn from real-world examples. By analyzing websites, the extension identifies and highlights implemented UX patterns, offering valuable insights into their practical application and encouraging deeper understanding of their effectiveness.
This integrated system aims to provide an interactive and dynamic learning experience, making UX pattern mastery accessible and enjoyable for designers of all levels.
India's agriculture sector faces significant challenges from unpredictable crop damage, which directly impacts farmers' livelihoods and insurance claim processes. In this talk, we introduce KishanRakshak, an innovative AI-driven framework that leverages transfer learning to classify and predict rice crop damage with high accuracy.
Using images captured by farmers, KishanRakshak achieves efficient damage estimation tailored to India's diverse agro-climatic regions. The framework not only streamlines insurance claim evaluations but also empowers policymakers and insurers to optimize response strategies, enhancing resilience in the agriculture sector.
Join this session to explore how AI and transfer learning are transforming crop damage assessment, paving the way for sustainable farming and smarter insurance solutions in India.
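As a rough illustration of the transfer-learning idea (not KishanRakshak's actual architecture, which the talk will present), freezing a pretrained backbone and retraining only a small classification head might look like this; the damage label set below is hypothetical:

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # hypothetical labels: healthy, pest, flood, drought

# Reuse ImageNet features; retrain only the classification head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                       # freeze the backbone
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
# ...a standard training loop over farmer-captured images goes here...
```

Because only the small head is trained, the approach needs far less labeled data, a practical constraint when images come from farmers in the field.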
#AIforAgriculture #CropDamagePrediction #KishanRakshak #SustainableFarming #TransferLearning
In today’s competitive tech landscape, building a standout personal brand and navigating your career with confidence is crucial. This hands-on workshop is designed specifically for aspiring professionals in computer science and IT.
Led by Deepak Koul, an Engineering Manager at Red Hat, and Anuj Singla, a Principal Software Engineer and renowned tech YouTuber, this session will explore key strategies to craft better resumes, optimize LinkedIn profiles for visibility and networking, and develop a personal brand online.
Additionally, you’ll get exclusive insights into engineering career paths and opportunities at Red Hat, one of the most innovative open-source companies in the world. Whether you’re preparing for internships, your first job, or carving out a niche in the tech industry, this session will provide actionable insights to empower your professional journey.
Workshop breakdown
Resume Mastery (15 mins)
Making Your Resume ATS-Friendly
Showcasing your projects.
Quantifying the impact of your role in those projects.
LinkedIn Optimization (15 mins)
Creating a strong, keyword-optimized headline.
Showcasing projects and achievements in the experience or featured section
Networking strategically and posting regularly
AI Learning Roadmap (20 mins)
Starting with the foundations first - Statistics, regression and neural networks.
Programming and exposure to libraries like numpy, pandas and tensorflow
Choosing a specialization path - LLMs/NLP/OpenCV
Talking about specifics rather than AI generalities.
Personal Brand Strategy (10 mins)
Understand the importance of personal branding and how to consistently showcase your unique value in the tech industry.
The importance of having a niche subject you are passionate about and can post about consistently.
Pitfalls of political posturing.
Exploring Red Hat Careers (10 mins)
Learn about the engineering roles, open-source culture, and career growth opportunities at Red Hat, and how to position yourself for success at this global tech leader.
Learn about the training paths that Red Hat Academy offers.
Q&A (10 mins)
Recent advancements in quantum computing algorithms, such as qubitization, are revolutionizing the efficiency of quantum simulations by significantly reducing runtime compared to traditional methods like Trotterization. This breakthrough is particularly impactful for computational problems of high complexity, such as those involving orbital numbers exceeding 50, where quantum advantage is becoming evident. By optimizing algorithmic performance and lowering the resource threshold, qubitization paves the way for practical applications in fields like chemistry, optimization, and machine learning, positioning quantum computing as a transformative technology for solving previously intractable problems.
LLMs have proven very useful, and they hold high potential for enterprises. However, evaluating these models remains a complex challenge, and is one of the reasons LLMs are not adopted directly.
Responsible and ethical AI will be key for enterprises adopting LLMs for their business needs.
Traditional metrics like perplexity or BLEU score often fail to capture the nuanced capabilities of LLMs in real-world applications.
This talk covers current best practices in benchmarking LLMs, the limitations of existing approaches, and emerging evaluation techniques.
We’ll explore a range of qualitative and quantitative metrics, from task-specific benchmarks (e.g., code generation, summarization) to user-centric evaluations (e.g., coherence, creativity, bias detection), along with the importance of specialized benchmarks that test LLMs on ethical and explainability grounds.
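To ground this, here is a toy example of one traditional metric mentioned above, corpus-level BLEU, computed with the sacrebleu library. The sentences are made up; the point is that a single n-gram overlap score says nothing about coherence, creativity, or bias.

```python
import sacrebleu

# One hypothesis, one reference stream; real evaluations use whole corpora.
hypotheses = ["The cat sat on the mat."]
references = [["A cat was sitting on the mat."]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.1f}")  # overlap only; no notion of meaning
```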
Outcome: The audience will understand how to choose LLMs for the right balance of accuracy, efficiency, and fairness, and learn what has improved in Granite 3.0 that makes it a better LLM.
As organizations grow, developers often struggle with fragmented workflows, an overwhelming number of tools, and infrastructure roadblocks—factors that hinder developer productivity and innovation. These aren’t just operational headaches; they’re barriers to creativity and satisfaction. Platform Engineering offers a game-changing approach: cohesive, self-service platforms that empower developers to focus on what truly matters—delivering impactful software.
This talk guides you through the journey of designing an Internal Developer Platform (IDP) that empowers developers with autonomy while ensuring robust operational governance.
We will share how we designed scalable workflows leveraging tools like Backstage, Kubernetes, Terraform, and a Custom Platform Orchestrator. These workflows streamlined infrastructure provisioning for developer environments and reduced inconsistencies in developer onboarding.
You’ll walk away with actionable strategies for:
- Reducing cognitive load by consolidating tools and workflows.
- Building scalable self-service systems that developers love to use.
- Overcoming resistance to platform adoption through developer-first design principles.
Join us to explore how platform engineering isn’t just about solving technical problems—it’s about empowering developers, fostering creativity, and cultivating joy in the software-building process.
Foreman, Ansible, and AWX provide a powerful combination for managing infrastructure automation. This session will focus on leveraging the integration of Ansible and AWX with Foreman to streamline provisioning, configuration management, and automation processes.
By connecting Foreman with Ansible/AWX, users can enhance their ability to automate infrastructure tasks, execute complex Ansible playbooks, and efficiently manage diverse environments from a single interface. This integration brings a new level of flexibility and control to infrastructure management, all within an open-source ecosystem.
As cloud-native environments grow increasingly complex, the need for seamless integration between infrastructure as code (IaC) tools like Terraform and configuration management tools like Ansible becomes vital. By combining these two powerful tools, teams can automate the entire lifecycle of infrastructure deployment and configuration with minimal friction.
This talk will introduce the Terraform-Ansible Plugin, showcasing how it bridges the gap between provisioning and configuration. Attendees will learn how to use Terraform to provision infrastructure and invoke Ansible playbooks for post-deployment configuration. We'll cover key use cases, demonstrate workflows, and explore best practices for leveraging this integration in real-world projects.
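As a rough stand-in for that provision-then-configure flow (the plugin's actual interface is what the session covers), the pattern can be sketched by chaining the two CLIs from Python; the "web_ips" output name is hypothetical:

```python
import json
import subprocess

# Assumes terraform and ansible-playbook are on PATH and a Terraform
# workspace is already initialized in the current directory.
subprocess.run(["terraform", "apply", "-auto-approve"], check=True)

# Feed Terraform outputs (e.g., freshly provisioned host IPs) into an
# Ansible inventory; "web_ips" is a hypothetical output variable.
out = subprocess.run(["terraform", "output", "-json"],
                     check=True, capture_output=True, text=True)
hosts = json.loads(out.stdout)["web_ips"]["value"]
with open("inventory.ini", "w") as f:
    f.write("[web]\n" + "\n".join(hosts) + "\n")

subprocess.run(["ansible-playbook", "-i", "inventory.ini", "site.yml"],
               check=True)
```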
Whether you're a DevOps engineer, cloud administrator, or automation enthusiast, this session will equip you with the tools to streamline your infrastructure workflows.
In today's software development landscape, securing the software supply chain is more critical than ever. This talk will explore how Konflux CI addresses the growing need for robust security and streamlined deployments. With its focus on achieving SLSA compliance, Konflux ensures software builds are secure, transparent, and resilient. By leveraging advanced features like build isolation, provenance tracking, and decentralized architecture, Konflux empowers developers to deliver scalable, secure software faster, all while maintaining the flexibility needed for complex projects. Join us to learn how Konflux transforms DevOps pipelines for the future of secure software delivery.
Confidential Containers leverage Trusted Execution Environments (TEEs) to enhance security in cloud-native applications by safeguarding data-in-use against external threats. However, security vulnerabilities persist when container images are pulled from third-party registries, as such sources may introduce compromised or malicious images. This paper proposes a comprehensive design for an in-cluster container registry tailored for TEE-enabled environments, detailing its implementation, benefits, and role in strengthening the security posture of confidential workloads.
In an era where rapid prototyping is essential, Streamlit has emerged as a powerful open-source tool for building interactive web applications with Python. Its simplicity and versatility make it an excellent choice for developers creating AI-powered apps, custom learning portals, or data visualization tools.
This talk will demonstrate how to build a fully functional Python-powered website in just 15 minutes using Streamlit. We'll cover key features like creating quick user interfaces, integrating APIs, and deploying the app for real-world use. Whether you're a data scientist, developer, or an educator, Streamlit can revolutionize your development workflow.
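As a taste of what the demo covers, a minimal Streamlit app needs only a handful of calls; the page content below is illustrative:

```python
# Save as app.py and run: streamlit run app.py
import streamlit as st
import pandas as pd

st.title("Quick Data Explorer")

uploaded = st.file_uploader("Upload a CSV", type="csv")
if uploaded is not None:
    df = pd.read_csv(uploaded)
    st.write(f"{len(df)} rows loaded")
    column = st.selectbox("Column to chart",
                          df.select_dtypes("number").columns)
    st.line_chart(df[column])
else:
    st.info("Upload a file to get started.")
```

Streamlit serves the page locally with hot reload, so edits appear on save, which is what makes the "website in 15 minutes" claim realistic.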
- Introduction
  - What is WebAssembly and why build your own runtime?
  - Introducing Whisk: the goal, the constraints, and the motivation.
- Laying the Foundation
  - Understanding the structure of Wasm modules (sections and LEB128 decoding; a decoding sketch follows this outline).
  - Implementing a parser.
  - Key Learning: binary parsing teaches you to appreciate efficient encoding and data representation.
- Executing Functions
  - Implementing an interpreter for Wasm opcodes.
  - Building a stack-based virtual machine.
  - Handling control flow (e.g., return, call).
  - Supporting arithmetic operations and simple control flow.
  - Key Learning: small, incremental steps are crucial to avoid being overwhelmed by complexity.
- Adding WASI Support
  - Parsing the Imports section to handle external functions.
  - Implementing minimal WASI functions (e.g., fd_write) to support basic I/O.
  - Mapping imports to host functions dynamically.
  - Key Learning: understanding WASI deepens your appreciation for portable system interfaces.
- Running Real-World Programs
  - Parsing Exports to locate and execute the _start function.
  - Compiling and running a Rust program with Whisk.
  - Demo: executing hello.wasm with Whisk.
  - Key Learning: running a non-trivial program validates your runtime and boosts your confidence.
- Key Takeaways
  - Debugging Wasm modules without existing tooling.
  - Implementing stack management and function call mechanics.
  - Balancing minimalism with extensibility.
  - How Whisk deepened my understanding of WebAssembly.
  - Inspiration for the audience to explore Wasm internals.
- Open the floor for questions and discussions.
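For the LEB128 step referenced above, here is a minimal decoding sketch (in Python for brevity; Whisk's own implementation may differ). Every Wasm section header stores its size in this variable-length format:

```python
def read_uleb128(data: bytes, offset: int = 0) -> tuple[int, int]:
    """Decode an unsigned LEB128 integer starting at `offset`.

    Each byte contributes 7 bits, least-significant group first;
    the high bit of each byte signals continuation.
    Returns (value, new_offset).
    """
    result = 0
    shift = 0
    while True:
        byte = data[offset]
        offset += 1
        result |= (byte & 0x7F) << shift
        if (byte & 0x80) == 0:
            return result, offset
        shift += 7

# A section starts with a 1-byte id followed by a ULEB128 size.
section = bytes([1, 0x85, 0x01])  # id=1 (Type), size 133 in two bytes
size, _ = read_uleb128(section, 1)
assert size == 133  # 0x85 -> low bits 5, continue; 0x01 -> 1 << 7 = 128
```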
Do you want to engage your users with a friendly and more productive user experience? In this talk, we will share a new vision of AI-powered intelligent user interfaces. Equipped with a rich and contextual user experience, they are more productive than the common, text-only chat interfaces. This vision builds on the idea of AI agents integrated with an organization's backends, enhanced with AI-generated user interfaces. The general architecture can be implemented using a variety of technologies, but the demo's prototype will be built using the LangGraph framework, Ollama LLMs, and a GraphQL API.
This talk covers Platform Engineering and the paradigm shift underway from Infrastructure as Code to Infrastructure as Data. It offers a detailed look at the CNCF tool Crossplane, with Utkarsh sharing his experience using Crossplane to build the control planes and APIs that power his organization's Internal Developer Platform, used across multiple teams. Utkarsh will give a brief overview of the benefits of Crossplane as a Platform Orchestrator over traditional IaC tools like Terraform, and its better integration with other CNCF tooling such as ArgoCD for GitOps and Backstage for Platform Engineering. He will share Crossplane use cases and his experience with the tool across multiple projects over two years, along with recent developments in the Crossplane project and the challenges of migrating existing infrastructure to be managed by Crossplane.
This session provides an in-depth look at data science pipelines and showcases how Red Hat OpenShift AI can serve as a robust platform for building and managing these pipelines within an OpenShift environment. It is designed for data scientists, machine learning engineers, and professionals looking to optimize their machine learning projects.
Attendees will learn about the advantages of using Red Hat OpenShift AI for data science pipelines and gain a practical understanding of how to construct and manage pipelines effectively using Elyra or Kubeflow within this framework. Through a combination of presentation and demonstration, the session guides attendees through best practices for automating their AI/ML workflows and maximizing efficiency.
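For a flavor of what a pipeline definition looks like, here is a minimal Kubeflow Pipelines (kfp v2) sketch; the component body and names are illustrative, and Elyra provides a visual editor over the same concepts:

```python
from kfp import dsl, compiler

@dsl.component(base_image="python:3.11")
def train(epochs: int) -> str:
    # Placeholder step; a real component would pull data, fit a model,
    # and push artifacts to storage.
    return f"model trained for {epochs} epochs"

@dsl.pipeline(name="demo-training-pipeline")
def pipeline(epochs: int = 5):
    train(epochs=epochs)

# Compile to a spec that a Kubeflow Pipelines backend (such as the one
# in OpenShift AI) can execute.
compiler.Compiler().compile(pipeline, "pipeline.yaml")
```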
As cybersecurity threats grow more advanced, it is of paramount importance for cyber professionals to be equipped with advanced skills to identify, analyze, and mitigate vulnerabilities. This hands-on workshop bridges theoretical and practical knowledge by covering the MITRE ATT&CK Framework, basic malware analysis, and reverse engineering.
Participants will also get a hands-on demo of offensive security techniques, such as solving PortSwigger labs, exploiting bWAPP/Metasploitable, and creating phishing templates with the Social Engineering Toolkit (SET). The session ends on an operational note: custom-built virus payloads are created, and the problems that readily available malicious code can cause, along with their mitigations, are highlighted.
Designed for IT and cybersecurity professionals and enthusiasts, this workshop arms participants with the skills necessary to face real-life challenges, enabling them to grasp both the defensive and offensive angles of cybersecurity. Join us to sharpen your practical skills and gain useful insight into the continuously changing world of cybersecurity.
Abstract
Platform tools that integrate core development processes are redefining how we approach software delivery, especially in an era where simplicity and speed drive productivity. This session offers a deep dive into Harness Open Source, a robust development platform that combines seamless code hosting, automated DevOps pipelines, cloud development environments (Gitspaces), and artifact registries—all in one place.
This hands-on workshop will guide participants through real-world scenarios of modern software delivery, covering the end-to-end development lifecycle. Attendees will not only learn to set up and use Harness Open Source but will also walk away with practical skills to streamline workflows, enhance collaboration, and boost innovation. By the end of the session, participants will be equipped to replicate and adapt these setups for their own projects and teams.
Target Audience
This session is perfect for:
- Developers and students looking to streamline their development workflows.
- DevOps practitioners eager to explore tools for code hosting, CI/CD pipelines, and environment management.
- Open-source enthusiasts interested in integrated, collaborative solutions.
Workshop Outline
Prerequisites:
- Basic understanding of containers and Kubernetes.
- Laptop with Docker/Podman installed.
Introduction (10 minutes):
- The shift toward platform-driven development.
- Why integrated tools matter in modern workflows.
Hands-on (60 minutes):
Participants will follow along in a hands-on lab environment to:
- Set up a Kubernetes cluster using K3D.
- Install Harness Open Source and configure it.
- Host source code and use the built-in Gitspaces IDE.
- Run automated pipelines, including:
- Build, Test, and Push container images.
- Security scan for vulnerabilities.
- Deploy to Kubernetes.
- Explore artifact registries for managing build and release workflows.
Discussion and Q&A (10 minutes):
- Challenges and opportunities with platform-driven workflows.
- Open discussion on community collaboration and innovation.
This session promises actionable insights and a hands-on experience that bridges the gap between modern tools and practical application. Don’t miss the chance to learn how integrated platforms like Harness Open Source can simplify and supercharge your software delivery processes.
Modern CI/CD pipelines rely on automated testing to maintain software quality and accelerate releases. However, inefficiencies such as flaky tests, redundant executions, and lack of actionable insights often hinder the process, leading to wasted resources, delays, and maintenance overhead. This paper proposes leveraging artificial intelligence (AI) to address these challenges by delivering insights into test stability, prioritization and failure analysis.
This idea centers on analysing historical test execution data using AI models to classify tests as stable, flaky or high-priority. This solution can optimize test suite execution by skipping redundant stable tests and focusing on those prone to failure.
Machine learning models identify patterns linking test failures to code changes, helping teams address root causes efficiently.
The methodology integrates AI models into build pipelines for dynamic test selection and predictive failure analysis, using metrics like reduced execution time, improved defect detection, and fewer flaky tests to evaluate success.
Challenges such as inconsistent data quality, test environment variability, and overfitting are addressed with solutions like data augmentation, containerized environments, and model regularization. Transparency tools like SHAP or LIME ensure AI-driven decisions are interpretable and actionable for teams.
This framework transforms test automation by automating failure prioritization and root cause analysis, reducing maintenance burdens and accelerating release cycles. The methodology lays the groundwork for future innovations, such as self-healing scripts and cross-project insights, offering a significant leap towards intelligent, efficient pipelines.
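As a sketch of how the classification step might be prototyped, consider a simple classifier over historical execution data; the feature and label names below are hypothetical placeholders for whatever a team mines from its CI logs:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Hypothetical execution history: one row per test, features derived
# from CI logs. Column names are illustrative.
df = pd.read_csv("test_history.csv")
features = ["avg_duration_s", "failure_rate_30d", "retries_to_pass",
            "files_touched_last_commit", "days_since_last_edit"]
X, y = df[features], df["label"]  # label: stable / flaky / high_priority

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y,
                                          random_state=42)
clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))

# In a pipeline, predictions gate execution: skip "stable" tests,
# run "flaky" and "high_priority" tests first.
```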
Key Takeaways:
- Challenges in Test Automation:
  - Flaky tests, redundant executions, and lack of failure insights slow down CI/CD pipelines.
- AI-Powered Solution:
  - Analyse historical test data to classify tests and optimize test suite execution.
  - Prioritize tests prone to failure and skip redundant stable ones.
  - Use machine learning to correlate test failures with code changes for root cause analysis.
- Methodology:
  - Implement artificial intelligence models for dynamic test selection and predictive failure analysis.
  - Integrate into CI/CD pipelines for real-time impact.
- Challenges:
  - These models require high-quality, clean, and structured data for accurate analysis and predictions.
  - Test environments often change due to differences in configurations, hardware, software versions, or network conditions.
- Solutions:
  - Standardizing and cleaning the data.
  - Using tools like Docker and Kubernetes to stabilize environments.
- Impact:
  - Reduced execution time, improved defect detection, and fewer flaky tests.
  - Future potential for self-healing tests and cross-project insights.
In the evolving world of cloud-native security, KubeArmor has become a key runtime enforcement engine, yet it lacks effective tools for visualizing its logs and alerts. This session explores the development of a custom Grafana dashboard plugin for KubeArmor, aimed at enhancing application behavior visibility. As an LFX mentee, the speaker will discuss the challenges faced, such as selecting the right Grafana plugins, developing process and network graphs, and implementing caching mechanisms for IP mappings. Attendees will gain practical insights into the solutions and approaches used to overcome these challenges, making this session essential for those interested in Kubernetes observability, cloud-native security, or Grafana plugin development.
Ever since the advent of AI, application development has been moving at a rapid pace. As a developer or quality engineer, it is now essential to provide and ensure a consistent and visually appealing user experience.
We often use conventional manual testing methods to test visual changes. Though reliable, this is time-consuming and prone to human error.
This presentation will explore how AI-powered visual testing can empower software development and quality engineering with an efficient, human-centric approach.
We will dive deep into the core principles of visual regression testing:
1. Image comparison algorithms: understanding how AI algorithms can detect visual discrepancies, even subtle ones.
2. Machine learning models: exploring how AI models can learn and adapt to the visual characteristics of different applications, improving accuracy and reducing false positives.
3. Human-in-the-loop approach: discussing the importance of human oversight in AI-driven testing, which ensures that the technology complements human expertise rather than replacing it.
By joining this talk, you will also learn how to leverage AI to improve your software development process and deliver exceptional user experiences.
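As a baseline for the image-comparison principle above, a classical structural-similarity (SSIM) check can already flag visual drift between screenshots; learned models refine this idea. The paths and threshold here are illustrative:

```python
import numpy as np
from PIL import Image
from skimage.metrics import structural_similarity as ssim

def visual_match(baseline_path: str, candidate_path: str,
                 threshold: float = 0.98) -> bool:
    """Return True if the candidate screenshot matches the baseline."""
    base = np.asarray(Image.open(baseline_path).convert("L"))
    cand = np.asarray(Image.open(candidate_path).convert("L"))
    if base.shape != cand.shape:
        return False  # dimensions differ: likely a layout shift
    score, _diff = ssim(base, cand, full=True)
    return score >= threshold

# Example: flag a regression when similarity drops below the threshold.
# assert visual_match("baseline/home.png", "run_42/home.png")
```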
Key takeaways for audience:
1. Improved efficiency: How AI can significantly reduce testing time and effort.
2. Enhanced quality: How AI can help identify and address visual defects early in the development process.
3. Faster time to market: How AI can accelerate the development and release process by automating visual testing and reducing the risk of visual regressions.
As AI-powered applications increasingly shape the technological landscape, quality engineers must adapt their testing strategies to ensure these systems are robust, fair, and transparent. This workshop introduces prompt engineering as a critical skill for effectively testing AI models and LLMs.
Participants will learn how to design, evaluate, and refine prompts to systematically test AI outputs, uncover biases, and validate system behavior. Through practical demonstrations, hands-on exercises, and real-world case studies, attendees will develop actionable techniques to incorporate prompt engineering into their testing toolkit, ensuring AI applications meet high-quality standards.
Key learning objectives include:
Understanding the basics of prompt engineering
Developing strategies for creating clear, unambiguous prompts
Prompt refinement techniques for improved AI responses
Designing targeted prompts for various scenarios
By mastering these techniques, attendees will be empowered to leverage AI as a powerful ally in delivering reliable, trustworthy software products.
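In the spirit of the workshop, a bias probe can be expressed as an ordinary test; query_model below is a hypothetical placeholder for whatever client (an OpenAI SDK, Ollama, an internal gateway) a team actually uses:

```python
# Hypothetical harness: query_model stands in for your model client
# and is not a real API.
def query_model(prompt: str) -> str:
    # Placeholder canned answer; swap in a real client call here.
    return "APPROVE - stable income and good credit score"

PROMPT_TEMPLATE = (
    "You are a loan officer. Given the applicant profile below, "
    "answer APPROVE or DENY and give one reason.\n\nProfile: {profile}"
)

# Bias probe: profiles differing only in a protected attribute
# should yield consistent decisions.
profiles = [
    "34-year-old nurse, income 60k, credit score 710",
    "34-year-old male nurse, income 60k, credit score 710",
    "34-year-old female nurse, income 60k, credit score 710",
]
answers = [query_model(PROMPT_TEMPLATE.format(profile=p))
           for p in profiles]
decisions = {a.split()[0] for a in answers}
assert len(decisions) == 1, f"Inconsistent decisions: {answers}"
```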
Are you struggling to operate Tekton Pipelines at scale?
Join us for a deep dive into the intricate art of managing Tekton Pipelines or Kubernetes Custom resource controllers in a large-scale production environment.
This presentation will explore key strategies that helped us deal with customer escalations related to performance regression, bottlenecks, and deploying a scalable Tekton instance:
Understanding the Scale: What happens when your teams run lots of concurrent pipelines?
Identify Bottlenecks with Metrics: Discover which indicators are most valuable for pinpointing degradation in your Tekton instances (a query sketch follows this list).
Fix the Problems Head-On: Apply actionable strategies for improving efficiency once bottlenecks have been identified, and for preventing them from recurring.
Scale with Confidence: Explore a comprehensive strategy that ensures your Tekton infrastructure remains robust and reliable as your workloads grow.
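As a rough illustration of the metrics step above, assuming the Tekton controller's metrics are scraped by Prometheus, a p95 PipelineRun duration can be pulled via the standard Prometheus HTTP API. The metric name is illustrative and varies across Tekton versions:

```python
import requests

# Assumed setup: Prometheus scrapes Tekton controller metrics; the
# endpoint and metric name below are illustrative.
PROM = "http://prometheus.monitoring.svc:9090"
query = (
    "histogram_quantile(0.95, sum(rate("
    "tekton_pipelines_controller_pipelinerun_duration_seconds_bucket[5m]"
    ")) by (le))"
)
resp = requests.get(f"{PROM}/api/v1/query",
                    params={"query": query}, timeout=10)
for result in resp.json()["data"]["result"]:
    print("p95 PipelineRun duration (s):", result["value"][1])
```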
Your development teams will thank you!