Deepak Koul
Deepak is an experienced engineering manager with a passion for psychology, organization design, and, more recently, AI. Throughout his career, he has been fascinated by the intersection of psychology and organizational design, believing that a deep understanding of human behavior and motivation is essential to building high-performing teams and organizations. He has applied this knowledge to develop management strategies and leadership practices that improve productivity and employee engagement.
In his current role as a senior engineering manager at Red Hat, Deepak is focused on building digital experiences for Red Hat Partners. He works closely with his team to identify opportunities for process optimization, automation, and product design improvements.
Sessions
In early 2024, a finance officer at the global engineering firm Arup wired $25 million to scammers after joining what appeared to be a legitimate video call with his CFO and colleagues. None of them were real. Every face and voice on that call was AI-generated, a deepfake ensemble so convincing it bypassed every human instinct for suspicion.
(Source: https://www.theguardian.com/world/2024/feb/05/hong-kong-company-deepfake-video-conference-call-scam)
The same AI tools that write your code and documentation and summarise your meetings and Slack threads are now being used to deceive, clone, and exploit at industrial scale. This isn’t just another evolution of hacking; it’s the democratization of deception.
Generative AI has reshaped the threat landscape: automating phishing, creating synthetic identities, generating polymorphic malware, and scaling disinformation campaigns.
Reports from Google’s Threat Intelligence Group, Microsoft’s Digital Defense Report, and Mandiant all confirm this trend: AI has lowered the barrier to entry for sophisticated deception, enabling faster, smarter, and more targeted attacks.
This session exposes the anatomy of AI-enabled threat activity, tracing how criminal and state actors are using LLMs and diffusion models in the wild. It also outlines a defense playbook: behavioral anomaly detection, AI-aware phishing simulations, and governance models for responsible internal AI use.
Because the future of cybersecurity won’t be won by whoever builds the bigger model; it’ll be won by those who recognise deception faster than machines can fabricate it.
Key Takeaways
Understand how AI is transforming the tactics of modern cyber adversaries.
Learn to detect linguistic, behavioural, and media-based AI deception.
Apply a practical mitigation framework combining AI governance, behavioral analytics, and security awareness.
The AI revolution is creating a new problem: AI silos. Your Account team has a chatbot. Your Product team has a chatbot. Support has its own. While departments develop their own specialized agents, customers are left with a fragmented and confusing experience. They don't want to hunt for the right interface. They demand a single, unified conversation.
This is the next step for developers: moving from simple prompt engineering to complex agent orchestration. How do you build a "super-agent" that understands a user's intent and seamlessly routes queries to the right specialized sub-agent?
In this hands-on workshop, using Google's Agent Development Kit (ADK), you will architect and build a multi-agent system that provides a unified customer experience. You will also learn how we overcame the following challenges in building such a system:
Intent & Routing: How does the main agent know which sub-agent to talk to?
Context Sharing: How do you pass information and state between agents without losing the thread?
Safety & Evaluation: How do you ensure the entire system is reliable and safe?
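To make the intent-routing and context-sharing challenges above concrete, here is a minimal, framework-agnostic Python sketch of a "super-agent" that routes each query to a specialized sub-agent and carries shared state between turns. All names here (SubAgent, SuperAgent, the keyword-overlap router) are illustrative assumptions for this sketch, not the Google ADK API; in ADK, routing and state handling are done through its own agent and session abstractions.

```python
# Illustrative sketch only: a super-agent that routes queries to sub-agents.
# Names and keyword routing are assumptions, not the Google ADK API.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class SubAgent:
    name: str
    keywords: set[str]                  # crude intent signal for this sketch
    handle: Callable[[str, dict], str]  # (query, shared_state) -> reply

@dataclass
class SuperAgent:
    sub_agents: list[SubAgent]
    state: dict = field(default_factory=dict)  # context shared across turns

    def route(self, query: str) -> SubAgent:
        # Pick the sub-agent whose keywords best overlap the query.
        # A real system would use an LLM-based intent classifier instead.
        words = set(query.lower().split())
        return max(self.sub_agents, key=lambda a: len(a.keywords & words))

    def ask(self, query: str) -> str:
        agent = self.route(query)
        self.state["last_agent"] = agent.name  # keep the thread between turns
        return agent.handle(query, self.state)

support = SubAgent("support", {"error", "broken", "help"},
                   lambda q, s: f"[support] ticket opened for: {q}")
billing = SubAgent("billing", {"invoice", "refund", "charge"},
                   lambda q, s: f"[billing] reviewing charge: {q}")

bot = SuperAgent([support, billing])
reply = bot.ask("I need a refund for a double charge")  # routed to billing
```

The same shape generalizes: swap the keyword router for an LLM classifier and the `state` dict for a session store, and the super-agent becomes the single conversational entry point the workshop builds with ADK.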
Key Takeaways for Attendees:
Understand why multi-agent systems are the future of AI.
Learn how to architect a multi-agent AI that can execute tasks reliably.
Get hands-on experience with the Google ADK to create and orchestrate a team of specialized agents.
Master effective context sharing and state management between agents.
Hear the findings from our multi-agent proof of concept (POC).