BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//pretalx//pretalx.devconf.info//devconf-cz-2025//speaker//DLHZGF
BEGIN:VTIMEZONE
TZID:CET
BEGIN:STANDARD
DTSTART:20001029T030000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10
TZNAME:CET
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20000326T020000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=3
TZNAME:CEST
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:pretalx-devconf-cz-2025-NLWP8Z@pretalx.devconf.info
DTSTART;TZID=CET:20250612T153000
DTEND;TZID=CET:20250612T160500
DESCRIPTION:For many companies\, Performance Engineering is an aftertho
 ught; at Red Hat\, however\, it is a key contributor to product readines
 s. Developers across the OpenShift product portfolio can easily execute
  Performance and Scale workloads against the PRs they introduce\, befor
 e a feature merges downstream. In this talk we will cover the tooling\,
  strategies\, and details of how the OpenShift Performance and Scale Te
 am has shifted its testing fully left.
DTSTAMP:20260310T061314Z
LOCATION:D0206 (capacity 154)
SUMMARY:Shifting Performance Engineering Left - José Castillo Lema\, Raul 
 Sevilla
URL:https://pretalx.devconf.info/devconf-cz-2025/talk/NLWP8Z/
END:VEVENT
BEGIN:VEVENT
UID:pretalx-devconf-cz-2025-N9CLR3@pretalx.devconf.info
DTSTART;TZID=CET:20250613T125500
DTEND;TZID=CET:20250613T131000
DESCRIPTION:With the development of next-generation IoT services\, in pa
 rticular services such as mobile IoT or an Internet of Nanothings (IoNT
 )\, novel environments must be researched to meet the demands of this e
 ver-growing landscape. These new environments are expected to be highly
  mobile\, involving small-cell management and large-scale deployments.
  Edge computing and Decentralised Edge Cloud computing architectures pl
 ay a key role in the Edge-Cloud continuum\, as there is also an increas
 e in service decentralisation.\n\nCODECO\, an EU-funded open-source res
 earch project\, is a Kubernetes plug-in aimed at optimizing application
  deployment onto edge devices through cognitive and cross-layer orchest
 ration. By leveraging AI-driven decision-making\, CODECO significantly
  improves deployment efficiency across the edge. This is achieved thro
 ugh Automated Configuration Management (ACM) coupled with Open Cluster
  Manager (OCM)\, which utilizes AI-generated recommendations from the
  Privacy-preserving Decentralised Learning and Context-awareness (PDLC
 ) component. These recommendations are based on real-time resource met
 rics collected from available edge clusters\, guiding the optimal depl
 oyment of applications to the most suitable edge cluster. Additionally
 \, ACM offers a user-friendly interface\, allowing users to easily dep
 loy\, monitor\, and manage their applications.\n\nOur dedication to th
 e Innovation and Research Community Engagement Programme encourages co
 llaboration among developers\, SMEs\, and research communities. This t
 alk targets stakeholders keen on advancing AI-driven Edge-Cloud orches
 tration. Attendees will have the opportunity to understand CODECO's pr
 inciples\, objectives\, key research contributions\, open-source toolk
 its\, AI prediction mechanisms\, and training resources through a use
  case focused on Decentralised\, wireless AGV Control for Flexible Fac
 tories\, including a demonstration of how an application is deployed t
 o the optimal performance locations. Finally\, we'll discuss how the p
 roject and community can identify future improvements and contribute t
 o the end goal of redefining the Edge-Cloud continuum.
DTSTAMP:20260310T061314Z
LOCATION:A113 (capacity 64)
SUMMARY:CODECO: AI-Driven Orchestration for Multi-Cluster Edge Deployment -
 Dean Kelly\, Alka Nixon\, José Castillo Lema
URL:https://pretalx.devconf.info/devconf-cz-2025/talk/N9CLR3/
END:VEVENT
END:VCALENDAR
