Many AI practitioners are using Python every day and aren’t yet tapping into what makes it great. Python is open by default. It is road-mapped and developed in public. Anyone can contribute a library or tool to PyPI to share what they’ve learned with others.
Participation in a massive, online, global community makes everything you do bigger and better. Open collaboration is the gift that keeps on giving – through inspiration, through time saved, and through a network of like-minded people who want to build the future with you. But participation is much more than using code. In this talk, I’ll cover all the ways that you can get involved and talk about why you’d want to.
“How do I learn Python?” is a very common question, and it can be complicated to answer.
This talk begins with practical advice for people new to programming: how to approach learning Python, how to practice effectively, how to read documentation, how to ask good questions, and how to choose resources without becoming overwhelmed. Rather than focusing on syntax alone, I’ll emphasise habits and learning strategies that help new developers make consistent progress.
But Python is not a static thing. The techniques and technologies around the language – and even the language itself – are constantly changing. Even the most experienced Python developers are always learning Python, and sometimes they have problems of their own! I’ll discuss some of the problems I’ve encountered as an experienced developer, and how (I think) I’ve overcome them.
Learning Python, it turns out, is not something you finish – it is something you practice for your entire career.
Attendees will leave with:
AI coding agents have gotten big, and Python is the language of AI. But what does the Age of AI mean for the Python open source community? This talk covers the initial reaction and how to put AI to work productively, then turns to how AI can hurt – or help – the Python community. See, emdashes!
Polars is a high-performance query engine for DataFrame workloads, written in Rust. Over the last two years, the Polars team has built a novel streaming engine that is becoming the default backbone for all lazy processing. As the optimizer increasingly rewrites and transforms query plans, the physical execution can diverge significantly from what users originally wrote, making profiling and query insights more important than ever. This talk will explore how Polars tackles that challenge and gives users visibility into what their queries actually do.
The strength of the Python ecosystem has always come from its people, but building a community where everyone can participate doesn’t happen by accident.
In this panel, prominent PyLadies organizers will explore why PyLadies plays a vital role in the Python community and how its work helps Python thrive. They will discuss how PyLadies supports the next generation of Pythonistas by creating spaces where people can develop not only technical skills, but also confidence, leadership, and a sense of belonging. From open source contributions to public speaking and community organizing, PyLadies helps people grow into active, visible members of the ecosystem.
Drawing on their experiences leading local chapters, organizing global initiatives, and supporting first-time speakers and leaders, the panelists will reflect on how this work ripples outward into conferences, Python projects, and workplaces, broadening participation and strengthening the Python community as a whole.
Python's dynamic nature isn't a bug – it's a feature. Django leveraged this from the start, building elegant APIs that would be impossible in a rigidly typed system. Duck typing, runtime introspection, and flexible interfaces gave us the expressiveness we grew up with.
But sometimes we want more. Type safety at API boundaries. Auto-completion that actually works. Data classes instead of ORM objects. The confidence that comes with catching errors before runtime.
The answer isn't to abandon Python's dynamic core – it's to build static islands where they help. Incremental typing lets us wrap specific layers (like the ORM) in type-safe interfaces while leaving Django's liquid core untouched.
This talk explores when, why, and how to add these type-safe layers, and demonstrates Mantle – a library of utilities for typing around Django's liquid core. We'll keep the Python you love, with those little extras when you need them.
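A hypothetical sketch of one such "static island", not Mantle's actual API: a frozen dataclass at an API boundary that validates a duck-typed source (here a plain dict standing in for an ORM row), so everything downstream is fully typed. The names `UserRow` and `to_user_row` are illustrative.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class UserRow:
    """Typed, immutable view of a user record at the API boundary."""
    id: int
    email: str


def to_user_row(raw: dict) -> UserRow:
    # Coerce and validate once, at the edge, so type checkers and
    # auto-completion can trust every consumer of UserRow.
    return UserRow(id=int(raw["id"]), email=str(raw["email"]))


row = to_user_row({"id": "7", "email": "ada@example.com"})
print(row)  # UserRow(id=7, email='ada@example.com')
```

The dynamic core (the dict, or the ORM) stays untouched; only the boundary gains types.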
At 20 years old, Django suffers from a marketing problem. Many believe it’s slow, poor for APIs, or stuck in maintenance mode. This talk aims to debunk those myths and celebrate the reality: Django is a fast, modern, and actively developed framework!
Kernighan's law, stated by the legendary programmer Brian Kernighan, observes that "everyone knows that debugging is twice as hard as writing a program in the first place. So if you’re as clever as you can be when you write it, how will you ever debug it?"
The original intention of that statement was to argue for simple code. If you write fancy, clever code, then you are going to need to be extra clever when you need to fix it.
Now imagine debugging someone else's fancy, clever code. Even harder, right?
Despite all the hype and promise of LLM-based coding tools, the code they produce is often questionable. It's bad enough that a whole new profession has sprung up: the vibe code cleanup specialist. If Kernighan's law holds true, then what does it mean for these specialists? Vibed code is full of technical debt – puzzling architectural choices, convoluted algorithms, absurd tests (or no tests at all), repetitive, tightly coupled code... technical debt as far as the eye can see.
All that being said, I kinda love working with LLMs when writing code. And there are ways to guide these tools to get them to be very helpful. A lot comes down to careful prompting, strategic guardrails, and healthy skepticism.
I've been doing a lot of Django work lately, and Claude Code is my (mostly) trusty sidekick.
In this talk, I'll give you a peek into how I use it. This is a zero-hype talk – I'll share techniques I actually rely on and pitfalls I've learned to watch out for.
Details coming soon.
AI agents have become increasingly good at generating code. Developers who know how to use agentic tools as they program can increase their productivity significantly. In this talk we’ll walk through how to use GitHub Copilot for agentic coding in PyCharm, on GitHub.com, and with Copilot CLI in the terminal. We’ll look at how to integrate these tools in ways that preserve code quality and give you control over how much or how little AI to use in the process. We’ll see some live demos and practical ways to pull in data from M365 to create interesting new workflows with code.
Building with open-source AI models has a lot of benefits. It ensures privacy, gives the application owner control and transparency over the model lifecycle, and cuts costs at scale. In this talk, Merve will go through the state of open AI, workflows, tooling, and more for building with open models.
Building effective AI agents requires more than just connecting an LLM to tools – it demands thoughtful architecture around specific use cases. This talk introduces LlamaAgents, an open-source Python framework for orchestrating multi-agent systems that solve real problems. We'll explore how to build document-centric agents that bridge enterprise document processing (parsing, extraction, and chunking via LlamaCloud) with flexible OSS orchestration, covering practical patterns for retrieval, reasoning, and action across your stack. Learn how to move beyond generic chatbots to agents designed for compliance, research, support, and other targeted workflows.
Jupyter notebooks enable scientists to combine narrative explanation with executable code, finally realizing Knuth's vision of literate programming. AI tools now amplify this capability, allowing researchers to express problems conversationally, refine solutions through dialogue, and share documentation that bridges implementation with understanding.
This talk explores how notebooks and AI strengthen three essential pillars of scientific work: conversation, computation, and community.