Baruch Sadogursky (@jbaruch) did Java before it had generics, DevOps before there was Docker, and DevRel before it had a name. He built DevRel at JFrog from a ten-person company through IPO, co-authored "Liquid Software" and "DevOps Tools for Java Developers," and is a Java Champion, Microsoft MVP, and CNCF Ambassador alumnus.
Today, he's obsessed with how AI agents actually write code. At Tessl, an AI agent enablement platform, Baruch focuses on context engineering, management, and sharing. Beyond sharing context with AI agents, Baruch also shares knowledge with developers through blog posts, meetups, and conferences like DevNexus, QCon, KubeCon, and Devoxx, mostly about why vibe coding doesn't scale.
How does behavioral psychology connect to coding? This talk explores how understanding and managing your mental energy can transform the way you work. Using accessible research, including Daniel Kahneman’s concepts of “fast” and “slow” thinking, we’ll dive into how different types of thinking impact decision-making and productivity. We’ll also discuss how to conserve mental fuel, so you have the focus and clarity needed for critical tasks—even at the end of a demanding day.
In addition to understanding how our minds work, we’ll talk about practical techniques for managing time and allocating mental resources effectively. This includes strategies to reduce context switching, avoid wasting mental energy on low-priority tasks, and stay focused on what really matters. By using your mental energy wisely, you’ll be able to maintain productivity and avoid burnout.
If you’re interested in learning how to apply behavioral psychology to your workflow, improve time management, and make smarter decisions with less effort, this talk is for you.
We’re in the middle of another leap in abstraction.
Like compilers, cloud, and containers before it, AI coding agents arrived with hype, fear, and broken assumptions. We gave the monkeys GPUs. Sometimes they output Shakespeare. Other times, they confidently ship code that compiles, passes tests, and still does the wrong thing.
The problem is simple: intent gets lost between what we mean, what we ask for, and what actually runs.
This talk delivers a practical model for software development with AI coding agents built on three equally essential ideas:
The Chasm: the divide between human intent and what is actually expressed to an AI coding agent.
The Context: the shared, explicit, and reusable knowledge an AI coding agent operates within. APIs, conventions, constraints, and domain rules replace guessing.
The Chain: the Intent Integrity Chain. A structured flow of prompt → spec → test → code, in which each stage produces a verifiable artifact, is validated externally, and is grounded in shared context.
Together, these form a system where intent survives implementation. Natural language becomes specifications. Specifications become tests. Tests become code. Every step is grounded in shared context instead of assumptions and is never validated by the same model that produced it. This approach is informed by recurring failure patterns observed in real AI agent development workflows: systems that passed tests and shipped successfully, yet still failed to meet intent.