Behavior-Centered Development - a different approach for collaborating on software development
What if the real source of truth in software development wasn’t the code — but the intent behind it? I’m going to explain my thoughts on behavior-centered development, and how we get there.
For a lot of teams, thinking about changes or improvements starts in a design doc - this could be in Notion, Confluence, Google Docs, or something else. Discussion happens in inline comments, a Slack thread, or a meeting. Eventually, a design is created and signed off - now what?
Someone - typically the product owner, or whoever plays that role - breaks the design doc into tickets, and starts to outline acceptance criteria. These tickets go into another system - JIRA, Linear, GitHub Issues - where the development team can start to get involved on the implementation side.
Of course, as the process continues, requirements change or get clearer, and the issues get more refined. Sometimes those changes get tracked back into the original design document, but there's no obvious linkage between the two.
Now, we have a feature design broken down into tickets, and it’s up to the development team to implement the working code as well as the tests. With the pre-LLM development workflow, there was often no connection whatsoever between the tickets and the code - now, with LLMs, we could use the text description in the ticket to help us with the code.
But what does that workflow look like? Are we copying and pasting tickets into an LLM, along with an existing source code file, and then taking a look at the results? What happens when we modify the prompt to get it to address a concern or make a fix - does that get captured in the ticket?
Furthermore, what happens when the next developer needs to work on a similar ticket? If the acceptance criteria aren't fully captured in unit or integration tests, we stand a real chance of creating a regression that makes it to the end user.
Last, we get to the pull request stage, where other developers comment on the code changes. But reviewers don't always have the context to judge whether the changes meet the acceptance criteria or the security considerations, so the focus drifts to code style, nitpicks, and maintainability. This can bring out the worst tendencies in a development team, whether that's rubber-stamping ("Let's get this merged - LGTM") or roadblocking a pull request under a pile of comments that are well-intentioned but not critical.
Pivoting to behaviors
What if we took all of the work we did developing acceptance criteria and used that as the source of truth? We'd make sure the code never drifts too far out of sync with the desired behavior - and if it does, we'd reflect that with an updated behavior.
What if we stored it in source control, just like the other artifacts we need to run builds?
We start to make our intentions crystal clear for development - when we make a pull request, we change the code, tests and the behavior, so that there is a built-in record of our changes in plain text.
For legacy applications, where the original team may have moved on, we now have a clear record of our decisions and motivations - no digging through unmaintained wiki pages or chat history, and no guessing. It's all in source control, next to our code files.
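For example, a component and its behavior might sit side by side in the repo. The `LoginForm.zap.toml` naming here is my own illustrative assumption - check ZapCircle's docs for its exact convention:

```text
src/components/
├── LoginForm.tsx        # the component
├── LoginForm.test.tsx   # its unit tests
└── LoginForm.zap.toml   # the behavior, versioned alongside both
```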
An example behavior
What might a behavior look like? Here’s an example ZapCircle behavior:
````toml
name = "LoginForm"
behavior = """
# LoginForm Behavior Specification

## Overview
The `LoginForm` component collects user credentials (email and password) and submits them to the `/login` API endpoint. It displays error messages for invalid inputs and server-side errors and redirects to a dashboard upon successful login.

## Input Fields
- **Email Field**
  - Type: Email
  - Placeholder: "Enter your email"
  - Validations:
    - Required: "Email is required"
    - Format: "Please enter a valid email address"
- **Password Field**
  - Type: Password
  - Placeholder: "Enter your password"
  - Validations:
    - Required: "Password is required"
    - Minimum Length: 8 characters

## Buttons
- **Login Button**
  - Label: "Login"
  - Disabled State:
    - Disabled if either email or password is invalid.

## API Integration
- **POST /login**
  - Payload:
    ```json
    {
      "email": "user@example.com",
      "password": "password123"
    }
    ```
  - Success Response:
    - HTTP Status: 200
    - Action: Redirect to `/dashboard`
  - Error Response:
    - HTTP Status: 401
      - Action: Display "Invalid email or password."
    - HTTP Status: 500
      - Action: Display "Something went wrong. Please try again later."

## UI States
- **Loading State**
  - Display a spinner on the login button.
  - Disable all inputs and buttons.
- **Error State**
  - Display error messages near the corresponding input fields or as a global error message.

## Accessibility
- Ensure all inputs and buttons are accessible via keyboard navigation.
- Associate error messages with inputs using `aria-describedby`.

## Events
- **Form Submission**
  - Triggered on button click or `Enter` key press.
  - Validates inputs and sends a POST request to `/login`.

## Testing
- Unit Tests:
  - Validate input fields render correctly.
  - Ensure validations are triggered on form submission.
  - Verify API calls with correct payload.
  - Test handling of API responses.
- Integration Tests:
  - Simulate user interaction and check end-to-end flow.
"""
````
The intent of this behavior file is to capture everything needed to completely specify this login form.
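As a taste of how the Testing section can be pinned down in code, here's a minimal unit test sketch in TypeScript, using Vitest and React Testing Library. The `./LoginForm` import path and the exact shape of the fetch call are my assumptions for illustration, not ZapCircle output:

```tsx
import { render, screen, fireEvent, waitFor } from "@testing-library/react";
import { describe, expect, it, vi } from "vitest";
// Hypothetical import path - adjust to wherever LoginForm lives in your project.
import { LoginForm } from "./LoginForm";

describe("LoginForm behavior: API integration", () => {
  it("POSTs the credentials to /login on submit", async () => {
    // Stub fetch so we can inspect the request the form makes.
    const fetchMock = vi.fn().mockResolvedValue({ ok: true, status: 200 });
    vi.stubGlobal("fetch", fetchMock);

    render(<LoginForm />);

    fireEvent.change(screen.getByPlaceholderText("Enter your email"), {
      target: { value: "user@example.com" },
    });
    fireEvent.change(screen.getByPlaceholderText("Enter your password"), {
      target: { value: "password123" },
    });
    fireEvent.click(screen.getByRole("button", { name: "Login" }));

    // "Verify API calls with correct payload" from the Testing section.
    await waitFor(() =>
      expect(fetchMock).toHaveBeenCalledWith(
        "/login",
        expect.objectContaining({
          method: "POST",
          body: JSON.stringify({
            email: "user@example.com",
            password: "password123",
          }),
        })
      )
    );
  });
});
```

Because each assertion maps back to a line in the behavior file, the next developer can trace a failing test straight to the acceptance criterion it protects.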
Trying this workflow now
Can we start to adopt this workflow now? Yes, with tools like my project ZapCircle.
For instance:
- Discussing design in a document -> Remains the same
- Writing an issue with acceptance criteria -> Remains the same, with the changes reflected in a ZapCircle behavior file
- Implementing the change -> Use `npx zapcircle generate` to create the initial version of the source code (one possible result is sketched below)
- Writing unit tests -> Use `npx zapcircle generateTests` to create the unit tests
- Conducting a code review locally, before the pull request -> Use `npx zapcircle review` to run a code review from the command line
- Getting additional feedback on a code review -> Use `npx zapcircle review --github` to add feedback to a pull request
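To make the generate step concrete, here's the rough shape of a LoginForm that `npx zapcircle generate` might produce from the behavior file above - illustrative only, since the actual output depends on your LLM provider and prompt:

```tsx
import React, { useState } from "react";

// Illustrative only: one plausible shape for code generated from the
// LoginForm behavior file. Real output varies by model and prompt.
export function LoginForm() {
  const [email, setEmail] = useState("");
  const [password, setPassword] = useState("");
  const [error, setError] = useState<string | null>(null);
  const [loading, setLoading] = useState(false);

  // Mirror the behavior file's validations.
  const emailValid = /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email);
  const passwordValid = password.length >= 8;

  const handleSubmit = async (e: React.FormEvent<HTMLFormElement>) => {
    e.preventDefault();
    setError(null);
    setLoading(true);
    try {
      const response = await fetch("/login", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ email, password }),
      });
      if (response.status === 200) {
        window.location.assign("/dashboard"); // Redirect on success.
      } else if (response.status === 401) {
        setError("Invalid email or password.");
      } else {
        setError("Something went wrong. Please try again later.");
      }
    } finally {
      setLoading(false);
    }
  };

  return (
    <form onSubmit={handleSubmit}>
      <input
        type="email"
        placeholder="Enter your email"
        value={email}
        disabled={loading}
        onChange={(e) => setEmail(e.target.value)}
      />
      <input
        type="password"
        placeholder="Enter your password"
        value={password}
        disabled={loading}
        onChange={(e) => setPassword(e.target.value)}
      />
      {error && <p role="alert">{error}</p>}
      {/* Disabled if either field is invalid, per the behavior file. */}
      <button type="submit" disabled={loading || !emailValid || !passwordValid}>
        Login
      </button>
    </form>
  );
}
```

The important part isn't this exact code - it's that the component, its tests, and the behavior file all change together in one pull request, so any drift between them is visible in the diff.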
Moving to an agentic AI future
I was at Meta’s LlamaCon last month (April 30, 2025). I got some great feedback on ZapCircle, but also got an idea of what others were up to.
At LlamaCon, Meta CEO Mark Zuckerberg and Microsoft CEO Satya Nadella had a great discussion of agentic workflows, including how they apply to GitHub Copilot - you can listen to Satya explain it here:
[Satya Nadella on agentic workflows at LlamaCon 2025, starting at 40:17]
But what does this workflow look like in the future?
- Are we using the same issue trackers we always use?
- Who's writing the code? Is it developers, or is it an LLM?
- Do sprints still exist in a world where AI agents can run overnight?
- Who prioritizes what the agents work on?
I think we're still in the process of automating the existing software development lifecycle - using LLMs for code completion, and getting some code generation without much context. Existing tools are adding AI features, but not really taking advantage of automation.
I don't think we're all moving to 100% agentic software development anytime soon. I do think we'll see some teams experiment with it, and then we'll have better answers about what these new workflows might look like.
Is ZapCircle the answer?
I’m putting something out there because I want to see if it resonates - it’s an experiment.
If the idea of behavior driving code - and vice versa - resonates with you, check out my open source project ZapCircle. You can use it with your favorite LLM provider.
If you are building something similar, I’d love to hear about it, or to collaborate.