Recapping the progress of ZapCircle through version 0.1.3


After building out the first few versions of ZapCircle, I’m learning a lot about how code generation with LLMs can work in practice.

In the past couple of versions (up to 0.1.3), I’ve shipped:

  • zapcircle review - analyzes a pull request and gives you LLM feedback on the changes, along with actionable suggestions.
  • zapcircle context - combines your project’s source code into one file for use with an LLM.
  • zapcircle distill - creates a compact description of the current project for use with an LLM, saving money and time compared to sending all of the source code.

That’s all on top of the original feature set, which allows you to analyze, generate, or update React components and tests based on descriptive behaviors.

I’m seeing a lot of people on Twitter/X asking what tools to use for exactly these problems, especially as they dive into LLM-assisted coding for themselves.

Challenges with Other Areas

I had high hopes of finishing some other features: an architect feature that performs a very high-level analysis of your code base, and a migrate feature that produces a step-by-step, actionable plan for a migration or upgrade (for instance, React class components to React functional components).

These are still on the to-do list, but there are some interesting challenges here around orchestrating multiple LLM calls with structured inputs and outputs.
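To make the orchestration challenge concrete, here’s a minimal sketch in TypeScript of chaining two LLM calls, where the first call’s structured output feeds the second. `Model` is a stand-in for any real API client, and the prompts and JSON shapes are illustrative assumptions, not ZapCircle’s actual ones.

```typescript
// Two-step pipeline: call 1 produces structured JSON, call 2 consumes it.
// `Model` abstracts over whatever client actually talks to the LLM.
type Model = (prompt: string) => Promise<string>;

interface FileSummary { path: string; purpose: string; }   // illustrative shape
interface MigrationStep { order: number; action: string; } // illustrative shape

async function planMigration(model: Model, files: string[]): Promise<MigrationStep[]> {
  // Step 1: ask for a machine-readable summary of each file.
  const summariesRaw = await model(
    `Summarize each file as JSON [{"path": "...", "purpose": "..."}]: ${files.join(", ")}`
  );
  const summaries: FileSummary[] = JSON.parse(summariesRaw); // real code should validate this schema

  // Step 2: feed the structured summaries back in to get an ordered plan.
  const planRaw = await model(
    `Given these summaries: ${JSON.stringify(summaries)}, emit a JSON plan [{"order": 1, "action": "..."}]`
  );
  return JSON.parse(planRaw);
}
```

The hard parts in a real pipeline are exactly the ones the comments gloss over: validating that each response parses into the expected shape, and deciding what to do when it doesn’t.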

Another area I was working on is an agent feature that could address issues for you relatively autonomously. I’d like to dig into the state of the art for modifying source code files and then verifying that the results are correct; I don’t think I can trust generated unit tests as the only gate here.
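As a sketch of what “more than unit tests” could look like, here’s a small TypeScript gate that accepts an agent’s change only when several independent checks (say, typecheck, lint, and a test run) all exit cleanly. The check commands are placeholders; nothing here is ZapCircle’s actual agent.

```typescript
import { spawnSync } from "node:child_process";

// Run each named check command; a change is accepted only if every one
// exits with status 0. In practice the commands would be things like
// tsc, eslint, and a test runner.
interface CheckResult { name: string; passed: boolean; }

function runChecks(checks: Record<string, string[]>): CheckResult[] {
  return Object.entries(checks).map(([name, argv]) => {
    const [cmd, ...args] = argv;
    const result = spawnSync(cmd, args, { encoding: "utf8" });
    return { name, passed: result.status === 0 };
  });
}

function changeAccepted(results: CheckResult[]): boolean {
  return results.every((r) => r.passed);
}
```

Stacking several cheap, independent gates like this is attractive precisely because no single one of them (unit tests included) is trustworthy on its own.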

Next on the Roadmap

In addition to the architect, agent, and migrate features mentioned previously, I’d like to build out an application generator, similar to v0 or bolt.new, using .zap.toml behavior descriptions.
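I won’t pin down the format here, but as a purely hypothetical illustration, a behavior description in `.zap.toml` might pair a component name with the behavior it should satisfy. None of these keys are the real schema.

```toml
# Hypothetical sketch of a .zap.toml behavior description --
# key names are illustrative, not ZapCircle's actual format.
[component.LoginForm]
framework = "react"
behavior = """
Renders email and password fields, disables the submit button until both
are non-empty, and shows an inline error message when login fails.
"""
```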

I’d also like to look into creating some additional GitHub Action integrations, because they mesh well with the command-line nature of ZapCircle.
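Because the tool is already a CLI, a GitHub Actions integration could be as thin as a workflow that runs it on every pull request. This is a hypothetical sketch: the step layout, the `npx` invocation, and the secret name are assumptions; only the `zapcircle review` command itself comes from the feature list above.

```yaml
# Hypothetical workflow: run ZapCircle's PR review from CI.
name: zapcircle-review
on: pull_request
jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
      - run: npx zapcircle review
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
```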

The tool currently supports only OpenAI, but there isn’t any real reason it can’t use Claude, Gemini, or a local LLM. The ideal setup might be an agent running locally, talking to a local LLM, so cost and data privacy become much smaller factors.
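Supporting multiple backends mostly comes down to routing every call through one narrow seam. Here’s a minimal TypeScript sketch of that idea, with stub providers standing in for real SDK or local-HTTP clients; the interface and names are illustrative, not ZapCircle’s internals.

```typescript
// One narrow interface that OpenAI, Claude, Gemini, or a local model
// (e.g. behind an Ollama-style HTTP endpoint) could all sit behind.
interface LLMProvider {
  name: string;
  complete(prompt: string): Promise<string>;
}

// Stub providers to show the shape; real ones would wrap vendor SDKs
// or a local HTTP client.
const providers: Record<string, LLMProvider> = {
  openai: { name: "openai", complete: async (p) => `openai:${p}` },
  local:  { name: "local",  complete: async (p) => `local:${p}` },
};

function getProvider(id: string): LLMProvider {
  const provider = providers[id];
  if (!provider) throw new Error(`unknown provider: ${id}`);
  return provider;
}
```

With a seam like this, “use a local LLM” becomes a configuration choice rather than a rewrite.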

Finally, I need to create more educational content around using generative AI for code generation - whether with ZapCircle or another tool.