The promise of Spec-Driven Development
Imagine a future where a small team of three, maybe four people can build something magnificent in the digital realm: take a product person, add a couple of software engineers and someone with social media skills.

Specifications becoming executable: that is the promise of a relatively new "toolkit" (or framework) brought to life by the GitHub folks:
Spec Kit: https://github.com/github/spec-kit
Iteration one
A few days ago I gave it a try on a project I had previously implemented successfully; I just wanted to see what it could do.
The scope
The scope of the software project was to build a file pre-processor: a set of scripts that turns all sorts of digital files into a digital "vault", a collection of digital memories.
The files can include photos, videos, texts, ebooks, basically any sort of file someone wants to preserve.
The output should be an organized, deduplicated, renamed set of files, nicely converted into a "vault" that can be imported later into any serious Digital Asset Management System.
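To make the scope concrete, here is a minimal sketch of the kind of processing this implies: deduplicate by content hash, then rename by timestamp. This is my own illustration of the idea, not the code the toolkit generated:

```python
import hashlib
import shutil
from datetime import datetime, timezone
from pathlib import Path

def file_digest(path: Path) -> str:
    """Hash file contents so byte-identical duplicates collapse to one key."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build_vault(source: Path, vault: Path) -> None:
    """Copy unique files into the vault, renamed by timestamp and hash prefix."""
    seen: set[str] = set()
    vault.mkdir(parents=True, exist_ok=True)
    for path in sorted(p for p in source.rglob("*") if p.is_file()):
        digest = file_digest(path)
        if digest in seen:  # duplicate content: skip it
            continue
        seen.add(digest)
        stamp = datetime.fromtimestamp(path.stat().st_mtime, tz=timezone.utc)
        name = f"{stamp:%Y%m%d-%H%M%S}-{digest[:8]}{path.suffix.lower()}"
        shutil.copy2(path, vault / name)

if __name__ == "__main__":
    build_vault(Path("inbox"), Path("vault"))
```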
How did it go?
First and foremost, you need to lay down the foundations of what's called the constitution: the project's overarching governing principles and development guidelines.
This is nothing but a set of instructions and guidelines that the AI tool, the "agent", must always follow.
Next comes a simple, single document (you could just as well call it the README) that provides a very high-level specification of the desired system behavior and outcome.
Writing it took me about half an hour, and it came to no more than a single page of text on the screen.
...and that is pretty much all you need to get started.
Pretty scary!
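To make that concrete: a constitution is essentially a short markdown document of non-negotiable principles. A hypothetical excerpt for this project (my wording, not Spec Kit's template) could look like:

```
# Vault Pre-processor Constitution

## Core principles
- Source files are read-only: never modify or delete the originals.
- Test-first: every feature starts with failing tests (TDD).
- Small, composable scripts over one monolith.
- Output file names must be deterministic and collision-free.
```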
In this very first iteration I made the rookie mistake of specifying the entire "system" in one prompt, using the /specify command.
That turned out to be a huge mistake.
In the next steps, the framework has you follow up with /plan, /tasks and /implement, each building on whatever you fed into /specify, so the toolkit essentially ran away with my single prompt and fleshed out the entire system in one go...
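To illustrate the shape of the mistake (a hypothetical prompt, not my actual wording), a monolithic /specify call looks something like this:

```
/specify Build a complete file pre-processing system that scans a source
directory and deduplicates, renames, converts and organizes photos, videos,
texts and ebooks into a vault ready for import into a DAM system.
```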
Well, that did not go as anticipated.
I should add, though, that the AI did generate some usable code and the system did start processing files.
Iteration two
Learning from my first mistakes, I had a WAY better second round:
I used the same constitution, added more detail to the main README, and specified each component in a separate, individual /specify step.
As a result, I ended up with 13 distinct components that make up the entire system, neatly compartmentalized.
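For contrast with the one-shot prompt above, a component-scoped prompt (again hypothetical, my own illustration) stays narrow:

```
/specify Build the deduplication component: given a directory of input files,
detect byte-identical duplicates by content hash, keep exactly one copy of
each, and record every discarded duplicate in a log file.
```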
At this point I felt more comfortable issuing the /plan, /tasks and /implement steps for each individual feature.
The magic DID happen: the system, following Test-Driven Development (TDD) principles, first created the test cases, then the actual functionality, and then ran the contract, integration and unit tests.
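To give a flavor of what test-first looked like in practice, here is a minimal sketch of such a contract test, written against the hypothetical build_vault function from the earlier sketch (pytest style; the module path is an assumption of mine):

```python
# Contract: build_vault copies each unique file exactly once and never
# touches the source directory. Assumes the build_vault sketch from above.
from pathlib import Path

from vault.preprocess import build_vault  # hypothetical module layout

def test_duplicates_collapse_to_one_file(tmp_path: Path) -> None:
    source = tmp_path / "inbox"
    source.mkdir()
    (source / "a.txt").write_bytes(b"same bytes")
    (source / "b.txt").write_bytes(b"same bytes")  # byte-identical duplicate

    vault = tmp_path / "vault"
    build_vault(source, vault)

    assert len(list(vault.iterdir())) == 1  # only one copy survives
    assert {p.name for p in source.iterdir()} == {"a.txt", "b.txt"}  # input untouched
```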
Unexpected surprises, truly magical
My jaw dropped when I started seeing
- user stories
- acceptance criteria (here they're called "acceptance scenarios")
- functional requirements
- checklists
being generated as part of the more detailed specification, and ON TOP of that, a research step was initiated in which the framework researched and validated technical details and solutions for each individual function.
Something I did not see coming, and it was awesome.
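As far as I understand Spec Kit's conventions (treat the exact names as an assumption on my part), these artifacts land in a per-feature folder, roughly:

```
specs/
  001-deduplication/
    spec.md      # user stories, acceptance scenarios, functional requirements
    research.md  # findings from the research step
    plan.md      # output of /plan
    tasks.md     # output of /tasks
```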
Completing all 13 features and finally bringing together the entire code base took a few hours.
Key takeaways (so far)
What I missed throughout the entire process of letting the AI do the work:
- control: I did not feel that I was in the driver's seat
- small, atomic, testable iterations
- an "MVP as a first step" approach (expanding later on top of that)
...and what happened in the end?
I'm still finalizing the code base (with the help of an AI agent, of course). The results are promising; I'll follow up on the final results in another post.
Sorry for keeping you hanging there...
Have you tried it? How did it go?