Every engineering organization is asking the same question right now: how do we use AI to build software faster?
Most are answering it the wrong way — by simply dropping AI tools (Copilot, Claude, etc.) into their existing workflows. The result is often incremental at best: code gets written a bit quicker, but the overall development process barely changes. Sometimes it even creates the illusion of progress while hiding deeper problems.
The real opportunity is not "how do we add AI to our current process." It is "what new process do we need to build around AI to get reliable, production-quality output at dramatically higher speed?"
I just put that question to the test. Working alone with one AI collaborator, I built a complete working implementation of the SMB network protocol (the technology behind Windows file shares and NAS backups) in Kotlin — roughly 16,000 lines handling authentication, encryption, and real file operations against actual servers. All in just six days.
Traditional estimate for the same work: two senior engineers for six weeks.
But the speed was not the most important part. What mattered was how it became possible. The majority of the effort wasn't in the coding itself. It was in the upfront planning, the detailed specifications, the repeated iteration on the process, and the discipline to scrutinize the methodology rather than just the generated code.
The goal was to reach a point where every implementation task was so clearly defined — with exact file structures, class signatures, logic flows, edge cases, and references to the official specification — that the AI could execute with high confidence and minimal real-time scrutiny. When that planning is done right, the coding phase becomes almost mechanical.
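To make "so clearly defined the AI can execute mechanically" concrete, here is a toy sketch of what a task planned down to signatures might look like. Everything in it (the file path, the `SmbMessage` interface, the `NegotiateRequest` class, and the simplified byte layout) is invented for this illustration and is not the actual library's API or the real SMB wire format.

```kotlin
// Hypothetical illustration of a task planned down to signatures; the
// path, class names, and simplified wire layout are invented for this
// sketch, not taken from the actual library.

// Planned file: smb/message/NegotiateRequest.kt (illustrative path)
// Spec anchor: the protocol's negotiate-request section of [MS-SMB2]

interface SmbMessage {
    fun encode(): ByteArray
}

class NegotiateRequest(
    private val dialects: List<Int>,  // dialect revisions, e.g. 0x0202 for SMB 2.0.2
) : SmbMessage {
    override fun encode(): ByteArray {
        // Edge case fixed at planning time: an empty dialect list is invalid.
        require(dialects.isNotEmpty()) { "at least one dialect required" }
        // Simplified layout for illustration: 2-byte little-endian count,
        // then each dialect as 2 little-endian bytes (not the real format).
        val out = ByteArray(2 + dialects.size * 2)
        out[0] = (dialects.size and 0xFF).toByte()
        out[1] = (dialects.size ushr 8).toByte()
        dialects.forEachIndexed { i, d ->
            out[2 + i * 2] = (d and 0xFF).toByte()
            out[3 + i * 2] = (d ushr 8).toByte()
        }
        return out
    }
}
```

When the plan already fixes the file, the signature, the spec reference, and the edge cases, the only freedom left to the AI is filling in a body whose shape is fully determined.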
Domain Expertise Is the Real Bottleneck
Everyone has access to the same powerful models — Claude, GPT, or whichever frontier model you prefer. The models themselves are now a commodity. What remains scarce, and what actually determines success, is deep domain expertise.
I bring thirty years of software engineering experience building complex systems and protocols. That background gave me a clear sense of the architectural challenges these systems present, the typical failure modes to watch for, and the structural patterns that usually work. It also taught me that the fine details are rarely worth memorizing; the real value comes from knowing how to read and enforce a specification as ground truth.
Crucially, that experience let me recognize the right layers this kind of software needs — cleanly separating transport from session management, authentication state machines from encryption boundaries, and so on — and direct the AI to respect those separations from the very beginning. Getting the architecture layers right early was one of the keys to making the rest of the work feasible at speed.
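A minimal sketch of that layering, with invented names (nothing here is the actual library's API): the transport moves opaque bytes, the authentication state machine is an explicit type, and the session layer is the only place the two concerns meet.

```kotlin
// Illustrative layering sketch; all names are invented for this
// article, not taken from the actual library.

// Transport layer: moves opaque bytes and knows nothing about SMB.
interface Transport {
    fun send(bytes: ByteArray)
    fun receive(): ByteArray
}

// Authentication state machine: states are explicit, and nothing
// below the session layer ever sees them.
sealed class AuthState {
    object NotStarted : AuthState()
    data class InProgress(val challenge: ByteArray) : AuthState()
    data class Authenticated(val sessionKey: ByteArray) : AuthState()
}

// Session layer: owns authentication state and delegates raw I/O
// to the transport.
class Session(private val transport: Transport) {
    var authState: AuthState = AuthState.NotStarted
        private set

    fun completeAuth(sessionKey: ByteArray) {
        authState = AuthState.Authenticated(sessionKey)
    }

    fun sendMessage(payload: ByteArray) {
        check(authState is AuthState.Authenticated) { "not authenticated" }
        transport.send(payload)  // an encryption boundary would sit here
    }
}
```

The payoff of separations like these is that each layer can be specified, generated, and reviewed in isolation, which is exactly what a phase-gated AI workflow needs.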
That expertise is what made the six-day timeline possible. The AI was the powerful accelerator, but the domain knowledge was the fuel and the steering wheel.
Without it, AI produces code that looks correct, compiles cleanly, and passes its own tests — yet still fails in production. During this project, the AI generated 352 unit tests. Every single one passed. Yet when we connected to real Windows servers and NAS devices, eleven subtle protocol bugs surfaced. The tests and the code shared the same blind spots because they were both shaped by the same training data. Only someone who has shipped real systems and seen these exact kinds of failure modes could spot that the tests were verifying the wrong behavior.
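A toy illustration of a shared blind spot (hypothetical, not one of the eleven actual bugs): suppose the spec mandates a little-endian length field, but the generated encoder writes it big-endian. A generated round-trip test that decodes with the matching decoder still passes, because both sides share the same wrong assumption; only a real server exposes the bug.

```kotlin
// Hypothetical shared-blind-spot example; not from the actual project.
// The spec (in this scenario) says the 4-byte length is little-endian,
// but the generated encoder writes it big-endian:
fun encodeLength(len: Int): ByteArray = byteArrayOf(
    (len ushr 24).toByte(), (len ushr 16).toByte(),
    (len ushr 8).toByte(), len.toByte(),  // big-endian: wrong per spec
)

// The generated decoder makes the identical mistake, so any
// encode-then-decode test is blind to the bug.
fun decodeLength(b: ByteArray): Int =
    (b[0].toInt() and 0xFF shl 24) or (b[1].toInt() and 0xFF shl 16) or
    (b[2].toInt() and 0xFF shl 8) or (b[3].toInt() and 0xFF)
```

The round trip `decodeLength(encodeLength(n)) == n` holds for every `n`, so all such tests pass, yet the first real server that reads the field rejects the message.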
This is the uncomfortable truth most organizations still miss: AI does not democratize complex engineering. It dramatically accelerates the engineers who already know how to build these systems. A junior developer armed with Claude cannot reliably produce a production-grade protocol library. A senior engineer with deep domain expertise and a disciplined process can — in days instead of months.
The difference is not the model. It is the ability to direct the model effectively, catch its subtle hallucinations, and enforce ground truth at every critical decision point.
The Methodology: Building a Process That Actually Works with AI
Knowing that AI needs structure is easy. Building the right structure — one that consistently delivers production-quality output — is much harder.
Most teams still approach AI the wrong way: they treat it like a very fast coder and simply throw prompts at it, or hand it a specification and expect magic. That path usually feels productive at first but leads to hidden defects and disappointing results.
The real breakthrough comes from designing a disciplined workflow that uses multiple AI sessions in distinct roles, separated by clear phase gates. This enforces separation between planning and execution, provides independent review, and repeatedly grounds the work against external reality.
The exact practices we used remain proprietary, but the high-level principles that enabled the six-day result were:
- Rigorous upfront planning so that implementation could proceed with minimal real-time decision-making or correction.
- Independent review using fresh AI sessions that had no attachment to earlier design choices.
- Mandatory external validation — because tests and reviews generated within the AI environment inevitably share the same blind spots as the code itself.
- Treating the process itself as something that must evolve based on what actually breaks in practice.
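One way to picture the phase-gated shape of such a workflow (purely illustrative; the actual process described in this article is proprietary) is as a pipeline where each phase has an explicit gate, and a failed gate records a note instead of letting work flow downstream. The phase names and types below are invented for this sketch.

```kotlin
// Illustrative sketch only; the article's actual process is proprietary.
// Models the principle of distinct phases with explicit gates.
enum class Phase { PLAN, IMPLEMENT, REVIEW, EXTERNAL_VALIDATION, DONE }

data class Artifact(val phase: Phase, val notes: List<String>)

// Each gate is an independent check; in practice, REVIEW would be a
// fresh AI session with no attachment to earlier design choices.
typealias Gate = (Artifact) -> Boolean

fun advance(a: Artifact, gates: Map<Phase, Gate>): Artifact {
    val gate = gates[a.phase] ?: return a
    val next = when (a.phase) {
        Phase.PLAN -> Phase.IMPLEMENT
        Phase.IMPLEMENT -> Phase.REVIEW
        Phase.REVIEW -> Phase.EXTERNAL_VALIDATION
        Phase.EXTERNAL_VALIDATION -> Phase.DONE
        Phase.DONE -> Phase.DONE
    }
    // A failed gate never advances the work; it records why, so the
    // process itself can be refined based on what actually breaks.
    return if (gate(a)) a.copy(phase = next)
           else a.copy(notes = a.notes + "gate failed at ${a.phase}")
}
```

The design point is that EXTERNAL_VALIDATION is a phase of its own, not a property of REVIEW: review by another AI session and validation against real servers catch different classes of defects.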
This structured, multi-role, multi-phase approach is what allowed one experienced engineer to deliver a working, production-ready library in days rather than months. It is not about better prompting. It is about building a reliable system around the AI.
The good news is that this kind of methodology can be adapted to many domains — not just network protocols — once you have the experience to design it correctly.
What This Means for Your Business
The economic implications of this kind of AI-assisted engineering are significant and immediate.
Most engineering organizations maintain a long backlog of projects that were never started because the traditional cost was simply too high — custom integrations, internal tools, specialized libraries, or domain-specific capabilities that would be valuable but never justified two or three engineer-months of effort.
When a complex piece of infrastructure can be built in roughly one engineer-week instead of twelve engineer-weeks, the entire build-versus-buy calculation changes. Projects that were previously deprioritized suddenly become worth doing. The "someday" list turns into "this quarter."
But here is the critical point: the speed is not available to every team that starts using AI. The advantage does not come from having access to better models — everyone has the same ones. It comes from having the domain expertise to direct the work effectively and the disciplined methodology to turn AI output into production-quality software.
Without that combination, AI often just lets you make plausible mistakes faster. The competitive edge belongs to organizations that can pair deep engineering experience with a structured process designed specifically around AI's strengths and weaknesses.
This approach is not limited to network protocols. The same pattern — heavy upfront planning, role-separated AI sessions, independent review, external validation, and continuous process refinement — can be applied to many areas with clear specifications: data pipelines, internal tooling, custom integrations, compliance frameworks, and more. Domains that feel less structured can often be made suitable with additional upfront work to clarify requirements.
At Coconut Tree Software, this is exactly how we operate. We don't just use AI tools. We design and run AI-assisted development methodologies that let experienced teams deliver high-quality systems at speeds that fundamentally shift what is economically viable to build.