Friday, August 29, 2025

Assisted Development in Practice: My Journey with Vibe Coding

Earlier this month, I experimented with what is called “vibe coding”: a mix of assisted development using AI copilots, conversational agents, and human-in-the-loop engineering. What started as a side project quickly turned into a deep dive into how AI can reshape the way we build software.

The Project Setup

I wanted to create a modern, production-ready stack that could run both in development and in air-gapped edge environments:

  • Backend: ASP.NET API connected to a PostgreSQL database
  • Frontend: React with Next.js and Material-UI
  • Authentication: Keycloak for secure deployments without relying on external identity providers
  • Local Dev: Everything packaged with Docker Compose
  • CI/CD: Azure DevOps build pipelines that run unit tests and push Docker images to Azure Container Registry
  • Production: K3s cluster on Ubuntu Server VMs, with ingress routing configured:
    • / → web frontend
    • /api → backend services
    • /keycloak → authentication
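
The path-based routing above could be expressed as a standard Kubernetes Ingress resource (K3s ships with Traefik as its default ingress controller). Here is a minimal sketch; the service names (`web-frontend`, `backend-api`, `keycloak`) and ports are assumptions for illustration, not the actual manifest:

```yaml
# Hypothetical Ingress for the K3s cluster described above.
# Service names and ports are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: web
spec:
  rules:
    - http:
        paths:
          - path: /api            # backend services
            pathType: Prefix
            backend:
              service:
                name: backend-api
                port:
                  number: 8080
          - path: /keycloak       # authentication
            pathType: Prefix
            backend:
              service:
                name: keycloak
                port:
                  number: 8080
          - path: /               # web frontend (catch-all, matched last)
            pathType: Prefix
            backend:
              service:
                name: web-frontend
                port:
                  number: 80
```

With `pathType: Prefix`, Traefik matches the longest prefix first, so `/api` and `/keycloak` take precedence over the catch-all `/` rule.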

The AI Toolchain

Here’s where the fun started. Instead of going solo, I built this project with AI assistants:

  • ChatGPT – my “first friend” for brainstorming. I used it to outline ideas, get resource links, and discuss tradeoffs, even for things I already knew, to see what new ideas might surface.
  • Copilot in Edge Browser – great for summarizing pages and chatting inline with references.
  • GitHub Copilot in Visual Studio & VS Code – for in-editor exploration, generating scaffolding, and testing variations; it also helped with commits and creating PRs.
  • Claude (via browser and MCP/VS Code) – this was the new toy. I found Claude much more useful once connected directly to my codebase inside VS Code. Having context changes everything.

Sometimes I even fed ChatGPT’s answers to Claude to see how it would respond, almost like running an architecture review board with multiple AI voices. It was fascinating to see agreement, disagreement, and nuance emerge between the tools.

Lessons Learned

  1. Context is key. Tools embedded in VS Code (Copilot, Claude) provided a completely different experience compared to using them in a browser.
  2. Your copilots will forget the context. Don't depend on them too heavily: as the context grows, they start losing focus or forget what you mentioned earlier, especially once you start diagnosing logs.
  3. You’re still the engineer. These tools don’t replace ownership. I had to fully understand the code. When something went wrong (like ingress misconfigurations), the AI wouldn’t magically fix it. And while they have the in-editor context in VS Code, they don't have the full context of where you are deploying the app. One wrong character in a Helm chart or a misplaced Keycloak realm setting can cost hours of debugging, as happened to me :)
  4. AI as PR reviewers – using Claude or Copilot sometimes felt less like “prompt engineering” and more like being a team lead reviewing PRs, or an “Enter Engineer” who reviews and presses Enter, or decides not to proceed and changes direction. You’re not just asking for outputs; you’re guiding, validating, and ensuring the code is merge-ready.

The Takeaway

AI-assisted development is not about outsourcing thinking. It’s about pair programming, where you are the one who decides what is good and what is bad. The responsibility stays with you, the engineer, but the productivity, breadth of exploration, and speed of iteration reach a new level.

This experience convinced me that the future of engineering leadership will not just be about writing code but about other things that we will need to explore, test, and evaluate. What I now like to call the “Enter Engineer” reminds me of The One Minute Manager book; it seems the future will introduce the “One Key Developer” :)