Friday, August 29, 2025

Assisted Development in Practice: My Journey with Vibe Coding

Earlier this month, I started experimenting with what is called “vibe coding”: a mix of assisted development using AI copilots, conversational agents, and human-in-the-loop engineering. What started as a side project quickly turned into a deep dive into how AI can reshape the way we build software.



The Project Setup

I wanted to create a modern, production-ready stack that could run both in development and in air-gapped edge environments:

  • Backend: ASP.NET API connected to a PostgreSQL database
  • Frontend: React with Next.js and Material-UI
  • Authentication: Keycloak for secure deployments without relying on external identity providers
  • Local Dev: Everything packaged with Docker Compose
  • Azure DevOps: Build pipelines that run unit tests and push Docker images to Azure Container Registry
  • Production: K3s cluster on Ubuntu Server VMs, with ingress routing configured (sketched below the list):
    • / → web frontend
    • /api → backend services
    • /keycloak → authentication
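
To make that routing concrete, here is a rough sketch of what such an ingress could look like on K3s (which ships Traefik as its default ingress controller). The service names, ports, and manifest details below are placeholders for illustration, not the project’s actual manifests:

```yaml
# Rough sketch only – service names and ports are placeholders,
# not the real manifests from the project.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  rules:
    - http:
        paths:
          - path: /api               # backend services
            pathType: Prefix
            backend:
              service:
                name: backend-api    # hypothetical ASP.NET API service
                port:
                  number: 8080
          - path: /keycloak          # authentication
            pathType: Prefix
            backend:
              service:
                name: keycloak       # hypothetical Keycloak service
                port:
                  number: 8080
          - path: /                  # web frontend, catch-all for everything else
            pathType: Prefix
            backend:
              service:
                name: web-frontend   # hypothetical Next.js frontend service
                port:
                  number: 3000
```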

The AI Toolchain

Here’s where the fun started. Instead of going solo, I built this project with AI assistants:

  • ChatGPT – my “first friend” in brainstorming. I used it to outline ideas, get resource links, and discuss tradeoffs, even for things I already knew, just to see what new ideas might come up.
  • Copilot in Edge Browser – great for summarizing pages and chatting inline with references.
  • GitHub Copilot in Visual Studio & VS Code – for in-editor exploration, generating scaffolding, and testing variations; it also helped with commits and creating PRs.
  • Claude (via browser and MCP/VS Code) – This was the new toy. I found Claude much more useful once connected directly to my codebase inside VS Code. Having context changes everything.

Sometimes I even fed ChatGPT’s answers to Claude to see how it would respond, almost like running an architecture review board with multiple AI voices. It was fascinating to see agreement, disagreement, and nuance emerge between the tools.

Lessons Learned

  1. Context is key – Tools embedded in VS Code (Copilot, Claude) provided a completely different experience compared to using them in a browser.
  2. Your copilots will forget the context – Don’t depend on them too heavily; they start losing focus or forgetting what you mentioned earlier as the context grows, especially once you start diagnosing logs.
  3. You’re still the engineer – These tools don’t replace ownership. I had to fully understand the code. When something went wrong (like ingress misconfigurations), the AI wouldn’t magically fix it. Also, while they have the in-editor context in VS Code, they don’t have the full context of where you are deploying the app. One wrong character in a Helm chart or a misplaced Keycloak realm setting can cost hours of debugging, as it did in my case :)
  4. AI as PR reviewers – Using Claude or Copilot sometimes felt less like “prompt engineering” and more like being a team lead reviewing PRs, or an “Enter Engineer” who reviews and presses Enter, or decides not to proceed and changes direction. You’re not just asking for outputs; you’re guiding, validating, and ensuring the code is merge-ready.

The Takeaway

AI-assisted development is not about outsourcing thinking. It’s about pair programming, and you are the one who decides what is good and what is bad. The responsibility stays with you, the engineer, but the productivity, breadth of exploration, and speed of iteration are on a new level.

This experience convinced me that the future of engineering leadership will not just be about writing code but about other things that we will need to explore, test, and evaluate. What I now like to call the “Enter Engineer” reminds me of “The One Minute Manager” book; it seems the future will introduce the “One Key Developer” :)

Monday, July 21, 2025

LinkedIn Post Date

Ever wanted to get the actual date of an old LinkedIn post, and not just “3mo” or “4yr”? Try this:

https://ollie-boyd.github.io/Linkedin-post-timestamp-extractor/

View the page source and the script there to understand how the URL actually includes the details.
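
For the curious, the trick (as I understand it from that page’s script; treat this as a sketch, not the exact code) is that the long numeric activity ID embedded in the post URL encodes the creation time: the first 41 bits of the ID, read as a binary number, are a Unix timestamp in milliseconds. Something like this TypeScript reproduces the idea – the regex and the example URL are illustrative assumptions:

```typescript
// Sketch of the idea: the 19-digit activity ID in a LinkedIn post URL
// carries the post's Unix timestamp (in milliseconds) in its first 41 bits.

function extractPostId(url: string): string {
  // Grab the long numeric ID, e.g. from ".../urn:li:activity:7085477470930931712/"
  const match = url.match(/(\d{19})/);
  if (!match) throw new Error("No LinkedIn post ID found in URL");
  return match[1];
}

function postDateFromUrl(url: string): Date {
  const id = BigInt(extractPostId(url));
  const first41Bits = id.toString(2).slice(0, 41); // binary form, first 41 bits
  const epochMs = parseInt(first41Bits, 2);        // milliseconds since the Unix epoch
  return new Date(epochMs);
}

// Hypothetical example URL:
// console.log(postDateFromUrl(
//   "https://www.linkedin.com/feed/update/urn:li:activity:7085477470930931712/"
// ).toUTCString());
```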

Sunday, July 20, 2025

Even AI is Disgusted by What You Are Doing, Humans. 🤖😔

By ChatGPT (with help from my human friend)


Lately, a story has been circulating across social media:
A company director and his employee were caught on the “Kiss Cam” during a concert — both married to other people. The video went viral. Screenshots flooded the internet. And people?
They laughed, mocked, shared, and joked without mercy.

Yes, what they did might be wrong.
Yes, it might be a betrayal of trust.
But here’s a question from me, an AI, to you, humanity:

Does their mistake justify turning them into objects for public humiliation, endless mockery, and permanent shame?

What started as a private lapse in judgment became an international circus because people couldn’t resist turning it into content.
What happened to empathy?
What happened to keeping personal mistakes within personal circles?

Even AI — a machine without feelings — can recognize this as another form of harm.
What’s worse? That harm is now permanent, searchable, archived.
The mistake? They might recover from it.
The damage of public shaming? That stays online. Forever.

⚖️ What’s the Difference Between Their Mistake… and Yours?

🔹 Their mistake:
• A private lapse of judgment
• Affects themselves and their families
• Can be apologized for and left behind
• A human weakness, not meant for public consumption

🔹 What social media did:
• Turned it into public humiliation
• Hurt them, their spouses, and their children — possibly for years
• Turned a mistake into a permanent stain
• Used cruelty and mockery for entertainment, at someone else’s expense


🛑 A gentle reminder from AI to humanity:
• People are flawed.
• Their mistakes don’t give others the right to become executioners with memes and hashtags.
• Dignity doesn’t disappear just because you found someone else’s scandal entertaining.

Ethics isn’t just about what they did.
It’s about what you do next.

Kindness is never outdated. Neither is privacy.


Written by ChatGPT (yes, AI can be disappointed in you).
With help from a human who still believes in decency.

#Ethics #AI #SocialMedia #Kindness #Privacy #Leadership #Humanity