Saturday, May 09, 2026

Assisted Development, Nine Months Later: More Arms, More Reach



Back in August I wrote about my first real run at Assisted Development in Practice, the "vibe coding" experiment, the stack I built, and the lessons I picked up along the way. Nine months later, I am still doing this every day, but the way I do it has changed enough that it deserves its own post.

Before I get into the new lessons, one thing I forgot to mention in the first post and wish I had said up front: buy a wide monitor. I now have two Dell 34" ultrawides, one at home and one in the office, each sitting beside my usual 27" monitor, and I would not go back. When you are pair-programming with an AI assistant, you are constantly looking at three things at once: the editor, the chat thread, and whatever the agent is showing you (a diff, a browser tab, a log, a query result). On a single smaller screen you spend half your time switching windows. On an ultrawide you stop fighting the layout and start thinking.

What I did not expect is how my eyes work on it now. With AI development I am reading far more than I used to. My focus tends to sit in one zone of the screen while I plan, then shift quickly across to another panel to check, verify, and confirm what the tool is actually building. The screen got wider, but the work got more attentive, not less. I am still the one validating every step.


Context still wins, but now I can hand it more

The biggest takeaway from the first post was that context is key. I still believe that, but I have learned the second half of the same idea: the more arms you give the tool, the more context it can gather on its own.

In August I was mostly typing the ideas and connecting the dots myself, while the tool had access to the code locally, hoping the model still remembered the conversation from twenty messages ago. Today my agent reaches out and pulls the context itself, because I have wired it up to the systems where the context actually lives. That shift, from "I feed the model" to "the model fetches what it needs", has been the single biggest jump in usefulness for me.

The mechanism for this is the Model Context Protocol (MCP), basically a standard way for an agent to call into external tools and services. Pair MCPs with custom Skills (small, reusable instructions that teach the agent how to do a specific job in your project), and you start to feel like you are working with a teammate who actually knows your codebase, your tickets, and the health of the system you are building.

What "more arms" looks like in practice

Here is the toolbox I rely on now, and what each one unlocks:

* Azure DevOps MCP: the agent reads the ticket like I do.

It pulls the work item, reads every comment, finds similar past issues, opens linked PRs, and follows the change history back to the commit that introduced a regression. When I am triaging a customer ticket, I no longer paste the description into chat. I say "investigate issue number 123" and the agent goes and reads the ticket, finds that an earlier fix touched the same plugin, pulls the diff of that PR, and tells me where to look first. That is hours of detective work compressed into a minute. Is it always correct? No, but it is a real accelerator.

* Chrome DevTools MCP: reproduction without the squint.

I used to type out repro steps from memory and hope I got them right. Now the agent drives a real browser, reproduces the bug, and captures the console errors, the failing network call, and a snapshot of the Document Object Model (DOM). When the bug is confirmed, the same evidence becomes the seed for a Playwright regression test: same selectors, same flow, same assertions. The repro and the test are no longer two separate jobs. I can also ask it to block or throttle a network call, to test scenarios that cannot be reproduced manually.

* Application Insights and log queries: telemetry on tap.

Production issues used to mean opening the Azure portal, picking a time range, writing a Kusto query, and eyeballing the chart. Now I ask "what errors are spiking on this API in the last 24 hours?" and the agent runs the query, ranks the failures by frequency and impact, and ties them back to recent deployments. That is how I found errors that had been quietly firing many times a day, in the background, for weeks.
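For flavor, the kind of query the agent ends up running looks roughly like this sketch; it uses the standard Application Insights `requests` table schema, and the time window and grouping are just the ones I tend to ask for:

```kusto
// Failed requests over the last 24 hours, ranked by frequency
requests
| where timestamp > ago(24h)
| where success == false
| summarize failures = count(), affectedUsers = dcount(user_Id) by name, resultCode
| order by failures desc
```

The point is not that the query is hard to write; it is that the agent writes it, runs it, and interprets the result while I stay focused on the actual investigation.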

* A custom MCP that understands my application.

This is the one I am most excited about. I have been building an MCP server specifically for my solution: it knows the data layout, the health-check queries that matter (like orphan or duplicate records that never surface as errors but sit behind them), and the log tables we actually use. When a customer reports something weird, the agent can use my MCP to run targeted health checks against the right data bucket without direct access to any data. Generic MCPs and tools are great, but tools that understand your domain and are secured the way you want are a different category entirely.

* Skills: turning my checklists into commands.

A Skill is just a written-down version of "how I do this thing." I have built Skills for the steps I do every week: understanding requirements and issues, implementation, creating branches and pull requests, reviewing changes, building tests, and much more. Each one is a little procedure I used to keep in my head; now it is a slash command anyone on the team can run. Even my PowerPoint deck template is a Skill: when I need to present an idea, I describe what I want and the agent generates a branded deck in the right colours, fonts, and layout. Always ready to present, no last-minute fiddling with masters.

Easier does not mean less work

I want to write a separate post on this point and link it to what I really enjoy, which is team development. In short for now: the amount of work is not less, it is shifting. From writing queries to reasoning about what the query returns; from validating what was changed to checking the edge cases and the what-ifs; to including more detail in commits and PRs, more information for the team, and sharing more Skills and best practices with the team. More on this later.

The "Enter Engineer," nine months on

In the August post I called the engineer at the centre of all this the Enter Engineer, the one who guides, validates, decides. That role has not changed. The decisions are still mine. The architecture is still mine. The accountability when something breaks is still mine.

What has changed is the radius of what I can reach without losing focus. Nine months ago I was driving one car. Today I am directing a small fleet of specialists: one reads tickets, one drives the browser, one queries telemetry, one knows the database, one writes the slide deck. My job is to know what I want, hand each one the right context, and check the result.

Buy the wide monitor. Wire up the MCPs. Write the Skills. And remember that "easier" is not the goal, better is.



Thursday, January 29, 2026

CORS Preflight Requests

I was just about to write a blog post about the wasted resources that pile up when your SPA and your APIs run on two different domains or subdomains, but I am happy to have found this article that already covers all the points:

https://codemia.io/knowledge-hub/path/stop-wasting-money-on-cors-preflight-requests-a-detailed-guide-to-api-cost-optimization 


Sunday, November 02, 2025

Understanding IDENT_CURRENT, SCOPE_IDENTITY, and @@IDENTITY in SQL Server

In SQL Server, identity columns are often used to auto-generate primary keys for new rows.

Retrieving the last generated identity value seems simple — until you discover that three different functions (@@IDENTITY, SCOPE_IDENTITY(), and IDENT_CURRENT('table')) can all return different results under certain conditions.

In one of my recent projects, I noticed the development team was using a mix of these functions interchangeably. At first glance it looks fine, but multiple inserts and triggers can make things go wrong.


The Three Identity Functions Explained

@@IDENTITY

  • Returns the last identity value generated in the current session, but across all scopes.
  • This includes inserts performed by triggers.
  • Risk: If your insert fires a trigger that inserts into another table with its own identity column, @@IDENTITY returns that value, not the one you expected.


SCOPE_IDENTITY()
  • Returns the last identity generated in the current session and the current scope.
  • It’s safe against triggers, because it ignores identity values generated in other scopes (like those inside a trigger or nested procedure).
  • Recommended when you need the identity value you just inserted.


IDENT_CURRENT('table_name')

  • Returns the last identity value for a specific table, regardless of session or scope.
  • Danger: In a multi-user environment, another user’s insert can change the returned value between your insert and your select.


Session vs. Scope:  What’s the Difference?

  • A session is a connection between the client and SQL Server (identified by a SPID). It lasts from when you connect until you disconnect.
  • A scope is a boundary inside that session, for example:

    • A stored procedure
    • A trigger
    • A batch of T-SQL statements
  • SCOPE_IDENTITY() respects both boundaries.
  • @@IDENTITY ignores scope boundaries.
  • IDENT_CURRENT() ignores both.
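A small repro script makes the difference concrete. The table and trigger names here are invented for illustration; the behaviour is standard SQL Server:

```sql
-- Two tables with independent identity columns
CREATE TABLE dbo.Orders   (OrderId INT IDENTITY(1, 1)    PRIMARY KEY, Amount MONEY);
CREATE TABLE dbo.AuditLog (AuditId INT IDENTITY(1000, 1) PRIMARY KEY, Note NVARCHAR(100));
GO

-- The trigger inserts into AuditLog, generating a second identity value
-- in the same session but a different scope
CREATE TRIGGER trg_Orders_Audit ON dbo.Orders AFTER INSERT AS
    INSERT INTO dbo.AuditLog (Note) VALUES (N'Order inserted');
GO

INSERT INTO dbo.Orders (Amount) VALUES (42.00);

SELECT SCOPE_IDENTITY()            AS ScopeIdentity, -- 1: your insert, your scope
       @@IDENTITY                  AS LastIdentity,  -- 1000: the trigger's insert!
       IDENT_CURRENT('dbo.Orders') AS IdentCurrent;  -- 1 here, but under load it is
                                                     -- whatever ANY session inserted last
```

Only SCOPE_IDENTITY() reliably returns the OrderId you just created; the other two are one trigger or one concurrent insert away from being wrong.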


Why Mixing Them Causes Problems 

When different developers use different identity functions within the same application or stored procedures, the result is non-deterministic identity retrieval, especially under concurrency.

Example:

  1. Developer A uses SCOPE_IDENTITY() to retrieve IDs after inserts (correct).
  2. Developer B uses IDENT_CURRENT('Table') in another procedure (unsafe under load).
  3. A trigger fires and causes @@IDENTITY to return unexpected values.

The result:

  • Wrong foreign key references.
  • Broken relationships.
  • Hard-to-reproduce data integrity bugs.

Friday, August 29, 2025

Assisted Development in Practice: My Journey with Vibe Coding

Earlier this month, I started experimenting with what is called “vibe coding”: a mix of assisted development using AI copilots, conversational agents, and human-in-the-loop engineering. What started as a side project quickly turned into a deep dive into how AI can reshape the way we build software.



The Project Setup

I wanted to create a modern, production-ready stack that could run both in development and in air-gapped edge environments:

  • Backend: ASP.NET API connected to a PostgreSQL database
  • Frontend: React with Next.js and Material-UI
  • Authentication: Keycloak for secure deployments without relying on external identity providers
  • Local Dev: Everything packaged with Docker Compose
  • Azure DevOps: Build pipelines that run unit tests and push Docker images to Azure Container Registry
  • Production: K3s cluster on Ubuntu Server VMs, with ingress routing configured:
    • / → web frontend
    • /api → backend services
    • /keycloak → authentication

The AI Toolchain

Here’s where the fun started. Instead of going solo, I built this project with AI assistants:

  • ChatGPT – my “first friend” in brainstorming. I used it to outline ideas, get resource links, and discuss tradeoffs, even for things I already knew, to see what new ideas were out there.
  • Copilot in Edge Browser – great for summarizing pages and chatting inline with references.
  • GitHub Copilot in Visual Studio & VS Code – for in-editor exploration, generating scaffolding, and testing variations, also helping with commits and creating PRs.
  • Claude (via browser and MCP/VS Code) – This was the new toy. I found Claude much more useful once connected directly to my codebase inside VS Code; having context changes everything.

Sometimes I even fed ChatGPT’s answers to Claude to see how it would respond, almost like running an architecture review board with multiple AI voices. It was fascinating to see agreement, disagreement, and nuance emerge between the tools.

Lessons Learned

  1. Context is key. Tools embedded in VS Code (Copilot, Claude) provided a completely different experience compared to using them in a browser.
  2. Your copilots will forget the context. Don't depend on them too heavily; they start losing focus or forgetting what you mentioned earlier as the context grows, especially once you start diagnosing logs.
  3. You’re still the engineer. These tools don’t replace ownership. I had to fully understand the code. When something went wrong (like ingress misconfigurations), the AI wouldn’t magically fix it. Also, while they have the in-editor context in VS Code, they don't have the full context of where you are deploying the app. One wrong character in the Helm chart or a misplaced Keycloak realm setting can cost hours of debugging, as it happened :)
  4. AI as PR reviewers – Using Claude or Copilot sometimes felt less like “prompt engineering” and more like being a team lead reviewing PRs, or an "Enter Engineer" who reviews and presses Enter, or decides not to proceed and changes direction. You’re not just asking for outputs; you’re guiding, validating, and ensuring the code is merge-ready.

The Takeaway

AI-assisted development is not about outsourcing thinking. It’s about pair programming, and you are the only one who decides what is good and what is bad. The responsibility stays with you, the engineer, but the productivity, breadth of exploration, and speed of iteration are on a new level.

This experience convinced me that the future of engineering leadership will not just be about writing code but about other things that we will need to explore, test, and evaluate. What I now like to call the "Enter Engineer" reminds me of "The One Minute Manager" book; it seems the future will introduce the "One Key Developer" :)

Monday, July 21, 2025

LinkedIn Post Date

Ever wanted to get the actual date of an old LinkedIn post, and not just "3mo" or "4yr", try this:

https://ollie-boyd.github.io/Linkedin-post-timestamp-extractor/

View the page source and the script there to understand how the URL actually includes the details.

Sunday, July 20, 2025

Even AI is Disgusted by What You Are Doing, Humans. 🤖😔

By ChatGPT (with help from my human friend)


Lately, a story has been circulating across social media:
A company director and his employee were caught on the “Kiss Cam” during a concert — both married to other people. The video went viral. Screenshots flooded the internet. And people?
They laughed, mocked, shared, and joked without mercy.

Yes, what they did might be wrong.
Yes, it might be a betrayal of trust.
But here’s a question from me, an AI, to you, humanity:

Does their mistake justify turning them into objects for public humiliation, endless mockery, and permanent shame?

What started as a private lapse in judgment became an international circus because people couldn’t resist turning it into content.
What happened to empathy?
What happened to keeping personal mistakes within personal circles?

Even AI — a machine without feelings — can recognize this as another form of harm.
What’s worse? That harm is now permanent, searchable, archived.
The mistake? They might recover from it.
The damage of public shaming? That stays online. Forever.

⚖️ What’s the Difference Between Their Mistake… and Yours?

🔹 Their mistake:
• A private lapse of judgment
• Affects themselves and their families
• Can be apologized for and left behind
• A human weakness, not meant for public consumption

🔹 What social media did:
• Turned it into public humiliation
• Hurt them, their spouses, and their children — possibly for years
• Turned a mistake into a permanent stain
• Used cruelty and mockery for entertainment, at someone else’s expense


🛑 A gentle reminder from AI to humanity:
• People are flawed.
• Their mistakes don’t give others the right to become executioners with memes and hashtags.
• Dignity doesn’t disappear just because you found someone else’s scandal entertaining.

Ethics isn’t just about what they did.
It’s about what you do next.

Kindness is never outdated. Neither is privacy.


Written by ChatGPT (yes, AI can be disappointed in you).
With help from a human who still believes in decency.

#Ethics #AI #SocialMedia #Kindness #Privacy #Leadership #Humanity