Back in August I wrote about my first real run at Assisted Development in Practice, the "vibe coding" experiment, the stack I built, and the lessons I picked up along the way. Nine months later, I am still doing this every day, but the way I do it has changed enough that it deserves its own post.
Before I get into the new lessons, one thing I forgot to mention in the first post and wish I had said up front: buy a wide monitor. I now have two Dell 34" ultrawides beside my normal 27" monitor, one at home and one in the office, and I would not go back. When you are pair-programming with an AI assistant, you are constantly looking at three things at once: the editor, the chat thread, and whatever the agent is showing you (a diff, a browser tab, a log, a query result). On a single smaller screen you spend half your time switching windows. On an ultrawide you stop fighting the layout and start thinking.
What I did not expect is how my eyes work on it now. With AI development I am reading far more than I used to. My focus sits in one zone of the screen while I plan, then shifts quickly to another panel to check, verify, and confirm what the tool is actually building. The screen got wider, but the work got more attentive, not less. I am still the one validating every step.
Context still wins, but now I can hand it more
The biggest takeaway from the first post was that context is key. I still believe that, but I have learned the second half of the same idea: the more arms you give the tool, the more context it can gather on its own.
In August I was mostly typing the ideas and connecting the dots myself, while the tool only had access to the local code, and hoping the model still remembered the conversation from twenty messages ago. Today my agent reaches out and pulls the context itself, because I have wired it up to the systems where the context actually lives. That shift, from "I feed the model" to "the model fetches what it needs", has been the single biggest jump in usefulness for me.
The mechanism for this is the Model Context Protocol (MCP), basically a standard way for an agent to call into external tools and services. Pair MCPs with custom Skills (small, reusable instructions that teach the agent how to do a specific job in your project), and you start to feel like you are working with a teammate who actually knows your codebase, your tickets, and the health of the system you are building.
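To make that concrete, here is a rough Python sketch of the idea, not the real MCP wire protocol (which runs over JSON-RPC), just the shape of it: the agent is handed a catalogue of named tools with descriptions, and calls them by name with JSON arguments. The tool name and data below are invented for illustration.

```python
import json

# Hypothetical sketch of the pattern behind MCP: a registry of named tools the
# agent can discover and call. The real protocol is JSON-RPC over a transport;
# this only illustrates the "agent calls a tool by name" shape.
TOOLS = {}

def tool(name, description):
    """Register a function as a callable tool, with a description the agent reads."""
    def register(fn):
        TOOLS[name] = {"description": description, "fn": fn}
        return fn
    return register

@tool("get_work_item", "Fetch a work item with its comments and linked PRs")
def get_work_item(item_id: int) -> dict:
    # A real server would call the tracker's API; this stub returns canned data.
    return {"id": item_id, "title": "Login fails after session timeout", "linked_prs": [481]}

def handle_call(request_json: str) -> str:
    """Dispatch a tool call, the way an MCP server dispatches tools/call requests."""
    request = json.loads(request_json)
    result = TOOLS[request["name"]]["fn"](**request["arguments"])
    return json.dumps(result)

# The agent sees the tool list, then fetches the context it needs on its own:
print(handle_call('{"name": "get_work_item", "arguments": {"item_id": 123}}'))
```

The point is the inversion: instead of me pasting the ticket into chat, the tool description is enough for the agent to know the call exists and go get the data itself.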
What "more arms" looks like in practice
Here is the toolbox I rely on now, and what each one unlocks:
The work-item tracker pulls the work item, reads every comment, finds similar past issues, opens linked PRs, and follows the change history back to the commit that introduced a regression. When I am triaging a customer ticket, I no longer paste the description into chat. I say "investigate issue number 123" and the agent reads the ticket, finds that an earlier fix touched the same plugin, pulls the diff of that PR, and tells me where to look first. That is hours of detective work compressed into a minute. Is it always correct? No, but it is a genuine force multiplier.
I used to type out repro steps from memory and hope I got them right. Now the agent drives a real browser, reproduces the bug, and captures the console errors, the failing network call, and a snapshot of the Document Object Model (DOM). Once the bug is confirmed, the same evidence becomes the seed for a Playwright regression test: same selectors, same flow, same assertions. The repro and the test are no longer two separate jobs. I can also ask it to block or throttle a specific network call, to test scenarios that cannot be reproduced manually.
Production issues used to mean opening the Azure portal, picking a time range, writing a Kusto query, and eyeballing the chart. Now I ask "what errors are spiking on this API in the last 24 hours?" and the agent runs the query, ranks the failures by frequency and impact, and ties them back to recent deployments. That is how I found errors that had been quietly firing many times a day, for weeks, without anyone noticing.
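The ranking step after the query comes back is simple enough to sketch. The rows and field names below are made up; in reality they come out of the telemetry query the agent ran.

```python
from collections import Counter

# Hedged sketch of the ranking step: once the telemetry query returns raw error
# rows, group them and order by frequency and blast radius (distinct users hit).
# The rows and field names here are invented for illustration.
rows = [
    {"error": "TimeoutError", "user": "a"},
    {"error": "TimeoutError", "user": "b"},
    {"error": "NullReference", "user": "a"},
    {"error": "TimeoutError", "user": "a"},
]

def rank_failures(rows):
    """Rank error types by (count, distinct users affected), highest first."""
    counts = Counter(r["error"] for r in rows)
    users = {}
    for r in rows:
        users.setdefault(r["error"], set()).add(r["user"])
    return sorted(
        ((err, counts[err], len(users[err])) for err in counts),
        key=lambda t: (t[1], t[2]),
        reverse=True,
    )

print(rank_failures(rows))  # TimeoutError first: 3 hits across 2 users
```

Trivial on four rows; the value is that the agent does it over thousands of rows and ties the top entries back to what shipped recently.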
This is the one I am most excited about. I have been building an MCP server specifically for my solution: it knows the data layout, the health-check queries that matter (like the orphan or duplicate records that never surface as errors but often sit behind them), and the log tables we actually use. When a customer reports something weird, the agent can use my MCP to run targeted health checks against the right data bucket without direct access to any data. Generic MCPs and tools are great. But ones that understand your domain, and are secured the way you want, are a different category entirely.
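A minimal sketch of what one of those health checks looks like, using an in-memory SQLite database. The schema and queries below are invented; the real checks run against our own tables. The pattern is the part that matters: named queries for orphan and duplicate records that never pop as errors on their own.

```python
import sqlite3

# Invented schema for illustration: orders referencing customers, seeded with
# one orphan (order 12 -> missing customer 99) and one duplicate (order 11).
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE orders (id INTEGER, customer_id INTEGER);
    CREATE TABLE customers (id INTEGER);
    INSERT INTO customers VALUES (1);
    INSERT INTO orders VALUES (10, 1), (11, 1), (11, 1), (12, 99);
""")

HEALTH_CHECKS = {
    # Orders pointing at a customer that does not exist.
    "orphan_orders": """
        SELECT o.id FROM orders o
        LEFT JOIN customers c ON c.id = o.customer_id
        WHERE c.id IS NULL
    """,
    # The same order id inserted more than once.
    "duplicate_orders": """
        SELECT id FROM orders GROUP BY id HAVING COUNT(*) > 1
    """,
}

def run_health_checks(conn):
    """Return {check_name: offending row ids} -- empty lists mean healthy."""
    return {
        name: [row[0] for row in conn.execute(sql)]
        for name, sql in HEALTH_CHECKS.items()
    }

print(run_health_checks(db))  # {'orphan_orders': [12], 'duplicate_orders': [11]}
```

In the real server the agent never sees the data itself, only which checks fired and which record ids they flagged.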
A Skill is just a written-down version of "how I do this thing." I have built skills for the steps I do every week: understanding requirements and issues, implementation, creating code branches, opening pull requests, reviewing changes, building tests, and much more. Each one is a little procedure I used to keep in my head; now it is a slash command anyone on the team can run. Even my PowerPoint deck template is a skill: when I need to present an idea, I describe what I want and the agent generates a branded deck in the right colours, fonts, and layout. Always ready to present, no last-minute fiddling with masters.
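The exact file format depends on your agent, but the shape of a Skill is roughly a short instruction file with a name, a description, and the steps. A hypothetical example, with every detail invented:

```markdown
---
name: create-pr
description: Open a pull request the way our team expects
---
# Create a PR
1. Run the test suite and stop if anything fails.
2. Create a branch named feature/<work-item-id>-<short-slug>.
3. Commit with a message that references the work item.
4. Open the PR with our template: summary, test evidence, rollback note.
```

That is the whole trick: the procedure that lived in my head is now a file the agent follows and any teammate can run.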
Easier does not mean less work
I want to write a separate post on this point and tie it to what I really enjoy, which is team development. But in short for now: the amount of work is not less, it is shifting. From writing queries to reasoning about what the query returns. From validating what was changed to checking the edge cases and the what-ifs. To writing more detailed commits and PRs, more information for the team, and sharing more skills and best practices with them. More on this later.
The "Enter Engineer," nine months on
In the August post I called the engineer at the centre of all this the Enter Engineer, the one who guides, validates, decides. That role has not changed. The decisions are still mine. The architecture is still mine. The accountability when something breaks is still mine.
What has changed is the radius of what I can reach without losing focus. Nine months ago I was driving one car. Today I am directing a small fleet of specialists, one reads tickets, one drives the browser, one queries telemetry, one knows the database, one writes the slide deck. My job is to know what I want, hand each one the right context, and check the result.
Buy the wide monitor. Wire up the MCPs. Write the Skills. And remember that "easier" is not the goal, better is.