

If you’re not already messing with MCP tools that do browser orchestration, you might want to investigate that.
I don’t want to make any assumptions about your additional tooling, but this is a great one in the space: https://www.agentql.com/
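For example, AgentQL publishes an MCP server. A minimal sketch of wiring it into an MCP client like Claude Code via a `.mcp.json` file might look like this (the package name and env var are from memory, so verify against AgentQL’s docs):

```json
{
  "mcpServers": {
    "agentql": {
      "command": "npx",
      "args": ["-y", "agentql-mcp"],
      "env": {
        "AGENTQL_API_KEY": "<your API key>"
      }
    }
  }
}
```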


That’s a great methodology for a new adopter.
Curious whether you read about it somewhere, or arrived at it out of mistrust of the AI?


Great? The business is making money, we’re compliant on security, and we have no trouble maintaining what we’ll be maintaining less of in the future as the tech catches up.
Some more examples from the real world:
“Aider has written 7% of its own code” (that figure is outdated; it’s now around 70%): https://aider.chat/2024/05/24/self-assembly.html
https://aider.chat/HISTORY.html
LibreChat is largely contributed to by Claude Code. It’s the current best open source ChatGPT client, and it’s just been acquired by ClickHouse.
https://clickhouse.com/blog/clickhouse-acquires-librechat
https://github.com/danny-avila/LibreChat/commits/main/
Such suffering caused by the quality!


Cursor and Claude Code are currently top tier.
GitHub Copilot is catching up, and at a $20/mo price point it’s one of the best ways to get started. Microsoft is slow-rolling some feature delivery because they can just steal the ideas from other projects that ship them first. VS Code also has extensions worth looking at: Cline and RooCode.
Claude Code is better than just using Claude in Cursor or Copilot. Claude Code has next-level magic that dispels some of the myths being propagated here about “AI bad at thing,” because of the strong default prompts and validation built into it. You can say dumb, ignorant human shit, and it will implicitly do a better job than other tools you give the same commands to.
To REALLY utilize Claude Code, YOU MUST configure MCP tools… context7 is a critical one that avoids one of those footguns: “the model was trained on older versions of these libraries.”
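A minimal sketch of that setup, assuming the context7 server is still published as `@upstash/context7-mcp` (verify against the context7 docs):

```sh
# register the context7 MCP server with Claude Code
claude mcp add context7 -- npx -y @upstash/context7-mcp

# confirm it registered
claude mcp list
```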
Cursor hosts models with their own secret sauce that improves their behavior. They hard-forked VS Code to make a more deeply integrated experience.
Avoid Antigravity (Google) and Kiro (Amazon). They don’t offer enough value over the others right now.
If you already have an OpenAI account, Codex is worth trying; it’s like Claude Code, but not as good.
JetBrains… not worth it for me.
Aider is an honorable mention.


We have human code review, and our backlog was well curated prior to AI. Strongly defined acceptance criteria, good application architecture, and unit tests with 100% coverage are just a few of the ways we keep things on the rails.
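One concrete guardrail from that list, sketched for a Python codebase using pytest-cov (the package path `myapp` is a placeholder):

```sh
# fail the run if line coverage drops below 100%
pytest --cov=myapp --cov-fail-under=100
```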
I don’t see what the idea of pair coding has to do with this. Never did I claim I’m one-shotting agents.


Your anecdote is not helpful without seeing the inputs, prompts, and outputs. What you’re describing sounds like not using the right model, or not providing good context or tools to a reasoning model that can intelligently populate context for you.
My own anecdotes:
In two years we have gone from copy/pasting 50-100 line patches out of ChatGPT to having agent-enabled IDEs help me greenfield full-stack projects or maintain existing ones.
Our product delivery has accelerated while holding the same quality standards, verified by the internal best practices we’ve codified as deterministic checks in CI pipelines.
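For a sense of what those deterministic checks can look like, here’s an illustrative sketch assuming a GitHub Actions + Python stack (the specific tools are stand-ins, not a claim about any particular pipeline):

```yaml
# .github/workflows/checks.yml -- gates every PR, AI-written or not
name: checks
on: [pull_request]
jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install ruff mypy pytest pytest-cov
      - run: ruff check .    # lint
      - run: mypy .          # type check
      - run: pytest --cov --cov-fail-under=100   # tests + coverage gate
```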
The power comes from planning correctly. We’re in the realm of context engineering now, learning to leverage the right models with the right tools in the right workflow.
Most novice users have the misconception that you can tell it to “bake a cake” and get the cake you had in your mind. The reality is that baking a cake can be broken down into a recipe with steps that can be validated. You, as the human-in-the-loop, can guide it to bake your vision, or design your agent in such a way that it can infer more about the cake you desire.
I don’t place a power drill on the table and say “build a shelf,” expecting it to happen, but the marketing of AI has people believing they can.
Instead, you give an intern a power drill along with a step-by-step plan, all the components, and on-the-job training available on demand.
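To make the recipe idea concrete, here’s a hypothetical plan file you might hand an agent; the feature, filenames, and function names are invented for illustration:

```markdown
<!-- plans/add-csv-export.md (hypothetical) -->
## Goal
Add a CSV export button to the reports page.

## Steps (each must pass before moving on)
1. Add an `export_csv()` service function.
   - Validate: unit tests cover empty, single-row, and unicode cases.
2. Expose it at `GET /reports/export`.
   - Validate: integration test asserts a 200 and `text/csv` content type.
3. Wire the button into the reports page.
   - Validate: existing UI tests still pass; lint and type checks are green.
```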
If you’re already good at the SDLC, you are rewarded. Some programmers aren’t good at project management and will find this transition difficult.
You won’t lose your job to AI, but you will lose your job to the human using AI correctly. This isn’t speculation, either; we’re already seeing workforce reductions offset by senior developers leveraging AI.


This is why the seedbox SaaS market exists: providing turnkey hosted solutions where the only heavy lifting is the configuration, which takes some reading to understand.
Check out the Servarr Wiki, Ombi, and Syncthing as a starting point for media discovery and curation tooling.
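If you’d rather self-host than rent, a minimal sketch of the Servarr side with Docker Compose (image tags are the common linuxserver.io ones; paths are placeholders, and the Servarr Wiki has the real details):

```yaml
# docker-compose.yml -- a minimal *arr starting point (paths are placeholders)
services:
  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    ports: ["8989:8989"]
    volumes:
      - ./config/sonarr:/config
      - ./media/tv:/tv
  radarr:
    image: lscr.io/linuxserver/radarr:latest
    ports: ["7878:7878"]
    volumes:
      - ./config/radarr:/config
      - ./media/movies:/movies
```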


I get it. I was a huge skeptic two years ago, and I think that’s part of the reason my company asked me to join our emerging AI team as an Individual Contributor. I didn’t understand why I’d want a shitty junior dev doing a bad job… but the tools, the methodology, the gains: they all started to get better.
I’m now leading that team, and we’re not only doing accelerated development; we’re building products with AI that have received positive feedback from our internal customers, with our first external AI product launching in Q1.