AI Coding Assistants
Discussions focus on experiences, prompting techniques, and tools like Claude Code and Aider for using LLMs to edit, refactor, and manage real-world codebases.
Sample Comments
Maintain a good agents.md with notes on the code grammar/structure/architecture conventions your org uses, then for each problem, prompt it step by step, as if narrating a junior engineer's monologue. E.g., as I am dropped into a new codebase:
1. Ask Claude to find the section of code that controls X.
2. Take a look manually.
3. Ask it to explain the chain of events.
4. Ask it to implement change Y, in order to modify X to do the behavior we want.
5. Ask it about any implementation d
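A minimal sketch of what such an agents.md might contain. The file contents below are illustrative placeholders, not the commenter's actual conventions:

```shell
# Illustrative only: writes a minimal agents.md of the kind described above.
# Every rule listed is a made-up example, not any real org's convention.
cd "$(mktemp -d)"
cat > agents.md <<'EOF'
# Agent notes

## Code conventions
- TypeScript strict mode; no `any` in exported signatures.
- One module per feature under src/features/<name>/.

## Architecture
- UI code never imports the data layer directly; go through src/services/.

## Workflow
- Locate the code controlling the behavior first; explain the chain of
  events before proposing an edit.
EOF
grep -c '^##' agents.md   # counts the three section headings
```

The point is not the specific rules but that the agent can re-read them at the start of every task, so each step of the "junior engineer monologue" starts from the same ground rules.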
This type of post has nothing to do with real-world applications. With all due respect to the .agents/ markdown files, Claude Code, like other LLMs, often gets fixed on a certain narrative, and no matter what the instructions are, it repeats that wrong choice over and over and over again, while "apologizing"… Anything beyond a close and intimate review of its implementation is doomed to fail. What made things a bit better recently was setting up Gemini CLI and Claude Code taking turns in
Aider is a pretty good way to automate that. You can use it with Claude models. It lets you be completely precise, down to a single file, and sit in a chat/code/review loop, while it does a lot of the chores, like generating commit messages, saving you the copy-paste effort.
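A sketch of the loop the comment describes. The file path, prompts, and key are invented for illustration; check aider's own documentation for current flags:

```shell
# Hypothetical aider session; assumes aider is installed and an Anthropic
# API key is available. The file and prompts below are made up.
export ANTHROPIC_API_KEY=...           # placeholder, not a real key
aider --model sonnet src/billing.py    # scope the session to a single file
# Inside the chat that opens:
#   /ask where is the retry logic for invoice fetches?   (discuss, no edits)
#   add exponential backoff to fetch_invoice             (aider edits the file,
#                                                         then commits it with a
#                                                         generated message)
```

Scoping the session to named files is what gives the "precise down to a single file" behavior; aider only offers edits against the files you added.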
Do we need this, when we have tools like Claude Code, Codex, etc., that you can talk to about the codebase they are started in?
Are there any tools out there that expose GPT or Claude to a codebase, and let it write PRs (semi) autonomously?
Some hints for people stuck like this:
- Consider using Aider. It's a great tool and cheaper to use than Claude Code.
- Look at Aider's LLM leaderboard to figure out which LLMs to use.
- Use its architect mode (although you can get quite far without it; I personally haven't needed it).
- Work incrementally.
- I use at least three branches: my main one, a dev one, and a debug one. I develop on dev. When I encounter a bug, I switch to debug. The reason is that it can produce a lot of code to fix a bug. I
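The three-branch setup can be sketched in a throwaway repo. The branch names follow the commenter's convention; everything else here is illustrative:

```shell
# Demo of the main/dev/debug layout described above, in a temp directory.
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.email dev@example.com && git config user.name dev
git commit --allow-empty -qm "init"          # gives the default branch a tip
git branch dev && git branch debug           # both start from the same commit
git checkout -q dev      # day-to-day assistant-driven work happens here
# When a bug appears, move to debug so noisy fix attempts stay off dev:
git checkout -q debug
git branch --list        # shows debug (current), dev, and the default branch
```

Because the bug-fix churn lands on debug, you can later cherry-pick or squash only the working fix back onto dev and throw the rest away.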
It's great for me. I have a claude.md at the root of every folder, generally outlined in piped text for minimal context addition, covering the rulesets for that folder. It always creates tests for what it's doing, and is set to do so in a very specific folder in a very specific way; otherwise it tries to create debug files instead. I also have set rules for re-use, so that it doesn't proliferate "enhanced" class variants or structures and always tries to leverage what
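One reading of "piped text for minimal context addition" is a terse, pipe-delimited rules file; the sketch below assumes that interpretation, and the paths and rules are made up:

```shell
# Illustrative per-folder claude.md in a dense, pipe-delimited style.
# Folder names and rules are invented for the example.
cd "$(mktemp -d)"
mkdir -p src/parser
cat > src/parser/claude.md <<'EOF'
scope: src/parser | tests go in tests/parser/ only | no ad-hoc debug files
reuse: extend existing classes | never add "Enhanced*" variants
EOF
wc -l < src/parser/claude.md   # two dense lines instead of a page of prose
```

Keeping each folder's rules to a couple of delimited lines means the assistant picks them up without burning much of the context window.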
If I could offer another suggestion from what's been discussed so far: try Claude Code. They are doing something different from the other offerings in how they manage context with the LLM, and the results are quite different from everything else. Also, the big difference with this tool is that you spend more time planning. Don't expect it to one-shot; you need to think about how you go from epic to task first, THEN you let it execute.
I've been using it the same way. One approach that's worked well for me is to start a project and first ask it to analyse the codebase and make a plan, with phases, for what needs to be done; save that plan into the project, then get it to do each phase in sequence. Once it completes a phase, have it review the code to confirm the phase is complete. Each phase of work and review is a new chat. This way helps ensure it works on manageable amounts of code at a time and doesn't overload its co
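The plan-then-execute loop above can be made concrete as a checked-in plan file. The file name, phase names, and checkbox format are invented for illustration, not the commenter's actual format:

```shell
# Illustrative plan file of the kind described: saved in the project,
# then worked through one phase per fresh chat.
cd "$(mktemp -d)"
cat > PLAN.md <<'EOF'
# Migration plan
- [ ] Phase 1: inventory call sites of the old API
- [ ] Phase 2: introduce the new interface behind a flag
- [ ] Phase 3: migrate call sites and update tests
- [ ] Phase 4: remove the flag and dead code
EOF
# Each phase then gets its own chat ("do Phase 2 from PLAN.md"),
# followed by a separate review chat ("confirm Phase 2 is complete").
grep -c '\[ \]' PLAN.md   # four phases still open
```

Because the plan lives in the repo, every new chat can reload it, which is what keeps each session's working set small.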
There is no "working prompt". There is context that is highly dependent on the task at hand. Here are some general tips:
- Tell it to ask you clarifying questions, repeatedly. It will uncover holes and faulty assumptions and focus the implementation once it gets going.
- Small features: plan them, implement them in stages, commit, PR, review, new session.
- Have conventions in place: coding style, best practices, what you want to see and don't want to see in a codebase. We hav