15 comments

  • yuppiepuppie 5 hours ago
    The best code review improvement I have done in my workflow with Claude is using tuicr (https://tuicr.dev).

    It runs locally, YOU review all the code locally, and you feed that back to Claude.

    Agents reviewing AI code always felt dirty to me, especially when working on production (non-disposable) code.

    • ramon156 3 hours ago
      The video actually convinced me that this might be an interesting tool. I'm going to try it myself for a small one-shot project and see how well it performs.

      TUI-based reviews on their own are already interesting. I had never considered it, I guess.

    • free652 1 hour ago
      That's a good addition alongside the fresh editor (also a TUI), and both are written in Rust.
  • parasmadan 1 hour ago
    Why not just use an eval harness to prove this catches more real bugs? Benchmarks on actual bug classes would be far more convincing than comparing against /review.
    • stingraycharles 54 minutes ago
      That’s probably more work than the entire repo itself. Would need to be something like SWE-bench with and without “adamsreview”.

      You’re right though, but evals are actually fairly tricky to write and maintain.

  • moomin 2 hours ago
    Is there a good way of adding your own rules to the review? I'm always in the market for better review tools, but I also need to check against internal coding standards and expectations.
  • thesimon 5 hours ago
    > Runs against your regular Claude Code subscription (Max plan recommended) — unlike /ultrareview, which charges against your Extra Usage pool.

    How expensive is it to run in your experience? In $ or tokens?

  • Ozzie-D 2 hours ago
    the irony of multi-agent code review is that the people who would use it are already the ones who care about code quality. the real problem is everyone else just hitting accept on whatever claude spits out without even reading the diff. tooling for review keeps getting better while the average review effort keeps going down.
  • nkmnz 4 hours ago
    Great project! I've built something similar, not very clean and polished, but focused on deterministic orchestration of multiple agents via TypeScript, because a coordinating agent was notoriously bad at things such as fetching relevant tickets and other context. One thing I still struggle with, though, is the actual instructions for the review themselves. They are either too vague, leading to superficial or overly broad reviews, or too specific and thus not applicable to different kinds of PRs…
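      The "deterministic orchestration" idea above can be sketched roughly like this: instead of letting a coordinator LLM decide what context to fetch and which reviewer to invoke, the pipeline order and context-gathering steps are fixed in code. This is a minimal hypothetical sketch (the `Agent` type, the stub agents, and the diff heuristics are all made up for illustration; real agents would wrap LLM calls):

      ```typescript
      // A finding produced by one reviewer agent.
      type Finding = { agent: string; note: string };

      // An agent takes the diff plus pre-fetched context and returns findings.
      type Agent = (diff: string, context: string[]) => Finding[];

      // Stub agents standing in for real LLM-backed reviewers.
      const securityAgent: Agent = (diff) =>
        diff.includes("eval(") ? [{ agent: "security", note: "avoid eval" }] : [];
      const styleAgent: Agent = (diff) =>
        diff.length > 400 ? [{ agent: "style", note: "large diff, consider splitting" }] : [];

      function review(diff: string, fetchContext: () => string[]): Finding[] {
        // Deterministic steps: context is always fetched the same way,
        // and the agents always run in the same fixed order.
        const context = fetchContext();
        const agents: Agent[] = [securityAgent, styleAgent];
        return agents.flatMap((a) => a(diff, context));
      }

      const findings = review("eval(userInput)", () => ["TICKET-123"]);
      console.log(findings.map((f) => f.agent)); // ["security"]
      ```

      The point of the design is that only the per-agent judgment is left to the model; everything about sequencing and context retrieval is ordinary code, so it never gets "forgotten" by a coordinator prompt.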
  • docheinestages 5 hours ago
    We seem to be fighting complexity with complexity. Does it really help?
    • stingraycharles 1 hour ago
      This has been the gradual progression of the software world for the past half century, so it’s apt to use LLMs to fight LLMs and call it progress.

      I wish I was kidding…

  • stingraycharles 5 hours ago
    Holy vibe coding, Batman, this looks like a repository with just a bazillion prompts, of which there are already a million.

    Seems like it would create a lot of friction and burn a lot of tokens.

  • bilekas 5 hours ago
    "I pay Claude, to use Claude, to write instructions for Claude, to review code from Claude"

    Have we all just given up?

    • stingraycharles 3 hours ago
      You forgot “to use Claude to write a HN post to promote..”
    • ramon156 3 hours ago
      s/Claude/Intern/g
  • esafak 7 hours ago
    That looks like a fair bit of ceremony for what it does. Is this representative of the output? https://github.com/adamjgmiller/adamsreview/pull/3