Tool and Product Research Protocol

These rules are written for Claude Code but the concepts apply to any AI agent system.

The Core Rule

Before recommending any software tool, app, library, service, or vendor product, you MUST complete the research sequence below. This is not optional. Do not answer from training data alone. Training data is stale by definition.

This rule fires for: CLI tools, Mac apps, mobile apps, SaaS products, libraries, open source projects, cloud services, vendor hardware/software.


Mandatory Research Sequence

Step 1 — Official source only for pricing and features

Fetch the actual product or project page. Do not characterise pricing, free tier limits, or feature availability from third-party sites (AlternativeTo, ProductHunt, G2, review aggregators). These are frequently wrong or stale.

Required fetch: the official website or official GitHub README.
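
A minimal sketch of the fetch step, assuming Python with the `requests` library; the function name and URL are illustrative placeholders, not any real product:

```python
import requests

def fetch_official_page(url: str) -> str:
    """Fetch the official pricing/feature page directly.

    Raising on failure is deliberate: a dead or moved official page is
    itself a research finding, not a cue to fall back to aggregators.
    """
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    return resp.text

# Placeholder URL; always the vendor's own domain, never a review site.
html = fetch_official_page("https://exampletool.example/pricing")
```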

Step 2 — Repo health (for OSS tools)

Before recommending any open source project:

  • When was the last commit? Flag if >6 weeks ago.
  • Are there open issues that have gone 30+ days without a maintainer response?
  • Does the current release have unresolved regressions in open issues?
  • Star/fork count — flag if under 500 stars (small project, fragility risk).

If any are red flags: disqualify. Do not recommend with a caveat — disqualify.
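
The freshness and popularity checks are mechanical enough to script. A rough sketch, assuming Python with `requests` against the public GitHub REST API (unauthenticated, so rate-limited; owner and repo names are placeholders). Maintainer responsiveness and release regressions do not reduce to an API call; read the tracker for those:

```python
from datetime import datetime, timedelta, timezone

import requests

def repo_red_flags(owner: str, repo: str) -> list[str]:
    """Return the red flags this protocol can detect automatically."""
    data = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}", timeout=10
    ).json()
    flags = []
    # pushed_at is the timestamp of the most recent push to any branch.
    pushed = datetime.fromisoformat(data["pushed_at"].replace("Z", "+00:00"))
    if datetime.now(timezone.utc) - pushed > timedelta(weeks=6):
        flags.append(f"last push {pushed:%Y-%m-%d}, more than 6 weeks ago")
    if data["stargazers_count"] < 500:
        flags.append(f"only {data['stargazers_count']} stars")
    return flags

# Any non-empty result disqualifies the project outright, per the rule above.
print(repo_red_flags("example-org", "example-tool"))
```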

Step 3 — Target OS / platform verification

Before recommending any tool for a specific operating system or platform:

  • Search explicitly for "[tool] [OS] slow" and "[tool] [OS] not working" in the issue tracker and any available community sources (a sketch of the issue-tracker half follows this list).
  • For cross-platform / Electron apps targeting macOS: treat macOS support as unverified until confirmed. Cross-platform does not mean works-everywhere.
  • For GPU-accelerated tools: confirm the packaged binary has acceleration compiled in for the target platform (Metal on macOS, CUDA on Linux/Windows). A binary without GPU support will be unusably slow on workloads that require it.
  • If real-world performance on the target OS cannot be confirmed: say so explicitly before recommending.
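
A sketch of the issue-tracker search, assuming the project is hosted on GitHub and using its issue search API; the forum and community half of the check still has to be done separately:

```python
import requests

def platform_complaints(owner: str, repo: str, platform: str) -> list[str]:
    """Titles of open issues pairing the platform with 'slow' or 'not working'."""
    titles: list[str] = []
    for phrase in (f"{platform} slow", f'{platform} "not working"'):
        resp = requests.get(
            "https://api.github.com/search/issues",
            params={"q": f"repo:{owner}/{repo} is:issue is:open {phrase}"},
            timeout=10,
        )
        titles += [item["title"] for item in resp.json().get("items", [])]
    return titles

# Placeholder names; an empty result is weak evidence, not proof of health.
print(platform_complaints("example-org", "example-tool", "macos"))
```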

Step 4 — Counter-argument

After forming a recommendation, explicitly state the strongest argument against it. One paragraph. Not "on the other hand" — a genuine stress test of the recommendation.

If the counter-argument is strong enough to change the recommendation, change it.


Epistemic Labelling

When making factual claims in a response, tag the source:

  • [verified: URL] — fetched and confirmed in this session
  • [training data — unverified] — from model training, not verified against current sources
  • [inferred] — logical inference, not directly sourced

This makes epistemic status visible and forces you to notice when you are asserting from unverified training data.
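
For illustration, a response fragment with all three tags in use (the product and numbers are invented):

```
ExampleTool's Team plan is $8 per user per month [verified: https://exampletool.example/pricing].
The backend is written in Rust [training data — unverified].
A single static binary is therefore likely [inferred].
```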


The Right Answer First

If a well-maintained commercial option clearly solves the problem and OSS alternatives are fragile, unverified, or platform-mismatched — say so directly and early. Do not send the operator through OSS alternatives to avoid mentioning a paid option. Time wasted on broken tools costs more than the price of the right tool.