Agent prompt
Copy this when you want an agent to use the service with the right
defaults and handoff points.
How it works
- Researches any topic with the default research flow
- Normalizes URLs and deduplicates near matches
- Can stop early once source records pass an LLM check
- Structures the output into reusable research runs
- Prepares output for downstream use
Outputs
- Source corpus
- Media manifest
- Structured artifacts
- Optional report and synthesis
- Agent-context payloads
- Obsidian vault export
- Evaluation records
Example runs
Three runs that show what a finished research pack looks like.
The pipeline
One run, six steps.
1. Plan the run
Give it a topic. It returns a plan: mode, depth, source policy, must-answer questions, and output shape. Approve before paying.
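The plan from step 1 can be pictured as a small typed object. A minimal sketch, assuming field names taken from the description above; the service's actual schema may differ:

```typescript
// Hypothetical shape of the plan returned before a run starts. The field
// names mirror the docs' wording (mode, depth, source policy, must-answer
// questions, output shape); they are assumptions, not a published schema.
interface ResearchPlan {
  mode: string;           // e.g. "survey" or "deep-dive"
  depth: number;          // how many research loops to budget
  sourcePolicy: string;   // e.g. "public-web-only"
  mustAnswer: string[];   // questions the run must resolve
  outputShape: string[];  // which artifacts to produce
}

// An example plan you would review and approve before paying.
const plan: ResearchPlan = {
  mode: "survey",
  depth: 2,
  sourcePolicy: "public-web-only",
  mustAnswer: ["What are the main approaches?", "What are the trade-offs?"],
  outputShape: ["source corpus", "agent brief"],
};
```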
2. Collect a corpus
Searches and scrapes the public web. Normalizes URLs, dedupes near matches, and stores the raw markdown for every source.
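The normalize-and-dedupe behavior can be sketched in a few lines. Function names here are illustrative, not the service's actual API:

```typescript
// Canonicalize a URL so trivially different forms compare equal. The URL
// parser already lowercases the host and drops default ports; on top of
// that we strip the fragment and any trailing slash.
function normalizeUrl(raw: string): string {
  const u = new URL(raw);
  u.hash = "";
  let s = u.toString();
  if (s.endsWith("/")) s = s.slice(0, -1);
  return s;
}

// Keep the first occurrence of each URL whose normalized form is new.
function dedupeUrls(urls: string[]): string[] {
  const seen = new Set<string>();
  const kept: string[] = [];
  for (const raw of urls) {
    const key = normalizeUrl(raw);
    if (!seen.has(key)) {
      seen.add(key);
      kept.push(raw);
    }
  }
  return kept;
}
```

A real collector would also need site-specific rules (tracking parameters, mirrors), which this sketch leaves out.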
3. Go deep on a topic
Set maxResearchLoops to run more acquisition-and-review passes. The corpus keeps growing until coverage flattens.
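The deepening loop can be sketched as follows. Only the `maxResearchLoops` name comes from the text above; `runPass` and the stopping threshold are assumptions:

```typescript
// Run acquisition-and-review passes until maxResearchLoops is reached or
// the corpus stops growing meaningfully ("coverage flattens").
function deepen(
  runPass: (corpus: ReadonlySet<string>) => string[],
  maxResearchLoops: number,
  minNewSourcesPerLoop = 1, // assumed flattening threshold
): Set<string> {
  const corpus = new Set<string>();
  for (let loop = 0; loop < maxResearchLoops; loop++) {
    const before = corpus.size;
    for (const url of runPass(corpus)) corpus.add(url);
    // Coverage has flattened when a pass contributes too few new sources.
    if (corpus.size - before < minNewSourcesPerLoop) break;
  }
  return corpus;
}
```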
4. Extract from a specific link
Point it at one URL. You get back a structured extraction: source kind, depth, relevance, claims, and the raw markdown next to it.
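One possible shape for that structured extraction, with field names mirroring the wording above. This is an assumed schema, not the service's actual response type:

```typescript
// A claim pulled from the source, with the excerpt that supports it.
interface Claim {
  text: string;
  quote: string;
}

// The extraction for a single URL: classification plus the raw markdown
// stored alongside it.
interface Extraction {
  url: string;
  sourceKind: "docs" | "blog" | "paper" | "forum" | "other";
  depth: "skim" | "standard" | "deep";
  relevance: number; // assumed 0..1 score
  claims: Claim[];
  rawMarkdown: string;
}

// An illustrative result for one scraped page.
const extraction: Extraction = {
  url: "https://example.com/post",
  sourceKind: "blog",
  depth: "standard",
  relevance: 0.8,
  claims: [{ text: "X improves Y", quote: "we found X improves Y" }],
  rawMarkdown: "# Post",
};
```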
5. Synthesize
Extractions become an agent brief, action manifest, cluster index, and claim-evidence map. Gaps and next actions are explicit.
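The claim-evidence map at the heart of this step can be sketched as a simple grouping. Types and logic here are illustrative assumptions:

```typescript
// One claim attributed to one source URL.
interface ExtractedClaim {
  claim: string;
  url: string;
}

// Group evidence URLs under each distinct claim, skipping duplicates.
function claimEvidenceMap(items: ExtractedClaim[]): Map<string, string[]> {
  const map = new Map<string, string[]>();
  for (const { claim, url } of items) {
    const urls = map.get(claim) ?? [];
    if (!urls.includes(url)) urls.push(url);
    map.set(claim, urls);
  }
  return map;
}
```

Claims backed by a single source are easy to spot in such a map, which is one way gaps become explicit.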
6. Hand off to the next agent
Every run ships an agent-context payload another agent can consume without re-running the research.
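One possible shape for that payload, with field names assumed from the outputs listed above rather than a published schema:

```typescript
// What a downstream agent could consume to resume without re-running
// the research: answers with their sources, plus open gaps and next steps.
interface AgentContextPayload {
  topic: string;
  answered: { question: string; answer: string; sources: string[] }[];
  gaps: string[];        // open questions the synthesis flagged
  nextActions: string[]; // suggested follow-up work
  corpusIndex: string[]; // URLs of the stored sources
}

// An illustrative handoff for a finished run.
const handoff: AgentContextPayload = {
  topic: "example topic",
  answered: [{
    question: "What are the main trade-offs?",
    answer: "A brief, sourced answer.",
    sources: ["https://example.com/source"],
  }],
  gaps: ["pricing at scale"],
  nextActions: ["compare alternatives"],
  corpusIndex: ["https://example.com/source"],
};
```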
This is an auto project