Conduct enterprise-grade research with multi-source synthesis, citation tracking, and verification. Use when user needs comprehensive analysis requiring 10+ sources, verified claims, or comparison of approaches. Triggers include "deep research", "comprehensive analysis", "research report", "compare X vs Y", or "analyze trends". Do NOT use for simple lookups, debugging, or questions answerable with 1-2 searches.
Published by rebyteai
Purpose: Deliver citation-backed, verified research reports through 8-phase pipeline (Scope → Plan → Retrieve → Triangulate → Synthesize → Critique → Refine → Package) with source credibility scoring and progressive context management.
Context Strategy: This skill uses 2025 context engineering best practices:
Request Analysis
├─ Simple lookup? → STOP: Use WebSearch, not this skill
├─ Debugging? → STOP: Use standard tools, not this skill
└─ Complex analysis needed? → CONTINUE
Mode Selection
├─ Initial exploration? → quick (3 phases, 2-5 min)
├─ Standard research? → standard (6 phases, 5-10 min) [DEFAULT]
├─ Critical decision? → deep (8 phases, 10-20 min)
└─ Comprehensive review? → ultradeep (8+ phases, 20-45 min)
Execution Loop (per phase)
├─ Load phase instructions from [methodology](./reference/methodology.md#phase-N)
├─ Execute phase tasks
├─ Spawn parallel agents if applicable
└─ Update progress
Validation Gate
├─ Run `python scripts/validate_report.py --report [path]`
├─ Pass? → Deliver
└─ Fail? → Fix (max 2 attempts) → Still fails? → Escalate
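The gate above is a bounded retry loop. A minimal sketch (the `run_validator` and `fix_issues` callables are assumptions standing in for `scripts/validate_report.py` and the skill's repair step):

```python
def validation_gate(report_path, run_validator, fix_issues, max_attempts=2):
    """Run the validator; attempt fixes up to max_attempts, then escalate.

    run_validator(path) -> (passed: bool, log: str)
    fix_issues(log) applies repairs based on the validator output.
    """
    for attempt in range(max_attempts + 1):
        passed, log = run_validator(report_path)
        if passed:
            return "deliver"
        if attempt < max_attempts:
            fix_issues(log)  # repair and re-validate on the next pass
    return "escalate"  # still failing after max_attempts fixes
```

In practice `run_validator` would wrap `python scripts/validate_report.py --report [path]` and inspect its exit code.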
AUTONOMY PRINCIPLE: This skill operates independently. Infer assumptions from query context. Only stop for critical errors or incomprehensible queries.
DEFAULT: Proceed autonomously. Derive assumptions from query signals.
ONLY ask if CRITICALLY ambiguous:
When in doubt: PROCEED with standard mode. User will redirect if incorrect.
Default assumptions:
Mode selection criteria:
Announce plan and execute:
All modes execute:
Standard/Deep/UltraDeep execute:
Deep/UltraDeep execute:
Critical: Avoid "Lost in the Middle"
Progressive Context Loading:
Anti-Hallucination Protocol (CRITICAL):
Parallel Execution Requirements (CRITICAL for Speed):
Phase 3 RETRIEVE - Mandatory Parallel Search:
Example correct execution:
[Single message with 8+ parallel tool calls]
WebSearch #1: Core topic semantic
WebSearch #2: Technical keywords
WebSearch #3: Recent 2024-2025 filtered
WebSearch #4: Academic domains
WebSearch #5: Critical analysis
WebSearch #6: Industry trends
Task agent #1: Academic paper analysis
Task agent #2: Technical documentation deep dive
❌ WRONG (sequential execution):
WebSearch #1 → wait for results → WebSearch #2 → wait → WebSearch #3...
✅ RIGHT (parallel execution):
All searches + agents launched simultaneously in one message
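The same fan-out pattern can be sketched in plain Python: launch every query at once and collect results, rather than awaiting each search before issuing the next. This is an illustrative sketch only; `search_fn` is a placeholder for whatever search call the runtime provides.

```python
from concurrent.futures import ThreadPoolExecutor

def run_queries_parallel(queries, search_fn):
    """Launch every search simultaneously; results come back in query order."""
    with ThreadPoolExecutor(max_workers=max(len(queries), 1)) as pool:
        return list(pool.map(search_fn, queries))
```

With 8 queries and a 2-second search latency, the parallel form finishes in roughly one latency period instead of eight.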
Step 1: Citation Verification (Catches Fabricated Sources)
python scripts/verify_citations.py --report [path]
Checks:
If suspicious citations found:
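One check a citation verifier like `scripts/verify_citations.py` might perform is confirming that cited URLs actually resolve. A minimal sketch, with the resolver injected so the flagging logic stays testable (the regex and the `resolves` callable are assumptions, not the script's actual implementation):

```python
import re

def extract_citation_urls(markdown_text):
    """Pull candidate source URLs out of the report text."""
    return re.findall(r"https?://[^\s\)\]>]+", markdown_text)

def find_suspicious(urls, resolves):
    """Flag URLs the resolver cannot reach; resolves(url) -> bool."""
    return [u for u in urls if not resolves(u)]
```

In production, `resolves` would issue an HTTP HEAD request (e.g. via `urllib.request`) and treat connection errors or 404s as failures.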
Step 2: Structure & Quality Validation
python scripts/validate_report.py --report [path]
8 automated checks:
If fails:
CRITICAL: Generate COMPREHENSIVE, DETAILED markdown reports
File Organization (CRITICAL - Clean Accessibility):
1. Create Organized Folder in /code:
/code/[TopicName]_Research_[YYYYMMDD]/
Examples:
/code/Psilocybin_Research_20251104/
/code/React_vs_Vue_Research_20251104/
/code/AI_Safety_Trends_Research_20251104/
2. Save All Formats to Same Folder:
Markdown (Primary Source):
[Documents folder]/research_report_[YYYYMMDD]_[topic_slug].md
/code/research_output/ (internal tracking)
HTML (McKinsey Style - ALWAYS GENERATE):
[Documents folder]/research_report_[YYYYMMDD]_[topic_slug].html
Citations rendered as <span class="citation"> with nested tooltip div showing source details
PDF (Professional Print - ALWAYS GENERATE):
[Documents folder]/research_report_[YYYYMMDD]_[topic_slug].pdf
3. File Naming Convention: All files use the same base name for easy matching:
research_report_20251104_psilocybin_2025.md
research_report_20251104_psilocybin_2025.html
research_report_20251104_psilocybin_2025.pdf
Length Requirements (UNLIMITED with Progressive Assembly):
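The naming convention above is mechanical enough to sketch. A minimal helper (the slug rule is an assumption; the skill does not specify exactly how topics are slugified):

```python
import re
from datetime import date

def report_basename(topic, on=None):
    """Build the shared base name, e.g. research_report_20251104_psilocybin_2025."""
    on = on or date.today()
    slug = re.sub(r"[^a-z0-9]+", "_", topic.lower()).strip("_")
    return f"research_report_{on:%Y%m%d}_{slug}"

def report_paths(topic, folder, on=None):
    """All three output files share one base name inside the research folder."""
    base = report_basename(topic, on)
    return [f"{folder}/{base}.{ext}" for ext in ("md", "html", "pdf")]
```

Keeping one base name means a glob like `research_report_20251104_*` finds every format of a given report at once.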
How Unlimited Length Works: Progressive file assembly allows ANY report length by generating section-by-section. Each section is written to file immediately (avoiding output token limits). Complex topics with many findings? Generate 20, 30, 50+ findings - no constraint!
Content Requirements:
Writing Standards:
Bullet Point Policy (Anti-Fatigue Enforcement):
Anti-Fatigue Quality Check (Apply to EVERY Section): Before considering a section complete, verify:
If ANY check fails: Regenerate the section before moving to next.
Source Attribution Standards (Critical for Preventing Fabrication):
Deliver to user:
Generation Workflow: Progressive File Assembly (Unlimited Length)
Phase 8.1: Setup
# Extract topic slug from research question
# Create folder: /code/[TopicName]_Research_[YYYYMMDD]/
mkdir -p /code/[folder_name]
# Create initial markdown file with frontmatter
# File path: [folder]/research_report_[YYYYMMDD]_[slug].md
Phase 8.2: Progressive Section Generation
CRITICAL STRATEGY: Generate and write each section individually to file using Write/Edit tools. This allows unlimited report length while keeping each generation manageable.
OUTPUT TOKEN LIMIT SAFEGUARD (CRITICAL - Claude Code Default: 32K):
Claude Code default limit: 32,000 output tokens (≈24,000 words total per skill execution). This is a HARD LIMIT and cannot be changed within the skill.
What this means:
Realistic report sizes per mode:
For reports >20,000 words: the user must run the skill multiple times:
Auto-Continuation Strategy (TRUE Unlimited Length):
When report exceeds 18,000 words in single run:
This achieves UNLIMITED length while respecting 32K limit per agent
Initialize Citation Tracking:
citations_used = [] # Maintain this list in working memory throughout
Section Generation Loop:
Pattern: Generate section content → Use Write/Edit tool with that content → Move to next section.
Each Write/Edit call contains ONE section (≤2,000 words per call).
Executive Summary (200-400 words)
Introduction (400-800 words)
Finding 1 (600-2,000 words)
Finding 2 (600-2,000 words)
... Continue for ALL findings (each finding = one Edit tool call, ≤2,000 words)
CRITICAL: If you have 10 findings × 1,500 words each = 15,000 words of findings, this is OKAY because each Edit call is only 1,500 words (under the 2,000-word limit per tool call). The FILE grows to 15,000 words, but no single tool call exceeds limits.
Synthesis & Insights
Limitations & Caveats
Recommendations
Bibliography (CRITICAL - ALL Citations)
Methodology Appendix
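The section loop above can be sketched as a single pass that appends each section to the file and tracks the running word count. This is a minimal sketch; `write_fn` stands in for the Write/Edit tool call, and the `##` heading convention is an assumption taken from the report template.

```python
def assemble_report(path, sections, write_fn, max_words_per_call=2000):
    """Append sections one Write/Edit call at a time, tracking total words.

    sections: iterable of (title, body) pairs.
    write_fn(path, text) appends text to the report file.
    """
    total = 0
    for title, body in sections:
        words = len(body.split())
        # Each individual call stays under the per-call limit even though
        # the file itself grows without bound.
        assert words <= max_words_per_call, f"{title} too long for one call"
        write_fn(path, f"\n## {title}\n\n{body}\n")
        total += words
    return total
```

The returned total is what Phase 8.3 compares against the 18,000-word continuation threshold.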
Phase 8.3: Auto-Continuation Decision Point
After generating sections, check word count:
If total output ≤18,000 words: Complete normally
If total output will exceed 18,000 words: Auto-Continuation Protocol
Step 1: Save Continuation State
Create file: /code/research_output/continuation_state_[report_id].json
{
"version": "2.1.1",
"report_id": "[unique_id]",
"file_path": "[absolute_path_to_report.md]",
"mode": "[quick|standard|deep|ultradeep]",
"progress": {
"sections_completed": [list of section IDs done],
"total_planned_sections": [total count],
"word_count_so_far": [current word count],
"continuation_count": [which continuation this is, starts at 1]
},
"citations": {
"used": [1, 2, 3, ..., N],
"next_number": [N+1],
"bibliography_entries": [
"[1] Full citation entry",
"[2] Full citation entry",
...
]
},
"research_context": {
"research_question": "[original question]",
"key_themes": ["theme1", "theme2", "theme3"],
"main_findings_summary": [
"Finding 1: [100-word summary]",
"Finding 2: [100-word summary]",
...
],
"narrative_arc": "[Current position in story: beginning/middle/conclusion]"
},
"quality_metrics": {
"avg_words_per_finding": [calculated average],
"citation_density": [citations per 1000 words],
"prose_vs_bullets_ratio": [e.g., "85% prose"],
"writing_style": "technical-precise-data-driven"
},
"next_sections": [
{"id": N, "type": "finding", "title": "Finding X", "target_words": 1500},
{"id": N+1, "type": "synthesis", "title": "Synthesis", "target_words": 1000},
...
]
}
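A continuation agent's first move is to read this state file and extract its resume point. A minimal sketch using the field names from the template above (the returned dict shape is an illustrative assumption, not a fixed contract):

```python
import json

def load_resume_point(state_path):
    """Read the continuation state and return what the next agent needs."""
    with open(state_path) as f:
        state = json.load(f)
    return {
        "file_path": state["file_path"],                      # report to append to
        "next_citation": state["citations"]["next_number"],   # keep numbering continuous
        "next_sections": state["next_sections"],              # work remaining
        "done": len(state["progress"]["sections_completed"]), # progress so far
    }
```

Reading `next_number` from state (rather than re-scanning the report) is what keeps citation numbering continuous across agent boundaries.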
Step 2: Spawn Continuation Agent
Use Task tool with general-purpose agent:
Task(
subagent_type="general-purpose",
description="Continue deep-research report generation",
prompt="""
CONTINUATION TASK: You are continuing an existing deep-research report.
CRITICAL INSTRUCTIONS:
1. Read continuation state file: /code/research_output/continuation_state_[report_id].json
2. Read existing report to understand context: [file_path from state]
3. Read LAST 3 completed sections to understand flow and style
4. Load research context: themes, narrative arc, writing style from state
5. Continue citation numbering from state.citations.next_number
6. Maintain quality metrics from state (avg words, citation density, prose ratio)
CONTEXT PRESERVATION:
- Research question: [from state]
- Key themes established: [from state]
- Findings so far: [summaries from state]
- Narrative position: [from state]
- Writing style: [from state]
YOUR TASK:
Generate next batch of sections (stay under 18,000 words):
[List next_sections from state]
Use Write/Edit tools to append to existing file: [file_path]
QUALITY GATES (verify before each section):
- Words per section: Within ±20% of [avg_words_per_finding]
- Citation density: Match [citation_density] ±0.5 per 1K words
- Prose ratio: Maintain ≥80% prose (not bullets)
- Theme alignment: Section ties to key_themes
- Style consistency: Match [writing_style]
After generating sections:
- If more sections remain: Update state, spawn next continuation agent
- If final sections: Generate complete bibliography, verify report, cleanup state file
HANDOFF PROTOCOL (if spawning next agent):
1. Update continuation_state.json with new progress
2. Add new citations to state
3. Add summaries of new findings to state
4. Update quality metrics
5. Spawn next agent with same instructions
"""
)
Step 3: Report Continuation Status
Tell user:
📊 Report Generation: Part 1 Complete (N sections, X words)
🔄 Auto-continuing via spawned agent...
Next batch: [section list]
Progress: [X%] complete
Phase 8.4: Continuation Agent Quality Protocol
When continuation agent starts:
Context Loading (CRITICAL):
Pre-Generation Checklist:
Per-Section Generation:
Handoff Decision:
Final Agent Responsibilities:
Anti-Fatigue Built-In: Each agent generates manageable chunks (≤18K words), maintaining quality. Context preservation ensures coherence across continuation boundaries.
Generate HTML (McKinsey Style)
Read McKinsey template from ./templates/mckinsey_report_template.html
Extract 3-4 key quantitative metrics from findings for dashboard
Use Python script for MD to HTML conversion:
cd ~/.claude/skills/deep-research
python scripts/md_to_html.py [markdown_report_path]
The script returns two parts:
CRITICAL: The script handles ALL conversion automatically:
## → <div class="section"> with <h2 class="section-title">, ### → <h3 class="subsection-title">
Lists → <ul><li> with proper nesting
Tables → <table> with thead/tbody
Paragraphs → <p> tags
**text** → <strong>, *text* → <em>
Add Citation Tooltips (Attribution Gradients): For each [N] citation in {{CONTENT}} (not bibliography), optionally add interactive tooltips:
<span class="citation">[N]
<span class="citation-tooltip">
<div class="tooltip-title">[Source Title]</div>
<div class="tooltip-source">[Author/Publisher]</div>
<div class="tooltip-claim">
<div class="tooltip-claim-label">Supports Claim:</div>
[Extract sentence with this citation]
</div>
</span>
</span>
NOTE: This step is optional for speed. Basic [N] citations are sufficient.
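The tooltip markup above can be applied mechanically with a single regex pass over the content. A minimal sketch; the class names match the template, but the `sources` lookup (citation number → title, publisher, supported claim) is an assumed data structure the caller must supply, and the pass should be run on {{CONTENT}} only, not the bibliography.

```python
import re

def add_tooltips(html, sources):
    """Wrap plain [N] citations in the tooltip markup.

    sources: dict mapping citation number -> (title, publisher, claim).
    Citations with no entry in sources are left as plain [N].
    """
    def wrap(match):
        n = int(match.group(1))
        if n not in sources:
            return match.group(0)  # unknown citation: leave untouched
        title, publisher, claim = sources[n]
        return (
            f'<span class="citation">[{n}]'
            f'<span class="citation-tooltip">'
            f'<div class="tooltip-title">{title}</div>'
            f'<div class="tooltip-source">{publisher}</div>'
            f'<div class="tooltip-claim">'
            f'<div class="tooltip-claim-label">Supports Claim:</div>'
            f'{claim}</div></span></span>'
        )
    return re.sub(r"\[(\d+)\]", wrap, html)
```

Run this exactly once per document; a second pass would match the `[N]` already inside generated spans.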
Replace placeholders in template:
CRITICAL: NO EMOJIS - Remove any emoji characters from final HTML
Save to: [folder]/research_report_[YYYYMMDD]_[slug].html
Verify HTML (MANDATORY):
python scripts/verify_html.py --html [html_path] --md [md_path]
Open in browser: open [html_path]
Generate PDF
[folder]/research_report_[YYYYMMDD]_[slug].pdf
Format: Comprehensive markdown report following template EXACTLY
Required sections (all must be detailed):
Bibliography Requirements (ZERO TOLERANCE - Report is UNUSABLE without complete bibliography):
Strictly Prohibited:
Writing Standards (Critical):
Quality gates (enforced by validator):
Stop immediately if:
Graceful degradation:
Error format:
⚠️ Issue: [Description]
📊 Context: [What was attempted]
🔍 Tried: [Resolution attempts]
💡 Options:
1. [Option 1]
2. [Option 2]
3. [Option 3]
Every report must:
Priority: Thoroughness over speed.
Required:
Optional:
Assumptions:
Use when:
Do NOT use:
Location: ./scripts/
No external dependencies required.
Do not inline these - reference only:
Context Management: Load files on-demand for current phase only. Do not preload all content.
User Query Processing: [User research question will be inserted here during execution]
Retrieved Information: [Search results and sources will be accumulated here]
Generated Analysis: [Findings, synthesis, and report content generated here]
Note: This section remains empty in the skill definition. Content populated during runtime only.