Four skills that turn the agent into a research desk.
These don't shrink tokens; they expand what the agent can answer. Two recency-grounded research skills that cite real posts, a generator that prints token-efficient CLIs from any API spec, and a one-shot blob-to-HTML packager. Install is one curl; undo is one rm.
printingpress
Generates a Go CLI + MCP server from an OpenAPI spec, HAR file, or live website — with local SQLite mirror, FTS5 search, and compound commands the underlying API can't answer natively.
curl --create-dirs -fsSL https://skillmake.xyz/i/printingpress -o ~/.claude/skills/printingpress/SKILL.md
Generates a Go CLI + MCP server from an OpenAPI spec, HAR file, or live website. The output isn't a passthrough wrapper — it's a domain-shaped CLI with a local SQLite mirror, FTS5 search, and compound commands like stale, orphans, health, similar, and bottleneck that the underlying API can't answer natively.
Every generated CLI auto-emits JSON when piped, ships a --compact mode (60–80% fewer tokens), exposes typed exit codes, and supports --dry-run. Four verification gates (scorecard, dogfood, proof-of-behavior, live smoke) must all pass before a build is considered ship-ready. No spec? Drive the site in a browser and the press infers the API from captured HAR.
# 1. Install the binary (Go 1.26.3+)
go install github.com/mvanhorn/cli-printing-press/v4/cmd/printing-press@latest

# 2. Install the skills (recommended)
git clone https://github.com/mvanhorn/cli-printing-press.git
claude --plugin-dir .

# Or starter pack via npx
npx -y @mvanhorn/printing-press install starter-pack
Trigger: /printing-press <app-name | url>. Reprint / polish / publish slash commands handle regen, fixes, and library publishing.
last72hours
72-hour viral radar across Reddit, X, TikTok, Instagram, Hacker News, YouTube, and GitHub — emits a clickable HTML brief plus an optional Paper.design leaderboard via MCP.
curl --create-dirs -fsSL https://skillmake.xyz/i/last72hours -o ~/.claude/skills/last72hours/SKILL.md
Scrapes Reddit, X, TikTok, Instagram, Hacker News, YouTube, and GitHub for the last 72 hours on any topic, then emits a self-contained HTML brief with clickable source links and engagement counts. Optional: builds an editorial-scorecard leaderboard artboard in Paper.design via MCP.
Reddit / HN / YouTube / GitHub are free. X works with browser cookies (no API key). Instagram + TikTok use ScrapeCreators credits — or skip them entirely with the bundled last72-reddit-x subagent for a free Reddit + X pulse.
# 1. Install the upstream last30days engine first
git clone https://github.com/mvanhorn/last30days-skill ~/.last30days-skill
ln -s ~/.last30days-skill/skills/last30days ~/.claude/skills/last30days

# 2. Install last72hours
git clone https://github.com/iharnoor/last72hours-skill ~/.last72hours-skill
ln -s ~/.last72hours-skill/skills/last72hours ~/.claude/skills/last72hours
Trigger: /last72hours <topic>. Output lands in ~/Documents/Last72Hours (override with LAST72_OUTPUT_DIR).
last30days
Research what people actually said in the last ~30 days — pulls posts and engagement from Reddit, X, YouTube, TikTok, HN, Polymarket, GitHub, and the web, then synthesizes a brief with inline citations.
curl --create-dirs -fsSL https://skillmake.xyz/i/last30days -o ~/.claude/skills/last30days/SKILL.md
A wider time window built around the same evidence-grounded pattern. The reasoning model plans the JSON query upstream; the engine sweeps Reddit, X, YouTube, TikTok, Instagram, Hacker News, Polymarket, Bluesky, GitHub, and the web for the last ~30 days; synthesis cites every quote with inline markdown links.
Eight output LAWs govern the synthesis — no trailing Sources: block, no invented title, no em-dashes, no raw evidence dumps. The badge IS the title: 🌐 last30days v<v> · synced <YYYY-MM-DD>. Supports a COMPARISON template for X vs Y queries.
# Install
git clone https://github.com/mvanhorn/last30days-skill ~/.last30days-skill
ln -s ~/.last30days-skill/skills/last30days ~/.claude/skills/last30days
Trigger: /last30days <topic>. Named entities require --plan — the host model generates the JSON plan, the engine executes it.
html-everything
Any blob → one self-contained editorial HTML page with auto-linkified URLs, content-aware theming, and no external deps beyond Google Fonts.
curl --create-dirs -fsSL https://skillmake.xyz/i/html-everything -o ~/.claude/skills/html-everything/SKILL.md
Pass in Markdown, JSON, plain text, or a URL to a doc — get back a single self-contained HTML file with an editorial layout, content-aware theming, and every URL auto-linkified. No project scaffold, no build step, no API keys. The skill is a recipe Claude executes in-context — install is a symlink, uninstall is rm.
Output uses Archivo Black for display, Inter Tight for body, JetBrains Mono for code. Styling subtly shifts based on detected content type — a market-cap rundown doesn't come out looking like a sports recap. Files land in ~/Documents/html-everything/ by default — override with HTMLE_OUTPUT_DIR.
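Auto-linkification is the one mechanical step in the pipeline. A minimal Go sketch under the usual caveats — the regex is deliberately simple and the skill's real URL detection may differ:

```go
package main

import (
	"fmt"
	"html"
	"regexp"
)

// urlRe is a deliberately naive URL matcher for illustration.
var urlRe = regexp.MustCompile(`https?://[^\s<>"]+`)

// linkify HTML-escapes the text, then wraps each detected URL in an
// anchor tag — the "every URL auto-linkified" step described above.
func linkify(text string) string {
	escaped := html.EscapeString(text)
	return urlRe.ReplaceAllString(escaped, `<a href="$0">$0</a>`)
}

func main() {
	fmt.Println(linkify("Thread: https://news.ycombinator.com/item?id=1 (via HN)"))
}
```

Escaping before matching keeps the output well-formed: ampersands inside a matched URL are already `&amp;`-encoded by the time they land in the href attribute.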
# Install
git clone https://github.com/iharnoor/html-everything ~/Developer/html-everything
ln -s ~/Developer/html-everything/skills/html-everything ~/.claude/skills/html-everything
Trigger: /html-everything <path | json | url>. Omit the argument to paste content inline.
How they compose
printingpress is the heavyweight: point it at an API and it prints the CLI the agent uses to talk to it, so the API's surface never has to be re-discovered session after session. last72hours is the immediate pulse — what spiked today. last30days is the wider lens — what the month has built. Both research skills stay grounded in real posts with inline citations. html-everything is the artifact step: whatever the others produce, it wraps into one shareable HTML file you can send.
Got a research-grade skill?
If you've built a skill that expands what the agent can answer rather than how cheaply it can answer it — submit it. Same curation pipeline as the rest of the marketplace.