10.2 Automation Backlog (Evidence-Bound by Stage)

This backlog is ordered first by decision-grade fit with the early winning tactics observed in Sections 1.1-1.6, then by later-stage scale utility.

Stage A: Launch-Window Aligned (First Priority)

  1. Founder/Operator pipeline automation.

    • Why this fits evidence: Polymarket and Kalshi show directional evidence of direct operator outreach and reply loops during launch periods (query artifacts plus early-team comments); do not treat Polymarket artifacts alone as sufficient, due to partial month-one permalink recovery.
    • What to automate: contact stages, follow-up timers, owner assignment, and conversion notes. References: E32 E58 E59 E60
  2. Onboarding activation rescue.

    • Why this fits evidence: launch-period early-team replies repeatedly handled fee clarity, app readiness, API onboarding, and direct support handoff to reduce first-trade friction.
    • What to automate: drop-off detection, step-specific nudges, completion checklists. References: E59 E60 E61 E64 E65
  3. Curated market launch checklist automation.

    • Why this fits evidence: curation and settlement clarity were central in early growth.
    • What to automate: market draft templates, settlement-source checks, approval workflow. References: E3 E11
  4. Liquidity SLO monitoring and alerts.

    • Why this fits evidence: early books live or die on spread/depth/fill reliability.
    • What to automate: threshold alerts, owner escalation, and incident timelines. References: E3 E4
  5. Weekly growth scorecard automation.

    • Why this fits evidence: case-backed operator loops require weekly source-cohort decisions on what to scale/cut to protect liquidity and retained quality.
    • What to automate: KPI deltas by cohort/channel, decision queue, stop-loss flags. References: E2 E4 E58 E59 E60
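The liquidity SLO check in item 4 can be sketched as a simple threshold evaluation. This is a minimal sketch with illustrative defaults; the field names and threshold values are assumptions and must come from your own market-ops targets.

```python
# Sketch of a per-market liquidity SLO check (Stage A, item 4).
# Threshold defaults are illustrative placeholders, not operating targets.

def check_liquidity_slo(market: dict,
                        max_spread_bps: float = 50.0,
                        min_depth_usd: float = 10_000.0,
                        min_fill_rate: float = 0.95) -> list[str]:
    """Return a list of SLO breaches for one market snapshot;
    a non-empty list should trigger owner escalation and an incident timeline."""
    breaches = []
    if market["spread_bps"] > max_spread_bps:
        breaches.append(f"spread {market['spread_bps']}bps > {max_spread_bps}bps")
    if market["depth_usd"] < min_depth_usd:
        breaches.append(f"depth ${market['depth_usd']:,.0f} < ${min_depth_usd:,.0f}")
    if market["fill_rate"] < min_fill_rate:
        breaches.append(f"fill rate {market['fill_rate']:.2f} < {min_fill_rate}")
    return breaches
```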

Stage B: After Initial Liquidity Stability

  1. Incentive governor tied to retained quality.

    • Why this fits evidence: incentives work when retention/liquidity quality holds, not on raw signup growth.
    • What to automate: budget caps, anomaly flags, manual override path. References: E4 E8 E9
  2. Partner concentration guardrails.

    • Why this fits evidence: embedded distribution can accelerate growth but also creates dependency risk.
    • What to automate: concentration thresholds, alerts, mitigation task generation. References: E12
  3. Programmatic discovery quality gate (SEO + PSEO + LLM SEO).

    • Why this fits evidence: search/discovery now depends on both quality controls and answer-engine visibility dynamics.
    • What to automate: publish gate for SEO, PSEO, and LLM SEO pages (uniqueness checks, source citations, thin-content rejection, AI-referral monitoring). References: E6 E83 E84 E91

Stage C: Post-PMF Optional

  1. Expanded paid-channel automation (bidding, creative variants, budget routing).

    • Use only after qualified conversion and payback windows are stable. References: E75 E78 E81 E82
  2. Advanced experimentation orchestration.

    • Use after core launch loops are consistently measurable and model/tool churn is operationally managed. References: E92 E93 E94 E95

Build-vs-Buy Operating Decisions (Rerun: March 4, 2026)

Decision policy for this section:

  • Build by default when the workflow is simple, policy-sensitive, or tightly tied to market integrity.
  • Use tools when cross-system complexity, instrumentation depth, or evaluation volume is the bottleneck.
  • Require a documented cost floor and fallback path before production rollout.
  • Do not use funding/valuation headlines or social-engagement signals as primary adoption evidence.
| Backlog Item | Complexity | Default Decision | Escalation Trigger | Cost Floor | Primary Evidence |
|---|---|---|---|---|---|
| 1. Founder/operator pipeline automation | Medium | Build in-house | Add n8n only when 3+ systems and branching logic are required | n8n self-host/community or cloud from $20/mo | E108 E112 E113 |
| 2. Onboarding activation rescue | Medium-High | Use tool | Use PostHog when funnel instrumentation + experiment replay are required | Free tier then usage pricing | E110 E114 |
| 3. Curated market launch checklist | Low-Medium | Build in-house | No external tool unless compliance workflow complexity materially increases | Internal engineering time | E11 |
| 4. Liquidity SLO monitoring/alerts | Medium | Build in-house | Keep inside existing market-ops monitoring stack | Internal infra cost | E3 E4 |
| 5. Weekly growth scorecard | Low | Build in-house | External tooling only after multi-team reporting complexity appears | Internal SQL/reporting cost | E2 E4 |
| 6. Incentive governor | Medium-High risk | Build in-house | Keep policy and override paths internal by default | Internal controls cost | E4 E8 E9 |
| 7. Partner concentration guardrails | Low | Build in-house | Use external tooling only when partner/channel graph complexity exceeds internal reporting | Internal SQL/alerting cost | E12 |
| 8. Discovery quality gate (SEO/PSEO/LLM SEO) | High | Hybrid: build + tool | Use Langfuse when high-volume content/eval traces require structured scoring | Free hobby; paid starts $29/mo | E6 E83 E84 E91 E109 E115 |
| 9. Paid-channel automation | High | Use platform-native automation + human review | Keep Meta/Reddit/TikTok native systems first; treat design-generation tools as watchlist | Varies by platform; Paper free or $16-$20/user/mo Pro | E78 E81 E82 E117 |
| 10. Advanced experimentation orchestration | High | Use tool | Use PostHog + Langfuse; add n8n only if workflow orchestration crosses multiple systems | Free entry tiers available | E108 E110 E109 E114 E115 |

Watchlist (Not Default for Mission-Critical Automation)

  • Lovable, Replit Agent, and Webctl remain watchlist tools for prototyping or non-critical workflows because this evidence set does not include primary operator-grade reliability outcomes for LONGSHOT-like production use.
  • Figma Make and Google Stitch remain directional design-automation signals; do not replace core production process by default.

References: E2 E3 E4 E6 E8 E9 E11 E12 E78 E81 E82 E83 E84 E91 E97 E98 E108 E109 E110 E111 E112 E113 E114 E115 E116 E117

TOF Channel Automation (Twitter/X, Telegram, Discord) (Rerun: March 4, 2026)

Channel constraint: these are LONGSHOT’s primary top-of-funnel surfaces, so automation must be biased toward triage + logging + measurement, not auto-posting + auto-reply at scale.

| Workflow | Complexity | Default Decision | Escalation Trigger | Tool (If Used) | Cost Floor | Evidence |
|---|---|---|---|---|---|---|
| Twitter/X content pipeline (draft → approve → schedule → measure) | Medium | Build in-house / use native scheduling | Multi-account + team approvals + content queue; or you want agentic scheduling via MCP | Typefully (MCP, Auto-DMs) or Hypefury (AI drafting + scheduling) | Hypefury from $6/mo; Typefully Creator plan (see pricing) | E123 E124 E125 E126 E151 |
| Inbound triage + lead capture (Twitter/X, Telegram, Discord) | Medium-High | Build in-house (human-in-loop queue + SLA) | 50+ inbound touches/day or >1 responder; need auto-summary + routing + dedupe | Botpress (AI agent platform) or Mava (watchlist: Discord/Telegram support + AI) | Free tiers exist; scale costs depend on usage | E130 E131 E132 E133 E134 E135 E136 |
| CRM + pipeline operations (contacts, deals, notes, reminders) | High to build | Use tool | Only build if you need deep custom objects and cannot tolerate SaaS dependency | Attio (+ MCP-driven automation) | Free entry; see pricing for seat scale | E128 E129 |
| Enrichment + research automation (sales ops heavy) | High | Build small first, then tool if volume justifies | Multi-source enrichment + structured research at scale becomes a time sink | Clay | Free entry; paid by plan/credits | E140 E141 |
| Outbound email follow-up (only after explicit list-quality + compliance) | High risk | Avoid by default; use tool only after you have tight ICP + deliverability discipline | You have repeatable TOF capture and need systematic follow-up | Instantly or Smartlead | Instantly from $47/mo; Smartlead from $39/mo | E145 E146 E147 E148 |

ICP Automation Pack: Crypto-Native 5m-15m Up/Down Traders

Scope note: this is the ~90% automatable part of TOF operations (monitoring/search ingestion, deduping, enrichment, segment tagging, lead scoring, reply-queue generation, CRM sync, follow-up reminders, attribution). Keep public posting and first outbound messaging human-approved.

Evidence boundary: policy/automation guardrails are evidence-backed; keyword lists, score weights, and thresholds below are operator defaults that must be calibrated weekly against qualified outcomes (first trade, D7 retained, liquidity contribution), not treated as fixed truth.

1) X API Query Design to Reduce AI Slop and Engagement-Farm Noise

Use query families, not one query.

A. Setup-and-execution posts (higher-intent candidate set)

("BTC" OR "ETH" OR "SOL") ("5m" OR "15m" OR "ltf" OR "scalp" OR "scalping")
("long" OR "short" OR "entry" OR "stop" OR "sl" OR "tp" OR "take profit" OR "invalid")
lang:en -is:retweet

B. Microstructure/pain posts (higher conversion-intent hypothesis)

("BTC" OR "ETH" OR "SOL") ("spread" OR "slippage" OR "fees" OR "fill" OR "execution" OR "latency" OR "liquidity")
("5m" OR "15m" OR "scalp" OR "perp" OR "perps")
lang:en -is:retweet

C. Venue-specific intraday traders (candidate venue filter)

("5m" OR "15m" OR "scalp") ("Hyperliquid" OR "Binance Futures" OR "Bybit" OR "OKX" OR "dYdX" OR "GMX" OR "Drift")
("long" OR "short" OR "perp" OR "setup")
lang:en -is:retweet

Then apply a negative-keyword denylist at ingest:

  • airdrop, giveaway, follow back, f4f, vip signals, guaranteed, 100x, moonshot, copy trade now, dm for signal.
  • Add account-level noise filters: high repost ratio, repeated templated text, and no market-specific nouns over last N posts.

Operator note: X API query limits and operator availability vary by access level; if an operator is unavailable in your tier, run broader retrieval and enforce filters in post-processing. E152 E153

2) Real-Trader Scoring (Automated) vs Slop Scoring (Automated)

Compute both per account and per post:

  • trader_intent_score (0-100): +timeframe mention, +symbol mention, +entry/SL/TP specifics, +venue mention, +repeated intraday posting cadence.
  • slop_risk_score (0-100): +engagement-farm phrases, +template repetition, +extreme CTA density, +new-account instability, +low domain vocabulary diversity.

Calibration note: threshold values below are seed defaults; re-tune weekly using observed precision/recall against downstream qualified outcomes.

Routing policy:

  • intent >= 70 and slop <= 30 -> hot queue (reply/DM candidate).
  • intent 50-69 and slop <= 40 -> warm queue (monitor + one-touch test).
  • else -> archive/suppress.
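The routing policy above reduces to a small decision function. The thresholds are the seed defaults stated above and must be re-tuned weekly; the score inputs are assumed to be the precomputed 0-100 values.

```python
# Sketch of the hot/warm/archive routing policy using the seed thresholds.

def route_lead(intent_score: int, slop_score: int) -> str:
    """Map (trader_intent_score, slop_risk_score) to a queue."""
    if intent_score >= 70 and slop_score <= 30:
        return "hot"      # reply/DM candidate
    if 50 <= intent_score <= 69 and slop_score <= 40:
        return "warm"     # monitor + one-touch test
    return "archive"      # suppress
```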

3) CRM Model (System of Record for DM -> TG Pipeline)

Minimum lead object fields:

  • lead_id, source_channel, x_user_id, x_handle, profile_url
  • first_seen_at, last_signal_at, query_family, top_signals
  • intent_score, slop_risk_score, segment (5m_scalper, 15m_scalper, microstructure)
  • stage, owner, next_action_at, last_touch_at
  • dm_template_version, dm_sent_at, dm_reply_at
  • tg_invite_token, tg_invite_sent_at, tg_joined_at
  • first_trade_at, d7_retained, liquidity_contribution_30d

Recommended stage pipeline:

discovered -> qualified -> queued_for_reply -> dm_opt_in_sent -> dm_opt_in_yes -> tg_invite_sent -> tg_joined -> first_trade -> retained_d7.
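The lead object and stage pipeline above can be sketched as a dataclass with a forward-only transition check. Only a subset of the fields is shown; the enforcement of strictly forward stage movement is an assumption about how transitions should behave, not a stated requirement.

```python
# Sketch of the CRM lead object (subset of fields) plus the recommended
# stage pipeline, with a guard that only allows forward transitions.

from dataclasses import dataclass
from typing import Optional

STAGES = ["discovered", "qualified", "queued_for_reply", "dm_opt_in_sent",
          "dm_opt_in_yes", "tg_invite_sent", "tg_joined", "first_trade",
          "retained_d7"]

@dataclass
class Lead:
    lead_id: str
    x_handle: str
    intent_score: int = 0
    slop_risk_score: int = 0
    segment: Optional[str] = None   # 5m_scalper, 15m_scalper, microstructure
    stage: str = "discovered"

    def advance(self, new_stage: str) -> None:
        """Allow only forward movement along the recommended pipeline."""
        if STAGES.index(new_stage) <= STAGES.index(self.stage):
            raise ValueError(f"illegal transition {self.stage} -> {new_stage}")
        self.stage = new_stage
```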

4) Scalable Pipeline: X Discovery -> DM -> TG Group

  1. Ingest: run X queries every 5-10 min, store raw posts/users.
  2. Dedup: collapse by x_user_id and canonical post ID.
  3. Enrich: fetch last 30-90 posts + profile metadata.
  4. Score: compute intent and slop scores; route to hot/warm/archive.
  5. Queue: generate daily Top 25 accounts/threads for human review.
  6. DM: send short opt-in DM only to reviewed leads (no mass auto-DM).
  7. Invite: on explicit yes, generate single-use TG invite token and send.
  8. Sync: webhook TG join event back to CRM stage transition.
  9. Attribute: append UTM/invite-token attribution through first trade and D7 retention.
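One scheduled pass over steps 1-5 can be sketched as follows. `score_post` is a hypothetical stub standing in for the real scoring step, the record fields are assumptions, and steps 6-9 stay human-triggered per the automation boundary below.

```python
# Sketch of one pipeline pass: dedupe by user (step 2), score (step 4),
# route to the hot queue, and emit the daily Top 25 review queue (step 5).

def score_post(post: dict) -> tuple[int, int]:
    # Stand-in: the real scorer computes trader_intent_score and
    # slop_risk_score from post text and account history.
    return post.get("intent", 0), post.get("slop", 100)

def run_pipeline_pass(raw_posts: list[dict]) -> list[dict]:
    """Return the daily human-review queue, highest intent first."""
    seen: set[str] = set()
    queue: list[dict] = []
    for post in raw_posts:
        if post["x_user_id"] in seen:        # step 2: collapse by user
            continue
        seen.add(post["x_user_id"])
        intent, slop = score_post(post)      # step 4: score and route
        if intent >= 70 and slop <= 30:
            queue.append({**post, "intent": intent, "slop": slop})
    return sorted(queue, key=lambda p: -p["intent"])[:25]  # step 5: Top 25
```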

5) Automation Boundary (Do / Do Not)

  • Automate fully: ingestion, filtering, scoring, queueing, CRM sync, reminders, attribution, stage transitions.
  • Keep human-in-loop: first outbound DM, public replies, and any ambiguous compliance-sensitive claims.
  • Do not automate: bulk unsolicited outreach or auto-reply flooding in public communities.

6) Go/No-Go Thresholds (Precision@25 + Conversion-to-TG)

Target-setting note: the numeric bands below are operating defaults and must be recalibrated against observed cohort quality.

| Metric | Definition | Green (Go) | Yellow (Tune) | Red (No-Go) | Required Action |
|---|---|---|---|---|---|
| precision@25 | qualified_leads_in_daily_top25 / 25 | >= 0.60 | 0.45-0.59 | < 0.45 | Green: keep query family + score weights. Yellow: retrain weights, expand denylist, tighten venue/timeframe terms. Red: pause new DM sends for this segment until precision recovers. |
| conversion_to_tg_from_dm_sent | tg_joined / dm_opt_in_sent | >= 0.12 | 0.08-0.11 | < 0.08 | Green: scale outreach volume carefully. Yellow: improve DM copy and invite flow. Red: stop scaling and rerun ICP qualification + messaging. |
| conversion_to_tg_from_opt_in_yes | tg_joined / dm_opt_in_yes | >= 0.60 | 0.45-0.59 | < 0.45 | Green: keep current invite handoff. Yellow: fix invite friction and response SLA. Red: treat as funnel breakage and fix before new outbound. |

Go/No-Go gate:

  • Go: run at higher volume only if precision@25 and conversion_to_tg_from_dm_sent are both Green for 2 consecutive weeks.
  • No-Go: freeze incremental volume if either metric is Red for 2 consecutive weeks or drops >30% week-over-week.
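The gate can be sketched as a function over weekly metric history. The input shape (a list of `(precision_at_25, conv_tg_from_dm_sent)` tuples, newest last) is an assumption; thresholds are the defaults from the table above.

```python
# Sketch of the Go/No-Go gate: "go" needs both metrics Green two weeks
# running; "no-go" on two Red weeks or a >30% week-over-week drop in
# either metric; everything else holds current volume.

def gate_decision(weekly: list[tuple[float, float]],
                  green_precision: float = 0.60, green_conv: float = 0.12,
                  red_precision: float = 0.45, red_conv: float = 0.08) -> str:
    if len(weekly) < 2:
        return "hold"                      # not enough history to decide
    last_two = weekly[-2:]
    if all(p >= green_precision and c >= green_conv for p, c in last_two):
        return "go"
    if all(p < red_precision or c < red_conv for p, c in last_two):
        return "no-go"                     # either metric Red, 2 weeks running
    (p_prev, c_prev), (p_now, c_now) = last_two
    if p_now < 0.7 * p_prev or c_now < 0.7 * c_prev:
        return "no-go"                     # >30% week-over-week drop
    return "hold"
```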

References: E2 E4 E11 E120 E121 E128 E129 E130 E132 E152 E153