10.2 Automation Backlog (Evidence-Bound by Stage)
This backlog is ordered first by decision-grade fit with the early winning tactics observed in Sections 1.1-1.6, then by later-stage scale utility.
Stage A: Launch-Window Aligned (First Priority)
- Founder/operator pipeline automation.
  - Why this fits evidence: Polymarket/Kalshi show directional evidence of direct operator outreach and reply loops in launch periods (query artifacts + early-team comments); do not treat Polymarket artifacts alone as sufficient due to partial month-one permalink recovery.
  - What to automate: contact stages, follow-up timers, owner assignment, and conversion notes. References: E32 E58 E59 E60
- Onboarding activation rescue.
- Curated market launch checklist automation.
- Liquidity SLO monitoring and alerts.
- Weekly growth scorecard automation.
Stage B: After Initial Liquidity Stability
- Incentive governor tied to retained quality.
- Partner concentration guardrails.
  - Why this fits evidence: embedded distribution can accelerate growth and create dependency risk.
  - What to automate: concentration thresholds, alerts, mitigation task generation. References: E12
- Programmatic discovery quality gate (SEO + PSEO + LLM SEO).
Stage C: Post-PMF Optional
- Expanded paid-channel automation (bidding, creative variants, budget routing).
- Advanced experimentation orchestration.
Build-vs-Buy Operating Decisions (Rerun: March 4, 2026)
Decision policy for this section:
- Build by default when the workflow is simple, policy-sensitive, or tightly tied to market integrity.
- Use tools when cross-system complexity, instrumentation depth, or evaluation volume is the bottleneck.
- Require a documented cost floor and fallback path before production rollout.
- Do not use funding/valuation headlines or social-engagement signals as primary adoption evidence.
| Backlog Item | Complexity | Default Decision | Escalation Trigger | Cost Floor | Primary Evidence |
|---|---|---|---|---|---|
| 1. Founder/operator pipeline automation | Medium | Build in-house | Add n8n only when 3+ systems and branching logic are required | n8n self-host/community or cloud from $20/mo | E108 E112 E113 |
| 2. Onboarding activation rescue | Medium-High | Use tool | Use PostHog when funnel instrumentation + experiment replay are required | Free tier then usage pricing | E110 E114 |
| 3. Curated market launch checklist | Low-Medium | Build in-house | No external tool unless compliance workflow complexity materially increases | Internal engineering time | E11 |
| 4. Liquidity SLO monitoring/alerts | Medium | Build in-house | Keep inside existing market-ops monitoring stack | Internal infra cost | E3 E4 |
| 5. Weekly growth scorecard | Low | Build in-house | External tooling only after multi-team reporting complexity appears | Internal SQL/reporting cost | E2 E4 |
| 6. Incentive governor | Medium-High risk | Build in-house | Keep policy and override paths internal by default | Internal controls cost | E4 E8 E9 |
| 7. Partner concentration guardrails | Low | Build in-house | Use external tooling only when partner/channel graph complexity exceeds internal reporting | Internal SQL/alerting cost | E12 |
| 8. Discovery quality gate (SEO/PSEO/LLM SEO) | High | Hybrid: build + tool | Use Langfuse when high-volume content/eval traces require structured scoring | Free hobby; paid starts $29/mo | E6 E83 E84 E91 E109 E115 |
| 9. Paid-channel automation | High | Use platform-native automation + human review | Keep Meta/Reddit/TikTok native systems first; treat design-generation tools as watchlist | Varies by platform; Paper free or $16-$20/user/mo Pro | E78 E81 E82 E117 |
| 10. Advanced experimentation orchestration | High | Use tool | Use PostHog + Langfuse; add n8n only if workflow orchestration crosses multiple systems | Free entry tiers available | E108 E110 E109 E114 E115 |
Watchlist (Not Default for Mission-Critical Automation)
Lovable, Replit Agent, and Webctl remain watchlist tools for prototyping or non-critical workflows because this evidence set does not include primary operator-grade reliability outcomes for LONGSHOT-like production use. Figma Make and Google Stitch remain directional design-automation signals; do not replace core production processes by default.
References: E2 E3 E4 E6 E8 E9 E11 E12 E78 E81 E82 E83 E84 E91 E97 E98 E108 E109 E110 E111 E112 E113 E114 E115 E116 E117
TOF Channel Automation (Twitter/X, Telegram, Discord) (Rerun: March 4, 2026)
Channel constraint: these are LONGSHOT’s primary top-of-funnel surfaces, so automation must be biased toward triage + logging + measurement, not auto-posting + auto-reply at scale.
| Workflow | Complexity | Default Decision | Escalation Trigger | Tool (If Used) | Cost Floor | Evidence |
|---|---|---|---|---|---|---|
| Twitter/X content pipeline (draft → approve → schedule → measure) | Medium | Build in-house / use native scheduling | Multi-account + team approvals + content queue; or you want agentic scheduling via MCP | Typefully (MCP, Auto-DMs) or Hypefury (AI drafting + scheduling) | Hypefury from $6/mo; Typefully Creator plan + see pricing | E123 E124 E125 E126 E151 |
| Inbound triage + lead capture (Twitter/X, Telegram, Discord) | Medium-High | Build in-house (human-in-loop queue + SLA) | 50+ inbound touches/day or >1 responder; need auto-summary + routing + dedupe | Botpress (AI agent platform) or Mava (watchlist: Discord/Telegram support + AI) | Free tiers exist; scale costs depend on usage | E130 E131 E132 E133 E134 E135 E136 |
| CRM + pipeline operations (contacts, deals, notes, reminders) | High to build | Use tool | Only build if you need deep custom objects and cannot tolerate SaaS dependency | Attio (+ MCP-driven automation) | Free entry; see pricing for seat scale | E128 E129 |
| Enrichment + research automation (sales ops heavy) | High | Build small first, then tool if volume justifies | Multi-source enrichment + structured research at scale becomes a time sink | Clay | Free entry; paid by plan/credits | E140 E141 |
| Outbound email follow-up (only after explicit list-quality + compliance) | High risk | Avoid by default; use tool only after you have tight ICP + deliverability discipline | You have repeatable TOF capture and need systematic follow-up | Instantly or Smartlead | Instantly from $47/mo; Smartlead from $39/mo | E145 E146 E147 E148 |
ICP Automation Pack: Crypto-Native 5m-15m Up/Down Traders
Scope note: this is the ~90% automatable part of TOF operations (monitoring/search ingestion, deduping, enrichment, segment tagging, lead scoring, reply-queue generation, CRM sync, follow-up reminders, attribution). Keep public posting and first outbound messaging human-approved.
Evidence boundary: policy/automation guardrails are evidence-backed; keyword lists, score weights, and thresholds below are operator defaults that must be calibrated weekly against qualified outcomes (first trade, D7 retained, liquidity contribution), not treated as fixed truth.
1) X API Query Design to Reduce AI Slop and Engagement-Farm Noise
Use query families, not one query.
A. Setup-and-execution posts (higher-intent candidate set)
("BTC" OR "ETH" OR "SOL") ("5m" OR "15m" OR "ltf" OR "scalp" OR "scalping")
("long" OR "short" OR "entry" OR "stop" OR "sl" OR "tp" OR "take profit" OR "invalid")
lang:en -is:retweet
B. Microstructure/pain posts (higher conversion-intent hypothesis)
("BTC" OR "ETH" OR "SOL") ("spread" OR "slippage" OR "fees" OR "fill" OR "execution" OR "latency" OR "liquidity")
("5m" OR "15m" OR "scalp" OR "perp" OR "perps")
lang:en -is:retweet
C. Venue-specific intraday traders (candidate venue filter)
("5m" OR "15m" OR "scalp") ("Hyperliquid" OR "Binance Futures" OR "Bybit" OR "OKX" OR "dYdX" OR "GMX" OR "Drift")
("long" OR "short" OR "perp" OR "setup")
lang:en -is:retweet
Then apply a negative-keyword denylist at ingest:
- Denylist terms: airdrop, giveaway, follow back, f4f, vip signals, guaranteed, 100x, moonshot, copy trade now, dm for signal.
- Add account-level noise filters: high repost ratio, repeated templated text, and no market-specific nouns over the last N posts.
Operator note: X API query limits and operator availability vary by access level; if an operator is unavailable in your tier, run broader retrieval and enforce filters in post-processing. E152 E153
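The denylist and account-level noise filters above can be sketched as a small post-processing pass. This is a minimal illustration, not a production filter: the phrase list comes from this section, while the market-noun vocabulary and the 0.5/0.6 ratio cutoffs are assumed defaults to calibrate weekly.

```python
# Seed denylist from this section; extend weekly as new farm patterns appear.
DENYLIST = [
    "airdrop", "giveaway", "follow back", "f4f", "vip signals",
    "guaranteed", "100x", "moonshot", "copy trade now", "dm for signal",
]

# Assumed market-specific vocabulary for the "no market nouns" filter.
MARKET_NOUNS = {"btc", "eth", "sol", "perp", "spread", "slippage", "entry", "stop"}

def passes_denylist(text: str) -> bool:
    """Reject a post if any denylist phrase appears (case-insensitive)."""
    lowered = text.lower()
    return not any(term in lowered for term in DENYLIST)

def account_noise_flags(recent_posts: list[str], repost_count: int) -> dict:
    """Account-level noise filters over the last N posts (ratios are assumptions)."""
    n = len(recent_posts) or 1
    repost_ratio = repost_count / n
    unique_ratio = len(set(recent_posts)) / n  # low value => templated repetition
    has_market_nouns = any(
        noun in post.lower() for post in recent_posts for noun in MARKET_NOUNS
    )
    return {
        "high_repost_ratio": repost_ratio > 0.5,
        "templated_text": unique_ratio < 0.6,
        "no_market_nouns": not has_market_nouns,
    }
```

Running these at ingest keeps the X queries broad (useful when your API tier lacks an operator) while enforcing quality in post-processing.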
2) Real-Trader Scoring (Automated) vs Slop Scoring (Automated)
Compute both per account and per post:
- trader_intent_score (0-100): +timeframe mention, +symbol mention, +entry/SL/TP specifics, +venue mention, +repeated intraday posting cadence.
- slop_risk_score (0-100): +engagement-farm phrases, +template repetition, +extreme CTA density, +new-account instability, +low domain vocabulary diversity.
Calibration note: threshold values below are seed defaults; re-tune weekly using observed precision/recall against downstream qualified outcomes.
Routing policy:
- intent >= 70 and slop <= 30 -> hot queue (reply/DM candidate).
- intent 50-69 and slop <= 40 -> warm queue (monitor + one-touch test).
- else -> archive/suppress.
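The scoring and routing policy can be sketched as additive feature weights plus a threshold router. The feature weights below are illustrative assumptions (the section names the features but not their weights); the routing thresholds are the seed defaults above, to be re-tuned weekly.

```python
# Illustrative weights (assumptions, not evidence-backed); caps at 100.
INTENT_FEATURES = {
    "timeframe_mention": 25, "symbol_mention": 20, "entry_sl_tp": 30,
    "venue_mention": 15, "intraday_cadence": 10,
}
SLOP_FEATURES = {
    "farm_phrases": 30, "template_repetition": 25, "cta_density": 20,
    "new_account_instability": 15, "low_vocab_diversity": 10,
}

def score(features: set[str], weights: dict[str, int]) -> int:
    """Sum the weights of observed features, clamped to the 0-100 scale."""
    return min(100, sum(w for f, w in weights.items() if f in features))

def route(intent: int, slop: int) -> str:
    """Routing policy from the seed thresholds above; recalibrate weekly."""
    if intent >= 70 and slop <= 30:
        return "hot"
    if 50 <= intent <= 69 and slop <= 40:
        return "warm"
    return "archive"
```

Keeping weights in plain dicts makes the weekly precision/recall re-tuning a config change rather than a code change.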
3) CRM Model (System of Record for DM -> TG Pipeline)
Minimum lead object fields:
- lead_id, source_channel, x_user_id, x_handle, profile_url
- first_seen_at, last_signal_at, query_family, top_signals
- intent_score, slop_risk_score, segment (5m_scalper, 15m_scalper, microstructure)
- stage, owner, next_action_at, last_touch_at
- dm_template_version, dm_sent_at, dm_reply_at
- tg_invite_token, tg_invite_sent_at, tg_joined_at
- first_trade_at, d7_retained, liquidity_contribution_30d
Recommended stage pipeline:
discovered -> qualified -> queued_for_reply -> dm_opt_in_sent -> dm_opt_in_yes -> tg_invite_sent -> tg_joined -> first_trade -> retained_d7.
4) Scalable Pipeline: X Discovery -> DM -> TG Group
1. Ingest: run X queries every 5-10 min, store raw posts/users.
2. Dedup: collapse by x_user_id and canonical post ID.
3. Enrich: fetch last 30-90 posts + profile metadata.
4. Score: compute intent and slop scores; route to hot/warm/archive.
5. Queue: generate daily Top 25 accounts/threads for human review.
6. DM: send short opt-in DM only to reviewed leads (no mass auto-DM).
7. Invite: on explicit yes, generate single-use TG invite token and send.
8. Sync: webhook TG join event back to CRM stage transition.
9. Attribute: append UTM/invite-token attribution through first trade and D7 retention.
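The dedup-and-queue steps can be sketched as one ranking pass. The candidate dict shape (`x_user_id`, `intent`, `slop`) is a hypothetical assumption; the warm-or-better cutoffs reuse the routing thresholds from step 2.

```python
def daily_top25(candidates: list[dict]) -> list[dict]:
    """Dedup scored accounts and emit the daily Top 25 for human review.

    Sorts by intent descending, then slop ascending, keeps the best-scored
    entry per x_user_id, and admits only warm-or-better candidates.
    """
    seen: set[str] = set()
    queue: list[dict] = []
    for c in sorted(candidates, key=lambda c: (c["intent"], -c["slop"]), reverse=True):
        if c["x_user_id"] in seen:
            continue
        seen.add(c["x_user_id"])
        if c["intent"] >= 50 and c["slop"] <= 40:  # warm-or-better only
            queue.append(c)
    return queue[:25]
```

Everything up to this queue is fully automatable; the DM step that consumes it stays human-approved per the boundary below.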
5) Automation Boundary (Do / Do Not)
- Automate fully: ingestion, filtering, scoring, queueing, CRM sync, reminders, attribution, stage transitions.
- Keep human-in-loop: first outbound DM, public replies, and any ambiguous compliance-sensitive claims.
- Do not automate: bulk unsolicited outreach or auto-reply flooding in public communities.
6) Go/No-Go Thresholds (Precision@25 + Conversion-to-TG)
Target-setting note: the numeric bands below are operating defaults and must be recalibrated against observed cohort quality.
| Metric | Definition | Green (Go) | Yellow (Tune) | Red (No-Go) | Required Action |
|---|---|---|---|---|---|
| precision@25 | qualified_leads_in_daily_top25 / 25 | >= 0.60 | 0.45-0.59 | < 0.45 | Green: keep query family + score weights. Yellow: retrain weights, expand denylist, tighten venue/timeframe terms. Red: pause new DM sends for this segment until precision recovers. |
| conversion_to_tg_from_dm_sent | tg_joined / dm_opt_in_sent | >= 0.12 | 0.08-0.11 | < 0.08 | Green: scale outreach volume carefully. Yellow: improve DM copy and invite flow. Red: stop scaling and rerun ICP qualification + messaging. |
| conversion_to_tg_from_opt_in_yes | tg_joined / dm_opt_in_yes | >= 0.60 | 0.45-0.59 | < 0.45 | Green: keep current invite handoff. Yellow: fix invite friction and response SLA. Red: treat as funnel breakage and fix before new outbound. |
Go/No-Go gate:
- Go: run at higher volume only if precision@25 and conversion_to_tg_from_dm_sent are both Green for 2 consecutive weeks.
- No-Go: freeze incremental volume if either metric is Red for 2 consecutive weeks or drops >30% week-over-week.
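The gate can be evaluated mechanically over weekly metric snapshots. A minimal sketch, assuming each snapshot carries `precision_at_25` and `conv_from_dm_sent` (hypothetical field names); the numeric bands mirror the table above and must be recalibrated against cohort quality.

```python
def gate(history: list[dict]) -> str:
    """Weekly Go/No-Go over per-week metric snapshots (oldest first, newest last)."""
    if len(history) < 2:
        return "hold"  # need 2 consecutive weeks before any Go/No-Go call
    prev, cur = history[-2], history[-1]

    def is_green(week: dict) -> bool:
        return week["precision_at_25"] >= 0.60 and week["conv_from_dm_sent"] >= 0.12

    def is_red(week: dict) -> bool:
        return week["precision_at_25"] < 0.45 or week["conv_from_dm_sent"] < 0.08

    # >30% week-over-week drop on either metric also freezes volume.
    wow_drop = any(
        prev[k] > 0 and (prev[k] - cur[k]) / prev[k] > 0.30
        for k in ("precision_at_25", "conv_from_dm_sent")
    )
    if (is_red(cur) and is_red(prev)) or wow_drop:
        return "no_go"
    if is_green(cur) and is_green(prev):
        return "go"
    return "hold"
```

A "hold" result (Yellow, or insufficient history) means keep current volume while tuning, which matches the table's "Tune" column.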
References: E2 E4 E11 E120 E121 E128 E129 E130 E132 E152 E153