The Problem
Running a solo product means wearing every hat. By the time I sat down each morning, I was already behind: checking analytics in one tab, scanning GitHub Issues in another, deciding what to fix versus what to plan. The information existed, but assembling it into a coherent picture was a daily tax on time and attention.
I wanted a system that would do that assembly for me overnight, so that every morning I could open a single GitHub Issue and know exactly where the product stood.
The best daily standup is the one that happens while you sleep.
What We Built
A GitHub Actions workflow that runs every night at 4 AM. It connects to Google Analytics 4 via the analytics MCP server, reads the open backlog from GitHub Issues, auto-fixes trivial problems, and produces a structured morning report posted as a new GitHub Issue.
No dashboards to check. No tabs to juggle. One issue, every morning, with everything that matters.
How It Works
The pipeline has five stages, executed in sequence inside a single GitHub Actions job:
Analytics Pull
The workflow installs the analytics MCP server and authenticates with a GA4 service account. Claude Code queries yesterday's traffic, the rolling 7-day summary, traffic sources, and the product's full engagement funnel, from first interaction through completion, plus outcome events.
Survey Rollup
Before the review runs, a Node.js script queries GA4 for the last 30 days of post-completion survey events. It aggregates helpfulness ratings, next-step choices, and improvement suggestions into a structured JSON file that Claude reads during the review.
Backlog Triage
Claude reads every open GitHub Issue labeled backlog and classifies each one: Trivial, Small, Medium, or Large. Trivial issues (typos, missing alt text, broken links) get fixed on the spot. Everything else gets a recommended approach and scope estimate.
Auto-Fix and Commit
For each trivial issue, Claude edits the files, runs npm test to confirm nothing breaks, commits the fix, and closes the issue. If tests fail, the change is reverted and the issue gets reclassified. Only HTML changes are auto-pushed; CSS and JavaScript changes require human review.
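The revert-on-failure guardrail reduces to a few lines of shell. In this sketch the test command is a parameter so the logic is self-contained; the real workflow runs npm test and uses git and the gh CLI for the commit, revert, and relabel steps:

```shell
# Sketch of the auto-fix guardrail (the side effects are stubbed out
# as echoes; the production workflow performs the real git/gh calls).
guarded_fix() {
  local test_cmd="$1"
  if $test_cmd; then
    echo "commit"   # real workflow: git commit, git push, close the issue
  else
    echo "revert"   # real workflow: git checkout -- ., relabel issue as Small
  fi
}
```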
Morning Report
Everything is assembled into a single GitHub Issue labeled nightly-report: analytics tables, funnel drop-off analysis, survey sentiment, auto-fix receipts, a prioritized plan for non-trivial work, and open questions for the product owner to answer.
The Architecture
The entire system runs on GitHub Actions infrastructure. No servers to maintain, no cron jobs to babysit.
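A minimal sketch of such a scheduled workflow, assuming hypothetical step names, file paths, and secret names; the Claude Code invocation is simplified:

```yaml
# Sketch of the nightly workflow -- illustrative, not the production file.
name: nightly-review
on:
  schedule:
    - cron: "0 4 * * *"   # every night at 4 AM (GitHub cron runs in UTC)
  workflow_dispatch:       # allow manual runs for debugging
jobs:
  review:
    runs-on: ubuntu-latest
    permissions:
      contents: write      # push auto-fix commits
      issues: write        # close backlog issues, post the morning report
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
      - run: node scripts/survey-rollup.js    # hypothetical path
        env:
          GA4_CREDENTIALS: ${{ secrets.GA4_SERVICE_ACCOUNT }}
      - run: claude -p "$(cat .github/prompts/nightly-review.md)"
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
```

Because everything lives in one job, a failure at any stage surfaces as a single red run rather than a half-finished report.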
Analytics Implementation
The analytics layer tracks the full user journey through the product's multi-step flow. Every stage fires a GA4 event via GTM:
- Entry: fires once per session on first interaction
- Step progression: each step in the flow fires its own event, letting us track exactly where users advance and where they disengage
- Recalculation: fires on every parameter change, typically 5-10x per completed session, indicating how much users are exploring different scenarios
- Completion: the most reliable proxy for a user finishing the flow
- Outcomes: separate events for each result type, revealing whether the tool's defaults produce realistic distributions
The nightly review computes meaningful ratios from these events rather than treating raw counts as absolute measures. The ratio of recalculations to completions tells us how much users are experimenting (healthy range: 3-8x). The outcome distribution reveals whether defaults need adjustment.
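The ratio logic itself is simple enough to sketch directly. Event names and the function below are illustrative; the 3-8x band comes from the prose above:

```javascript
// Compute engagement ratios from raw event counts rather than
// treating the counts as absolute measures.
function engagementRatios(counts) {
  const completions = counts.completion || 0;
  const recalcsPerCompletion =
    completions > 0 ? counts.recalculation / completions : null;
  return {
    completionRate: counts.entry ? counts.completion / counts.entry : null,
    recalcsPerCompletion,
    // Healthy experimentation: 3-8 recalculations per completed session.
    experimentationHealthy:
      recalcsPerCompletion !== null &&
      recalcsPerCompletion >= 3 &&
      recalcsPerCompletion <= 8,
  };
}
```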
Data Synthesis
Raw analytics numbers are only useful if someone interprets them. The nightly prompt instructs Claude to synthesize, not just report:
- Funnel drop-off analysis. Identify the weakest step and hypothesize why users are disengaging there.
- Tracking health. Flag any event that returns zero counts as a potential instrumentation gap rather than assuming zero usage.
- Survey sentiment. Calculate weighted average helpfulness and surface the most-requested improvement, giving the product owner a direct line to user feedback without reading individual responses.
- Trend context. Compare yesterday to the 7-day rolling average so that a single quiet day does not trigger unnecessary alarm.
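The trend-context rule can be sketched as a small comparison function; the ±30% alarm threshold here is an illustrative assumption, not a value from the production prompt:

```javascript
// Compare yesterday's count to the rolling 7-day average so a single
// quiet day does not trigger unnecessary alarm.
function trendFlag(yesterday, last7Days) {
  const avg = last7Days.reduce((a, b) => a + b, 0) / last7Days.length;
  const delta = (yesterday - avg) / avg;
  if (delta < -0.3) return "investigate-drop";   // well below trend
  if (delta > 0.3) return "investigate-spike";   // well above trend
  return "normal";
}
```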
Issue Surfacing and Auto-Fix
The backlog triage is the part that saves the most time. Every open issue labeled backlog gets a classification and a treatment plan. The auto-fix scope is deliberately narrow:
In Scope for Auto-Fix
- Spelling and grammar typos in HTML copy
- Broken internal links
- Missing or empty alt attributes on images
- Missing aria-label on icon-only buttons
- Heading hierarchy mismatches
- Small copy corrections
Requires Human Review
- Any JavaScript logic, calculations, or data flow changes
- CSS layout or spacing changes
- New interactive elements
- Changes affecting multiple files
- Anything touching the core calculation engine
- Security-related changes
The guardrail is simple: npm test must pass after every auto-fix. If tests fail, the change is reverted and the issue is reclassified as Small for human attention. This keeps the automation safe without requiring a separate approval workflow for trivial corrections.
The Report Structure
Every morning report follows the same template, making it easy to scan:
- Analytics: Yesterday: sessions, users, bounce rate, top pages
- Analytics: Last 7 Days: rolling summary plus traffic source breakdown
- Product Engagement: funnel event counts, completion rate, recalculations per session, weakest step
- Outcome Distribution: result type breakdown
- User Survey: helpfulness distribution, next steps, improvement requests
- Auto-fixes Applied: commit hashes and issue references
- Plan: Backlog Items: recommended approach for each non-trivial issue
- Open Questions: specific, answerable questions for the product owner
- Tracking Health: flags for any analytics events returning zero
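A skeleton of that template, with the sections in the order listed above; the exact wording and table columns are illustrative:

```markdown
## Nightly Report

### Analytics: Yesterday
| Sessions | Users | Bounce Rate |
|---------:|------:|------------:|
|        … |     … |           … |

### Product Engagement
- Completion rate: …
- Recalculations per session: …
- Weakest step: …

### Auto-fixes Applied
- Closed #… in commit …

### Open Questions
- [ ] …
```

Because the sections never move, the product owner can skim the same positions every morning and jump straight to Plan and Open Questions.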
What We Learned
After running this system nightly, a few things became clear:
- Structured prompts beat open-ended ones. The review prompt specifies exact output sections, table formats, and which ratios to compute. Claude follows the structure reliably because the structure is explicit.
- Auto-fix scope should start narrow. We limited auto-fixes to HTML-only changes of five lines or fewer. The temptation to expand scope is real, but the cost of a bad auto-fix is higher than the cost of a human reviewing a small issue.
- Tracking health is a report section, not an afterthought. Flagging zero-count events caught a GTM misconfiguration within the first week that would have gone unnoticed in a dashboard.
- The report is the standup. For a solo product, this replaced the morning ritual of checking three different tools. One issue, one read, then straight to work.
What's Next
The nightly automation is a foundation. The natural extensions include:
- Week-over-week trend comparison in the analytics sections
- Automated Lighthouse audits with performance regression alerts
- Expanding auto-fix scope to include simple CSS changes once the test suite covers visual regressions
- A weekly digest issue that aggregates the nightly reports into a summary with actionable priorities
The pipeline is live, the reports are consistent, and the morning workflow is one issue instead of ten tabs. Everything from here is iteration.