
Nightly Automation with Claude Code and GA4

March 2026 · 10 min read · Ben Fider

The Problem

Running a solo product means wearing every hat. By the time I sat down each morning, I was already behind: checking analytics in one tab, scanning GitHub Issues in another, deciding what to fix versus what to plan. The information existed, but assembling it into a coherent picture was a daily tax on time and attention.

I wanted a system that would do that assembly for me overnight, so that every morning I could open a single GitHub Issue and know exactly where the product stood.

The best daily standup is the one that happens while you sleep.

What We Built

A GitHub Actions workflow that runs every night at 4 AM. It connects to Google Analytics 4 via the analytics MCP server, reads the open backlog from GitHub Issues, auto-fixes trivial problems, and produces a structured morning report posted as a new GitHub Issue.

No dashboards to check. No tabs to juggle. One issue, every morning, with everything that matters.

How It Works

The pipeline has five stages, executed in sequence inside a single GitHub Actions job:

Analytics Pull

The workflow installs the analytics MCP server and authenticates with a GA4 service account. Claude Code queries yesterday's traffic, the rolling 7-day summary, traffic sources, and the product's full engagement funnel, from first interaction through completion, plus outcome events.
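In the real pipeline the analytics MCP server issues these queries on Claude's behalf, but the underlying GA4 Data API request has a simple shape. A minimal sketch of the daily traffic pull, with a placeholder property ID:

```javascript
// Sketch: the shape of a GA4 Data API runReport request for yesterday's
// traffic. The property ID is a placeholder; the MCP server handles
// authentication and transport in the actual workflow.
function buildTrafficRequest(propertyId, startDate, endDate) {
  return {
    property: `properties/${propertyId}`,
    dateRanges: [{ startDate, endDate }],
    dimensions: [{ name: 'sessionDefaultChannelGroup' }],
    metrics: [
      { name: 'sessions' },
      { name: 'activeUsers' },
      { name: 'bounceRate' },
    ],
  };
}
```

The same request shape, with different dimensions and metrics, covers the 7-day summary and the funnel events.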

Survey Rollup

Before the review runs, a Node.js script queries GA4 for the last 30 days of post-completion survey events. It aggregates helpfulness ratings, next-step choices, and improvement suggestions into a structured JSON file that Claude reads during the review.
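The aggregation step is plain Node.js. A sketch of the rollup, assuming each survey event row carries a `rating`, a `nextStep` choice, and an optional `suggestion` (field names are illustrative):

```javascript
// Collapse 30 days of survey event rows into the JSON summary Claude reads
// during the nightly review. Row shape ({ rating, nextStep, suggestion })
// is an assumption for illustration.
function rollupSurveys(rows) {
  const summary = { count: rows.length, avgRating: 0, nextSteps: {}, suggestions: [] };
  let total = 0;
  for (const row of rows) {
    total += row.rating;
    summary.nextSteps[row.nextStep] = (summary.nextSteps[row.nextStep] || 0) + 1;
    if (row.suggestion) summary.suggestions.push(row.suggestion);
  }
  summary.avgRating = rows.length ? +(total / rows.length).toFixed(2) : 0;
  return summary;
}
```

The script writes this object to disk with `JSON.stringify`, and the nightly prompt points Claude at the file.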

Backlog Triage

Claude reads every open GitHub Issue labeled backlog and classifies each one: Trivial, Small, Medium, or Large. Trivial issues (typos, missing alt text, broken links) get fixed on the spot. Everything else gets a recommended approach and scope estimate.
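The classification itself is Claude's job; the workflow only has to hand it the right issues. A hypothetical helper for that handoff, using the GitHub REST API's issues endpoint via Node's global `fetch` (owner, repo, and token handling are placeholders):

```javascript
// Build the GitHub REST API URL for open issues labeled "backlog".
function backlogUrl(owner, repo) {
  return `https://api.github.com/repos/${owner}/${repo}/issues?state=open&labels=backlog&per_page=100`;
}

// Fetch the open backlog so it can be passed to Claude for triage.
async function fetchBacklog(owner, repo, token) {
  const res = await fetch(backlogUrl(owner, repo), {
    headers: {
      Authorization: `Bearer ${token}`,
      Accept: 'application/vnd.github+json',
    },
  });
  if (!res.ok) throw new Error(`GitHub API ${res.status}`);
  return res.json();
}
```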

Auto-Fix and Commit

For each trivial issue, Claude edits the files, runs npm test to confirm nothing breaks, commits the fix, and closes the issue. If tests fail, the change is reverted and the issue gets reclassified. Only HTML and CSS changes are auto-pushed; any JavaScript change requires human review.
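The test-then-commit logic can be sketched as a small decision function. The hooks are injected here to keep the flow readable; in the workflow itself they shell out to `npm test`, `git checkout -- .`, and `git commit` (the commit message format is illustrative):

```javascript
// Guardrail around each trivial fix: commit only if the test suite passes,
// otherwise revert and hand the issue back to a human as Small.
function applyGuardrail({ runTests, revert, commit }, issueNumber) {
  if (!runTests()) {
    revert(); // tests failed: undo the edit
    return { committed: false, reclassify: 'Small' };
  }
  commit(`fix: auto-fix for #${issueNumber}`);
  return { committed: true, reclassify: null };
}
```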

Morning Report

Everything is assembled into a single GitHub Issue labeled nightly-report: analytics tables, funnel drop-off analysis, survey sentiment, auto-fix receipts, a prioritized plan for non-trivial work, and open questions for the product owner to answer.
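Posting the assembled report is one call to the GitHub REST API's issue-creation endpoint. A hypothetical sketch (the title format and parameter names here are assumptions, not the pipeline's actual values):

```javascript
// Illustrative title for the morning report issue.
function reportTitle(isoDate) {
  return `Nightly report for ${isoDate}`;
}

// Post the report body as a new issue labeled "nightly-report".
async function postReport(owner, repo, token, isoDate, body) {
  const res = await fetch(`https://api.github.com/repos/${owner}/${repo}/issues`, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${token}`,
      Accept: 'application/vnd.github+json',
    },
    body: JSON.stringify({ title: reportTitle(isoDate), labels: ['nightly-report'], body }),
  });
  if (!res.ok) throw new Error(`GitHub API ${res.status}`);
  return res.json();
}
```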

The Architecture

The entire system runs on GitHub Actions infrastructure. No servers to maintain, no cron jobs to babysit.

GitHub Actions (4 AM CDT)
|
+-- Install: Claude Code CLI, analytics MCP, Node.js
|
+-- Authenticate: GA4 service account credentials
|
+-- Run: survey rollup script
|     +-- GA4 Data API --> structured JSON
|     +-- git commit + push (if changed)
|
+-- Run: Claude Code (nightly review prompt)
|     +-- Analytics MCP --> GA4 API (sessions, funnel, outcomes)
|     +-- GitHub Issues --> open backlog
|     +-- Trivial issues --> edit files --> test --> commit
|     +-- Output --> report
|
+-- Auto-push HTML/CSS-only commits (JS changes held for review)
|
+-- Create morning report as GitHub Issue

Analytics Implementation

The analytics layer tracks the full user journey through the product's multi-step flow. Every stage fires a GA4 event via GTM:

  • Entry: fires once per session on first interaction
  • Step progression: each step in the flow fires its own event, letting us track exactly where users advance and where they disengage
  • Recalculation: fires on every parameter change, typically 5-10x per completed session, indicating how much users are exploring different scenarios
  • Completion: the most reliable proxy for a user finishing the flow
  • Outcomes: separate events for each result type, revealing whether the tool's defaults produce realistic distributions

The nightly review computes meaningful ratios from these events rather than treating raw counts as absolute measures. The ratio of recalculations to completions tells us how much users are experimenting (healthy range: 3-8x). The outcome distribution reveals whether defaults need adjustment.
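The experimentation check reduces to a couple of lines, using the healthy range above:

```javascript
// Ratio of recalculation events to completion events for the period.
function recalcRatio(recalcs, completions) {
  return completions > 0 ? recalcs / completions : null;
}

// Healthy range per the nightly review: 3-8 recalculations per completion.
function isExplorationHealthy(recalcs, completions) {
  const r = recalcRatio(recalcs, completions);
  return r !== null && r >= 3 && r <= 8;
}
```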

Data Synthesis

Raw analytics numbers are only useful if someone interprets them. The nightly prompt instructs Claude to synthesize, not just report:

  • Funnel drop-off analysis. Identify the weakest step and hypothesize why users are disengaging there.
  • Tracking health. Flag any event that returns zero counts as a potential instrumentation gap rather than assuming zero usage.
  • Survey sentiment. Calculate weighted average helpfulness and surface the most-requested improvement, giving the product owner a direct line to user feedback without reading individual responses.
  • Trend context. Compare yesterday to the 7-day rolling average so that a single quiet day does not trigger unnecessary alarm.
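The trend-context rule can be made concrete as a threshold check. A sketch, assuming a 30% deviation threshold (the actual threshold lives in the prompt, not in code):

```javascript
// Flag yesterday only when it deviates from the 7-day rolling average by
// more than the threshold; a single quiet day reads as 'flat'.
function trendFlag(yesterday, rolling7DayAvg, threshold = 0.3) {
  if (rolling7DayAvg === 0) return yesterday > 0 ? 'up' : 'flat';
  const delta = (yesterday - rolling7DayAvg) / rolling7DayAvg;
  if (delta > threshold) return 'up';
  if (delta < -threshold) return 'down';
  return 'flat';
}
```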

Issue Surfacing and Auto-Fix

The backlog triage is the part that saves the most time. Every open issue labeled backlog gets a classification and a treatment plan. The auto-fix scope is deliberately narrow: only small HTML and CSS edits qualify, and any change touching JavaScript is held for human review.

The guardrail is simple: npm test must pass after every auto-fix. If tests fail, the change is reverted and the issue is reclassified as Small for human attention. This keeps the automation safe without requiring a separate approval workflow for trivial corrections.
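The scope gate itself is a one-liner over the changed file list:

```javascript
// A commit is auto-pushed only if every changed file is HTML or CSS;
// a single JavaScript file in the diff holds the push for human review.
function isAutoPushable(changedFiles) {
  return changedFiles.length > 0 &&
    changedFiles.every((f) => /\.(html|css)$/i.test(f));
}
```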

The Report Structure

Every morning report follows the same template, making it easy to scan:

  1. Analytics: Yesterday: sessions, users, bounce rate, top pages
  2. Analytics: Last 7 Days: rolling summary plus traffic source breakdown
  3. Product Engagement: funnel event counts, completion rate, recalculations per session, weakest step
  4. Outcome Distribution: result type breakdown
  5. User Survey: helpfulness distribution, next steps, improvement requests
  6. Auto-fixes Applied: commit hashes and issue references
  7. Plan: Backlog Items: recommended approach for each non-trivial issue
  8. Open Questions: specific, answerable questions for the product owner
  9. Tracking Health: flags for any analytics events returning zero
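The fixed ordering is what makes the report scannable, so the assembly step can be as simple as rendering sections in a constant order. A sketch (the heading markup and placeholder text are assumptions):

```javascript
// The nine report sections, in the order the template prescribes.
const SECTION_ORDER = [
  'Analytics: Yesterday', 'Analytics: Last 7 Days', 'Product Engagement',
  'Outcome Distribution', 'User Survey', 'Auto-fixes Applied',
  'Plan: Backlog Items', 'Open Questions', 'Tracking Health',
];

// Render whatever each pipeline stage produced into the issue body.
function renderReport(sections) {
  return SECTION_ORDER
    .map((title) => `## ${title}\n\n${sections[title] || '_No data._'}`)
    .join('\n\n');
}
```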

What We Learned

After running this system nightly, a few things became clear:

  • Structured prompts beat open-ended ones. The review prompt specifies exact output sections, table formats, and which ratios to compute. Claude follows the structure reliably because the structure is explicit.
  • Auto-fix scope should start narrow. We limited auto-fixes to 5-line, HTML/CSS-only changes. The temptation to expand scope is real, but the cost of a bad auto-fix is higher than the cost of a human reviewing a small issue.
  • Tracking health is a report section, not an afterthought. Flagging zero-count events caught a GTM misconfiguration within the first week that would have gone unnoticed in a dashboard.
  • The report is the standup. For a solo product, this replaced the morning ritual of checking three different tools. One issue, one read, then straight to work.

What's Next

The nightly automation is a foundation. The natural extensions include:

  • Week-over-week trend comparison in the analytics sections
  • Automated Lighthouse audits with performance regression alerts
  • Expanding the auto-fix scope beyond small HTML and CSS edits once the test suite covers visual regressions
  • A weekly digest issue that aggregates the nightly reports into a summary with actionable priorities

The pipeline is live, the reports are consistent, and the morning workflow is one issue instead of ten tabs. Everything from here is iteration.

Ben Fider
Founder & Owner, Framepath Partners

Automate Your Development Operations

Interested in how AI-powered automation can streamline your team's development operations?