Risk Signals Reference

Radar monitors dozens of signals across five categories. This page explains what each signal detects, why it matters, and what to look at.

Velocity Signals

These flags watch your team’s throughput and milestone adherence.

Commit Velocity Decline

What it detects: Commits per week (or tickets closed per week) trending down over a rolling 30-day window.

Why it matters: Velocity decline often signals blockers, unclear requirements, team friction, or low morale. Sometimes it’s planned (crunch recovery, switching engines mid-project). But if you’re not expecting it, this is a red flag.

What to check:

  • Did something change? New blockers, missing dependencies, unclear specs?
  • Is your team burned out? Check in with leads one-on-one.
  • Are tickets getting bigger and harder? Maybe complexity ramped up.
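The rolling-window check described above can be sketched in a few lines: bucket commits over the trailing 30 days, compare the recent half of the window to the earlier half, and flag a drop beyond some threshold. This is an illustrative sketch, not Radar's actual algorithm; the 25% threshold and the half-window split are assumptions.

```python
from datetime import date, timedelta

def velocity_decline(commit_dates, today, window_days=30, threshold=0.25):
    """Flag a decline when commits in the recent half of the rolling
    window drop by more than `threshold` versus the earlier half.
    Threshold and window split are illustrative assumptions."""
    start = today - timedelta(days=window_days)
    recent_cutoff = today - timedelta(days=window_days // 2)
    early = sum(1 for d in commit_dates if start <= d < recent_cutoff)
    late = sum(1 for d in commit_dates if recent_cutoff <= d <= today)
    if early == 0:
        return False  # not enough history to judge a trend
    return (early - late) / early > threshold
```

The same shape works for tickets closed per week: swap commit dates for ticket-close dates.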

Milestone Slip Pattern

What it detects: Tickets repeatedly pushed from one sprint to the next, especially towards milestone cutoffs.

Why it matters: One slip happens. Two slips are bad planning. Three or more is either systematic underestimation or scope ambiguity. It erodes team confidence and extends timelines.

What to check:

  • Are estimates realistic? Compare estimated hours to actual hours for completed tickets.
  • Is scope clear? Are team members reworking tickets because requirements changed mid-sprint?
  • Is your PM tool keeping up? Sometimes milestone slip patterns are just stale data.
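Detecting the pattern comes down to counting sprint-to-sprint moves per ticket and flagging anything at three or more, per the rule of thumb above. A minimal sketch, assuming your PM tool can export move events as (ticket, from-sprint, to-sprint) tuples:

```python
from collections import Counter

def slip_report(move_events, slip_limit=3):
    """move_events: (ticket_id, from_sprint, to_sprint) tuples.
    Returns tickets that have slipped `slip_limit` or more times."""
    counts = Counter(ticket for ticket, _, _ in move_events)
    return {t: n for t, n in counts.items() if n >= slip_limit}
```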

Silent Codebase

What it detects: No commits for 3+ days as you approach a milestone.

Why it matters: In a healthy project, code flows every day (or most days). Silence can mean the team is in deep crunch and committing less frequently. Or they’re stuck and haven’t figured it out yet.

What to check:

  • Is the team actually working, just not committing? Check in.
  • Are they blocked waiting for builds or dependencies?
  • Is this planned (a deliberate commit freeze before a milestone)?
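The condition is two dates and a comparison: no commits for 3+ days, milestone within reach. A minimal sketch; the 14-day "approaching" horizon is an assumption, not Radar's documented value:

```python
from datetime import date

def silent_codebase(last_commit, today, milestone,
                    quiet_days=3, approach_days=14):
    """Flag when there have been no commits for `quiet_days`+ days and
    the milestone is within `approach_days`. The approach horizon is
    an assumed parameter."""
    quiet = (today - last_commit).days >= quiet_days
    approaching = 0 <= (milestone - today).days <= approach_days
    return quiet and approaching
```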

Build Health Signals

These flags watch your CI/CD pipeline.

Build Failure Rate Rising

What it detects: The percentage of builds that fail is trending up week-over-week.

Why it matters: Rising failure rates usually mean accumulating tech debt, flaky tests, or integration problems. They slow down your team and erode confidence in the pipeline.

What to check:

  • Are the failures in the same area? Review the last 20 failed builds for patterns.
  • Are tests flaky? Some failures might be environmental noise, not real bugs.
  • Are you merging broken code? If code reaches the main branch broken, your gating isn’t tight enough.
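A week-over-week trend check on failure rate is straightforward once you aggregate builds into (failed, total) pairs per week. A sketch under assumed parameters; the 5-percentage-point rise threshold is illustrative:

```python
def failure_rate_rising(weekly_builds, min_rise=0.05):
    """weekly_builds: (failed, total) pairs per week, oldest first,
    with total > 0. Flag when the latest week's failure rate exceeds
    the previous week's by more than `min_rise` (absolute)."""
    if len(weekly_builds) < 2:
        return False
    prev_failed, prev_total = weekly_builds[-2]
    last_failed, last_total = weekly_builds[-1]
    return last_failed / last_total - prev_failed / prev_total > min_rise
```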

Build Time Growth

What it detects: The average build time is increasing week-over-week.

Why it matters: Longer builds are a death by a thousand cuts. Developers commit less frequently, feedback loops stretch out, context switches pile up. An extra 5 minutes per build doesn’t sound like much until you realize that at eight builds a day it’s 40 minutes per person, every day.

What to check:

  • Are you building more code? Size of the codebase is the primary driver.
  • Are you compiling unused assets? Clean up the build pipeline.
  • Is the build server underpowered? Might be infrastructure, not code.
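Both halves of this signal are easy to compute: the week-over-week trend on average build time, and the arithmetic behind the "40 minutes a day" claim. A sketch with assumed parameters (the 10% growth threshold and eight-builds-a-day figure are illustrative):

```python
def build_time_growing(weekly_avg_minutes, min_growth=0.10):
    """weekly_avg_minutes: average build minutes per week, oldest
    first. Flag a relative increase above `min_growth` between the
    two most recent weeks."""
    if len(weekly_avg_minutes) < 2:
        return False
    prev, last = weekly_avg_minutes[-2], weekly_avg_minutes[-1]
    return (last - prev) / prev > min_growth

def weekly_cost_minutes(extra_minutes_per_build, builds_per_day, workdays=5):
    """Time lost per developer per week to a slower build."""
    return extra_minutes_per_build * builds_per_day * workdays
```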

Scope Signals

These flags watch what you’re building and when.

Scope Creep

What it detects: Ticket count in your current milestone is growing despite the milestone being locked.

Why it matters: Scope lock is sacred. If your locked milestone is getting 5, 10, 20 new tickets, you’re extending the project without extending the timeline. Something has to give: quality, team burnout, or both.

What to check:

  • Who is adding tickets after lockdown? Should they be?
  • Are these essential fixes or nice-to-haves? If nice-to-haves, remove them.
  • Can any of these move to post-launch?
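Spotting post-lock additions only requires the milestone's lock date and each ticket's creation date. A minimal sketch, assuming your PM tool exports tickets as (id, created-date) pairs:

```python
from datetime import date

def tickets_added_after_lock(tickets, lock_date):
    """tickets: (ticket_id, created_date) pairs for the milestone.
    Returns the ids created after the scope lock."""
    return [tid for tid, created in tickets if created > lock_date]
```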

Late-Stage Churn

What it detects: High commit activity on features marked “Done” in your PM tool.

Why it matters: Done work that needs rework is wasted motion. It signals either unclear acceptance criteria, poor QA, or scope ambiguity. It also kills schedule predictability.

What to check:

  • Are done items moving back to “In Progress”? If so, your definition of done is unclear.
  • Is QA finding bugs in done work? Your acceptance criteria might be insufficient.
  • Are features sent back for rework after review? The spec was probably incomplete.
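The core of this detection is joining commits against the set of features your PM tool marks "Done." A sketch, assuming you can map each commit to a feature id (that mapping, via branch names or ticket references in commit messages, is the hard part in practice and is taken as given here):

```python
def churn_on_done(commits, done_features):
    """commits: (sha, feature_id) pairs; done_features: set of ids
    marked Done in the PM tool. Returns a commit count per Done
    feature — any nonzero entry is churn on "finished" work."""
    counts = {}
    for sha, feature in commits:
        if feature in done_features:
            counts[feature] = counts.get(feature, 0) + 1
    return counts
```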

Team Signals

These flags watch team structure and collaboration patterns.

Key Person Risk

What it detects: Commits concentrated in 1-2 developers. If they leave or get sick, the project stalls.

Why it matters: Single points of failure are dangerous. When 40% of commits come from one person, you have a knowledge concentration problem and a bus factor problem.

What to check:

  • Is this person in a specialized role (graphics lead, gameplay)? That’s expected.
  • Or are they just the only person committing? That suggests communication breakdown or team imbalance.
  • Can you redistribute knowledge? Pair programming, code review, documentation?
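Commit concentration reduces to one number: the share of commits from the single most active author, flagged against a threshold like the 40% mentioned above. A minimal sketch:

```python
from collections import Counter

def top_contributor_share(commit_authors):
    """Fraction of commits made by the single most active author."""
    counts = Counter(commit_authors)
    return max(counts.values()) / len(commit_authors)

def key_person_risk(commit_authors, threshold=0.4):
    """Flag when one author exceeds `threshold` of all commits."""
    return top_contributor_share(commit_authors) > threshold
```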

Integration Avoidance

What it detects: Long-lived branches that don’t merge frequently, or a pattern of big batch merges instead of frequent small ones.

Why it matters: Merging is hard, so teams sometimes avoid it. They work on long branches and merge infrequently. This creates integration risk: surprises at merge time, conflicts that take hours to untangle, delayed feedback.

What to check:

  • How old are your oldest branches? Branches older than 2-3 weeks are a red flag.
  • When you do merge, how much conflict? If merges are painful, something is wrong with branch strategy.
  • Can you merge more frequently? Small, frequent merges are easier.
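The branch-age check above can be sketched directly: compare each branch's age against the 2–3 week red-flag line. A minimal sketch, assuming you can list branches with their creation (or branch-point) dates:

```python
from datetime import date

def stale_branches(branches, today, max_age_days=21):
    """branches: (name, created_date) pairs. Returns names older than
    `max_age_days` — the 2-3 week red flag described above."""
    return [name for name, created in branches
            if (today - created).days > max_age_days]
```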

Code Quality Signals

These flags watch your codebase health over time.

Test Erosion

What it detects: Source code growing while test code stays flat or shrinks. Test coverage percentage declining.

Why it matters: Tests are the safety net for refactoring. If you’re adding features without adding tests, you’re building technical debt. Regressions will happen.

What to check:

  • Are you writing tests for new code? Make it a habit.
  • Are you removing old tests? If so, why? Old tests are valuable.
  • Is test infrastructure in the way? If tests are hard to write, fix that first.

Dependency Rot

What it detects: Your dependency list is stale. Packages haven’t been updated in months. Known vulnerabilities exist in your dependency tree.

Why it matters: Stale dependencies accumulate security vulnerabilities, miss bug fixes, and create compatibility problems later. They also make it harder to onboard new team members who expect up-to-date tooling.

What to check:

  • Run a dependency audit. Most package managers have one.
  • Are there known security issues? Fix those first.
  • Can you schedule a dependency update sprint? Get it on the roadmap.
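Staleness itself is simple to measure once you have each package's last-update date (your package manager's metadata or audit tooling can supply these). A sketch; the six-month cutoff is an assumed threshold, and vulnerability checks are left to your package manager's own audit command:

```python
from datetime import date

def stale_dependencies(deps, today, max_age_days=180):
    """deps: (package, last_updated) pairs. Returns packages not
    updated within `max_age_days` (~six months; an assumed cutoff)."""
    return sorted(name for name, updated in deps
                  if (today - updated).days > max_age_days)
```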

What to Do With This

Check Radar weekly. If a signal is red or orange, look at the recommended action. Most of the time, the action is “check in with your team” or “look at this specific metric.” Radar isn’t telling you what to do. It’s telling you what to look at.

Some teams integrate Radar into their weekly standups. Some assign one person to check in on red flags. Find what works for your studio.

Over time, you’ll get a feel for which signals matter most to your project. A 10% velocity dip might be noise. A 30% dip is a conversation. Adjust your thresholds and build context.