# Measure ATR's impact
See the ATR ROI dashboard in action on Port's demo environment.
This page answers the questions that matter after ATR is running: how many tickets did AI handle this sprint, where does delegation stall, and are the routing rules improving over time?
Without measurement, you are flying blind. Teams add more tickets to ATR, the rules drift, and nobody knows whether the workflow is delivering value or routing everything to humans. The dashboard closes that loop.
## What it tracks
The dashboard reads from work item entities in Port. Every routing decision, stage transition, and engineer response writes back to the entity as structured data - so the dashboard is always current.
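As a concrete sketch, the structured data on a single work item entity might look like the dictionary below. The property names (`routed_to`, `routing_checks`, `stage_timestamps`, and so on) are illustrative assumptions, not Port's actual schema:

```python
# A hypothetical work item entity as the dashboard might read it.
# All property names here are illustrative; the real blueprint may differ.
work_item = {
    "identifier": "TICKET-1042",
    "properties": {
        "stage": "Develop",              # Draft / Plan / Develop / Deploy / Completed
        "routed_to": "agent",            # routing decision: "agent" or "human"
        "engineer_response": "delegate", # e.g. the reply to the Slack prompt
        "routing_checks": {              # scorecard criterion -> pass/fail
            "blast_radius": True,
            "ticket_quality": True,
        },
        "stage_timestamps": {            # stage -> entry time (ISO 8601)
            "Draft": "2024-05-01T09:00:00Z",
            "Plan": "2024-05-01T10:30:00Z",
            "Develop": "2024-05-02T08:15:00Z",
        },
    },
}

print(work_item["properties"]["routed_to"])  # agent
```

Because every decision and transition lands on the entity as a property, the dashboard never needs a separate data pipeline; it aggregates whatever is on the entities at query time.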
Key metrics:
| Metric | What it shows |
|---|---|
| Agent vs human split | How many tickets were delegated to AI vs routed to a human this period. |
| Stage breakdown | Where work items currently sit across Draft, Plan, Develop, Deploy, Completed. |
| Delegation trend | Agent-routed volume over time. Are you delegating more or less than last sprint? |
| Stall points | Which stages have the highest dwell time. Where does work stop moving? |
| Routing decision breakdown | How many tickets passed vs failed each scorecard criterion. |
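The first two metrics in the table are straightforward aggregations over entity properties. A minimal sketch, assuming each entity carries `routed_to` and `stage` properties (names are illustrative):

```python
from collections import Counter

def agent_vs_human_split(items):
    """Count tickets delegated to AI vs routed to a human this period."""
    return Counter(item["properties"]["routed_to"] for item in items)

def stage_breakdown(items):
    """Count where work items currently sit across the stages."""
    return Counter(item["properties"]["stage"] for item in items)

items = [
    {"properties": {"routed_to": "agent", "stage": "Develop"}},
    {"properties": {"routed_to": "human", "stage": "Plan"}},
    {"properties": {"routed_to": "agent", "stage": "Completed"}},
]

print(agent_vs_human_split(items))  # Counter({'agent': 2, 'human': 1})
print(stage_breakdown(items))
```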
## How it works
Each work item entity stores the routing decision result, the engineer's response, and stage transition timestamps as properties. The dashboard aggregates these across all work items using Port's visualization layer.
When an engineer clicks Delegate to Claude Code in Slack, that response writes back to the work item entity. When the PR merges, the stage updates to Deploy. When the deployment confirms, it moves to Completed. The dashboard reflects each transition in real time.
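The transition logic can be sketched as a small function that stamps the entry time for each new stage. This is a local, in-memory sketch with illustrative property names; in the real workflow the update would be written back to the entity through Port rather than mutated in place:

```python
from datetime import datetime, timezone

STAGES = ["Draft", "Plan", "Develop", "Deploy", "Completed"]

def advance_stage(entity, new_stage, now=None):
    """Record a stage transition on a work item entity, stamping entry time.

    Sketch only: the production workflow would persist this via Port's API."""
    assert new_stage in STAGES, f"unknown stage: {new_stage}"
    now = now or datetime.now(timezone.utc).isoformat()
    props = entity["properties"]
    props["stage"] = new_stage
    props.setdefault("stage_timestamps", {})[new_stage] = now
    return entity

item = {"properties": {"stage": "Develop"}}
advance_stage(item, "Deploy", now="2024-05-03T12:00:00Z")     # PR merged
advance_stage(item, "Completed", now="2024-05-03T12:40:00Z")  # deployment confirmed
print(item["properties"]["stage"])  # Completed
```

Stamping the entry time at every transition is what makes the dwell-time and stall-point metrics possible later: the dashboard only ever subtracts consecutive timestamps.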
## Reading the dashboard
Agent vs human split is the headline metric. If AI is handling 30% of tickets and your target is 60%, the gap points to a cause: the routing rules are too strict, ticket quality is too low, or the services in scope are not a good fit for delegation.
Stall points tell you where to focus. If 40% of work items are stuck in Plan, the workflow is producing routing decisions but engineers are not acting on them. If items pile up in Develop, the coding agent may be producing PRs that fail review repeatedly.
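Dwell time per stage falls out of the stage entry timestamps: the time spent in a stage is the gap between its entry time and the next stage's entry time. A sketch, assuming the illustrative `stage_timestamps` property from earlier:

```python
from datetime import datetime

STAGES = ["Draft", "Plan", "Develop", "Deploy", "Completed"]

def dwell_hours(stage_timestamps):
    """Hours a work item spent in each completed stage.

    The current stage has no successor timestamp yet, so it is omitted."""
    def parse(ts):
        return datetime.fromisoformat(ts.replace("Z", "+00:00"))
    entered = [(s, parse(stage_timestamps[s])) for s in STAGES if s in stage_timestamps]
    return {
        stage: (end - start).total_seconds() / 3600
        for (stage, start), (_, end) in zip(entered, entered[1:])
    }

ts = {
    "Draft": "2024-05-01T09:00:00Z",
    "Plan": "2024-05-01T10:00:00Z",
    "Develop": "2024-05-03T10:00:00Z",  # still in Develop, no Deploy entry yet
}
print(dwell_hours(ts))  # {'Draft': 1.0, 'Plan': 48.0}
```

Averaging these per-item dwell times across all work items gives the stall-point view: the stage with the highest mean dwell is where work stops moving.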
Routing decision breakdown shows which criteria are blocking the most tickets. If blast radius is failing 80% of the time, your catalog's blast radius data may be stale or the threshold may be calibrated too conservatively.
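The per-criterion breakdown is a pass/fail tally over the scorecard results stored on each entity. A sketch, assuming a `routing_checks` property mapping each criterion to a boolean (an illustrative name, not Port's schema):

```python
from collections import Counter

def criterion_failure_rates(items):
    """Fraction of work items that failed each scorecard criterion."""
    fails, totals = Counter(), Counter()
    for item in items:
        for criterion, passed in item["properties"]["routing_checks"].items():
            totals[criterion] += 1
            if not passed:
                fails[criterion] += 1
    return {c: fails[c] / totals[c] for c in totals}

items = [
    {"properties": {"routing_checks": {"blast_radius": False, "ticket_quality": True}}},
    {"properties": {"routing_checks": {"blast_radius": False, "ticket_quality": True}}},
    {"properties": {"routing_checks": {"blast_radius": False, "ticket_quality": True}}},
    {"properties": {"routing_checks": {"blast_radius": True, "ticket_quality": False}}},
]
print(criterion_failure_rates(items))  # {'blast_radius': 0.75, 'ticket_quality': 0.25}
```

A criterion with an outsized failure rate, like `blast_radius` here, is the first thing to audit: the underlying catalog data may be stale, or the threshold miscalibrated.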
## Improving the rules over time
The dashboard is the feedback loop that makes ATR better each sprint. After each cycle:
- Check the routing decision breakdown. If one criterion is blocking most tickets, investigate whether the data behind it is accurate.
- Check stall points. If work items pile up at a specific stage, the gate criteria for that transition may need adjustment.
- Track the delegation trend. If the agent-routed volume is flat or declining, widen the eligible scope gradually.
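The first review step above can be sketched as a small check run after each sprint. The 50% threshold and the failure-rate input shape are illustrative choices, not prescribed by ATR:

```python
def flag_blocking_criteria(failure_rates, threshold=0.5):
    """Return criteria failing more than `threshold` of tickets, worst first.

    These are the candidates for a data-accuracy or calibration review."""
    return sorted(
        (c for c, rate in failure_rates.items() if rate > threshold),
        key=lambda c: -failure_rates[c],
    )

rates = {"blast_radius": 0.8, "ticket_quality": 0.2, "service_maturity": 0.6}
print(flag_blocking_criteria(rates))  # ['blast_radius', 'service_maturity']
```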
## Next steps
- Routing decision - how the scorecard rules and AI narrative that feed this dashboard are configured.
- Work item blueprint pattern - the entity model that stores the data the dashboard reads from.
- Work item blueprint implementation guide - technical setup for the full ATR stack.