User Manual
Project Commander is a Jira Cloud app that can be accessed as a full-page app or as a dashboard gadget.
Don't want to install anything? The standalone web app lets you try Project Commander with your own Jira data directly in your browser — no Atlassian Marketplace installation required.
Visit projectcommander.app/app and enter your beta access code. Or click Try Interactive Demo to explore with sample project data — no account needed.
After entering a valid code, provide your name and email. Click Start Using Project Commander.
The Connect page has two tabs: Connect Jira and Upload CSV. To connect to Jira, enter your site URL, email, and an API token. Check the Terms of Use checkbox and click Connect to Jira. Or click Try Interactive Demo at the bottom to explore with sample data.
Read-Only Mode
The standalone app starts in read-only mode for safety — it reads your Jira data to generate analysis but does not create, modify, or delete anything in Jira. When you're ready, enable write mode in Settings to unlock full functionality including drag-drop issue moves, Auto-Level, sprint creation, and more.
Click the Disconnect button in the top banner to end your session and clear your credentials from the browser.
The standalone web app can import project data from any CSV file — no Jira required. On the Connect page, select the Upload CSV tab.
Drag and drop a CSV file or click to browse. The file must have a header row. Common formats from Jira, Azure DevOps, Asana, Monday.com, and spreadsheets are supported.
Project Commander auto-detects columns like Summary, Status, Story Points, Start Date, Due Date, Assignee, Priority, and Sprint. Review the mapping and adjust any fields that weren't detected.
A Summary column is required. All other columns are optional. If no Sprint column is present, the app switches to Tasks mode — a flat list with date-based planning instead of sprint lanes.
Map your status values (e.g., "Open", "In Review", "Closed") to three categories: To Do, In Progress, and Done. The app auto-guesses based on common status names.
Review the summary and click Import. Your data is stored locally in the browser (IndexedDB) — nothing is sent to a server.
When your CSV has no Sprint column, the Sprints tab becomes the Tasks tab. Issues appear as a flat list sorted by date. All other tabs (Dashboard, What-If, Team & Capacity, Scope, Alerts) work normally using weekly time buckets instead of sprints.
In CSV mode, the Level Work banner appears above all tabs. Click it to automatically redistribute task dates so no single week exceeds your team's capacity. The algorithm respects priorities and dependencies. After leveling, you can Accept the changes or Cancel to revert. An Undo option is available after accepting.
Work Leveling can be disabled in Settings → Advanced → Enable Work Leveling.
When you first open Project Commander, click the Settings gear icon (⚙) in the tab bar to configure the app. Three settings determine what the app shows you.
The JQL filter defines which Jira issues appear across all tabs. Enter any valid JQL query, for example:
- `project = PROJ` — all issues in a project
- `project = PROJ AND sprint in openSprints()` — only open sprint issues
- `project = PROJ AND labels = "release-2.0"` — issues with a specific label

All tabs share the same issue data from this filter.
If your team uses Jira sprints, enable Sprint Mode and select your Board. The app searches your Jira Scrum boards — type to filter, then click to select. If the app is opened from within a Jira board, the board is auto-detected.
Sprint Mode unlocks the Sprints tab (sprint planning, drag-drop, auto-level) and the What-If tab. All other tabs work with or without Sprint Mode.
Choose how your team measures work:
Click Save Configuration. The app will load your data and display the appropriate tabs.
Quick Start
At minimum you need a JQL filter. Sprint Mode and a Board are only required if you want the Sprints tab. You can use the Team & Capacity, Scope, Alerts, and What-If (Project view) tabs with just a JQL filter.
| Tab | Purpose | Requires |
|---|---|---|
| Dashboard | Single-screen project health overview with key metrics and navigation | Always visible |
| Sprints | Sprint planning with drag-drop, auto-level, and capacity tracking | Sprint Mode ON + Board selected |
| What-If | What-if analysis and Monte Carlo simulation — Sprint view (by sprint) and Project view (by week) | Always visible (Sprint view requires Sprint Mode ON) |
| Team & Capacity | Team capacity, time off, holidays, and demand vs capacity chart | Always visible |
| Scope | Scope and burndown timeline chart with delivery forecast | Always visible |
| Alerts | Issue problems and dependency analysis | Always visible |
| Epics | Epic progress, forecasts, scope growth, and cross-epic dependencies | Always visible |
The Dashboard is your project's home screen — a single view that answers the fundamental question: "Are we on track?" It surfaces the current project status, the delivery forecast, plan quality warnings, diagnostics, and suggested actions. It is always visible as the first tab, regardless of Sprint Mode or Board settings.
The top of the Dashboard shows four stat cards that summarize your project at a glance. Each card is clickable — it navigates to the relevant detail tab.
Of the work due by today, how much is done? Compares completed work to work that was planned to be done by now (based on issue due dates or sprint end dates):
Shows completed and planned-by-now totals in your configured units. A gear icon reveals a checkbox to include all Done work, not just work that was planned by today.
Shows the projected completion date using the per-sprint, per-user simulation engine. Click the gear icon to open per-card settings with two controls:
Below the projected date, the card shows Remaining (total estimated work not yet completed) and Capacity to target (total team capacity available from now until the target date). When Remaining exceeds Capacity to target, the forecast date will be past the target date. When one team member is driving the delay, the card shows a bottleneck indicator.
The deadline you are measuring against. Two choices:
The target date is shared across the Dashboard, What-If, and What-If (Project view) tabs. Setting it in one place updates all three.
Overall percentage of work completed across all issues in scope. Calculated as sum(estimates for Done issues) ÷ sum(all estimates) × 100. Falls back to issue count if no estimates exist.
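As a rough sketch, the percent-complete rule reads like this (field names such as `estimate` and `status` are illustrative placeholders, not the app's actual data model):

```python
def percent_complete(issues):
    """Overall % complete: estimate-weighted, falling back to issue count."""
    done = [i for i in issues if i["status"] == "Done"]
    total_est = sum(i.get("estimate") or 0 for i in issues)
    if total_est > 0:
        return round(sum(i.get("estimate") or 0 for i in done) / total_est * 100)
    # Fallback: no estimates anywhere -> use raw issue counts instead
    return round(len(done) / len(issues) * 100) if issues else 0

issues = [
    {"status": "Done", "estimate": 5},
    {"status": "In Progress", "estimate": 8},
    {"status": "Done", "estimate": 3},
    {"status": "To Do", "estimate": 4},
]
print(percent_complete(issues))  # 8 of 20 points done -> 40
```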
A warnings banner appears above the stat cards when the sprint plan has issues that undermine forecast reliability. The banner surfaces problems you should address to get a trustworthy delivery forecast. When all warnings are resolved, the banner disappears and the forecast is based on a realistic, leveled plan.
Overloaded resources: One or more team members have more work assigned than their capacity allows across one or more sprints. The forecast reflects this overload honestly — it will show a later date because work overflows sprint to sprint — but the plan itself is unrealistic until rebalanced.
"3 team members overloaded across 2 sprints — Auto-Level to rebalance"
Dependency conflicts: Issues are ordered across sprints in a way that violates their dependencies — for example, a blocker in Sprint 6 that blocks work scheduled in Sprint 5. The forecast assumes work proceeds as planned, but dependency violations may cause delays that are not captured in the simulation.
"2 dependency conflicts in Sprints 5-6 — review in Sprints tab"
External dates at risk: Issues with hard, externally-driven deadlines (locked issues with manually-set due dates) are placed in sprints that end after their deadline. These represent commitments to customers, regulators, or stakeholders that the current plan will miss.
"1 externally-constrained issue at risk of missing its deadline"
The warnings guide a natural planning workflow:
Warnings do not block the forecast
You can always see the projected date. But when warnings are present, the forecast reflects the problems in the plan: overloaded sprints push the date out, and the bottleneck indicator shows which team member is driving the delay. The warnings banner appears on the Dashboard only. The What-If tab does not show warnings because its purpose is to explore scenarios — including broken ones.
A single expandable table showing all project health factors. Click any row with a caret to expand for a detailed breakdown.
| Factor | What It Shows | Calculation |
|---|---|---|
| Scope | Project-wide scope growth percentage since start | (current scope − original scope) / original scope × 100. ● Green if ≤10%. ● Amber if 11–25%. ● Red if >25%. |
| Team Capacity | Team utilization percentage | remaining demand / total capacity × 100. ● Green if 60–90%. ● Amber if <60% or 91–110%. ● Red if >110%. |
| Delivery Rate | Percentage of capacity actually delivered, with trend | average(completed / capacity) across recent sprints, with trend direction (improving, stable, declining). ● Green if ≥85%. ● Amber if 65–84%. ● Red if <65%. |
| Estimate Accuracy | Time spent vs original estimate — are estimates reliable? | total time spent / total original estimate × 100. ● Green if 80–110%. ● Amber if <80% or 111–130%. ● Red if >130%. |
| Team Balance | Workload distribution across team members | Compares each member’s load% (demand / capacity × 100). ● Green if all members 50–100%. ● Amber if any member >100% or <50%. ● Red if any member >115% while another is <60%. |
| Dependency Conflicts | Cross-sprint dependency violations | Counts issues blocked by something in a later sprint. ● Green if 0. ● Amber if 1–2. ● Red if ≥3. |
| Alerts | Error and warning counts from the Alerts tab | Errors: done-with-remaining, dependency conflicts, circular dependencies. Warnings: overdue, child-after-parent, missing dates, missing estimates. ● Green if 0 alerts. ● Amber if warnings only. ● Red if errors present. |
Each row shows a status dot (green/amber/red/black) and a plain-language explanation.
Each factor row is expandable — click to see a detailed breakdown of the calculation, contributing issues, and trend data.
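The Dependency Conflicts count in the table above can be sketched as follows (the dictionary shapes are illustrative assumptions, not the app's real data structures):

```python
def dependency_conflicts(blockers_of, sprint_index):
    """Count issues whose blocker sits in a LATER sprint than the issue.

    `blockers_of` maps issue key -> list of blocker keys; `sprint_index`
    maps issue key -> sprint position (0 = earliest). Both shapes are
    illustrative, not the app's actual model.
    """
    conflicts = 0
    for key, blockers in blockers_of.items():
        for blocker in blockers:
            if sprint_index.get(blocker, 0) > sprint_index.get(key, 0):
                conflicts += 1
                break  # count each blocked issue once
    return conflicts

blockers_of = {"A": [], "B": ["A"], "C": ["D"], "D": []}
sprints = {"A": 0, "B": 1, "C": 0, "D": 1}  # C is blocked by D, which is later
print(dependency_conflicts(blockers_of, sprints))  # -> 1
```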
Two buttons appear in the top-right corner of the Dashboard:
The Weekly Digest includes:
A collapsible panel that auto-generates an AI analysis of your project status. Each bullet point is prefixed with PROJECT STATUS: or TEAM & PLAN HEALTH: and is expandable for more detail. Recommendations are listed separately.
Below the insights, an Ask AI section lets you type natural-language questions about your project. For example: "Which sprints are at risk?", "Who is overloaded?", or "What should I focus on this week?" The AI receives your project context — sprints, issues, capacity, velocity, and demand vs capacity data — and returns a focused answer.
This feature requires an AI API key configured in Settings (see Settings Reference).
No Data State
If no JQL filter is configured, the Dashboard shows a "No Data Available" message prompting you to open Settings and configure a JQL filter.
The Delivery Forecast predicts when your project will finish based on the team's throughput and the remaining work. It appears as a stat card on both the Dashboard and What-If tabs, and drives the projected completion date throughout the app.
Click the gear icon (⚙) on any stat card to expand its settings. The Delivery Forecast card reveals the Projection Method dropdown and Scope Growth Method controls.
The forecast walks through each future sprint (or week, if Sprint Mode is off) and simulates work being completed using a stepped per-sprint, per-user simulation:
If one team member is overloaded while others have spare capacity, the forecast reflects this honestly. The project finishes when the last person clears their queue, not when the team average says so.
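The "last person clears their queue" behavior can be illustrated with a minimal stepped simulation (a sketch only — the real engine also handles dependencies, PTO, and unassigned work):

```python
def forecast_sprint(queues, capacities):
    """Stepped per-user simulation: each sprint, every member burns down
    their own queue up to their own capacity; leftovers carry forward.
    Returns the 1-based sprint number in which the LAST member finishes.
    """
    sprint = 0
    remaining = dict(queues)  # member -> remaining work
    while any(work > 0 for work in remaining.values()):
        sprint += 1
        for member, work in remaining.items():
            remaining[member] = max(0.0, work - capacities.get(member, 0))
        if sprint > 1000:  # safety valve in case a member has zero capacity
            return None
    return sprint

# Alice is overloaded (30 pts at 10/sprint); Bob idles after sprint 1.
# The project finishes when Alice finishes, not at the team average:
print(forecast_sprint({"Alice": 30, "Bob": 8}, {"Alice": 10, "Bob": 10}))  # -> 3
```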
The Delivery Projection Method dropdown controls how each team member's per-sprint capacity is calculated. All four methods use the same stepped simulation — only the source of the capacity number differs.
Uses the capacity set for each sprint. Each sprint can have its own capacity override — a custom number, team-calculated capacity, or effective capacity — configured in the Sprints tab. Per-user capacity can be set individually for each team member within each sprint using the inline capacity editor in the per-user table. Sprints without an override use the default capacity from Settings. When per-user limits are not set, the sprint limit is split proportionally based on each member's team capacity.
This represents what the PM has budgeted for each sprint.
Uses the bottom-up calculation from the Team & Capacity tab — each member's hours per week, multiplied by their utilization percentage, minus holidays and PTO for that sprint. This represents what the team can actually work based on their availability.
Applies each team member's historical efficiency rate to their team capacity. If a member typically delivers 80% of their planned capacity, their effective capacity is their team capacity multiplied by 0.8. This is the most realistic method for teams with established delivery history.
Uses each team member's actual average delivery rate from completed sprints. If Alice has averaged 12 points per sprint over the last 5 sprints, that is her projected throughput — adjusted to zero for any sprint where she has PTO. This method is purely empirical: it uses what the team has delivered, not what they are configured to deliver.
The system selects the first available method in this order: Sprint Capacity, Effective, Team, Velocity. You can override by selecting a different method from the dropdown.
Each sprint card shows a per-user table with columns for Member, Demand, Capacity, and Status. The Capacity column is editable — click on a member's capacity value to set their individual capacity for that sprint. This override is saved per-user per-sprint and takes priority over the global settings default.
This allows fine-grained control: you can give Alice 20 points of capacity in Sprint 5 (she's focused on this sprint) and Bob only 10 (he's splitting time with another project), even though both have the same global default.
Below the projected date, the forecast card shows:
When Remaining exceeds Capacity to target, the forecast date will be past the target date.
When the forecast is driven by one overloaded team member rather than overall team capacity, the forecast card identifies the bottleneck:
May 30, 2026 (Sprint 10) — 6 days late
Bottleneck: Alice — overloaded in Sprints 5-7
This tells you WHY the date is late and WHO to rebalance, so you can make an informed decision about whether to reassign work.
When Sprint Mode is off, the forecast uses weekly steps instead of sprint steps. Each week's capacity comes from team member settings (hours per week multiplied by utilization, minus any PTO or holidays that week). Issues are bucketed by their due dates into weeks. The simulation and throughput methods work identically — only the time unit changes from sprints to weeks.
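A single week's capacity for one member follows the rule above; a minimal sketch (parameter names are illustrative):

```python
def weekly_capacity(hours_per_week, utilization, pto_hours=0.0):
    """One member's net capacity for a given week, in hours:
    hours per week x utilization, minus PTO/holiday hours that week.
    """
    return max(0.0, hours_per_week * utilization - pto_hours)

# 40 h/week at 80% utilization with one 8-hour holiday that week:
print(weekly_capacity(40, 0.8, pto_hours=8))  # -> 24.0
```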
Scope growth models the rate at which new work enters the project. When enabled, it reduces the effective capacity each sprint by the amount of new work expected to arrive, extending the forecast accordingly.
The growth rate represents the net new work added per week: issues created minus issues removed (cancelled, rejected, or marked as won't do). Done issues are counted as additions — they were real scope when created. Only issues in removed statuses are subtracted.
The calculation uses issue creation dates and the current estimation mode:
On the Scope tab, this computes the historical growth rate over the selected period (Weekly, Monthly, All Time, etc.). On the Dashboard, it uses the project's full history from the earliest issue creation date to today. Both tabs use the same underlying calculation.
You specify a fixed growth rate in points or hours per week. Use this to model specific scenarios — for example, "what if we add 10 points of new work every week?"
Growth adds to the unassigned demand pool each sprint:
unassignedDemand += growthRate × sprintWeeks
This new work competes for spare team capacity after assigned work is handled. If growth exceeds the team's total throughput, the forecast shows "Never" — the team cannot finish because new work arrives faster than it is completed. The method remains selectable so you can see the impact and compare against other methods.
When the forecast shows "Never"
If scope growth exceeds team throughput, the project's remaining work grows every sprint. The forecast correctly reports that the project cannot finish under these conditions. To resolve this, either reduce scope growth (cut incoming work) or increase team capacity.
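The interaction between scope growth and throughput can be sketched as a sprint-by-sprint walk (an illustrative simplification of the simulation described above; the cap of 200 sprints stands in for the app's "Never" detection):

```python
def forecast_with_growth(remaining, capacity_per_sprint, growth_per_week,
                         sprint_weeks=2, max_sprints=200):
    """Each sprint, new work arrives (growth) and the team burns capacity.
    Returns the sprint in which the work clears, or None ("Never") when
    growth outpaces throughput.
    """
    for sprint in range(1, max_sprints + 1):
        remaining += growth_per_week * sprint_weeks  # new scope arrives
        remaining -= capacity_per_sprint             # team burns capacity
        if remaining <= 0:
            return sprint
    return None  # growth >= throughput: the forecast shows "Never"

print(forecast_with_growth(50, capacity_per_sprint=20, growth_per_week=5))   # -> 5
print(forecast_with_growth(50, capacity_per_sprint=20, growth_per_week=12))  # -> None
```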
The Sprints tab is where you plan and manage your work across sprints. Each sprint appears as a collapsible card showing its issues, capacity, and team workload. To see the Sprints tab, enable Sprint Mode and select a Board in Settings.
The page is organized top to bottom:
The header row gives you a quick summary without expanding anything:
Click the Sprint Details toggle below the header to expand additional information:
When a sprint is expanded, you see its issues in a table. The columns shown are the ones you selected in Display Columns (Settings). Every row has:
Due dates in the past are highlighted in red. Assignees show a colored avatar chip.
Click any column header to sort the issues in that sprint. Click again to reverse the order, and a third time to clear the sort. A small arrow (↑/↓) shows which column is sorted and in which direction.
Sorting is per-sprint (each sprint sorts independently) and resets when you reload. It is not saved. If you manually reorder an issue (by dragging within the sprint), the sort is cleared.
Each sprint has a capacity value that represents how much work it can hold. A dropdown in the sprint header controls where this number comes from:
| Option | Per Sprint mode | Per User mode |
|---|---|---|
| Settings default | Uses the Capacity Limit from Settings | Each assignee gets the per-user limit from Settings; sprint total = users × limit |
| Team / Capacity settings | Uses calculated capacity from team config | Each person's capacity is calculated from the Team & Capacity tab & sprint dates |
| Custom / Custom for sprint | You type a sprint total | You type a per-user value; sprint total = users × your value |
The sprint uses the Capacity Limit from Settings. In Per Sprint mode, this is the whole sprint's capacity. In Per User mode, each assignee gets the per-user limit, and the sprint total is the sum (displayed as "N × limit = total").
In Per User mode (labeled "Capacity settings"), capacity is calculated from team members configured on the Team & Capacity tab:
- Points mode: sum(member's weekly points × utilization% × overlap fraction). The overlap fraction accounts for weeks that start or end mid-sprint; holidays and time-off days reduce it proportionally.
- Hours mode: sum(member's weekly hours × utilization%), minus hours for holidays and time off. Days = hours ÷ 8 (8-hour workday).

The display shows a formula: assignees × per-user capacity = sprint total.
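A points-mode sketch of one member's capacity for one sprint (the 5-working-day week and the exact shape of the overlap adjustment are assumptions for illustration):

```python
def member_sprint_capacity(weekly_points, utilization, sprint_weeks, days_off=0):
    """Points-mode capacity for one member over one sprint:
    weekly points x utilization x sprint weeks x overlap fraction.
    Days off shrink the overlap fraction proportionally
    (assuming 5 working days per week).
    """
    working_days = sprint_weeks * 5
    overlap = max(0.0, (working_days - days_off) / working_days)
    return weekly_points * utilization * sprint_weeks * overlap

# 10 pts/week, 100% utilization, 2-week sprint, 2 days of PTO:
print(member_sprint_capacity(10, 1.0, 2, days_off=2))  # -> 16.0
```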
In Per Sprint mode (labeled "Team"), capacity is calculated from team config. If no team members are configured, the option is greyed out.
Opens an inline number editor where you type a capacity value. In Per Sprint mode, this is the sprint total. In Per User mode, you enter a per-user value and the sprint total is calculated as users × your value (displayed as "N × value = total").
The sprint header shows capacity remaining — the total capacity minus completed work. This lets you compare remaining demand against remaining capacity at a glance. If remaining demand exceeds capacity remaining, the demand stat turns red and an "over by X" indicator appears. When no issues are done yet, capacity remaining equals the full sprint capacity.
You manage the sprint lifecycle directly from each sprint card's action buttons.
Click + Create Sprint below the sprint list. The new sprint gets dates that follow on from the last sprint, using the Sprint Length you configured (2, 3, or 4 weeks).
Click Start on a future sprint to make it active. If you already have an active sprint, a warning asks you to confirm that you want two sprints running at the same time.
Click Complete on an active sprint. If there are unfinished issues, a dialog lets you choose where to move them — to another sprint or back to the backlog. When the sprint completes, its velocity data is automatically captured for the Velocity section.
Click the delete button and confirm. The sprint is removed and its issues move to the backlog.
Open the Sprint Details panel, then click the goal text. Type your changes and press Enter or click away to save.
You can align issue dates with sprint boundaries:
This is useful for features like the Team & Capacity chart and What-If (Project view) that rely on issue dates.
Drag any issue row from one sprint and drop it onto another sprint card. The issue is moved in Jira immediately. You can also drag issues to the backlog at the bottom.
Check the boxes next to several issues, then drag any one of them. All selected issues move together. A badge shows how many you're moving (e.g., "5 issues"). A bar above the sprint list shows your selection count with a Clear selection button.
Drag an issue up or down within the same sprint to change its position. A blue line shows where it will land. This custom order is saved and persists across sessions.
Reordering is disabled while a column sort is active. If you reorder an issue, the sort clears.
While dragging, move your cursor near the top or bottom edge of the screen. The page scrolls automatically so you can reach sprints that aren't currently visible.
Click the lock icon on an issue to prevent it from being dragged. Locked issues also stay in place during Auto-Level.
Auto-Level is a planning tool that redistributes issues across sprints so that no sprint exceeds its capacity. It uses each sprint's capacity setting: Manual numbers if set, Team-calculated values if selected (Per User mode), or the Settings default otherwise. It respects dependencies (blockers always go in earlier sprints) and leaves locked issues and sprints alone. Everything happens as a preview first — nothing is saved to Jira until you explicitly accept.
| Strategy | How It Decides What Goes Where | Best For | Trade-off |
|---|---|---|---|
| Priority | Puts the highest-priority issues first, filling sprints front to back | Teams that want to ensure high-priority items ship first | Early sprints may contain a mix of large and small issues (size doesn't matter, only priority) |
| Size | Puts the smallest issues first, filling sprints front to back | Teams that want to maximize the number of items completed early | High-priority but large items may end up in later sprints |
| Due Date | Puts the soonest-due issues first, filling sprints front to back | Teams working against external deadlines | An issue with a tight deadline but low priority will be placed before a high-priority issue with no deadline |
| Balanced | Tries to spread work evenly. Places large issues first, picking the sprint where each one fits best based on remaining room, how much work each person already has in that sprint, and whether the sprint end date aligns with the issue's due date. | Teams that want predictable, consistent sprint loads | High-priority items may not all end up in the earliest sprints |
| Velocity | A toggle pill that uses historical efficiency to set sprint limits. When active, the other four strategy pills are disabled. See Using Velocity as Capacity below. | Teams with enough sprint history to have reliable velocity data | If recent velocity was unusually low (holiday sprint), the algorithm becomes overly conservative |
All strategies respect dependencies: if issue A blocks issue B, then A is always placed in an earlier (or same) sprint as B. If circular dependencies exist, they are detected and flagged so you know to resolve them.
Algorithm Details
Auto-Level uses a greedy bin-packing algorithm with dependency constraints. First, it builds a dependency graph and performs a topological sort so blockers always come before blocked issues. Each issue gets a minimum sprint index = 1 + max(blocker's sprint index), ensuring blockers are placed first. Then, in strategy order, each issue is placed into the earliest sprint (starting from its minimum) that has available capacity. If no sprint has room, a new sprint is created (up to 10). Completed work in each sprint is subtracted from capacity so remaining room is calculated accurately.
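The core placement loop can be sketched like this (a simplified illustration — the real Auto-Level also handles locked issues, per-user limits, completed-work subtraction, and new-sprint creation; the input is assumed to be already sorted by the chosen strategy, with blockers topologically first):

```python
def auto_level(issues, capacities):
    """Greedy bin-packing with dependency constraints.

    issues: list of dicts with "key", "points", "blockers" (list of keys),
            pre-sorted by strategy with blockers before blocked issues.
    capacities: remaining capacity per sprint, index 0 = earliest.
    Returns a mapping of issue key -> sprint index.
    """
    placed = {}
    room = list(capacities)
    for issue in issues:
        # Blockers must land in an earlier-or-same sprint; topological
        # order guarantees they were placed already.
        min_idx = max((placed[b] for b in issue["blockers"]), default=0)
        for idx in range(min_idx, len(room)):
            if room[idx] >= issue["points"]:
                room[idx] -= issue["points"]
                placed[issue["key"]] = idx
                break
    return placed

issues = [
    {"key": "A", "points": 8, "blockers": []},
    {"key": "B", "points": 5, "blockers": ["A"]},  # may not precede A
    {"key": "C", "points": 3, "blockers": []},
]
print(auto_level(issues, [10, 10]))  # -> {'A': 0, 'B': 1, 'C': 1}
```

Note how C lands in Sprint 2 even though it was sorted early: Sprint 1 only has 2 points of room left after A, so the earliest sprint with available capacity is Sprint 2.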
Earliest Available Sprint
Priority, Size, and Due Date strategies pack issues into the earliest sprint with available capacity. This means an issue can move backward to an earlier sprint if there is room, not just forward. This ensures sprints are filled front-to-back as efficiently as possible.
If the existing sprints don't have enough room, Auto-Level creates new ones (up to 10). New sprints get dates that follow on from the last sprint using your Sprint Length setting, and their capacity defaults to the average of your existing sprints.
When Capacity Mode is set to Per User in Settings, Auto-Level tracks each person's workload per sprint individually. If someone is overloaded in a sprint, their excess issues are moved to a sprint where they have room. This produces a different result from Per Sprint mode: an issue might stay in Sprint 1 in Per Sprint mode (team total is fine) but get moved to Sprint 2 in Per User mode (because its assignee is overloaded).
If you have velocity data (from completed sprints), the Velocity toggle pill appears in the strategy row. Click it to activate velocity-based capacity. This tells Auto-Level to calculate sprint limits based on each team member's historical efficiency against their real available capacity (from the Team & Capacity tab), rather than using the configured limit or raw team availability.
When enabled:
For best results, configure your team on the Team & Capacity tab first. If no team capacity data is available, the feature falls back to the original behavior (flat historical average velocity).
Click Compare All to run all four strategies at once and see a side-by-side comparison. The Delivery Forecast panel appears showing:
Strategy colors: Priority (blue), Size (orange), Due Date (green), Balanced (purple), Original baseline (gray).
Below the sprint cards, the Velocity section shows how your team has performed in past sprints. This data feeds into capacity calculations and the "Use velocity as capacity" option in Auto-Level.
Five summary tiles across the top:
- mean(completed work per sprint) across the lookback window. Units match your Estimation Mode (points or hours/days).
- avg velocity ÷ sprint length in weeks
- mean(completed ÷ planned) across recent sprints — what percentage of planned work the team actually delivers. When Capacity tab data is available, a separate efficiency-vs-capacity metric tracks completed work against real available hours.
- mean(completed ÷ committed) — what percentage of committed work gets finished each sprint. Shows trend direction (improving, stable, declining) by comparing recent vs older sprints.

A row of user chips lets you see velocity for specific team members. Click a name to filter; click "All" to show the whole team. You can select multiple people.
A table showing each completed sprint: name, length, capacity, completed work, per-week rate, and efficiency. The best-performing and worst-performing sprints are highlighted.
Click any row to expand it and see individual issues from that sprint — which were completed, which were left incomplete, and which were added or removed mid-sprint.
Velocity data accumulates automatically each time you complete a sprint. If you're starting fresh or want to backfill history, use these buttons:
The What-If tab lets you explore how changes in team performance, scope, and estimates would affect your delivery timeline. It combines adjustable sliders with the Delivery Forecast and the Demand vs Capacity chart to give immediate visual feedback. A Sprint / Project toggle at the top switches between two views.
| Aspect | Sprint view | Project view |
|---|---|---|
| Data source | Sprints from board | Issues with due dates (from JQL) |
| Time buckets | Sprints | Weeks (Monday boundaries) |
| Requires | Sprint Mode ON | Issues with due dates |
| Capacity | Per-sprint capacity | Weekly capacity (holiday and time-off aware) |
| Chart labels | Sprint names | Week dates (e.g., "Mar 10") |
Both views share the same sliders, Monte Carlo simulation, AI analysis, and KPI cards. The only difference is the time buckets used.
Four sliders adjust different aspects of the project simulation. Each ranges from -50% to +50%:
Scales each team member's throughput. At +20%, every team member delivers 20% more per sprint. At -30%, everyone delivers 30% less. Models the team working faster or slower than historical averages.
Scales each team member's available capacity. Simulates adding or losing resources — for example, a team member going on extended leave (less) or a new hire ramping up (more).
Scales the estimated size of each issue. Moving right simulates issues being larger than estimated (common in early-stage projects). Moving left simulates estimates being conservative.
Scales the total remaining work. Moving right simulates scope additions beyond the current backlog. Moving left simulates scope cuts.
Velocity and Capacity compound together on the supply side. Issue Estimation and Scope compound together on the demand side. Moving Velocity +10% and Capacity +10% produces a 21% increase in effective throughput (1.10 × 1.10 = 1.21). Similarly, Issue Estimation +20% and Scope +10% produces 32% more demand.
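The compounding arithmetic in one place (a trivial sketch; slider values are expressed as fractions, e.g. +10% = 0.10):

```python
def effective_multipliers(velocity=0.0, capacity=0.0, estimation=0.0, scope=0.0):
    """Supply and demand multipliers from the four What-If sliders.
    Velocity and Capacity compound on the supply side; Issue Estimation
    and Scope compound on the demand side.
    """
    supply = (1 + velocity) * (1 + capacity)
    demand = (1 + estimation) * (1 + scope)
    return supply, demand

supply, demand = effective_multipliers(velocity=0.10, capacity=0.10,
                                       estimation=0.20, scope=0.10)
print(round(supply, 2), round(demand, 2))  # -> 1.21 1.32
```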
As you adjust the sliders, both the Delivery Forecast stat card and the Demand vs Capacity chart update in real time, using the same underlying simulation.
In Simulation mode, each slider has per-variable preset buttons that set the uncertainty range for that variable:
Presets are per-variable — you can set Velocity to Optimistic while keeping Scope at Conservative. In What-If mode (fixed sliders, not simulation), there are no presets — you set each slider to a specific value.
An AI chat panel lets you describe scenarios in natural language (e.g., "What if we lose a developer for 2 sprints?"). Requires an AI API key in Settings. The AI returns slider recommendations with an Apply button, impact assessment, and prioritized recommendations.
The Demand vs Capacity chart shows how your team's workload compares to their capacity across each sprint (or week, if Sprint Mode is off). Each sprint appears as a pair of stacked bars — one for demand, one for capacity — so you can see at a glance where the team is overloaded and who is driving it. This chart appears on the What-If tab and updates in real time as you adjust sliders.
Each sprint has two stacked bars side by side:
Shows each team member's available capacity for that sprint, stacked by person. Each segment's height represents that member's net capacity after accounting for their utilization rate, holidays, and PTO. The total bar height is the sprint's total team capacity.
Shows each team member's assigned work for that sprint, stacked by person in the same order and color. Unassigned issues appear as a grey segment at the top. The total bar height is the sprint's total demand including any overflow carried from the previous sprint.
Each team member is assigned a consistent color across both bars. When a member's demand segment is taller than their capacity segment, their portion is highlighted to flag the imbalance.
When a sprint's total demand exceeds its total capacity, the excess work carries forward to the next sprint. A dashed line separates planned demand from overflow. Sprints receiving overflow show the carried work as a hatched segment at the base of their demand bar.
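The cascade rule can be sketched in a few lines (an illustrative simplification of the chart's behavior):

```python
def cascade_overflow(demand, capacity):
    """Per-sprint overflow: when a sprint's demand exceeds its capacity,
    the excess carries into the next sprint's demand. Returns each
    sprint's total demand including carried work.
    """
    totals, carry = [], 0.0
    for d, c in zip(demand, capacity):
        total = d + carry              # planned demand + overflow from before
        totals.append(total)
        carry = max(0.0, total - c)    # excess cascades forward
    return totals

# Sprint 1 is overloaded by 5; that excess lands on Sprint 2:
print(cascade_overflow([25, 15, 10], [20, 20, 20]))  # -> [25.0, 20.0, 10.0]
```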
| Pattern | What It Means | What to Do |
|---|---|---|
| Balanced sprint | Demand and capacity bars are roughly equal height. Work is evenly distributed across team members. | No action needed — the sprint is healthy. |
| Overloaded sprint | Demand bar is taller than capacity bar. Overflow will cascade to the next sprint, pushing later work out. | Move issues to later sprints, or increase capacity (add team members, reduce PTO). |
| Individual bottleneck | One team member's demand segment is much larger than their capacity segment, even though the sprint's total demand may be within total capacity. | Reassign work from the overloaded member to someone with spare capacity. |
| Idle capacity | A member's capacity segment is visible but their demand segment is small or absent. | Assign more work to that person, or consider reallocating them to another project. |
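The overflow cascade described above can be modeled in a few lines. This is an illustrative sketch, not the app's implementation; the function and parameter names are hypothetical:

```python
def cascade_overflow(demand, capacity):
    """Carry excess demand forward sprint by sprint.

    demand/capacity: per-sprint totals (points or hours).
    Returns (effective_demand, carried_in): effective_demand[i] is planned
    demand plus overflow carried into sprint i; carried_in[i] is the
    hatched segment shown at the base of sprint i's demand bar.
    """
    overflow = 0.0
    effective, carried = [], []
    for d, c in zip(demand, capacity):
        carried.append(overflow)          # work carried in from the previous sprint
        total = d + overflow              # planned demand + carried work
        effective.append(total)
        overflow = max(0.0, total - c)    # excess cascades to the next sprint
    return effective, carried
```

With demand [10, 5, 5] against capacity [8, 8, 8], sprint 1 overflows 2 units into sprint 2, which then just fits within its capacity.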
Two vertical markers on the chart indicate key milestones: the target marker at your target sprint or week, and the projected completion marker.
The projected completion marker shows the sprint where all remaining work, including cascaded overflow, is finally absorbed. This matches the date shown in the Delivery Forecast stat card.
Monte Carlo simulation replaces the fixed What-If sliders with randomized ranges to produce a probability distribution of completion dates. Instead of asking "what if velocity is +10%?", it asks "what is the range of likely outcomes given realistic uncertainty?"
Each variable has a range slider with two dots marking the ends of its uncertainty range. For each trial, the simulator picks a random value between the red and green dots. Wider ranges model more uncertainty; tighter ranges model a more predictable project.
P50 is the date by which you have a 50% chance of finishing: half of the simulated scenarios finished before this date, half after. Use this as the "most likely" outcome.
P85 is the date by which you have an 85% chance of finishing. This is the standard planning target for teams that want high confidence in their commitments. When giving dates to stakeholders, P85 is the safest choice.
The spread of possible outcomes is plotted as a cumulative probability curve, with markers at the key percentiles.
A narrow curve means the project timeline is predictable regardless of individual variation. A wide curve means small changes in team performance produce large swings in the delivery date — a signal that the plan is fragile and needs attention.
Traditional Monte Carlo applies the same random factor to the entire team — everyone is fast together or slow together. Project Commander uses a blended per-user randomization model: each iteration draws a global trend (50% weight) that captures team-wide factors like holiday weeks or infrastructure outages, then adds per-user noise (50% weight) that captures individual variation. Alice might have a productive sprint while Bob is blocked by a complex issue, but both are affected by the same team-wide trend.
This blended approach produces wider, more realistic distributions than pure team-level randomization while avoiding the over-optimism of fully independent models. A team of 5 where each member varies semi-independently has far more possible outcomes than a team where everyone moves in lockstep.
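A runnable sketch of the blended draw and the percentile read-out. It assumes uniform sampling between the two dots and a naive constant-throughput model; the function names and the example team are illustrative, not the app's actual code:

```python
import random

def blended_velocity(team_range, user_ranges, rng):
    """One Monte Carlo trial: blend a team-wide trend (50% weight)
    with per-user noise (50% weight)."""
    trend = rng.uniform(*team_range)                   # shared team-wide factor
    return {
        user: 0.5 * trend + 0.5 * rng.uniform(lo, hi)  # individual variation
        for user, (lo, hi) in user_ranges.items()
    }

def percentile(samples, p):
    """Nearest-rank read-out for P50/P85 from the sorted samples."""
    s = sorted(samples)
    return s[min(len(s) - 1, round(p / 100 * (len(s) - 1)))]

# Simulate weeks-to-finish for 40 remaining points with two team members.
rng = random.Random(7)
weeks = []
for _ in range(2_000):                                 # trial count from the manual
    v = blended_velocity((8, 12), {"alice": (8, 12), "bob": (6, 10)}, rng)
    weeks.append(40 / sum(v.values()))                 # naive throughput model
p50, p85 = percentile(weeks, 50), percentile(weeks, 85)
```

P85 always lands at or after P50, because an 85%-confidence date must cover more of the slow scenarios than the median does.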
| Use Case | Tool | Why |
|---|---|---|
| Explore a specific scenario ("What if we lose a team member?") | What-If sliders | The sliders give you a single deterministic answer for a specific set of assumptions. |
| Understand the range of likely outcomes for planning | Monte Carlo simulation | The probability distribution tells you not just when you might finish, but how confident you can be in that date. |
| Set a commitment date for stakeholders | Monte Carlo P85 | The P85 date gives you 85% confidence — enough margin for most stakeholder commitments. |
| Compare different planning strategies | Compare All + simulation curves | Overlay Monte Carlo distributions from different Auto-Level strategies to see which gives the tightest, most favorable distribution. |
The Team & Capacity tab combines demand analysis and team configuration in one place. The Demand vs. Capacity chart appears first, followed by team configuration below.
The tab has three collapsible sections. Click any section header to expand or collapse it.
A period bar at the top lets you choose the time range you're looking at: Weekly, Biweekly, Monthly, Quarterly, or Yearly. Arrow buttons navigate forward and back. All the numbers in the Team Members table (net capacity, demand, issue count) adjust to show values for the selected period.
A table showing each team member with these columns:
| Column | What It Shows |
|---|---|
| Name | Team member name with a colored avatar. A view time off link opens a modal showing all of that person's time off (see below). |
| Hrs/Wk or Pts/Wk | Capacity per week (editable). In Time Mode shows hours per week; in Points Mode shows points per week. Edit to set all weeks in the current period to that value. |
| Total | Total capacity for the selected period — the sum of all weekly values |
| Util % | Utilization percentage (editable). Adjusts how much of a member's capacity is available for project work. Defaults to 100%. Persists to team member configuration. |
| Time Off | Total hours deducted for holidays and PTO in the period. Click the toggle to expand a detail row showing each holiday and PTO entry with dates and hours. |
| Net Hrs / Net Pts | Net capacity after deductions. Calculated as Total capacity minus holiday deductions minus PTO hours. In Time Mode shows Net Hrs (or Net Days); in Points Mode shows Net Pts. This is the number used for workload status. |
| Demand | How much work is assigned to them in the period |
| Issues | Number of issues assigned, with expandable dropdown showing issue keys and summaries |
| Status | A color-coded badge comparing demand against net capacity (after deductions) |
You can select multiple team members using the checkboxes. The Hrs/Wk (or Pts/Wk) column shows "varies" if weeks in the period have different values.
Capacity is stored as explicit per-week values. Each team member has a capacity value for each ISO week (Monday–Sunday). Weeks with no value set default to zero.
This model gives you full control — reduce capacity for holiday weeks, ramp new members up gradually, or set different values for different sprints. All downstream calculations (sprint capacity, demand vs capacity, risk) resolve to weekly granularity.
If you have existing team data from before the weekly model, a migration banner appears with a "Populate weeks" button that fills the selected period from your previous rates.
When the Delivery Forecast or Sprints tab needs a capacity number for a sprint, the system combines each team member's weekly capacity map with the sprint's date range:
Points Mode:
Sprint capacity = Σ(member) [ Σ(week overlapping sprint) [ weeklyPoints × utilization% × overlapFraction ] ]
Time Mode:
Sprint capacity = Σ(member) [ Σ(week overlapping sprint) [ weeklyHours × utilization% ] − holidayHours − ptoHours ]
Where weeklyPoints/weeklyHours is the member's stored capacity value for that ISO week, utilization% is their utilization rate, overlapFraction is the fraction of the week that falls inside the sprint's date range, and holidayHours/ptoHours are the member's holiday and PTO deductions within the sprint.
Inactive team members contribute zero capacity. This calculation is used by the Delivery Forecast (Team Capacity method), Auto-Level, and the Demand vs Capacity chart when "Team capacity" is selected as the capacity source.
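The Points Mode formula above can be sketched directly. The data shapes (weekly_points, utilization, active) are hypothetical, and the Time Mode variant would subtract holiday and PTO hours instead of scaling by overlapFraction:

```python
def sprint_capacity_points(members, sprint_weeks):
    """Points Mode sprint capacity: for each active member, sum weekly points
    over the weeks overlapping the sprint, scaled by utilization and by the
    fraction of each week that falls inside the sprint."""
    total = 0.0
    for m in members:
        if not m.get("active", True):
            continue                                 # inactive members contribute zero
        for week, overlap in sprint_weeks.items():
            pts = m["weekly_points"].get(week, 0.0)  # unset weeks default to zero
            total += pts * m["utilization"] * overlap
    return total
```

For a member with 10 points in each of two weeks at 80% utilization, where the sprint covers all of the first week and half of the second, this yields 12 points.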
Each team member gets a color-coded status based on how their assigned work (demand) compares to their net capacity — capacity after holiday and PTO deductions:
| Status | When It Appears | Color |
|---|---|---|
| OVERLOADED | Demand is more than 115% of capacity | Red |
| OPTIMAL | Demand is 90–115% of capacity | Green |
| AVAILABLE | Demand is 60–90% of capacity | Yellow |
| UNDERLOADED | Demand is less than 60% of capacity | Gray |
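The thresholds in the table map to a simple classifier. Behavior at exactly 90% and 115%, and with zero net capacity, is an assumption of this sketch:

```python
def workload_status(demand, net_capacity):
    """Map a member's demand vs net capacity (after deductions) to a badge."""
    if net_capacity <= 0:
        return "OVERLOADED" if demand > 0 else "UNDERLOADED"  # assumed edge case
    ratio = demand / net_capacity * 100
    if ratio > 115:
        return "OVERLOADED"   # red
    if ratio >= 90:
        return "OPTIMAL"      # green
    if ratio >= 60:
        return "AVAILABLE"    # yellow
    return "UNDERLOADED"      # gray
```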
A calendar view for managing PTO. Click a date to mark a single day off. Use Ctrl+click to toggle individual days on and off. Use Shift+click to select a range of dates.
Time off and company holidays are automatically deducted from each member's capacity. The deductions appear in the Time Off column of the Team Members table. Click the toggle arrow to expand a detail row listing each holiday and PTO entry with its date and hours deducted. The resulting Net column shows capacity after all deductions.
Click the view time off link next to any team member's name to open a modal showing all of their time off across all time periods.
Add company-wide holidays by entering a date and name. Check the Recurring box to have the holiday repeat every year automatically. Holidays reduce available capacity for all team members.
The chart shows daily capacity (green) and daily demand (blue) side by side so you can see whether your team can keep up with the work. Click legend items to toggle additional traces like cumulative totals, scope, and completion percentage. Red-shaded areas highlight overload zones where demand exceeds capacity.
Above the chart, summary stats show cumulative capacity and cumulative demand for the selected period, and whether demand exceeds capacity (shown in red as “Over by”).
Below the chart, a collapsible Resource Breakdown table lists each team member with their issue count, demand, load percentage, and status badge (Available, Loaded, or Overloaded).
Additional controls below the chart let you customize the analysis.
The Scope tab shows how much work has been added over time and how much has been completed, all on one timeline chart. In Points Mode, values are in story points. In Time Mode, values are in hours or days.
Two lines tell the story: the scope line tracks the total work in the project over time, and the burndown line tracks completed work.
The chart legend is interactive — click any legend item to show or hide its trace. Hidden items appear dimmed with a strikethrough label.
A row of colored user chips appears above the chart (always visible when there are multiple assignees). Click a name to filter the burndown line to just that person's completed work. The scope line always shows the total scope. Click “All” or clear selections to reset.
Inline controls appear between the user chips and the chart:
Choose how the timeline is grouped: Weekly, Biweekly, Monthly, or Quarterly. Arrow buttons navigate between periods.
Below the chart, stat cards show Total Scope, Completed Work, Remaining Work, % Complete, and issue counts. A breakdown table lists individual issues grouped by the selected period.
The Scope tab has its own Delivery Forecast stat card with the same controls as the Dashboard: Delivery Projection Method dropdown and Scope Growth Method checkbox + dropdown. The projection method and scope growth settings are shared — changing them on one tab updates the other.
The Alerts tab scans your issues for common problems and analyzes dependency relationships. A badge on the tab shows the total number of alerts found.
Issues are grouped into six collapsible sections. Click a category header to expand or collapse it:
| Category | What It Catches |
|---|---|
| Done with Remaining Work | Issue is marked Done but still has time remaining on it |
| Overdue | Issue is not Done and its due date has passed |
| Dependency Conflicts | Issue is blocked by something that finishes after it should start. Also detects circular dependencies. |
| Child After Parent | A subtask is due after its parent task |
| Missing Dates | Issue in an active or future sprint has no start date or no due date |
| Missing Estimates | Issue in an active or future sprint has no estimate. In Points Mode this checks for story points; in Time Mode this checks for original estimate. |
Below the alert categories, a collapsible Dependency Analysis section gives you the full picture of blocking relationships: summary stats, an edge list, root and leaf issues, and a conflicts-only filter.
The Epics tab gives you a bird's-eye view of your project organized by epic. It groups child issues under their parent epics and shows progress, scope changes, and status — all in a single sortable table. Works in both Sprint Mode and non-Sprint Mode.
A search box at the top filters epics by name or key. Dropdowns let you filter by status (All Epics, On Track, At Risk, Behind, To Do, Done, Overdue) and by owner. Click any column header to sort.
Each row shows one epic with its progress, scope changes, status, and owner.
Click any epic row to expand it and see all child issues with their individual status, assignee, points, and sprint assignment.
| Status | Condition |
|---|---|
| Done | 100% complete or parent issue is in a Done status |
| Overdue | Due date is in the past and not Done |
| On Track | ≥ 60% complete |
| At Risk | 30–59% complete |
| Behind | 1–29% complete |
| To Do | 0% complete or no child issues |
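The status table above implies an evaluation order. A minimal sketch, assuming Done and Overdue take precedence over the progress bands (per the table's row order); the function signature is illustrative:

```python
def epic_status(pct_complete, overdue=False, parent_done=False, has_children=True):
    """Map an epic to its status badge, checking rows in the table's order."""
    if pct_complete >= 100 or parent_done:
        return "Done"
    if overdue:
        return "Overdue"
    if not has_children or pct_complete == 0:
        return "To Do"
    if pct_complete >= 60:
        return "On Track"
    if pct_complete >= 30:
        return "At Risk"
    return "Behind"
```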
Three tabs filter the dependency list:
| Tab | What It Shows |
|---|---|
| All | Every cross-epic dependency |
| Violations | The blocker epic is not on track AND blocked issues are not yet done — delivery is at risk |
| At Risk | The blocker epic is not on track but no unfinished blockers are blocking the dependent epic yet |
Each row shows the blocker epic on the left, an arrow labeled "blocks" in the middle, and the blocked epic on the right. A status badge indicates the health of the relationship.
Each blocked epic shows how many issues are blocked and their total points or hours.
Estimation Modes
All values in the Epics tab automatically adapt to your estimation mode. In Points Mode you see story points; in Time Mode you see hours. The same applies to capacity, forecasts, and scope calculations.
The Estimation Mode setting is the most important choice you make. It determines how work is measured, where capacity comes from, and what units every tab uses. Choose it once and the rest follows automatically.
| Aspect | Points Mode | Time Mode |
|---|---|---|
| What it measures | Story points | Remaining estimate (hours or days) |
| Where capacity comes from | Capacity Limit from Settings (the Default source), or the team's Points Per Sprint when the Team source is selected | Capacity Limit from Settings (the Default source), or team hours/availability when the Team source is selected |
| Jira field used | Story Points | Remaining Estimate |
| Display units | pts | hrs or days (based on Time Unit setting) |
| Time Unit setting | Hidden (not needed) | Visible — choose Hours or Days |
| Progress indicator | Points (done / total) | Estimate (remaining / original) or Work Ratio |
All health cards, workload bars, and scope values use story points in Points Mode, or hours/days in Time Mode. The unit label adjusts automatically (pts, hrs, or days).
The scope and burndown chart shows story points in Points Mode, or hours/days in Time Mode.
The "Missing Estimates" alert checks for story points in Points Mode, or original estimate in Time Mode.
The What-If sliders, cascade chart, and Monte Carlo simulation all work in story points (Points Mode) or hours/days (Time Mode).
Switching Estimation Mode automatically adjusts related settings:
You only need to choose your Estimation Mode — the Jira field and Progress Indicator follow automatically.
Jira stores time in seconds internally. Project Commander uses an 8-hour workday for conversion: seconds ÷ 3,600 gives hours, and seconds ÷ 28,800 gives workdays.
When Time Unit is set to Days, all time values throughout the app are shown in workdays instead of hours.
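The conversion boils down to two constants (values from the reference table at the end of this manual); the function name here is illustrative:

```python
WORKDAY_HOURS = 8
SECONDS_PER_HOUR = 3_600
SECONDS_PER_WORKDAY = WORKDAY_HOURS * SECONDS_PER_HOUR  # 28,800

def seconds_to_display(seconds, time_unit="hours"):
    """Convert Jira's internal seconds to display hours or workdays."""
    divisor = SECONDS_PER_WORKDAY if time_unit == "days" else SECONDS_PER_HOUR
    return seconds / divisor
```

So a remaining estimate of 28,800 seconds displays as 8 hrs, or 1 day when Time Unit is Days.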
Points Mode vs Time Mode — the key difference
Estimation Mode controls which Jira field measures work (story points vs remaining estimate). In both modes, each sprint's Default/Team/Manual toggle determines where capacity comes from — see How Capacity Works. The difference is what Team calculates in Per User mode: in Points Mode it sums weekly points values; in Time Mode it sums weekly hours values for the sprint's date range.
Project Commander detects blocking relationships from Jira's "blocks" and "is blocked by" issue links. Dependencies affect multiple features across the app.
Issues with dependency conflicts show a warning icon (⚠) next to their key in the issue table. The icon has a tooltip describing the conflict (e.g., "Blocked by PROJ-45 which ends after this issue starts").
The Dependency Conflicts alert category lists all issues where a blocker finishes after the blocked issue should start. The Dependency Analysis section provides the full dependency graph with summary stats, edge list, root/leaf issues, and a conflicts-only filter.
Click the Dependencies button in the action header to open a modal with three tabs: All, Violations, and At Risk.
Each dependency shows the blocker and blocked issue with their sprint name, sprint state badge (active/future), and completion status. Violation rows are marked with a warning icon.
Auto-Level always respects dependencies. A blocking issue is placed in an earlier (or same) sprint as the issues it blocks. If circular dependencies exist, they are detected and reported, and the affected issues are still placed using the chosen strategy.
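Circular-dependency detection of the kind described above is typically a depth-first search for back edges. A minimal sketch, assuming a simple adjacency-dict shape for the "blocks" links; not the app's actual code:

```python
def has_cycle(blocks):
    """Return True if any circular 'blocks' chain exists.

    blocks maps an issue key to the keys it blocks,
    e.g. {"PROJ-1": ["PROJ-2"], "PROJ-2": ["PROJ-1"]}.
    """
    state = {}  # issue -> "visiting" (on the current DFS path) or "done"
    def dfs(node):
        if state.get(node) == "visiting":
            return True                    # back edge: circular dependency
        if state.get(node) == "done":
            return False
        state[node] = "visiting"
        if any(dfs(nxt) for nxt in blocks.get(node, [])):
            return True
        state[node] = "done"
        return False
    return any(dfs(key) for key in list(blocks))
```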
Click the Settings gear icon (⚙) in the tab bar to open the configuration panel. Settings are saved per Jira site and shared across all users. This section provides a comprehensive reference for every setting and how it affects the app.
| Setting | Type | Default | Visible When |
|---|---|---|---|
| JQL Filter | Text area | Empty | Always |
| Epics | Checkbox | On | Always |
| Sprint Mode | Checkbox | On | Always |
| Board | Search/select with auto-detect | Empty | Sprint Mode ON |
| Include Backlog | Checkbox | On | Sprint Mode ON + Board selected |
| Capacity Mode | Radio: Per Sprint / Per User | Per Sprint | Sprint Mode ON |
| Progress Indicator | Dropdown | Points (Points Mode) / Estimate (Time Mode) | Sprint Mode ON |
| Capacity Limit | Number (min: 1) | 40 (sprint) / 20 (user) | Sprint Mode ON |
| Sprint Length | Dropdown: 1/2/3/4 weeks | 2 weeks | Sprint Mode ON |
| Velocity Lookback | Dropdown: 3 / 5 / 8 / 10 sprints | Last 5 sprints | Sprint Mode ON (also settable from Dashboard and What-If) |
| Estimation Mode | Radio: Points / Time | Points | Always |
| Time Unit | Dropdown: Hours / Days | Hours | Estimation Mode = Time |
| Display Columns | Multi-select with search | Key, Summary, Assignee, Story Points | Always |
| AI Features | Provider dropdown + API key + Model selector | Empty | Always |
| Read-only Mode | Checkbox | Off | Always (standalone app) |
| Enable Work Leveling | Checkbox | On | CSV mode only |
Estimation Mode is the most important setting. It controls which Jira field is read, what units are shown everywhere, and how capacity is sourced. Every other setting cascades from this choice.
Only appears when Estimation Mode is Time. A pure display conversion using an 8-hour workday. "Scope +96 hrs" becomes "Scope +12 days". The underlying data never changes — only the display.
Two choices: Per Sprint or Per User. Only shown when Sprint Mode is on.
Switching from Per Sprint to Per User changes the Auto-Level algorithm completely. Per Sprint mode distributes issues to fit sprint totals. Per User mode balances per-person workload, moving specific people's lowest-priority issues to sprints where that person has room.
The Settings default capacity when no other source is active. The label changes based on your Capacity Mode: a per-sprint team total in Per Sprint mode, or a per-person limit in Per User mode.
This is the fallback — you can override it per sprint or per person directly in the Sprints tab. See Capacity Precedence Hierarchy below for the full override chain.
A checkbox (default: on). When turned off, the sprint-specific features it gates are hidden: the Sprints tab, Auto-Level, velocity tracking, scope creep detection, and sprint risk badges.
Select the Scrum board that contains your sprints. The app lists all Scrum boards on your Jira site — type to search, then click to select. If the app is opened from within a Jira board, the board is auto-detected. Required for the Sprints tab. Changing the board reloads all sprint data, velocity history, and team configuration (team settings are stored per board).
Standard Jira JQL syntax. This query fetches the issues used across Dashboard, Scope, Alerts, Team & Capacity, and Epics tabs. The Sprints tab shows all issues from the board regardless of this filter.
When enabled, unscheduled backlog issues from the board appear below the sprint list and are included in demand calculations across all views. The default is off — you see only committed/planned work. Turn it on when you want the full picture of everything that needs to get done.
Controls how many past periods are included in velocity calculations and scope/burndown charts. This is a global setting that can be changed from four places: Settings, Dashboard, Sprints tab (Velocity section), and What-If. Options: 3, 5, 8, or 10 sprints; default 5. Fewer sprints means a more volatile average, since a single good or bad sprint carries more weight.
Duration of new sprints created by Auto-Level: 1, 2, 3, or 4 weeks. Also affects velocity normalization — velocity is converted to "per week" by dividing by sprint length. If you change sprint length, consider adjusting your capacity limit proportionally.
Choose which columns appear in sprint issue tables. The default set is Key, Summary, Assignee, and Story Points. There are 20 standard columns available. You can also search for Jira custom fields by typing at least 2 characters in the search box. This only affects the issue table display — no impact on calculations.
Configure an AI provider (Anthropic Claude, OpenAI ChatGPT, or Google Gemini) with an API key to enable AI-powered features: the AI Insights section on the Dashboard, and AI risk analysis on the What-If tab. You can optionally select a specific model from the provider's lineup. Anthropic API keys start with sk-ant-; OpenAI keys start with sk-.
When enabled, your Jira data is never modified. Drag-drop, sprint actions, and date syncing are disabled. You can still run Auto-Level preview (dry run) to see what it would do, but the "Accept" button is blocked. All viewing and analytics features still work normally.
Only visible in CSV mode (when data is loaded from a CSV file rather than Jira). When enabled, a Level Work banner appears at the top of the Scope and Team & Capacity tabs offering to redistribute tasks so no week exceeds capacity. Leveling shifts issue due dates to smooth out demand peaks. Disable this setting to hide the leveling banner.
Controls the small progress display in each sprint card header. Available options depend on Estimation Mode: Points (done / total) in Points Mode; Estimate (remaining / original) or Work Ratio in Time Mode.
Switching Estimation Mode automatically adjusts related settings to stay consistent:
| When You Switch To | What Changes Automatically |
|---|---|
| Points | Uses Story Points field, Progress Indicator → Points |
| Time | Uses Remaining Estimate field, Progress Indicator → Estimate, Time Unit selector appears |
Capacity has multiple layers. The app resolves the effective capacity for each sprint using a priority order where the first match wins.
How the major settings depend on and affect each other:
| Setting | Determines / Gates / Feeds |
|---|---|
| Estimation Mode | Determines which Jira field is read (Story Points vs Remaining Estimate), unit labels everywhere, gates Time Unit dropdown and Progress Indicator options, feeds sprint headers, all Dashboard cards, Team & Capacity chart, What-If simulation inputs, Auto-Level issue sizing, Velocity units, Alerts field checks. Does NOT affect dependencies or risk badges. |
| Capacity Mode | Gates Capacity Limit field label, determines what "over capacity" means (team total vs individual person), determines which Auto-Level algorithm runs, determines sprint header layout (single bar vs per-person), gates Effective velocity option visibility. |
| Sprint Mode | Gates Sprints tab, Auto-Level, Velocity tracking, Scope creep detection, Sprint risk badges, Capacity Mode radio, Plan What-If checkbox. |
| Team Config | Feeds calculated capacity per sprint (when limit is "Team capacity"), Team & Capacity chart, Delivery Forecast card, What-If simulation baseline, Auto-Level bin sizes. |
| Velocity Lookback | Feeds velocity average, Delivery Forecast projected date, Effective velocity option in Auto-Level, Monte Carlo variance. |
| Sprint Length | Feeds velocity normalization (pts/week), new sprint date ranges from Auto-Level, Delivery Forecast weekly throughput. |
| Include Backlog | Feeds which issues are in scope for Dashboard demand, Demand vs Capacity analysis, Forecast remaining work, and Alerts issue set. |
| JQL Filter | Feeds Scope, Alerts, Team & Capacity, Epics tabs. NOT the Sprints tab or Auto-Level. |
| Display-only settings | Display Columns (issue table layout), Epics toggle (show/hide tab), Read-only Mode (disable writes), AI Provider/Key (enable AI chat). No calculation impact. |
The Delivery Forecast, Scope Growth, What-If sliders, Demand vs Capacity chart, and Monte Carlo simulation all share the same underlying engine: a per-sprint, per-user stepped simulation that walks through future sprints consuming work against individual capacity.
Every forecasting feature in Project Commander runs the same simulation, and every view is driven by the same model.
If the stat card says May 30, the chart's completion marker points to the same sprint, and the Monte Carlo P50 clusters around the same date. There is one source of truth for the forecast. Changing any input — team capacity, issue estimates, scope growth rate, or What-If slider positions — flows through the same engine and updates all views consistently.
Why this matters
Because everything uses the same engine, you never get contradictory forecasts from different parts of the app. The Dashboard, What-If, and Monte Carlo views are different lenses on the same underlying simulation. When you improve your plan (e.g., by running Auto-Level to rebalance), the improvement shows up everywhere simultaneously.
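The core loop of such a stepped engine can be sketched in a few lines. This is a deliberately simplified toy, with hypothetical names; the real engine also models overflow ordering, scope growth, and per-sprint capacity variation:

```python
def stepped_forecast(remaining_by_user, sprint_capacity_by_user, max_sprints=50):
    """Walk future sprints, consuming each user's remaining work against
    that user's per-sprint capacity. Returns the 1-based index of the
    sprint in which all work is absorbed, or None if it never finishes
    within max_sprints (an arbitrary safety cap for this sketch)."""
    remaining = dict(remaining_by_user)
    for sprint in range(1, max_sprints + 1):
        for user, cap in sprint_capacity_by_user.items():
            remaining[user] = max(0.0, remaining.get(user, 0.0) - cap)
        if sum(remaining.values()) <= 0:
            return sprint
    return None
```

With Alice holding 10 points of work and Bob 4, each at 4 points per sprint, Bob finishes in sprint 1 but the forecast is sprint 3, when Alice's work is absorbed.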
| Indicator | Meaning |
|---|---|
| Default button highlighted | Sprint uses capacity limit from Settings |
| Team button highlighted | Per User mode: capacity calculated from team config. Per Sprint mode: same as Default. |
| Manual button highlighted | Sprint capacity is a user-entered number |
| Team button greyed out | No team members configured on the Team & Capacity tab |
| Red avatar ring | User is over their capacity limit |
| Green avatar ring | User is within capacity |
| Gray avatar | User is filtered out — click to restore |
| Purple move badge | Issue was moved by Auto-Level |
| Orange move badge | Issue was manually moved during an Auto-Level session |
| ⚠ icon on issue key | Dependency conflict (blocker finishes after this issue starts) |
| Lock icon (filled) | Issue or sprint is locked (excluded from Auto-Level and drag) |
| Blue sort arrow (↑/↓) | Column is sorted ascending or descending |
| Blue reorder line | Drop target indicator when reordering issues |
| Red due date | Issue is past due (due date before today) |
| T marker (purple) | Target sprint/week on cascade bar chart |
| P marker (green/red) | Projected delivery sprint/week on cascade bar chart |
| Action | Shortcut |
|---|---|
| Select multiple issues | Click checkboxes individually |
| Select a range of time off days | Shift + click |
| Toggle individual time off days | Ctrl + click |
| Close column search dropdown | Esc |
| Add column from search | Enter |
Project Commander stores all its data securely within your Jira Cloud instance using Atlassian Forge storage.
Issue data from your JQL filter is fetched from Jira on each page load and shared across all tabs. No issue data is stored permanently by the app.
Jira stores time values in seconds. Project Commander converts using an 8-hour workday: seconds ÷ 3,600 gives hours, and seconds ÷ 28,800 gives workdays.
Each sprint's Default/Team/Manual toggle determines its capacity source:
| Toggle State | Source |
|---|---|
| Default selected | The Capacity Limit from Settings |
| Team selected | Per User mode: calculated from team configuration (Points Mode: points per sprint; Time Mode: available hours minus holidays and time off). Per Sprint mode: same as Default. |
| Manual selected | The number the user entered |
During Auto-Level, a "Use velocity as capacity" checkbox can override these with efficiency-adjusted limits based on each member's historical completion rate against their real available capacity.
Risk and Team & Capacity tabs always use team-based calculations when available, regardless of individual sprint toggles. They fall back to the Capacity Limit from Settings when no team is configured.
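The toggle resolution in the table above amounts to a small precedence check. A minimal sketch with hypothetical names, assuming Team falls back to the Settings limit both in Per Sprint mode and when no team is configured:

```python
def resolve_sprint_capacity(toggle, settings_limit, team_capacity, manual_value,
                            per_user_mode):
    """Pick a sprint's effective capacity from its Default/Team/Manual toggle."""
    if toggle == "manual" and manual_value is not None:
        return manual_value                # user-entered number wins
    if toggle == "team" and per_user_mode and team_capacity is not None:
        return team_capacity               # calculated from team configuration
    return settings_limit                  # Default, or Team in Per Sprint mode
```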
| Card | Formula |
|---|---|
| Schedule Adherence | Compares planned-by-now vs completed-by-now. Planned = sum(estimates) for issues due ≤ today. Completed = sum(estimates) for Done issues due ≤ today. Ahead if completed > 105% of planned, Behind if < 95%, On Pace otherwise. |
| Finish on Time? | Remaining = sum(remaining estimate) for non-Done issues. Weekly capacity = sum(hours/week × utilization%) per team member. Total capacity = weekly capacity × weeks until target. Ratio = capacity ÷ remaining. Yes if ratio ≥ 1.0, At Risk if ≥ 0.9, No if < 0.9. |
| Forecast | Velocity-based: weeks needed = adjusted remaining work ÷ weekly throughput. Projected date = today + (weeks × 7). Remaining work is multiplied by estimate accuracy ratio when estimates are consistently low. Sprint plan-based: end date of last open sprint. |
| Progress | sum(estimates for Done issues) ÷ sum(all estimates) × 100. Falls back to count(Done) ÷ count(All) if no estimates exist. |
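Two of these card formulas are simple enough to sketch directly. The thresholds come from the table; function names, signatures, and edge-case handling are illustrative:

```python
def schedule_adherence(planned, completed):
    """Ahead if completed > 105% of planned, Behind if < 95%, else On Pace."""
    if planned <= 0:
        return "On Pace"                   # assumed edge case
    ratio = completed / planned
    if ratio > 1.05:
        return "Ahead"
    if ratio < 0.95:
        return "Behind"
    return "On Pace"

def finish_on_time(remaining_hours, weekly_capacity, weeks_to_target):
    """Ratio = capacity ÷ remaining: Yes if >= 1.0, At Risk if >= 0.9, else No."""
    if remaining_hours <= 0:
        return "Yes"
    ratio = (weekly_capacity * weeks_to_target) / remaining_hours
    if ratio >= 1.0:
        return "Yes"
    if ratio >= 0.9:
        return "At Risk"
    return "No"
```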
| Status | Condition |
|---|---|
| Sufficient | Remaining capacity ≥ remaining work (or all work already done) |
| Tight | Remaining capacity ≥ 90% of remaining work |
| At Risk | Remaining capacity < 90% of remaining work |
Per-sprint badges: DELIVERABLE if capacity ≥ demand, TIGHT if capacity ≥ 90% of demand, OVERCOMMITTED otherwise.
For active sprints: original commitment = sum(points) of issues at sprint start (from Jira changelog). Current scope = sum(points) of current issues. % change = (current − original) ÷ original × 100. The expandable panel lists each issue added or removed mid-sprint, with dates from the changelog.
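The scope-change formula reduces to one line; the zero-original edge case here is an assumption of this sketch:

```python
def scope_change_pct(original_points, current_points):
    """% change = (current − original) ÷ original × 100, per the changelog."""
    if original_points <= 0:
        return 0.0                         # assumed: no baseline, no change
    return (current_points - original_points) / original_points * 100
```

A sprint that started at 40 points and now holds 50 shows +25% scope growth.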
| Factor | Green (OK) | Amber (Caution) | Red (At Risk) |
|---|---|---|---|
| Scope Growth | ≤ 10% | 11% – 25% | > 25% |
| Team Capacity | 60% – 90% | < 60% or 91% – 110% | > 110% |
| Team Balance | All 50% – 100% | Any >100% or <50% | Any >115% with another <60% |
| Delivery Rate | ≥ 85% | 65% – 84% | < 65% |
| Estimate Accuracy | 80% – 110% | < 80% or 111% – 130% | > 130% |
| Dependency Conflicts | 0 | 1 – 2 | ≥ 3 |
| Constant | Value | Used In |
|---|---|---|
| Workday | 8 hours | All time conversions (hours ↔ days) |
| Time (seconds → hours) | ÷ 3,600 | Jira remaining estimate conversion |
| Time (seconds → days) | ÷ 28,800 | Jira remaining estimate conversion (8h day) |
| Max new sprints (Auto-Level) | 10 | Auto-Level sprint creation limit |
| Monte Carlo iterations | 2,000 (Sprint) / 5,000 (Project) / 10,000 (Compare All) | What-If simulation |
| Sprint length options | 1, 2, 3, or 4 weeks | New sprint creation, Auto-Level |