Project Commander

User Manual

1. Installation

Project Commander is a Jira Cloud app that can be accessed as a full-page app or as a dashboard gadget.

Installing from Atlassian Marketplace

  1. Go to the Atlassian Marketplace
  2. Search for "Project Commander"
  3. Click Get it now
  4. Select your Jira Cloud site
  5. Confirm the installation

Accessing the Full-Page App

  1. In Jira, click Apps in the top navigation menu
  2. Select Project Commander from the dropdown
  3. The full-page app opens with all features available

Adding the Dashboard Gadget

  1. Navigate to a Jira dashboard
  2. Click Add gadget
  3. Search for "Project Commander"
  4. Click Add gadget
  5. The gadget will appear on your dashboard with the same features as the full-page app

2. Standalone Web App

Don't want to install anything? The standalone web app lets you try Project Commander with your own Jira data directly in your browser — no Atlassian Marketplace installation required.

Step 1: Enter Your Beta Access Code

Visit projectcommander.app/app and enter your beta access code. Or click Try Interactive Demo to explore with sample project data — no account needed.

Beta access code entry screen with Try Interactive Demo option

Step 2: Register

After entering a valid code, provide your name and email. Click Start Using Project Commander.

Registration form with name and email fields

Step 3: Connect to Jira

The Connect page has two tabs: Connect Jira and Upload CSV. To connect to Jira, enter your site URL, email, and an API token. Check the Terms of Use checkbox and click Connect to Jira. Or click Try Interactive Demo at the bottom to explore with sample data.

Connect to Jira form with Connect Jira and Upload CSV tabs

Read-Only Mode

The standalone app starts in read-only mode for safety — it reads your Jira data to generate analysis but does not create, modify, or delete anything in Jira. When you're ready, enable write mode in Settings to unlock full functionality including drag-drop issue moves, Auto-Level, sprint creation, and more.

Security & Credentials

Disconnecting

Click the Disconnect button in the top banner to end your session and clear your credentials from the browser.

3. CSV Import (Standalone Only)

The standalone web app can import project data from any CSV file — no Jira required. On the Connect page, select the Upload CSV tab.

Step 1: Upload Your File

Drag and drop a CSV file or click to browse. The file must have a header row. Common formats from Jira, Azure DevOps, Asana, Monday.com, and spreadsheets are supported.

Step 2: Column Mapping

Project Commander auto-detects columns like Summary, Status, Story Points, Start Date, Due Date, Assignee, Priority, and Sprint. Review the mapping and adjust any fields that weren't detected.

A Summary column is required. All other columns are optional. If no Sprint column is present, the app switches to Tasks mode — a flat list with date-based planning instead of sprint lanes.

Step 3: Status Mapping

Map your status values (e.g., "Open", "In Review", "Closed") to three categories: To Do, In Progress, and Done. The app auto-guesses based on common status names.

Step 4: Confirm & Import

Review the summary and click Import. Your data is stored locally in the browser (IndexedDB) — nothing is sent to a server.

Tasks Mode (No Sprints)

When your CSV has no Sprint column, the Sprints tab becomes the Tasks tab. Issues appear as a flat list sorted by date. All other tabs (Dashboard, What-If, Team & Capacity, Scope, Alerts) work normally using weekly time buckets instead of sprints.

Work Leveling

In CSV mode, the Level Work banner appears above all tabs. Click it to automatically redistribute task dates so no single week exceeds your team's capacity. The algorithm respects priorities and dependencies. After leveling, you can Accept the changes or Cancel to revert. An Undo option is available after accepting.
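The core idea behind leveling can be illustrated with a simplified sketch. This is only an illustration of overflow rolling into later weeks; the app's actual algorithm also weighs priorities and dependencies:

```python
def level_weeks(demand_by_week, capacity_per_week):
    """Roll overflow into later weeks so no week exceeds capacity.
    Simplified: ignores the priority and dependency handling the app applies."""
    leveled, carry = [], 0
    for demand in demand_by_week:
        total = demand + carry            # this week's demand plus prior overflow
        done = min(total, capacity_per_week)
        leveled.append(done)
        carry = total - done              # excess rolls forward
    while carry > 0:                      # trailing weeks absorb the remainder
        done = min(carry, capacity_per_week)
        leveled.append(done)
        carry -= done
    return leveled
```

With 12 units of demand stacked into one week and a 5-unit weekly capacity, the work spreads across three weeks instead of overloading the first.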

Work Leveling can be switched off with the Enable Work Leveling toggle under Settings → Advanced.

4. Getting Started

When you first open Project Commander, click the Settings gear icon (⚙) in the tab bar to configure the app. Three settings determine what the app shows you.

Step 1: Set Your JQL Filter

The JQL filter defines which Jira issues appear across all tabs. Enter any valid JQL query, for example project = MYPROJ AND statusCategory != Done (substitute your own project key and conditions).

All tabs share the same issue data from this filter.

Step 2: Enable Sprint Mode (Optional)

If your team uses Jira sprints, enable Sprint Mode and select your Board. The app searches your Jira Scrum boards — type to filter, then click to select. If the app is opened from within a Jira board, the board is auto-detected.

Sprint Mode unlocks the Sprints tab (sprint planning, drag-drop, auto-level) and the What-If tab. All other tabs work with or without Sprint Mode.

Step 3: Choose an Estimation Mode

Choose how your team measures work. This sets the estimation unit (for example, story points or hours) used for demand, capacity, and forecast figures throughout the app.

Step 4: Save

Click Save Configuration. The app will load your data and display the appropriate tabs.

Quick Start

At minimum you need a JQL filter. Sprint Mode and a Board are only required if you want the Sprints tab. You can use the Team & Capacity, Scope, Alerts, and What-If (Project view) tabs with just a JQL filter.

Tab Overview

Tab | Purpose | Requires
Dashboard | Single-screen project health overview with key metrics and navigation | Always visible
Sprints | Sprint planning with drag-drop, auto-level, and capacity tracking | Sprint Mode ON + Board selected
What-If | What-if analysis and Monte Carlo simulation — Sprint view (by sprint) and Project view (by week) | Always visible (Sprint view requires Sprint Mode ON)
Team & Capacity | Team capacity, time off, holidays, and demand vs capacity chart | Always visible
Scope | Scope and burndown timeline chart with delivery forecast | Always visible
Alerts | Issue problems and dependency analysis | Always visible
Epics | Epic progress, forecasts, scope growth, and cross-epic dependencies | Always visible

5. Dashboard Tab

The Dashboard is your project's home screen — a single view that answers the fundamental question: "Are we on track?" It surfaces the current project status, the delivery forecast, plan quality warnings, diagnostics, and suggested actions. It is always visible as the first tab, regardless of Sprint Mode or Board settings.

Dashboard: Are we on track, projected vs target date, AI Insights
Dashboard: Diagnostics table with project health factors

Stat Cards

The top of the Dashboard shows four stat cards that summarize your project at a glance. Each card is clickable — it navigates to the relevant detail tab.

Dashboard stat cards: Current Status, Delivery Forecast, Target Date, Progress

Current Status

Of the work due by today, how much is done? This card compares completed work to the work that was planned to be done by now, based on issue due dates or sprint end dates.

Shows completed and planned-by-now totals in your configured units. A gear icon opens a checkbox to include all Done work, not just work that was planned by today.
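In rough terms, the card's math reduces to the following sketch (the issue fields estimate, done, and due are illustrative stand-ins for the real Jira data):

```python
from datetime import date

def current_status(issues, today, include_all_done=False):
    """Completed vs planned-by-now, per the Current Status card.
    Each issue is a dict with illustrative keys: estimate, done, due."""
    planned = sum(i["estimate"] for i in issues
                  if i["due"] is not None and i["due"] <= today)
    if include_all_done:
        # the gear-icon option: count every Done issue, not just planned-by-now
        completed = sum(i["estimate"] for i in issues if i["done"])
    else:
        completed = sum(i["estimate"] for i in issues
                        if i["done"] and i["due"] is not None and i["due"] <= today)
    pct = 100 * completed / planned if planned else 0
    return completed, planned, pct
```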

Delivery Forecast

Shows the projected completion date using the per-sprint, per-user simulation engine. Click the gear icon to open per-card settings with two controls: the Projection Method and the Scope Growth Method (see the Delivery Forecast and Scope Growth sections).

Below the projected date, the card shows Remaining (total estimated work not yet completed) and Capacity to target (total team capacity available from now until the target date). When Remaining exceeds Capacity to target, the forecast date will be past the target date. When one team member is driving the delay, the card shows a bottleneck indicator.

Target Date

The deadline you are measuring against. Two choices:

The target date is shared across the Dashboard, What-If, and What-If (Project view) tabs. Setting it in one place updates all three.

Progress

Overall percentage of work completed across all issues in scope. Calculated as sum(estimates for Done issues) ÷ sum(all estimates) × 100. Falls back to issue count if no estimates exist.
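A minimal sketch of that calculation, including the issue-count fallback (field names are illustrative):

```python
def progress_pct(issues):
    """Done estimates / all estimates x 100, falling back to issue counts
    when nothing in scope has an estimate."""
    total = sum(i.get("estimate") or 0 for i in issues)
    if total:
        done = sum(i.get("estimate") or 0 for i in issues if i["done"])
        return 100 * done / total
    if not issues:
        return 0
    # fallback: fraction of issues marked done
    return 100 * sum(1 for i in issues if i["done"]) / len(issues)
```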

Plan Quality Warnings

A warnings banner appears above the stat cards when the sprint plan has issues that undermine forecast reliability. The banner surfaces problems you should address to get a trustworthy delivery forecast. When all warnings are resolved, the banner disappears and the forecast is based on a realistic, leveled plan.

Plan quality warnings banner showing overloaded resources and dependency conflicts

Warning Types

Overloaded resources: One or more team members have more work assigned than their capacity allows across one or more sprints. The forecast reflects this overload honestly — it will show a later date because work overflows sprint to sprint — but the plan itself is unrealistic until rebalanced.

"3 team members overloaded across 2 sprints — Auto-Level to rebalance"

Dependency conflicts: Issues are ordered across sprints in a way that violates their dependencies — for example, a blocker in Sprint 6 that blocks work scheduled in Sprint 5. The forecast assumes work proceeds as planned, but dependency violations may cause delays that are not captured in the simulation.

"2 dependency conflicts in Sprints 5-6 — review in Sprints tab"

External dates at risk: Issues with hard, externally-driven deadlines (locked issues with manually-set due dates) are placed in sprints that end after their deadline. These represent commitments to customers, regulators, or stakeholders that the current plan will miss.

"1 externally-constrained issue at risk of missing its deadline"

Recommended Workflow

The warnings guide a natural planning workflow:

  1. Auto-Level — balance sprints and resources so no one is overcommitted.
  2. Accept — apply the leveled plan.
  3. Lock issues with external constraints so Auto-Level won't move them.
  4. Sync Due Dates — set issue due dates from sprint placement. Locked issues keep their externally-driven dates.
  5. Resolve dependency conflicts in the Sprints tab.
  6. Review the forecast — now based on a realistic, leveled plan with dependencies honored and external dates respected.

Warnings do not block the forecast

You can always see the projected date. But when warnings are present, the forecast reflects the problems in the plan: overloaded sprints push the date out, and the bottleneck indicator shows which team member is driving the delay. The warnings banner appears on the Dashboard only. The What-If tab does not show warnings because its purpose is to explore scenarios — including broken ones.

Diagnostics Table

A single expandable table showing all project health factors. Click any row with a caret to expand for a detailed breakdown.

Diagnostics table with Scope, Team Capacity, Team Balance, Delivery Rate factors
Factor | What It Shows | Calculation
Scope | Project-wide scope growth percentage since start | (current scope − original scope) / original scope × 100. Green if ≤10%; amber if 11–25%; red if >25%.
Team Capacity | Team utilization percentage | remaining demand / total capacity × 100. Green if 60–90%; amber if <60% or 91–110%; red if >110%.
Delivery Rate | Percentage of capacity actually delivered, with trend | average(completed / capacity) across recent sprints, with trend direction (improving, stable, declining). Green if ≥85%; amber if 65–84%; red if <65%.
Estimate Accuracy | Time spent vs original estimate — are estimates reliable? | total time spent / total original estimate × 100. Green if 80–110%; amber if <80% or 111–130%; red if >130%.
Team Balance | Workload distribution across team members | Compares each member's load % (demand / capacity × 100). Green if all members 50–100%; amber if any member >100% or <50%; red if any member >115% while another is <60%.
Dependency Conflicts | Cross-sprint dependency violations | Counts issues blocked by something in a later sprint. Green if 0; amber if 1–2; red if ≥3.
Alerts | Error and warning counts from the Alerts tab | Errors: done-with-remaining, dependency conflicts, circular dependencies. Warnings: overdue, child-after-parent, missing dates, missing estimates. Green if 0 alerts; amber if warnings only; red if errors present.
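As an illustration of how one row's thresholds translate into a status, here is the Scope factor written out (a sketch based on the thresholds above, not the app's code):

```python
def scope_status(original_scope, current_scope):
    """Scope diagnostics row: growth % plus traffic-light status.
    Thresholds from the table: green <=10%, amber 11-25%, red >25%."""
    growth = (current_scope - original_scope) / original_scope * 100
    if growth <= 10:
        return growth, "green"
    if growth <= 25:
        return growth, "amber"
    return growth, "red"
```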

Each row shows a status dot (green/amber/red/black) and a plain-language explanation. Clicking a row expands it to reveal a detailed breakdown of the calculation, contributing issues, and trend data.

Weekly Digest & Export

Two buttons appear in the top-right corner of the Dashboard: Weekly Digest and Export.

Weekly Digest preview showing project status, key metrics, changes, and alerts

The Weekly Digest includes the current project status, key metrics, changes since the previous week, and active alerts.

AI Insights / Ask AI

A collapsible panel that auto-generates an AI analysis of your project status. Each bullet point is prefixed with PROJECT STATUS: or TEAM & PLAN HEALTH: and is expandable for more detail. Recommendations are listed separately.

Below the insights, an Ask AI section lets you type natural-language questions about your project. For example: "Which sprints are at risk?", "Who is overloaded?", or "What should I focus on this week?" The AI receives your project context — sprints, issues, capacity, velocity, and demand vs capacity data — and returns a focused answer.

This feature requires an AI API key configured in Settings (see Settings Reference).

No Data State

If no JQL filter is configured, the Dashboard shows a "No Data Available" message prompting you to open Settings and configure a JQL filter.

6. Delivery Forecast

The Delivery Forecast predicts when your project will finish based on the team's throughput and the remaining work. It appears as a stat card on both the Dashboard and What-If tabs, and drives the projected completion date throughout the app.

Dashboard stat cards with expanded settings: Delivery Forecast showing Projection Method and Scope Growth Method dropdowns

Click the gear icon (⚙) on any stat card to expand its settings. The Delivery Forecast card reveals the Projection Method dropdown and Scope Growth Method controls.

How It Works

The forecast walks through each future sprint (or week, if Sprint Mode is off) and simulates work being completed using a stepped per-sprint, per-user simulation:

  1. For each future sprint, the system calculates each team member's available capacity — how much work they can realistically complete in that sprint, accounting for their configured hours, utilization rate, holidays, and PTO.
  2. Each team member's assigned issues are consumed up to their capacity for that sprint. Work they cannot finish overflows to the next sprint.
  3. Unassigned issues are consumed by whatever spare capacity remains after each member's assigned work is handled.
  4. The forecast date is the end of the sprint where all work — assigned, unassigned, and overflow — is fully absorbed.

If one team member is overloaded while others have spare capacity, the forecast reflects this honestly. The project finishes when the last person clears their queue, not when the team average says so.
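The steps above can be sketched in a few lines. This is a simplified model (member names, dict keys, and the flat capacity lists are illustrative), but it shows why one overloaded member can push the date out:

```python
def forecast_sprint(members, unassigned, sprints):
    """Stepped per-sprint, per-user simulation (simplified sketch).
    members: {name: {"remaining": assigned work, "capacity": per-sprint list}}
    Returns the sprint label in which all work is absorbed, or None."""
    for idx, sprint in enumerate(sprints):
        spare = 0
        for m in members.values():
            cap = m["capacity"][idx]
            done = min(m["remaining"], cap)   # assigned work is consumed first
            m["remaining"] -= done            # the rest overflows to the next sprint
            spare += cap - done               # leftover capacity pools up
        unassigned = max(0, unassigned - spare)  # spare absorbs unassigned work
        if unassigned == 0 and all(m["remaining"] == 0 for m in members.values()):
            return sprint
    return None                               # work outlasts the horizon
```

If Alice has 25 points queued but only 10 points of capacity per sprint, the project ends when she clears her queue, even while Bob idles.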

Throughput Methods

The Delivery Projection Method dropdown controls how each team member's per-sprint capacity is calculated. All four methods use the same stepped simulation — only the source of the capacity number differs.

Sprint Capacity

Uses the capacity set for each sprint. Each sprint can have its own capacity override — a custom number, team-calculated capacity, or effective capacity — configured in the Sprints tab. Per-user capacity can be set individually for each team member within each sprint using the inline capacity editor in the per-user table. Sprints without an override use the default capacity from Settings. When per-user limits are not set, the sprint limit is split proportionally based on each member's team capacity.

This represents what the PM has budgeted for each sprint.

Team Capacity

Uses the bottom-up calculation from the Team & Capacity tab — each member's hours per week, multiplied by their utilization percentage, minus holidays and PTO for that sprint. This represents what the team can actually work based on their availability.

Effective Capacity

Applies each team member's historical efficiency rate to their team capacity. If a member typically delivers 80% of their planned capacity, their effective capacity is their team capacity multiplied by 0.8. This is the most realistic method for teams with established delivery history.

Velocity

Uses each team member's actual average delivery rate from completed sprints. If Alice has averaged 12 points per sprint over the last 5 sprints, that is her projected throughput — adjusted to zero for any sprint where she has PTO. This method is purely empirical: it uses what the team has delivered, not what they are configured to deliver.

The system selects the first available method in this order: Sprint Capacity, Effective, Team, Velocity. You can override by selecting a different method from the dropdown.
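The fallback order can be expressed as a small sketch (method names abbreviated):

```python
def pick_method(available):
    """Return the first throughput method with usable data, in the
    documented precedence order; the dropdown can override this."""
    order = ("sprint_capacity", "effective", "team", "velocity")
    return next((m for m in order if m in available), None)
```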

Per-User Capacity in the Sprints Tab

Each sprint card shows a per-user table with columns for Member, Demand, Capacity, and Status. The Capacity column is editable — click on a member's capacity value to set their individual capacity for that sprint. This override is saved per-user per-sprint and takes priority over the global settings default.

This allows fine-grained control: you can give Alice 20 points of capacity in Sprint 5 (she's focused on this sprint) and Bob only 10 (he's splitting time with another project), even though both have the same global default.

Remaining and Capacity to Target

Below the projected date, the forecast card shows Remaining (total estimated work not yet completed) and Capacity to target (total team capacity available from now until the target date).

When Remaining exceeds Capacity to target, the forecast date will be past the target date.

Bottleneck Detection

When the forecast is driven by one overloaded team member rather than overall team capacity, the forecast card identifies the bottleneck:

May 30, 2026 (Sprint 10) — 6 days late
Bottleneck: Alice — overloaded in Sprints 5-7

This tells you WHY the date is late and WHO to rebalance, so you can make an informed decision about whether to reassign work.

Sprint Mode Off

When Sprint Mode is off, the forecast uses weekly steps instead of sprint steps. Each week's capacity comes from team member settings (hours per week multiplied by utilization, minus any PTO or holidays that week). Issues are bucketed by their due dates into weeks. The simulation and throughput methods work identically — only the time unit changes from sprints to weeks.

7. Scope Growth

Scope growth models the rate at which new work enters the project. When enabled, it reduces the effective capacity each sprint by the amount of new work expected to arrive, extending the forecast accordingly.

How Scope Growth Is Calculated

The growth rate represents the net new work added per week: issues created minus issues removed (cancelled, rejected, or marked as won't do). Done issues are counted as additions — they were real scope when created. Only issues in removed statuses are subtracted.

The calculation uses issue creation dates and the current estimation mode.
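For instance, a sketch of the net-rate arithmetic (the removed-status names and field keys are illustrative):

```python
REMOVED_STATUSES = {"Cancelled", "Rejected", "Won't Do"}  # illustrative names

def weekly_growth_rate(issues_created_in_period, period_weeks):
    """Net new work per week: everything created counts as an addition
    (including Done), and only removed statuses are subtracted."""
    created = sum(i["estimate"] for i in issues_created_in_period)
    removed = sum(i["estimate"] for i in issues_created_in_period
                  if i["status"] in REMOVED_STATUSES)
    return (created - removed) / period_weeks
```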

Growth Models

Average (Avg over period)

On the Scope tab, this computes the historical growth rate over the selected period (Weekly, Monthly, All Time, etc.). On the Dashboard, it uses the project's full history from the earliest issue creation date to today. Both tabs use the same underlying calculation.

Manual

You specify a fixed growth rate in points or hours per week. Use this to model specific scenarios — for example, "what if we add 10 points of new work every week?"

How Scope Growth Affects the Forecast

Growth adds to the unassigned demand pool each sprint:

unassignedDemand += growthRate × sprintWeeks

This new work competes for spare team capacity after assigned work is handled. If growth exceeds the team's total throughput, the forecast shows "Never" — the team cannot finish because new work arrives faster than it is completed. The method remains selectable so you can see the impact and compare against other methods.
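The compounding effect can be sketched per sprint: when the growth term is at least as large as the capacity term, remaining work never reaches zero (a simplified whole-team view, ignoring per-user queues):

```python
def finish_sprint(remaining, capacity_per_sprint, growth_per_sprint, horizon=200):
    """Return the 1-based sprint in which work finishes, or None ("Never")."""
    for sprint in range(1, horizon + 1):
        remaining += growth_per_sprint   # new scope arrives this sprint
        remaining -= capacity_per_sprint # the team burns capacity
        if remaining <= 0:
            return sprint
    return None
```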

When the forecast shows "Never"

If scope growth exceeds team throughput, the project's remaining work grows every sprint. The forecast correctly reports that the project cannot finish under these conditions. To resolve this, either reduce scope growth (cut incoming work) or increase team capacity.

8. Sprint View

The Sprints tab is where you plan and manage your work across sprints. Each sprint appears as a collapsible card showing its issues, capacity, and team workload. To see the Sprints tab, enable Sprint Mode and select a Board in Settings.

Sprints: All sprints overview with demand, capacity, progress, and Epics tab
Sprints toolbar with Velocity, Demand by User, and Auto-Level sections
Sprints: Expanded sprint detail with issues table, sprint goal, and capacity
Sprints: Velocity tracking, work by user, auto-level toolbar

What You See

The page is organized top to bottom:

  1. Action bar — Dependencies button, refresh, and other actions
  2. Toolbar — a unified row with collapsible sections (click each label arrow to expand):
    • Velocity — historical sprint performance chart and KPI tiles (see Velocity Tracking)
    • Demand by User — colored chips per assignee showing workload across all sprints. Click a chip to filter sprints to that person.
    • Auto-Level — strategy pills for auto-leveling (see Auto-Level)
    • Collapse/Expand All — toggle all sprint cards and backlog open or closed
    All sections start collapsed by default.
  3. Create Sprint — a dedicated row for creating new sprints
  4. Sprint cards — one card per sprint, in board order. Each card has a compact header row and an expandable details panel. Below the header is the issue table.
  5. Backlog — if Include Backlog is on, a backlog section appears at the bottom with unscheduled issues.

Sprint Card Header

The header row gives you a quick summary without expanding anything:

Sprint Details Panel

Click the Sprint Details toggle below the header to expand additional information:

Expanded sprint details showing Capacity & Team section with per-user demand vs capacity

Issue Table

When a sprint is expanded, you see its issues in a table. The columns shown are the ones you selected in Display Columns (Settings). Every row has:

Due dates in the past are highlighted in red. Assignees show a colored avatar chip.

Column Sorting

Click any column header to sort the issues in that sprint. Click again to reverse the order, and a third time to clear the sort. A small arrow (↑/↓) shows which column is sorted and in which direction.

Sorting is per-sprint (each sprint sorts independently) and resets when you reload. It is not saved. If you manually reorder an issue (by dragging within the sprint), the sort is cleared.

How Capacity Works

Each sprint has a capacity value that represents how much work it can hold. A dropdown in the sprint header controls where this number comes from:

Option | Per Sprint mode | Per User mode
Settings default | Uses the Capacity Limit from Settings | Each assignee gets the per-user limit from Settings; sprint total = users × limit
Team / Capacity settings | Uses calculated capacity from team config | Each person's capacity is calculated from the Team & Capacity tab and the sprint dates
Custom / Custom for sprint | You type a sprint total | You type a per-user value; sprint total = users × your value

Settings default

The sprint uses the Capacity Limit from Settings. In Per Sprint mode, this is the whole sprint's capacity. In Per User mode, each assignee gets the per-user limit, and the sprint total is the sum (displayed as "N × limit = total").

Team / Capacity settings

In Per User mode (labeled "Capacity settings"), capacity is calculated from team members configured on the Team & Capacity tab:

The display shows a formula: assignees × per-user capacity = sprint total.

In Per Sprint mode (labeled "Team"), capacity is calculated from team config. If no team members are configured, the option is greyed out.

Custom / Custom for sprint

Opens an inline number editor where you type a capacity value. In Per Sprint mode, this is the sprint total. In Per User mode, you enter a per-user value and the sprint total is calculated as users × your value (displayed as "N × value = total").
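The "N × value = total" display corresponds to a simple rule, sketched here (mode names are illustrative):

```python
def sprint_total(mode, value, assignees):
    """Per Sprint: the value is the whole sprint's capacity.
    Per User: every assignee gets the value, so total = N x value."""
    if mode == "per_sprint":
        return value
    return len(assignees) * value
```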

Capacity Display

The sprint header shows capacity remaining — the total capacity minus completed work. This lets you compare remaining demand against remaining capacity at a glance. If remaining demand exceeds capacity remaining, the demand stat turns red and an "over by X" indicator appears. When no issues are done yet, capacity remaining equals the full sprint capacity.

9. Sprint Management

You manage the sprint lifecycle directly from each sprint card's action buttons.

Creating a Sprint

Click + Create Sprint below the sprint list. The new sprint gets dates that follow on from the last sprint, using the Sprint Length you configured (2, 3, or 4 weeks).

Starting a Sprint

Click Start on a future sprint to make it active. If you already have an active sprint, a warning asks you to confirm that you want two sprints running at the same time.

Completing a Sprint

Click Complete on an active sprint. If there are unfinished issues, a dialog lets you choose where to move them — to another sprint or back to the backlog. When the sprint completes, its velocity data is automatically captured for the Velocity section.

Deleting a Sprint

Click the delete button and confirm. The sprint is removed and its issues move to the backlog.

Editing the Goal

Open the Sprint Details panel, then click the goal text. Type your changes and press Enter or click away to save.

Syncing Issue Dates

You can align issue dates with sprint boundaries:

This is useful for features like the Team & Capacity chart and What-If (Project view) that rely on issue dates.

10. Drag and Drop

Moving Issues Between Sprints

Drag any issue row from one sprint and drop it onto another sprint card. The issue is moved in Jira immediately. You can also drag issues to the backlog at the bottom.

Moving Multiple Issues

Check the boxes next to several issues, then drag any one of them. All selected issues move together. A badge shows how many you're moving (e.g., "5 issues"). A bar above the sprint list shows your selection count with a Clear selection button.

Reordering Within a Sprint

Drag an issue up or down within the same sprint to change its position. A blue line shows where it will land. This custom order is saved and persists across sessions.

Reordering is disabled while a column sort is active. If you reorder an issue, the sort clears.

Auto-Scroll

While dragging, move your cursor near the top or bottom edge of the screen. The page scrolls automatically so you can reach sprints that aren't currently visible.

Locked Issues

Click the lock icon on an issue to prevent it from being dragged. Locked issues also stay in place during Auto-Level.

11. Auto-Level

Auto-Level is a planning tool that redistributes issues across sprints so that no sprint exceeds its capacity. It uses each sprint's capacity setting: Manual numbers if set, Team-calculated values if selected (Per User mode), or the Settings default otherwise. It respects dependencies (blockers always go in earlier sprints) and leaves locked issues and sprints alone. Everything happens as a preview first — nothing is saved to Jira until you explicitly accept.

Auto-Level toolbar with strategy pills: Priority, Size, Due Date, Balanced

How to Use It

  1. Click the Auto-Level button in the Sprints toolbar. A dropdown appears with strategy options: Priority, Size, Due Date, Balanced, Velocity, and Compare All.
  2. Click a strategy pill to start a session. The sprints rearrange in preview mode immediately. Each moved issue shows a purple badge indicating where it came from.
  3. Review the results. You can:
    • Click a different strategy pill to try another approach
    • Manually drag issues to fine-tune (these show orange badges)
    • Click Undo to reset and try again
  4. When you're happy, click Accept to save all changes to Jira. Or click Exit to discard everything and leave the auto-level session.

Strategies

Strategy | How It Decides What Goes Where | Best For | Trade-off
Priority | Puts the highest-priority issues first, filling sprints front to back | Teams that want to ensure high-priority items ship first | Early sprints may contain a mix of large and small issues (size doesn't matter, only priority)
Size | Puts the smallest issues first, filling sprints front to back | Teams that want to maximize the number of items completed early | High-priority but large items may end up in later sprints
Due Date | Puts the soonest-due issues first, filling sprints front to back | Teams working against external deadlines | An issue with a tight deadline but low priority will be placed before a high-priority issue with no deadline
Balanced | Tries to spread work evenly: places large issues first, picking the sprint where each one fits best based on remaining room, each person's existing load in that sprint, and whether the sprint end date aligns with the issue's due date | Teams that want predictable, consistent sprint loads | High-priority items may not all end up in the earliest sprints
Velocity | A toggle pill that uses historical efficiency to set sprint limits; when active, the other four strategy pills are disabled (see Using Velocity as Capacity below) | Teams with enough sprint history to have reliable velocity data | If recent velocity was unusually low (e.g., a holiday sprint), the algorithm becomes overly conservative

All strategies respect dependencies: if issue A blocks issue B, then A is always placed in an earlier (or same) sprint as B. If circular dependencies exist, they are detected and flagged so you know to resolve them.

Algorithm Details

Auto-Level uses a greedy bin-packing algorithm with dependency constraints. First, it builds a dependency graph and performs a topological sort so blockers always come before blocked issues. Each issue gets a minimum sprint index = 1 + max(blocker's sprint index), ensuring blockers are placed first. Then, in strategy order, each issue is placed into the earliest sprint (starting from its minimum) that has available capacity. If no sprint has room, a new sprint is created (up to 10). Completed work in each sprint is subtracted from capacity so remaining room is calculated accurately.

Earliest Available Sprint

Priority, Size, and Due Date strategies pack issues into the earliest sprint with available capacity. This means an issue can move backward to an earlier sprint if there is room, not just forward. This ensures sprints are filled front-to-back as efficiently as possible.
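Put together, the placement loop looks roughly like this sketch (issue fields are illustrative, and the real algorithm also handles locked issues, per-user limits, and the 10-sprint cap):

```python
def auto_level(issues, capacities):
    """Greedy bin-packing with dependency constraints (simplified).
    `issues` must arrive topologically sorted (blockers before blocked)
    and ordered by the chosen strategy. Returns {issue key: sprint index}."""
    room = list(capacities)
    default_cap = sum(capacities) / len(capacities)  # new sprints get the average
    placed = {}
    for issue in issues:
        # never earlier than the sprint holding this issue's latest blocker
        # (this sketch allows an issue in the same sprint as its blocker)
        idx = max((placed[b] for b in issue["blockers"]), default=0)
        while idx < len(room) and room[idx] < issue["size"]:
            idx += 1                      # earliest sprint with available room
        if idx == len(room):              # nowhere fits: create a new sprint
            room.append(max(default_cap, issue["size"]))
        room[idx] -= issue["size"]
        placed[issue["key"]] = idx
    return placed
```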

What Gets Moved

New Sprints

If the existing sprints don't have enough room, Auto-Level creates new ones (up to 10). New sprints get dates that follow on from the last sprint using your Sprint Length setting, and their capacity defaults to the average of your existing sprints.

Per-User Mode

When Capacity Mode is set to Per User in Settings, Auto-Level tracks each person's workload per sprint individually. If someone is overloaded in a sprint, their excess issues are moved to a sprint where they have room. This produces a different result from Per Sprint mode: an issue might stay in Sprint 1 in Per Sprint mode (team total is fine) but get moved to Sprint 2 in Per User mode (because its assignee is overloaded).

Using Velocity as Capacity

If you have velocity data (from completed sprints), the Velocity toggle pill appears in the strategy row. Click it to activate velocity-based capacity. This tells Auto-Level to calculate sprint limits based on each team member's historical efficiency against their real available capacity (from the Team & Capacity tab), rather than using the configured limit or raw team availability.

When enabled:

For best results, configure your team on the Team & Capacity tab first. If no team capacity data is available, the feature falls back to the original behavior (flat historical average velocity).

Compare All

Click Compare All to run all four strategies at once and see a side-by-side comparison. The Delivery Forecast panel appears showing:

Strategy colors: Priority (blue), Size (orange), Due Date (green), Balanced (purple), Original baseline (gray).

12. Velocity Tracking

Below the sprint cards, the Velocity section shows how your team has performed in past sprints. This data feeds into capacity calculations and the "Use velocity as capacity" option in Auto-Level.

Velocity tracking: KPI tiles and historical sprint performance

KPI Tiles

Five summary tiles across the top:

Filtering by User

A row of user chips lets you see velocity for specific team members. Click a name to filter; click "All" to show the whole team. You can select multiple people.

Velocity History

A table showing each completed sprint: name, length, capacity, completed work, per-week rate, and efficiency. The best-performing and worst-performing sprints are highlighted.

Click any row to expand it and see individual issues from that sprint — which were completed, which were left incomplete, and which were added or removed mid-sprint.

Building Your Velocity Data

Velocity data accumulates automatically each time you complete a sprint. If you're starting fresh or want to backfill history, use these buttons:

13. What-If Analysis

The What-If tab lets you explore how changes in team performance, scope, and estimates would affect your delivery timeline. It combines adjustable sliders with the Delivery Forecast and the Demand vs Capacity chart to give immediate visual feedback. A Sprint / Project toggle at the top switches between two views.

What-If: Sliders and cascade chart
| Aspect | Sprint view | Project view |
|---|---|---|
| Data source | Sprints from board | Issues with due dates (from JQL) |
| Time buckets | Sprints | Weeks (Monday boundaries) |
| Requires | Sprint Mode ON | Issues with due dates |
| Capacity | Per-sprint capacity | Weekly capacity (holiday and time-off aware) |
| Chart labels | Sprint names | Week dates (e.g., "Mar 10") |

Both views share the same sliders, Monte Carlo simulation, AI analysis, and KPI cards. The only difference is the time buckets used.

What-If Project view with weekly time buckets

Sliders

Four sliders adjust different aspects of the project simulation. Each ranges from -50% to +50%:

Velocity (Slower / Faster)

Scales each team member's throughput. At +20%, every team member delivers 20% more per sprint. At -30%, everyone delivers 30% less. Models the team working faster or slower than historical averages.

Capacity (Less / More)

Scales each team member's available capacity. Simulates adding or losing resources — for example, a team member going on extended leave (less) or a new hire ramping up (more).

Issue Estimation (Smaller / Larger)

Scales the estimated size of each issue. Moving right simulates issues being larger than estimated (common in early-stage projects). Moving left simulates estimates being conservative.

Scope (Less / More)

Scales the total remaining work. Moving right simulates scope additions beyond the current backlog. Moving left simulates scope cuts.

How Sliders Compound

Velocity and Capacity compound together on the supply side. Issue Estimation and Scope compound together on the demand side. Moving Velocity +10% and Capacity +10% produces a 21% increase in effective throughput (1.10 × 1.10 = 1.21). Similarly, Issue Estimation +20% and Scope +10% produces 32% more demand.
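The compounding is plain multiplication of the slider factors on each side. A quick check (the slider values here are just the examples from the text):

```python
def effective_change(*slider_percents):
    """Multiply slider factors on the same side (supply or demand)
    and return the combined change as a rounded percentage."""
    factor = 1.0
    for p in slider_percents:
        factor *= 1 + p / 100
    return round((factor - 1) * 100)

print(effective_change(10, 10))  # Velocity +10%, Capacity +10% -> 21
print(effective_change(20, 10))  # Estimation +20%, Scope +10% -> 32
```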

As you adjust the sliders, both the Delivery Forecast stat card and the Demand vs Capacity chart update in real time, using the same underlying simulation.

Presets (Simulation Mode Only)

In Simulation mode, each slider has per-variable preset buttons that set the uncertainty range for that variable:

Presets are per-variable — you can set Velocity to Optimistic while keeping Scope at Conservative. In What-If mode (fixed sliders, not simulation), there are no presets — you set each slider to a specific value.

KPI Cards

AI Risk Analysis

An AI chat panel lets you describe scenarios in natural language (e.g., "What if we lose a developer for 2 sprints?"). Requires an AI API key in Settings. The AI returns slider recommendations with an Apply button, impact assessment, and prioritized recommendations.

14. Demand vs Capacity Chart

The Demand vs Capacity chart shows how your team's workload compares to their capacity across each sprint (or week, if Sprint Mode is off). Each sprint appears as a pair of stacked bars — one for demand, one for capacity — so you can see at a glance where the team is overloaded and who is driving it. This chart appears on the What-If tab and updates in real time as you adjust sliders.

Demand vs Capacity chart with shaded overload area, capacity line, target date, completion forecast, and team members table

Reading the Chart

Each sprint has two stacked bars side by side:

Capacity Bar (left)

Shows each team member's available capacity for that sprint, stacked by person. Each segment's height represents that member's net capacity after accounting for their utilization rate, holidays, and PTO. The total bar height is the sprint's total team capacity.

Demand Bar (right)

Shows each team member's assigned work for that sprint, stacked by person in the same order and color. Unassigned issues appear as a grey segment at the top. The total bar height is the sprint's total demand including any overflow carried from the previous sprint.

Colors

Each team member is assigned a consistent color across both bars. When a member's demand segment is taller than their capacity segment, their portion is highlighted to flag the imbalance.

Overflow Cascade

When a sprint's total demand exceeds its total capacity, the excess work carries forward to the next sprint. A dashed line separates planned demand from overflow. Sprints receiving overflow show the carried work as a hatched segment at the base of their demand bar.
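The cascade is a simple running carry, sprint by sprint. A minimal sketch of the rule described above (not the app's code):

```python
def cascade(demand, capacity):
    """Carry excess demand forward. Returns the effective demand per
    sprint (planned work plus carried overflow) and the final leftover."""
    carry = 0.0
    effective = []
    for d, c in zip(demand, capacity):
        total = d + carry            # planned demand plus carried overflow
        effective.append(total)
        carry = max(0.0, total - c)  # excess spills into the next sprint
    return effective, carry

# Sprint 1 is 10 over capacity, so sprint 2's demand bar grows by 10:
print(cascade([50, 30, 20], [40, 40, 40]))
```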

What the Chart Tells You

| Pattern | What It Means | What to Do |
|---|---|---|
| Balanced sprint | Demand and capacity bars are roughly equal height. Work is evenly distributed across team members. | No action needed — the sprint is healthy. |
| Overloaded sprint | Demand bar is taller than capacity bar. Overflow will cascade to the next sprint, pushing later work out. | Move issues to later sprints, or increase capacity (add team members, reduce PTO). |
| Individual bottleneck | One team member's demand segment is much larger than their capacity segment, even though the sprint's total demand may be within total capacity. | Reassign work from the overloaded member to someone with spare capacity. |
| Idle capacity | A member's capacity segment is visible but their demand segment is small or absent. | Assign more work to that person, or consider reallocating them to another project. |

Projected Completion Marker

Two vertical markers on the chart indicate key milestones:

The projected completion marker shows the sprint where all remaining work, including cascaded overflow, is finally absorbed. This matches the date shown in the Delivery Forecast stat card.

15. Monte Carlo Simulation

Monte Carlo simulation replaces the fixed What-If sliders with randomized ranges to produce a probability distribution of completion dates. Instead of asking "what if velocity is +10%?", it asks "what is the range of likely outcomes given realistic uncertainty?"

Monte Carlo simulation: dual-dot uncertainty sliders and S-curve probability chart

How It Works

  1. For each of the four variables (Velocity, Capacity, Issue Estimation, Scope), you set a low and high value representing the plausible range using dual-dot sliders.
  2. The system runs 2,000 scenarios (Sprint view) or 5,000 scenarios (Project view). In each scenario, a global trend is drawn for each variable, then each team member receives a blend of 50% global trend and 50% per-user noise within the configured ranges. One scenario might have Alice performing at +15% velocity while Bob performs at −5% — reflecting real-world variation where team members don't all have identical sprints, but correlated trends (like holiday weeks) still affect everyone.
  3. Each scenario runs the full per-sprint, per-user stepped simulation with its randomized inputs and records the projected completion date.
  4. The results are assembled into a probability distribution.
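Step 2's blended randomization can be sketched as follows. The function name and shapes are hypothetical; the 50/50 weighting and the per-variable range come from this section:

```python
import random

def draw_member_factors(members, low, high, rng, weight=0.5):
    """One scenario: draw a shared global trend, then blend it 50/50
    with independent per-user noise, both uniform on [low, high]."""
    global_trend = rng.uniform(low, high)
    return {
        m: weight * global_trend + (1 - weight) * rng.uniform(low, high)
        for m in members
    }

rng = random.Random(42)
# One draw for the Velocity variable with a -10%..+15% configured range:
factors = draw_member_factors(["Alice", "Bob"], low=-0.10, high=0.15, rng=rng)
```

Because each member's factor is a convex blend of two values inside the configured range, it always stays within that range, while the shared trend keeps the team members correlated.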

Dual-Dot Sliders

Each variable has a range slider with two dots:

For each trial, the simulator picks a random value between the red and green dots. Wider ranges model more uncertainty; tighter ranges model a more predictable project.

Reading the Results

P50 (Median)

The date by which you have a 50% chance of finishing. Half of the simulated scenarios finished before this date, half after. Use this as the "most likely" outcome.

P85 (Conservative)

The date by which you have an 85% chance of finishing. This is the standard planning target for teams that want high confidence in their commitments. When giving dates to stakeholders, P85 is the safest choice.

Distribution Curve (S-Curve)

The spread of possible outcomes plotted as a cumulative probability curve. Key markers on the chart:

S-curve chart showing cumulative probability of completion by sprint

A narrow curve means the project timeline is predictable regardless of individual variation. A wide curve means small changes in team performance produce large swings in the delivery date — a signal that the plan is fragile and needs attention.

Why Per-User Randomization Matters

Traditional Monte Carlo applies the same random factor to the entire team — everyone is fast together or slow together. Project Commander uses a blended per-user randomization model: each iteration draws a global trend (50% weight) that captures team-wide factors like holiday weeks or infrastructure outages, then adds per-user noise (50% weight) that captures individual variation. Alice might have a productive sprint while Bob is blocked by a complex issue, but both are affected by the same team-wide trend.

This blended approach produces wider, more realistic distributions than pure team-level randomization while avoiding the over-optimism of fully independent models. A team of 5 where each member varies semi-independently has far more possible outcomes than a team where everyone moves in lockstep.

When to Use Simulation vs What-If

| Use Case | Tool | Why |
|---|---|---|
| Explore a specific scenario ("What if we lose a team member?") | What-If sliders | The sliders give you a single deterministic answer for a specific set of assumptions. |
| Understand the range of likely outcomes for planning | Monte Carlo simulation | The probability distribution tells you not just when you might finish, but how confident you can be in that date. |
| Set a commitment date for stakeholders | Monte Carlo P85 | The P85 date gives you 85% confidence — enough margin for most stakeholder commitments. |
| Compare different planning strategies | Compare All + simulation curves | Overlay Monte Carlo distributions from different Auto-Level strategies to see which gives the tightest, most favorable distribution. |

16. Team & Capacity Tab

The Team & Capacity tab combines demand analysis and team configuration in one place. The Demand vs. Capacity chart appears first, followed by team configuration below.

Team & Capacity: Demand vs capacity chart over time Team & Capacity: Team members with hours, utilization, time off, and status

Three Sections

The tab has three collapsible sections. Click any section header to expand or collapse it:

  1. Team Members — your team roster and workload status
  2. Time Off — PTO calendar for individual team members
  3. Company Holidays — company-wide days off

Period Selector

A period bar at the top lets you choose the time range you're looking at: Weekly, Biweekly, Monthly, Quarterly, or Yearly. Arrow buttons navigate forward and back. All the numbers in the Team Members table (net capacity, demand, issue count) adjust to show values for the selected period.

Period selector: Weekly, Biweekly, Monthly, Quarterly, Yearly with navigation arrows

Team Members

A table showing each team member with these columns:

| Column | What It Shows |
|---|---|
| Name | Team member name with a colored avatar. A view time off link opens a modal showing all of that person's time off (see below). |
| Hrs/Wk or Pts/Wk | Capacity per week (editable). In Time Mode shows hours per week; in Points Mode shows points per week. Edit to set all weeks in the current period to that value. |
| Total | Total capacity for the selected period — the sum of all weekly values |
| Util % | Utilization percentage (editable). Adjusts how much of a member's capacity is available for project work. Defaults to 100%. Persists to team member configuration. |
| Time Off | Total hours deducted for holidays and PTO in the period. Click the toggle to expand a detail row showing each holiday and PTO entry with dates and hours. |
| Net Hrs / Net Pts | Net capacity after deductions. Calculated as Total capacity minus holiday deductions minus PTO hours. In Time Mode shows Net Hrs (or Net Days); in Points Mode shows Net Pts. This is the number used for workload status. |
| Demand | How much work is assigned to them in the period |
| Issues | Number of issues assigned, with expandable dropdown showing issue keys and summaries |
| Status | A color-coded badge comparing demand against net capacity (after deductions) |

You can select multiple team members using the checkboxes. The Hrs/Wk (or Pts/Wk) column shows "varies" if weeks in the period have different values.

Team members table with capacity, utilization, time off, demand, and status badges

Weekly Capacity Model

Capacity is stored as explicit per-week values. Each team member has a capacity value for each ISO week (Monday–Sunday). Weeks with no value set default to zero.

This model gives you full control — reduce capacity for holiday weeks, ramp new members up gradually, or set different values for different sprints. All downstream calculations (sprint capacity, demand vs capacity, risk) resolve to weekly granularity.
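The per-week storage model might look like the following sketch. The map shape and helper names are assumptions for illustration; only the ISO-week granularity and zero default come from this section:

```python
from datetime import date

def iso_week_key(d: date) -> str:
    """ISO week identifier, e.g. '2025-W03' (Monday–Sunday weeks)."""
    year, week, _ = d.isocalendar()
    return f"{year}-W{week:02d}"

# Hypothetical per-member capacity map: ISO week -> hours (or points).
alice_capacity = {
    "2025-W02": 32,  # normal week
    "2025-W03": 16,  # holiday week, reduced
}

def weekly_capacity(cap_map: dict, d: date) -> float:
    # Weeks with no value set default to zero, per the model above.
    return cap_map.get(iso_week_key(d), 0)

print(weekly_capacity(alice_capacity, date(2025, 1, 13)))  # falls in 2025-W03 -> 16
```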

If you have existing team data from before the weekly model, a migration banner appears with a "Populate weeks" button that fills the selected period from your previous rates.

How Sprint Capacity Is Calculated

When the Delivery Forecast or Sprints tab needs a capacity number for a sprint, the system combines each team member's weekly capacity map with the sprint's date range:

Points Mode:

Sprint capacity = Σ(member) [ Σ(week overlapping sprint) [ weeklyPoints × utilization% × overlapFraction ] ]

Time Mode:

Sprint capacity = Σ(member) [ Σ(week overlapping sprint) [ weeklyHours × utilization% ] − holidayHours − ptoHours ]

Where:

Inactive team members contribute zero capacity. This calculation is used by the Delivery Forecast (Team Capacity method), Auto-Level, and the Demand vs Capacity chart when "Team capacity" is selected as the capacity source.
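The Points Mode formula above can be sketched like this. Member and sprint shapes are assumptions; the summation itself follows the formula:

```python
def sprint_capacity_points(members, sprint_weeks):
    """Σ over members of Σ over overlapping weeks of
    weeklyPoints × utilization% × overlapFraction."""
    total = 0.0
    for m in members:
        if not m["active"]:  # inactive members contribute zero capacity
            continue
        for week_key, overlap in sprint_weeks.items():  # overlap in [0, 1]
            pts = m["weekly_points"].get(week_key, 0)
            total += pts * (m["utilization"] / 100) * overlap
    return total

members = [
    {"active": True,  "utilization": 100, "weekly_points": {"2025-W10": 10, "2025-W11": 10}},
    {"active": True,  "utilization": 50,  "weekly_points": {"2025-W10": 10, "2025-W11": 10}},
    {"active": False, "utilization": 100, "weekly_points": {"2025-W10": 10, "2025-W11": 10}},
]
# A sprint fully covering W10 and half of W11:
print(sprint_capacity_points(members, {"2025-W10": 1.0, "2025-W11": 0.5}))  # 22.5
```

The Time Mode version would additionally subtract holiday and PTO hours per member, as in the second formula.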

Status Badges

Each team member gets a color-coded status based on how their assigned work (demand) compares to their net capacity — capacity after holiday and PTO deductions:

| Status | When It Appears | Color |
|---|---|---|
| OVERLOADED | Demand is more than 115% of capacity | Red |
| OPTIMAL | Demand is 90–115% of capacity | Green |
| AVAILABLE | Demand is 60–90% of capacity | Yellow |
| UNDERLOADED | Demand is less than 60% of capacity | Gray |
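The thresholds translate directly into a lookup. A sketch of the rule; which side of each boundary (60%, 90%, 115%) the app places in which band is an assumption:

```python
def status_badge(demand: float, net_capacity: float) -> str:
    """Map the demand / net-capacity ratio to a badge per the table above."""
    if net_capacity <= 0:
        return "OVERLOADED" if demand > 0 else "UNDERLOADED"
    ratio = demand / net_capacity
    if ratio > 1.15:
        return "OVERLOADED"
    if ratio >= 0.90:
        return "OPTIMAL"
    if ratio >= 0.60:
        return "AVAILABLE"
    return "UNDERLOADED"
```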

Time Off

A calendar view for managing PTO. Click a date to mark a single day off. Use Ctrl+click to toggle individual days on and off. Use Shift+click to select a range of dates.

Time off and company holidays are automatically deducted from each member's capacity. The deductions appear in the Time Off column of the Team Members table. Click the toggle arrow to expand a detail row listing each holiday and PTO entry with its date and hours deducted. The resulting Net column shows capacity after all deductions.

View All Time Off

Click the view time off link next to any team member's name to open a modal showing all of their time off across all time periods. The modal includes:

All Time Off modal showing PTO and holidays grouped by month

Company Holidays

Add company-wide holidays by entering a date and name. Check the Recurring box to have the holiday repeat every year automatically. Holidays reduce available capacity for all team members.

Demand vs Capacity Chart

Team & Capacity: Demand vs Capacity chart with shaded overload area, capacity line, target date, completion forecast, and team members table

The chart shows daily capacity (green) and daily demand (blue) side by side so you can see whether your team can keep up with the work. Click legend items to toggle additional traces like cumulative totals, scope, and completion percentage. Red-shaded areas highlight overload zones where demand exceeds capacity.

Summary Stats

Above the chart, summary stats show cumulative capacity and cumulative demand for the selected period, and whether demand exceeds capacity (shown in red as “Over by”).

Resource Breakdown

Below the chart, a collapsible Resource Breakdown table lists each team member with their issue count, demand, load percentage, and status badge (Available, Loaded, or Overloaded).

Chart Options

Below the chart, controls let you customize the analysis:

17. Scope Tab

Scope Tab: Chart with scope, burndown, and forecast lines Scope Tab: Period breakdown table

The Scope tab shows how much work has been added over time and how much has been completed, all on one timeline chart. In Points Mode, values are in story points. In Time Mode, values are in hours or days.

The Chart

Two lines tell the story:

The chart legend is interactive — click any legend item to show or hide its trace. Hidden items appear dimmed with a strikethrough label.

Filtering by User

A row of colored user chips appears above the chart (always visible when there are multiple assignees). Click a name to filter the burndown line to just that person's completed work. The scope line always shows the total scope. Click “All” or clear selections to reset.

Chart Controls

Inline controls appear between the user chips and the chart:

Period Selector

Choose how the timeline is grouped: Weekly, Biweekly, Monthly, or Quarterly. Arrow buttons navigate between periods.

Stat Cards and Breakdown

Below the chart, stat cards show Total Scope, Completed Work, Remaining Work, % Complete, and issue counts. A breakdown table lists individual issues grouped by the selected period.

Delivery Forecast (Scope Tab)

The Scope tab has its own Delivery Forecast stat card with the same controls as the Dashboard: Delivery Projection Method dropdown and Scope Growth Method checkbox + dropdown. The projection method and scope growth settings are shared — changing them on one tab updates the other.

18. Alerts Tab

Alerts Tab: Issue alerts with dependency conflicts and warnings

The Alerts tab scans your issues for common problems and analyzes dependency relationships. A badge on the tab shows the total number of alerts found.

Alert Categories

Issues are grouped into six collapsible sections. Click a category header to expand or collapse it:

| Category | What It Catches |
|---|---|
| Done with Remaining Work | Issue is marked Done but still has time remaining on it |
| Overdue | Issue is not Done and its due date has passed |
| Dependency Conflicts | Issue is blocked by something that finishes after it should start. Also detects circular dependencies. |
| Child After Parent | A subtask is due after its parent task |
| Missing Dates | Issue in an active or future sprint has no start date or no due date |
| Missing Estimates | Issue in an active or future sprint has no estimate. In Points Mode this checks for story points; in Time Mode this checks for original estimate. |

Dependency Analysis

Below the alert categories, a collapsible Dependency Analysis section gives you the full picture of blocking relationships:

Dependency analysis section with summary stats and blocking relationships

19. Epics Tab

Epics Tab: Progress table with all epics

The Epics tab gives you a bird's-eye view of your project organized by epic. It groups child issues under their parent epics and shows progress, scope changes, and status — all in a single sortable table. Works in both Sprint Mode and non-Sprint Mode.

Filtering and Sorting

A search box at the top filters epics by name or key. Dropdowns let you filter by status (All Epics, On Track, At Risk, Behind, To Do, Done, Overdue) and by owner. Click any column header to sort.

Epic Progress Table

Each row shows one epic with:

Click any epic row to expand it and see all child issues with their individual status, assignee, points, and sprint assignment.

Expanded epic row showing child issues with status, assignee, and points

How Status Is Derived

| Status | Condition |
|---|---|
| Done | 100% complete or parent issue is in a Done status |
| Overdue | Due date is in the past and not Done |
| On Track | ≥ 60% complete |
| At Risk | 30–59% complete |
| Behind | 1–29% complete |
| To Do | 0% complete or no child issues |
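The derivation can be sketched as a first-match-wins chain. The ordering (Done and Overdue taking precedence over the percentage bands) is implied by the conditions but is otherwise an assumption:

```python
def epic_status(pct_complete: float, parent_done: bool = False,
                overdue: bool = False) -> str:
    """Derive an epic's status per the table above."""
    if pct_complete >= 100 or parent_done:
        return "Done"
    if overdue:                  # due date in the past and not Done
        return "Overdue"
    if pct_complete >= 60:
        return "On Track"
    if pct_complete >= 30:
        return "At Risk"
    if pct_complete >= 1:
        return "Behind"
    return "To Do"               # 0% complete or no child issues
```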

Filter Tabs

Three tabs filter the dependency list:

| Tab | What It Shows |
|---|---|
| All | Every cross-epic dependency |
| Violations | The blocker epic is not on track AND blocked issues are not yet done — delivery is at risk |
| At Risk | The blocker epic is not on track but no unfinished blockers are blocking the dependent epic yet |

Dependency Rows

Each row shows the blocker epic on the left, an arrow labeled "blocks" in the middle, and the blocked epic on the right. A status badge indicates the health of the relationship:

Each blocked epic shows how many issues are blocked and their total points or hours.

Estimation Modes

All values in the Epics tab automatically adapt to your estimation mode. In Points Mode you see story points; in Time Mode you see hours. The same applies to capacity, forecasts, and scope calculations.

20. Estimation Modes

The Estimation Mode setting is the most important choice you make. It determines how work is measured, where capacity comes from, and what units every tab uses. Choose it once and the rest follows automatically.

Points Mode vs Time Mode

| Aspect | Points Mode | Time Mode |
|---|---|---|
| What it measures | Story points | Remaining estimate (hours or days) |
| Where capacity comes from | Capacity Limit from Settings (Settings default), or team Points Per Sprint when Team capacity is selected | Capacity Limit from Settings (Settings default), or team hours/availability when Team capacity is selected |
| Jira field used | Story Points | Remaining Estimate |
| Display units | pts | hrs or days (based on Time Unit setting) |
| Time Unit setting | Hidden (not needed) | Visible — choose Hours or Days |
| Progress indicator | Points (done / total) | Estimate (remaining / original) or Work Ratio |

How Each Tab Behaves

Dashboard Tab

All health cards, workload bars, and scope values use story points in Points Mode, or hours/days in Time Mode. The unit label adjusts automatically (pts, hrs, or days).

Sprints Tab

Team & Capacity Tab

Scope Tab

The scope and burndown chart shows story points in Points Mode, or hours/days in Time Mode.

Alerts Tab

The "Missing Estimates" alert checks for story points in Points Mode, or original estimate in Time Mode.

What-If (Sprint / Project view)

The What-If sliders, cascade chart, and Monte Carlo simulation all work in story points (Points Mode) or hours/days (Time Mode).

What Happens When You Switch

Switching Estimation Mode automatically adjusts related settings:

You only need to choose your Estimation Mode — the Jira field and Progress Indicator follow automatically.

Time Unit Conversion

Jira stores time in seconds internally. Project Commander uses an 8-hour workday for conversion:

When Time Unit is set to Days, all time values throughout the app are shown in workdays instead of hours.
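The conversion chain can be sketched directly from the 8-hour workday rule above. The function name is illustrative:

```python
SECONDS_PER_HOUR = 3600
HOURS_PER_WORKDAY = 8  # Project Commander's conversion basis

def jira_seconds_to_display(seconds: int, unit: str = "hours") -> float:
    """Convert Jira's internal seconds to display hours or workdays."""
    hours = seconds / SECONDS_PER_HOUR
    return hours if unit == "hours" else hours / HOURS_PER_WORKDAY

print(jira_seconds_to_display(345_600, "hours"))  # 96.0 hours
print(jira_seconds_to_display(345_600, "days"))   # 12.0 workdays
```

This is the same conversion behind the Settings example where "Scope +96 hrs" becomes "Scope +12 days".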

Setting Up Time Mode

  1. Open Settings and set Estimation Mode to Time
  2. Go to the Capacity tab
  3. Add your team members and set their weekly capacity (Hrs/Wk or Pts/Wk)
  4. Add any Company Holidays (for reference)
  5. Use the Time Off calendar to track PTO, then adjust weekly capacity as needed
  6. Sprint capacity will now be automatically calculated from your team's weekly values

Points Mode vs Time Mode — the key difference

Estimation Mode controls which Jira field measures work (story points vs remaining estimate). In both modes, each sprint's Default/Team/Manual toggle determines where capacity comes from — see How Capacity Works. The difference is what Team calculates in Per User mode: in Points Mode it sums weekly points values; in Time Mode it sums weekly hours values for the sprint's date range.

21. Dependencies

Project Commander detects blocking relationships from Jira's "blocks" and "is blocked by" issue links. Dependencies affect multiple features across the app.

Where Dependencies Appear

Sprints Tab

Issues with dependency conflicts show a warning icon (⚠) next to their key in the issue table. The icon has a tooltip describing the conflict (e.g., "Blocked by PROJ-45 which ends after this issue starts").

Alerts Tab

The Dependency Conflicts alert category lists all issues where a blocker finishes after the blocked issue should start. The Dependency Analysis section provides the full dependency graph with summary stats, edge list, root/leaf issues, and a conflicts-only filter.

Dependency Map Modal

Click the Dependencies button in the action header to open a modal with three tabs:

Each dependency shows the blocker and blocked issue with their sprint name, sprint state badge (active/future), and completion status. Violation rows are marked with a warning icon.

Auto-Level

Auto-Level always respects dependencies. A blocking issue is placed in an earlier (or same) sprint as the issues it blocks. If circular dependencies exist, they are detected and reported, and the affected issues are still placed using the chosen strategy.

22. Settings Reference

Click the Settings gear icon (⚙) in the tab bar to open the configuration panel. Settings are saved per Jira site and shared across all users. This section provides a comprehensive reference for every setting and how it affects the app.

Settings panel showing JQL Filter, Sprint Mode, Board, Capacity Mode, Estimation Mode Advanced Settings: Progress Indicator, Sprint Length, Velocity Lookback, Epics, Plan What-If, Read-only Mode, Display Columns, AI Features

All Settings

| Setting | Type | Default | Visible When |
|---|---|---|---|
| JQL Filter | Text area | Empty | Always |
| Epics | Checkbox | On | Always |
| Sprint Mode | Checkbox | On | Always |
| Board | Search/select with auto-detect | Empty | Sprint Mode ON |
| Include Backlog | Checkbox | On | Sprint Mode ON + Board selected |
| Capacity Mode | Radio: Per Sprint / Per User | Per Sprint | Sprint Mode ON |
| Progress Indicator | Dropdown | Points (Points Mode) / Estimate (Time Mode) | Sprint Mode ON |
| Capacity Limit | Number (min: 1) | 40 (sprint) / 20 (user) | Sprint Mode ON |
| Sprint Length | Dropdown: 1/2/3/4 weeks | 2 weeks | Sprint Mode ON |
| Velocity Lookback | Dropdown: 3 / 5 / 8 / 10 sprints | Last 5 sprints | Sprint Mode ON (also settable from Dashboard and What-If) |
| Estimation Mode | Radio: Points / Time | Points | Always |
| Time Unit | Dropdown: Hours / Days | Hours | Estimation Mode = Time |
| Display Columns | Multi-select with search | Key, Summary, Assignee, Story Points | Always |
| AI Features | Provider dropdown + API key + Model selector | Empty | Always |
| Read-only Mode | Checkbox | Off | Always (standalone app) |
| Enable Work Leveling | Checkbox | On | CSV mode only |

Estimation Mode — The Master Switch

Estimation Mode is the most important setting. It controls which Jira field is read, what units are shown everywhere, and how capacity is sourced. Every other setting cascades from this choice.

Full cascade when you switch from Points to Time:

Time Unit (Hours / Days)

Only appears when Estimation Mode is Time. A pure display conversion using an 8-hour workday. "Scope +96 hrs" becomes "Scope +12 days". The underlying data never changes — only the display.

Capacity Mode

Two choices: Per Sprint or Per User. Only shown when Sprint Mode is on.

Switching from Per Sprint to Per User changes the Auto-Level algorithm completely. Per Sprint mode distributes issues to fit sprint totals. Per User mode balances per-person workload, moving specific people's lowest-priority issues to sprints where that person has room.

Capacity Limit

The Settings default capacity when no other source is active. The label changes based on your Capacity Mode:

This is the fallback — you can override it per sprint or per person directly in the Sprints tab. See Capacity Precedence Hierarchy below for the full override chain.

Sprint Mode

A checkbox (default: on). When turned off:

Board

Select the Scrum board that contains your sprints. The app lists all Scrum boards on your Jira site — type to search, then click to select. If the app is opened from within a Jira board, the board is auto-detected. Required for the Sprints tab. Changing the board reloads all sprint data, velocity history, and team configuration (team settings are stored per board).

JQL Filter

Standard Jira JQL syntax. This query fetches the issues used across Dashboard, Scope, Alerts, Team & Capacity, and Epics tabs. The Sprints tab shows all issues from the board regardless of this filter.

Include Backlog

When enabled (the default), unscheduled backlog issues from the board appear below the sprint list and are included in demand calculations across all views, giving you the full picture of everything that needs to get done. Turn it off to see only committed and planned work.

Velocity Lookback

Controls how many past periods are included in velocity calculations and scope/burndown charts. This is a global setting that can be changed from four places: Settings, Dashboard, Sprints tab (Velocity section), and What-If. Range: 3–10, default 5. Fewer sprints means a more volatile average — a single good or bad sprint has more weight.

Sprint Length

Duration of new sprints created by Auto-Level: 1, 2, 3, or 4 weeks. Also affects velocity normalization — velocity is converted to "per week" by dividing by sprint length. If you change sprint length, consider adjusting your capacity limit proportionally.

Display Columns

Choose which columns appear in sprint issue tables. The default set is Key, Summary, Assignee, and Story Points. There are 20 standard columns available. You can also search for Jira custom fields by typing at least 2 characters in the search box. This only affects the issue table display — no impact on calculations.

AI Features

Configure an AI provider (Anthropic Claude, OpenAI ChatGPT, or Google Gemini) with an API key to enable AI-powered features: the AI Insights section on the Dashboard, and AI risk analysis on the What-If tab. You can optionally select a specific model from the provider's lineup.

AI Features settings: provider dropdown, API key, and model selector

Read-only Mode

When enabled, your Jira data is never modified. Drag-drop, sprint actions, and date syncing are disabled. You can still run Auto-Level preview (dry run) to see what it would do, but the "Accept" button is blocked. All viewing and analytics features still work normally.

Enable Work Leveling

Only visible in CSV mode (when data is loaded from a CSV file rather than Jira). When enabled, a Level Work banner appears at the top of the Scope and Team & Capacity tabs offering to redistribute tasks so no week exceeds capacity. Leveling shifts issue due dates to smooth out demand peaks. Disable this setting to hide the leveling banner.

Progress Indicator

Controls the small progress display in each sprint card header. Available options depend on Estimation Mode:

Auto-Switching Cascade

Switching Estimation Mode automatically adjusts related settings to stay consistent:

| When You Switch To | What Changes Automatically |
|---|---|
| Points | Uses Story Points field, Progress Indicator → Points |
| Time | Uses Remaining Estimate field, Progress Indicator → Estimate, Time Unit selector appears |

Capacity Precedence Hierarchy

Capacity has multiple layers. The app resolves the effective capacity for each sprint using a priority order where the first match wins.

In Per Sprint Mode:

  1. Per-sprint manual override — the user clicked the capacity number in the sprint header and typed a number. This exact number is used.
  2. Per-sprint "Effective velocity" — the user selected effective velocity in the sprint header dropdown. The historical efficiency-adjusted capacity is used.
  3. Per-sprint "Team capacity" — the user selected "Team capacity" in the sprint header dropdown. The calculated team capacity for this sprint is used (from team member hours/points, minus holidays and time off).
  4. Settings default — nothing set for this sprint. The "Capacity limit per sprint" from Settings is used (default: 40).

In Per User Mode:

  1. Per-user per-sprint override — the user clicked this person's capacity bar and typed a number. That number is used for this person in this sprint.
  2. Per-sprint "Team capacity" per-member — the sprint is set to "Team capacity". Each member's calculated capacity from team config is used.
  3. Per-sprint total override divided by assignee count — if a per-sprint manual number is set, it is divided evenly among assignees.
  4. Settings default — nothing set. The "Capacity limit per user" from Settings is used (default: 20).
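Expressed as code, the two resolution orders above might look like this — a minimal sketch where all function and field names are hypothetical; only the priority order comes from the manual:

```python
def effective_sprint_capacity(sprint, settings_default=40):
    """Per Sprint mode: check layers in priority order; first match wins."""
    if sprint.get("manual_override") is not None:     # 1. user typed a number
        return sprint["manual_override"]
    if sprint.get("source") == "effective_velocity":  # 2. efficiency-adjusted capacity
        return sprint["effective_velocity"]
    if sprint.get("source") == "team_capacity":       # 3. calculated team capacity
        return sprint["team_capacity"]
    return settings_default                           # 4. "Capacity limit per sprint"

def effective_user_capacity(sprint, user, settings_default=20):
    """Per User mode: resolve one member's capacity for one sprint."""
    overrides = sprint.get("user_overrides", {})
    if user in overrides:                             # 1. per-user per-sprint override
        return overrides[user]
    if sprint.get("source") == "team_capacity":       # 2. per-member calculated capacity
        return sprint["member_capacity"][user]
    if sprint.get("manual_override") is not None:     # 3. total split evenly
        return sprint["manual_override"] / len(sprint["assignees"])
    return settings_default                           # 4. "Capacity limit per user"
```

Note that in Per User mode a per-sprint manual total only applies when no per-user override or team calculation takes precedence, and it is split evenly among assignees.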

Where This Matters

Settings Dependency Diagram

How the major settings depend on and affect each other:

| Setting | Determines / Gates / Feeds |
| --- | --- |
| Estimation Mode | Determines which Jira field is read (Story Points vs Remaining Estimate), unit labels everywhere; gates Time Unit dropdown and Progress Indicator options; feeds sprint headers, all Dashboard cards, Team & Capacity chart, What-If simulation inputs, Auto-Level issue sizing, Velocity units, Alerts field checks. Does NOT affect dependencies or risk badges. |
| Capacity Mode | Gates Capacity Limit field label, determines what "over capacity" means (team total vs individual person), determines which Auto-Level algorithm runs, determines sprint header layout (single bar vs per-person), gates Effective velocity option visibility. |
| Sprint Mode | Gates Sprints tab, Auto-Level, Velocity tracking, Scope creep detection, Sprint risk badges, Capacity Mode radio, Plan What-If checkbox. |
| Team Config | Feeds calculated capacity per sprint (when limit is "Team capacity"), Team & Capacity chart, Delivery Forecast card, What-If simulation baseline, Auto-Level bin sizes. |
| Velocity Lookback | Feeds velocity average, Delivery Forecast projected date, Effective velocity option in Auto-Level, Monte Carlo variance. |
| Sprint Length | Feeds velocity normalization (pts/week), new sprint date ranges from Auto-Level, Delivery Forecast weekly throughput. |
| Include Backlog | Feeds which issues are in scope for Dashboard demand, Demand vs Capacity analysis, Forecast remaining work, and Alerts issue set. |
| JQL Filter | Feeds Scope, Alerts, Team & Capacity, Epics tabs. NOT the Sprints tab or Auto-Level. |
| Display-only settings | Display Columns (issue table layout), Epics toggle (show/hide tab), Read-only Mode (disable writes), AI Provider/Key (enable AI chat). No calculation impact. |

23. How Everything Connects

The Delivery Forecast, Scope Growth, What-If sliders, Demand vs Capacity chart, and Monte Carlo simulation all share the same underlying engine: a per-sprint, per-user stepped simulation that walks through future sprints consuming work against individual capacity.

The Shared Simulation Engine

Every forecasting feature in Project Commander runs the same simulation:

  1. Walk through each future sprint (or week) in order.
  2. For each sprint, calculate each team member's available capacity (hours or points, adjusted for utilization, holidays, and PTO).
  3. Consume each member's assigned issues up to their capacity. Overflow remaining work to the next sprint.
  4. Consume unassigned issues with whatever spare capacity remains.
  5. Add scope growth to the unassigned pool each sprint.
  6. Record the sprint where all work is absorbed — that is the projected completion.
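The six steps above can be sketched as a loop. This is a deliberately simplified model with hypothetical names — the real engine also adjusts each member's capacity for utilization, holidays, and PTO, and works in either hours or points:

```python
def simulate_completion(assigned, unassigned, capacity,
                        scope_growth=0.0, max_sprints=100):
    """Walk future sprints; return the sprint number when all work is absorbed.

    assigned: remaining work per member, e.g. {"alice": 25}
    capacity: per-sprint capacity per member, e.g. {"alice": 10}
    """
    work = dict(assigned)                 # remaining assigned work per member
    pool = float(unassigned)              # unassigned work pool
    for sprint in range(1, max_sprints + 1):
        pool += scope_growth              # 5. scope growth joins the unassigned pool
        spare = 0.0
        for member, cap in capacity.items():
            done = min(work.get(member, 0.0), cap)   # 3. consume up to capacity
            work[member] = work.get(member, 0.0) - done
            spare += cap - done                      #    overflow rolls to next sprint
        pool = max(0.0, pool - spare)     # 4. spare capacity absorbs unassigned work
        if pool == 0.0 and all(w <= 0.0 for w in work.values()):
            return sprint                 # 6. projected completion
    return None                           # demand outpaces capacity ("Never")
```

If per-sprint scope growth exceeds spare capacity, the pool never drains, which is how a forecast of "Never" arises.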

One Source of Truth

All views are driven by the same model:

If the stat card says May 30, the chart's completion marker points to the same sprint, and the Monte Carlo P50 clusters around the same date. There is one source of truth for the forecast. Changing any input — team capacity, issue estimates, scope growth rate, or What-If slider positions — flows through the same engine and updates all views consistently.

Why this matters

Because everything uses the same engine, you never get contradictory forecasts from different parts of the app. The Dashboard, What-If, and Monte Carlo views are different lenses on the same underlying simulation. When you improve your plan (e.g., by running Auto-Level to rebalance), the improvement shows up everywhere simultaneously.

24. Troubleshooting

App shows "Loading..." indefinitely

No sprints appear on the Sprints tab

Drag and drop is not working

Changes are not saving

Dependency warnings are missing

Velocity section is empty

Wrong demand values in Points Mode

Capacity shows 0 in Time Mode

AI analysis not working

What-If (Project view) shows "Issues Need Due Dates"

Forecast shows "Never"

25. Technical Reference

Visual Indicators

| Indicator | Meaning |
| --- | --- |
| Default button highlighted | Sprint uses capacity limit from Settings |
| Team button highlighted | Per User mode: capacity calculated from team config. Per Sprint mode: same as Default. |
| Manual button highlighted | Sprint capacity is a user-entered number |
| Team button greyed out | No team members configured on the Team & Capacity tab |
| Red avatar ring | User is over their capacity limit |
| Green avatar ring | User is within capacity |
| Gray avatar | User is filtered out — click to restore |
| Purple move badge | Issue was moved by Auto-Level |
| Orange move badge | Issue was manually moved during an Auto-Level session |
| ⚠ icon on issue key | Dependency conflict (blocker finishes after this issue starts) |
| Lock icon (filled) | Issue or sprint is locked (excluded from Auto-Level and drag) |
| Blue sort arrow (↑/↓) | Column is sorted ascending or descending |
| Blue reorder line | Drop target indicator when reordering issues |
| Red due date | Issue is past due (due date before today) |
| T marker (purple) | Target sprint/week on cascade bar chart |
| P marker (green/red) | Projected delivery sprint/week on cascade bar chart |

Keyboard and Mouse Shortcuts

| Action | Shortcut |
| --- | --- |
| Select multiple issues | Click checkboxes individually |
| Select a range of time off days | Shift + click |
| Toggle individual time off days | Ctrl + click |
| Close column search dropdown | Esc |
| Add column from search | Enter |

Data Storage

Project Commander stores all its data securely within your Jira Cloud instance using Atlassian Forge storage:

Issue data from your JQL filter is fetched from Jira on each page load and shared across all tabs. No issue data is stored permanently by the app.

Time Unit Conversion

Jira stores time values in seconds. Project Commander converts them using an 8-hour workday.
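Under that rule, the conversions work out as follows (a sketch; the constant and function names are illustrative):

```python
SECONDS_PER_HOUR = 3_600
HOURS_PER_WORKDAY = 8
SECONDS_PER_WORKDAY = SECONDS_PER_HOUR * HOURS_PER_WORKDAY  # 28,800

def seconds_to_hours(seconds):
    """Convert a Jira remaining estimate (seconds) to hours."""
    return seconds / SECONDS_PER_HOUR

def seconds_to_days(seconds):
    """Convert to workdays — 8-hour days, not 24-hour days."""
    return seconds / SECONDS_PER_WORKDAY
```

For example, a remaining estimate of 57,600 seconds is 16 hours, or 2 workdays.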

Capacity States

Each sprint's Default/Team/Manual toggle determines its capacity source:

| Toggle State | Source |
| --- | --- |
| Default selected | The Capacity Limit from Settings |
| Team selected | Per User mode: calculated from team configuration (Points Mode: points per sprint; Time Mode: available hours minus holidays and time off). Per Sprint mode: same as Default. |
| Manual selected | The number the user entered |

During Auto-Level, a "Use velocity as capacity" checkbox can override these with efficiency-adjusted limits based on each member's historical completion rate against their real available capacity.

Risk and Team & Capacity tabs always use team-based calculations when available, regardless of individual sprint toggles. They fall back to the Capacity Limit from Settings when no team is configured.

Auto-Level Constraints

Monte Carlo Simulation Details

Dashboard Card Formulas

| Card | Formula |
| --- | --- |
| Schedule Adherence | Compares planned-by-now vs completed-by-now. Planned = sum(estimates) for issues due ≤ today. Completed = sum(estimates) for Done issues due ≤ today. Ahead if completed > 105% of planned, Behind if < 95%, On Pace otherwise. |
| Finish on Time? | Remaining = sum(remaining estimate) for non-Done issues. Weekly capacity = sum(hours/week × utilization%) per team member. Total capacity = weekly capacity × weeks until target. Ratio = capacity ÷ remaining. Yes if ratio ≥ 1.0, At Risk if ≥ 0.9, No if < 0.9. |
| Forecast | Velocity-based: weeks needed = adjusted remaining work ÷ weekly throughput. Projected date = today + (weeks × 7). Remaining work is multiplied by estimate accuracy ratio when estimates are consistently low. Sprint plan-based: end date of last open sprint. |
| Progress | sum(estimates for Done issues) ÷ sum(all estimates) × 100. Falls back to count(Done) ÷ count(All) if no estimates exist. |
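Two of these formulas can be sketched directly in code. The function names and the zero-planned fallback are assumptions, not the app's actual implementation:

```python
def schedule_adherence(planned, completed):
    """Ahead / Behind / On Pace per the Schedule Adherence formula."""
    # Assumption: with nothing planned by now, report "On Pace".
    if planned == 0:
        return "On Pace"
    ratio = completed / planned
    if ratio > 1.05:
        return "Ahead"
    if ratio < 0.95:
        return "Behind"
    return "On Pace"

def progress_pct(done_estimate_sum, total_estimate_sum, done_count, total_count):
    """Progress card: estimate-weighted, falling back to issue counts."""
    if total_estimate_sum > 0:
        return done_estimate_sum / total_estimate_sum * 100
    return (done_count / total_count * 100) if total_count else 0.0
```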

Deliverability

| Status | Condition |
| --- | --- |
| Sufficient | Remaining capacity ≥ remaining work (or all work already done) |
| Tight | Remaining capacity ≥ 90% of remaining work |
| At Risk | Remaining capacity < 90% of remaining work |

Per-sprint badges: DELIVERABLE if capacity ≥ demand, TIGHT if capacity ≥ 90% of demand, OVERCOMMITTED otherwise.
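Both classifications read as top-down checks, where the first matching condition wins. A sketch with hypothetical function names:

```python
def deliverability(remaining_capacity, remaining_work):
    """Project-level status from the table above, checked top-down."""
    if remaining_work <= 0 or remaining_capacity >= remaining_work:
        return "Sufficient"        # all done, or enough capacity left
    if remaining_capacity >= 0.9 * remaining_work:
        return "Tight"
    return "At Risk"

def sprint_badge(capacity, demand):
    """Per-sprint badge using the same 90% threshold."""
    if capacity >= demand:
        return "DELIVERABLE"
    if capacity >= 0.9 * demand:
        return "TIGHT"
    return "OVERCOMMITTED"
```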

Scope Creep Calculation

For active sprints: original commitment = sum(points) of issues at sprint start (from Jira changelog). Current scope = sum(points) of current issues. % change = (current − original) ÷ original × 100. The expandable panel lists each issue added or removed mid-sprint, with dates from the changelog.
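In formula form (hypothetical function name; the divide-by-zero fallback is an assumption the manual does not specify):

```python
def scope_creep_pct(original_points, current_points):
    """% scope change since sprint start; original comes from the Jira changelog."""
    # Assumption: a sprint with no original commitment reports 0% creep
    # rather than dividing by zero.
    if original_points == 0:
        return 0.0
    return (current_points - original_points) / original_points * 100
```

So a sprint that started at 40 points and now holds 50 shows +25% scope creep.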

All Status Thresholds

| Factor | Green (OK) | Amber (Caution) | Red (At Risk) |
| --- | --- | --- | --- |
| Scope Growth | ≤ 10% | 11% – 25% | > 25% |
| Team Capacity | 60% – 90% | < 60% or 91% – 110% | > 110% |
| Team Balance | All 50% – 100% | Any > 100% or < 50% | Any > 115% with another < 60% |
| Delivery Rate | ≥ 85% | 65% – 84% | < 65% |
| Estimate Accuracy | 80% – 110% | < 80% or 111% – 130% | > 130% |
| Dependency Conflicts | 0 | 1 – 2 | ≥ 3 |
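Each row is a three-way classifier. Two of them, sketched with hypothetical function names (how fractional values on the band boundaries are rounded is an assumption):

```python
def scope_growth_status(pct):
    """Scope Growth row: ≤ 10% Green, 11–25% Amber, > 25% Red."""
    if pct <= 10:
        return "Green"
    if pct <= 25:
        return "Amber"
    return "Red"

def dependency_conflict_status(conflicts):
    """Dependency Conflicts row: 0 Green, 1–2 Amber, ≥ 3 Red."""
    if conflicts == 0:
        return "Green"
    if conflicts <= 2:
        return "Amber"
    return "Red"
```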

Key Constants

| Constant | Value | Used In |
| --- | --- | --- |
| Workday | 8 hours | All time conversions (hours ↔ days) |
| Time (seconds → hours) | ÷ 3,600 | Jira remaining estimate conversion |
| Time (seconds → days) | ÷ 28,800 | Jira remaining estimate conversion (8h day) |
| Max new sprints (Auto-Level) | 10 | Auto-Level sprint creation limit |
| Monte Carlo iterations | 2,000 (Sprint) / 5,000 (Project) / 10,000 (Compare All) | What-If simulation |
| Sprint length options | 1, 2, 3, or 4 weeks | New sprint creation, Auto-Level |