The hardest part of application rationalization isn't finding the applications — it's making defensible decisions about what to do with each one. Which ones do you retire? Which do you consolidate? Which are genuinely worth keeping even if they look expensive from the outside?
The answer requires a consistent, objective scoring methodology. Without a rubric, rationalization decisions become political rather than analytical, and political decisions don't hold up when stakeholders push back.
This post documents the four-dimension scoring rubric we use at APM Guru, with detailed criteria for each score level and guidance on how to apply it consistently.
The Four Dimensions
Every application in the portfolio is scored across four independent dimensions. Each dimension is rated 1–5. A composite score is calculated by averaging the four scores, with optional weighting based on organizational priorities.
The four dimensions are: Business Value, Technical Health, Utilization, and Cost Efficiency.
Dimension 1: Business Value (1–5)
Business Value measures how important the application is to the organization's core operations, revenue generation, and strategic direction. It's the "so what do you actually do with this?" question.
| Score | Criteria | Example |
|---|---|---|
| 5 | Mission-critical. Revenue-blocking, compliance-required, or core to primary business processes. Retiring it would halt operations. | Core EHR system for a healthcare provider. Primary CRM for a sales-driven business. |
| 4 | High value. Operationally important, used by multiple departments or large user populations for significant work. Retirement would cause meaningful disruption. | Company-wide collaboration platform. Primary project management tool for product teams. |
| 3 | Moderate value. Supports legitimate business functions but is not operationally critical. Alternatives exist if needed. Retirement would cause inconvenience rather than disruption. | Department-level analytics tool. Secondary HR platform for one business unit. |
| 2 | Low value. Marginal business function. Users would adapt quickly if the application were retired. High overlap with other, higher-value tools in the portfolio. | A third project management tool used by 8 people. A standalone survey tool whose function is covered by Microsoft Forms. |
| 1 | Negligible value. No active use. Unable to identify a business function this application currently serves. Potentially a legacy subscription nobody cancelled. | An application with 0 logins in the last 90 days. A trial that converted to a paid subscription and was forgotten. |
Guidance on Scoring Business Value
- Don't confuse seniority with criticality. Applications used only by senior executives are sometimes scored high by default, but usage by a small group of high-status users doesn't make the application mission-critical if its function is easily served by another tool.
- Compliance requirements automatically elevate business value. An application that must exist to satisfy regulatory requirements should be scored no lower than 4, regardless of utilization.
- Always validate business value scores with the application's business owner. Data doesn't capture context — the business owner might know that an apparently underutilized app runs once a month but is the only system that produces a specific regulatory report.
Dimension 2: Technical Health (1–5)
Technical Health measures the condition, modernity, and security posture of the application. It answers: "Is this a sustainable platform for the organization's future, or is it accumulating technical debt?"
| Score | Criteria |
|---|---|
| 5 | Modern, actively developed platform on current vendor roadmap. No known security vulnerabilities. Clean integration architecture. Strong API ecosystem. Vendor is financially healthy and investing in the product. |
| 4 | Generally healthy platform. May have minor version lag or minor integration complexity, but no significant technical debt. Vendor is maintaining the product. No active security concerns. |
| 3 | Acceptable but showing age. May be several versions behind current. Some integration debt. Vendor support is available but the product is not receiving significant new investment. No active security vulnerabilities, but the risk profile is increasing. |
| 2 | Technically unhealthy. End-of-life approaching or already reached. Known security vulnerabilities. Heavily customized with fragile integrations that make migration complex. Vendor support diminishing. |
| 1 | Technically critical. End-of-life, no vendor support. Active security vulnerabilities. Built on deprecated technology stack. High compliance risk. Maintenance is ad-hoc and risky. |
Note: An application with a Low Business Value score (1–2) and a Low Technical Health score (1–2) is a clear, urgent retirement candidate. An application with High Business Value (4–5) and Low Technical Health (1–2) is a migration candidate — keep the function, replace the platform.
Dimension 3: Utilization (1–5)
Utilization measures the gap between what you're paying for and what's actually being used. Monthly Active Users (MAU) as a percentage of licensed/provisioned seats is the primary metric; treat a rate that lands exactly on a band boundary as belonging to the lower score.
| Score | MAU Rate | Notes |
|---|---|---|
| 5 | > 80% | Highly utilized. Most licensed users are active monthly. Good investment of license spend. |
| 4 | 60–80% | Well-utilized. Some inactive licenses, but the application is getting solid use from its user base. |
| 3 | 40–60% | Moderate utilization. Meaningful inactive seat count. Renegotiation of license count is worth exploring, but the application isn't necessarily a retirement candidate. |
| 2 | 15–40% | Low utilization. Most licensed seats inactive. Strong candidate for license reduction and potential retirement or consolidation. |
| 1 | < 15% | Near-zero utilization. Application is effectively unused. Immediate retirement or consolidation candidate regardless of other dimension scores. |
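The banding above maps directly onto a small helper. This is a sketch, not a prescribed implementation: the function name is ours, and treating a rate that lands exactly on a band edge as the lower score is an assumption.

```python
def utilization_score(mau: int, licensed_seats: int) -> int:
    """Map an MAU rate to the 1-5 utilization score.

    Band edges follow the rubric table; a rate exactly on a
    boundary (e.g. 80%) falls into the lower band (assumption).
    """
    if licensed_seats <= 0:
        raise ValueError("licensed_seats must be positive")
    rate = 100 * mau / licensed_seats  # MAU as a % of licensed seats
    if rate > 80:
        return 5
    if rate > 60:
        return 4
    if rate > 40:
        return 3
    if rate > 15:
        return 2
    return 1

# 45 active users on 300 seats is exactly 15%, so the boundary
# rule pushes it into the near-zero band.
print(utilization_score(45, 300))  # 1
```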
Utilization Scoring Nuances
- Seasonal applications: Some applications are used intensively during specific periods (year-end close tools, tax preparation software, etc.) but show low MAU in off-months. Weight annual peak utilization more heavily for these applications.
- Automation-driven applications: Tools used primarily by automated processes (APIs, integrations, batch jobs) may show low user login counts but high actual usage. Check API call volumes and event logs, not just login data.
- New applications: Applications in their first 60–90 days of deployment haven't had time to ramp utilization. Don't penalize new tools in their adoption phase — reassess at the 6-month mark.
Dimension 4: Cost Efficiency (1–5)
Cost Efficiency measures whether what you're paying for the application is commensurate with the value it delivers and competitive with alternatives. The primary metric is cost per monthly active user (CPAU), benchmarked against market rates for the application category.
| Score | Criteria |
|---|---|
| 5 | Excellent value. CPAU is at or below market benchmark. License count is well-matched to actual usage. Contract terms are favorable. |
| 4 | Good value. CPAU is slightly above benchmark but within acceptable range. Some opportunity for renegotiation at next renewal. |
| 3 | Average. CPAU is noticeably above benchmark. License count significantly exceeds active usage. Renegotiation should be prioritized at next renewal. |
| 2 | Overpriced. CPAU is well above market rate. Significant savings opportunity through renegotiation, right-sizing, or replacement with lower-cost alternative. |
| 1 | Significantly overpriced. CPAU is 2x+ market rate, or the application carries substantial cost with near-zero utilization, making the effective CPAU approach infinity. |
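The CPAU metric itself is a one-line calculation. Here is a minimal sketch; the function name and the zero-MAU convention are ours, and the $85 benchmark figure is an illustrative assumption, not a published market rate.

```python
def cost_per_active_user(monthly_cost: float, mau: int) -> float:
    """Cost per monthly active user (CPAU).

    With zero active users the effective CPAU is unbounded,
    matching the score-1 row in the table above.
    """
    if mau == 0:
        return float("inf")
    return monthly_cost / mau

# $10,200/month spread across 30 active users is a $340 CPAU;
# against an assumed $85 benchmark that is 4x market rate.
print(cost_per_active_user(10_200, 30))  # 340.0
```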
Calculating the Composite Score
A simple average of the four dimension scores gives you the composite score. This will fall on a 1–5 scale and can be used to sort and group the portfolio:
- 4.0–5.0: Retain. Monitor cost efficiency at next renewal.
- 3.0–3.9: Retain with action. Address the dimension(s) pulling the score down — typically a renegotiation or right-sizing adjustment.
- 2.0–2.9: Consolidation or migration candidate. Evaluate against portfolio overlaps; identify whether function can be absorbed by a higher-scoring application.
- 1.0–1.9: Immediate retirement candidate. Low scores across most dimensions indicate an application that is not earning its place in the portfolio.
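Under the unweighted scheme, the composite and its band lookup can be sketched as follows. The band edges come from the list above; the short disposition labels are our shorthand for the longer descriptions.

```python
def composite_score(business_value: int, technical_health: int,
                    utilization: int, cost_efficiency: int) -> float:
    """Unweighted composite: the simple average of the four 1-5 scores."""
    return (business_value + technical_health + utilization + cost_efficiency) / 4

def disposition(score: float) -> str:
    """Map a composite score onto the four action bands above."""
    if score >= 4.0:
        return "Retain"
    if score >= 3.0:
        return "Retain with action"
    if score >= 2.0:
        return "Consolidate or migrate"
    return "Retire"

# e.g. Business Value 2, Technical Health 2, Utilization 1,
# Cost Efficiency 2 averages to 1.75: a retirement candidate.
score = composite_score(2, 2, 1, 2)
print(score, disposition(score))
```

Because the four inputs are integers, the average always lands on a multiple of 0.25, so the 3.9/4.0-style band edges in the list never leave a gap.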
Optional: Weighted Scoring
Organizations with specific priorities can apply weights to the four dimensions. Examples:
- Cost-reduction focus: Cost Efficiency × 2, all others × 1 (total weight: 5)
- Security hardening focus: Technical Health × 2, all others × 1 (total weight: 5)
- Modernization focus: Technical Health × 1.5, Business Value × 1.5, others × 0.75 (total weight: 4.5)
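A sketch of the weighted calculation, using the cost-reduction weights from the examples above. Dividing by the total weight keeps the result on the 1–5 scale, which is why the same disposition bands still apply.

```python
def weighted_composite(scores: dict[str, float],
                       weights: dict[str, float]) -> float:
    """Weighted composite: sum of weight * score over total weight,
    so the result stays on the 1-5 scale whatever weights are chosen."""
    total_weight = sum(weights[d] for d in scores)
    return sum(scores[d] * weights[d] for d in scores) / total_weight

# Cost-reduction focus: Cost Efficiency x2, all others x1 (total weight 5).
scores = {"business_value": 4, "technical_health": 3,
          "utilization": 2, "cost_efficiency": 1}
weights = {"business_value": 1, "technical_health": 1,
           "utilization": 1, "cost_efficiency": 2}
print(weighted_composite(scores, weights))  # (4 + 3 + 2 + 2*1) / 5 = 2.2
```

Note how the doubled Cost Efficiency weight drags this application from an unweighted 2.5 down to 2.2, deeper into the consolidation band.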
Weighted scoring requires explicit documentation and stakeholder alignment before use — otherwise it looks like the framework is being rigged to produce a predetermined result.
Using Scores to Drive Decisions
Scores are a starting point for the conversation, not the end of it. A composite score of 2.1 doesn't automatically mean an application should be retired — it means it should be discussed carefully with the business owner, and the discussion should be grounded in the data the score represents.
"The scoring rubric doesn't make decisions. It makes the right conversations happen. The conversations make decisions."
The purpose of the rubric is to replace subjective impressions and political weight with something more objective. A VP of Marketing who says "we need this tool" deserves a response grounded in data: "We do, but 12% of your licensed seats are active monthly, and we're paying $340 per active user for a tool with a market rate of $85. Can we get the utilization up, or can we reduce the license count?" That's a conversation that can lead somewhere productive.
Want help scoring your portfolio? Book a free portfolio assessment → We'll apply this rubric to your top 20 applications and show you where the savings are.
