GINC’s National Capability Ratings Methodology
The Global Institute for National Capability (GINC) conducts national assessments on 100+ capabilities for 150+ countries and territories. Assessments are composed of expert ratings, corresponding numerical scores, and descriptive justifications. The 2025 edition serves as a baseline working hypothesis to inform the evolution of the National Capability Framework, the emerging ratings methodology, the capability assessment rubrics, and the ratings review process.
Research & Ratings Review Process
The National Capability Scorecard is produced by a team of in-house and external analysts and expert advisers drawn from academia, research groups and think tanks, technology analysis, economics, and industry.
The inaugural 2025 edition included 30 analysts and around 20 advisers. The initial hypothesis relies heavily on a baseline of over 500,000 synthetic survey respondents providing historic, current, and forecast national capability assessments.
The analysts, who prepare the draft reports and ratings, use a broad range of sources, including academic research and analyses, news articles, reports from government and nongovernmental organizations, individual professional contacts, on-the-ground research and the synthetic survey responses.
The analysts’ proposed ratings are discussed and defended at a series of review meetings, organized by region and attended by GINC staff and a panel of expert advisers. The end product represents the consensus of the analysts, outside advisers, and GINC staff, who are responsible for any final decisions. Although an element of subjectivity is unavoidable in such an enterprise, the ratings process emphasizes methodological consistency, intellectual rigor, and balanced and unbiased judgments.
Scoring Process
GINC uses a three-tiered system consisting of expert assessments against defined capability rubrics, the development of consensus scores based on those assessments, and finally the assignment of capability ratings based on the scores and further analysis. The complete list of questions used in the expert assessment process and the tables for converting rubric assessments to scores appear below.
Rubric Assessments
Experts are asked to classify a nation's capability using the 7-point rubric below. They also provide a medium-term (3–5 year) outlook for the capability and an optional justification statement describing the change, or delta, that would be required for the assessment to reach the next level. For the highest level, the justification describes what it would take to lose the top rating.
| # | Name | Descriptor |
|---|---|---|
| A | Frontier | Indigenous leader in next generation capability; sets global benchmarks and defines standards. Resilient, adaptive, and capable of sustained leadership under stress. |
| B | Advanced | Current-generation capability is strong and globally competitive. Next-generation systems may be imported, piloted, or adopted, but are not indigenously led. |
| C | Developed | Primary force/capability is current generation; mature, reliable, and broadly institutionalized across the system. |
| D | Intermediate | Previous generation dominates with partial modernization underway. Capability is reforming and scaled in parts, but performance remains uneven. |
| E | Foundation | Basic but limited capability; functional core structures exist yet remain narrow in scope, outdated, or low in resilience. |
| F | Emerging | Pilots, prototypes, or proofs-of-concept are in place; traction is visible but fragile and not yet institutionalized. |
| P | Planning | No operational capability; intent only. Strategy papers, budgets, or designs exist, but nothing is fielded. |
When evaluating national capabilities, respondents also consider how they are expected to evolve over the medium term. An outlook assessment provides a forward-looking view of whether a capability is expected to deteriorate, remain steady, or improve over the next three to five years.
| Outlook | Description |
|---|---|
| Negative | The capability is expected to deteriorate over the next 3–5 years. |
| Stable | The capability is expected to remain steady over the next 3–5 years. |
| Positive | The capability is expected to improve over the next 3–5 years. |
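A single expert assessment, as described above, combines a rubric level, an outlook, and an optional justification. The following sketch shows one way such a record might be represented and validated; the field names and structure are illustrative assumptions, not GINC's actual data model.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative record for one expert assessment. Field names are
# assumptions for this sketch, not GINC's actual schema.
@dataclass
class Assessment:
    country: str
    capability: str
    level: str                           # rubric level: A-F or P
    outlook: str                         # "Negative", "Stable", or "Positive"
    justification: Optional[str] = None  # what a one-level change would require

VALID_LEVELS = {"A", "B", "C", "D", "E", "F", "P"}
VALID_OUTLOOKS = {"Negative", "Stable", "Positive"}

def validate(a: Assessment) -> bool:
    """Check that the rubric level and outlook come from the defined sets."""
    return a.level in VALID_LEVELS and a.outlook in VALID_OUTLOOKS
```

For example, `validate(Assessment("Examplestan", "Broadband", "B", "Positive"))` returns `True`, while an assessment with an undefined level such as `"Z"` fails validation.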
Scores
Each assessment is translated into a score using the table below. Scores are then averaged across a diverse range of assessments, and the corresponding consensus score and rating are provisionally applied to that nation for that capability.
| Band | Score | Rating | Description |
|---|---|---|---|
| Frontier | 20 | AAA | Indigenous leader in next generation capability; sets global benchmarks and defines standards. Resilient, adaptive, and capable of sustained leadership under stress. |
| Frontier | 19 | AA | Current-generation capability is strong and globally competitive. Next-generation systems may be imported, piloted, or adopted, but are not indigenously led. |
| Frontier | 18 | A | High-capacity and resilient current generation capability. Performs strongly across domains but reliant on foreign R&D or partnerships for cutting edge. |
| Advanced | 17 | BBB | Primary force/capability is current generation; mature, reliable, and broadly institutionalized across the system. |
| Advanced | 16 | BB | Nationwide coverage of capability with resilient and reliable systems. Competitive against peers but not best-in-class. |
| Advanced | 15 | B | Current generation capability dominates but has gaps in resilience, modernization, or integration. |
| Developed | 14 | CCC | Established and scaled, but dominated by previous generation technology. Modernization underway but uneven. |
| Developed | 13 | CC | Previous generation systems widespread; partial reform or upgrades in limited sectors. |
| Developed | 12 | C | Core systems functional but outdated; capability limited to older generation platforms. |
| Intermediate | 11 | DDD | Basic but limited capability; foundational systems in place with reform or investment gaining traction. |
| Intermediate | 10 | DD | Minimal but functional system; limited reach, funding, or resilience. Legacy systems dominant. |
| Intermediate | 9 | D | Rudimentary structures; outdated, inefficient, or fragile under stress. |
| Foundation | 8 | EEE | Pilots, prototypes, or proofs-of-concept in place. Early-stage development with visible momentum or policy commitment. |
| Foundation | 7 | EE | Nascent capacity with partial implementation or geographic pilots. Fragile and not institutionalized. |
| Foundation | 6 | E | Lower emerging capacity; largely aspirational, reliant on external actors or one-off programs. |
| Emerging | 5 | FFF | Short-term strategies or initiatives underway but no operational capability yet fielded. |
| Emerging | 4 | FF | Longer-term strategies exist; conceptual or policy-level commitment to develop capability. |
| Emerging | 3 | F | No operational system; early signals or aspirations exist but fragmented or unstructured. |
| Planning | 2 | SP | Short-term planning only; no substantive capability present. |
| Planning | 1 | LP | Long-term planning; policy-level commitment over future horizon but no implementation. |
| Planning | 0 | NP | No plans, no operational capability, no visible strategy. |
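The averaging step described above can be sketched as follows. The 21-point score-to-rating ladder is taken directly from the table, but the rounding rule (nearest integer score) is an assumption, since the methodology does not specify how a fractional average is resolved.

```python
# Ratings indexed by score, 0 ("NP") through 20 ("AAA"), per the table above.
RATINGS = [
    "NP", "LP", "SP",     # Planning, scores 0-2
    "F", "FF", "FFF",     # Emerging, scores 3-5
    "E", "EE", "EEE",     # Foundation, scores 6-8
    "D", "DD", "DDD",     # Intermediate, scores 9-11
    "C", "CC", "CCC",     # Developed, scores 12-14
    "B", "BB", "BBB",     # Advanced, scores 15-17
    "A", "AA", "AAA",     # Frontier, scores 18-20
]

def consensus_rating(scores: list[int]) -> str:
    """Average individual expert scores and map the mean to a rating.

    Rounding the mean to the nearest integer is an assumed tie-break
    rule; GINC's actual resolution of fractional averages may differ.
    """
    if not scores:
        raise ValueError("at least one score is required")
    mean = sum(scores) / len(scores)
    return RATINGS[min(20, max(0, round(mean)))]
```

For example, expert scores of 17, 16, and 18 average to 17.0, yielding a provisional rating of BBB.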
Ratings
The review process examines the distribution of assessments that make up the provisional rating. The ratings and scores from the previous edition serve as a benchmark for the year under review. A rating is typically changed only if there has been a real-world development during the year that warrants a decline or improvement (e.g., a xxx), though gradual changes in conditions, in the absence of a signal event, are occasionally registered in the ratings.
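One way to operationalize this benchmarking rule is as a simple decision function. The signal-event flag and the drift threshold below are assumptions about how the described policy might be implemented, not GINC's actual procedure.

```python
def should_change_rating(previous_score: int, proposed_score: int,
                         signal_event: bool) -> bool:
    """Decide whether to move a rating off last edition's benchmark.

    Illustrative policy only: a change normally requires a real-world
    signal event during the year; gradual drift without one is registered
    only occasionally, modeled here as a larger score movement (the
    2-point threshold is an assumption).
    """
    delta = abs(proposed_score - previous_score)
    if delta == 0:
        return False           # no movement proposed
    if signal_event:
        return True            # a signal event justifies any change
    return delta >= 2          # gradual drift must be large to register
```

A one-point consensus shift without a signal event would therefore keep the prior rating, while the same shift accompanied by a signal event would register.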