
AI Readiness Scorecard: Is Your Organization Ready?

2026-01-06 · 6 min read

Before you launch an AI initiative, ask yourself: Is your organization actually ready?

I've seen too many agencies rush into AI projects only to hit preventable roadblocks: outdated data infrastructure, resistant culture, unclear governance, insufficient skills.

This scorecard helps you assess readiness across 12 dimensions and identify gaps before they become project killers.

How to Use This Scorecard

Scoring:

  • Rate your organization on each dimension from 1-5
  • 1 = Major gap, 5 = Fully ready
  • Be honest—this is for your benefit, not for show

Interpretation:

  • 48-60 points: You're ready to launch. Start with a pilot.
  • 36-47 points: Addressable gaps. Fix 2-3 critical items first.
  • 24-35 points: Significant work needed. Start with foundational improvements.
  • Below 24: Premature for AI. Focus on basics (data quality, digital infrastructure).
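If you want to tally scores for a leadership team without a spreadsheet, the band logic above is easy to script. This is a minimal illustrative sketch, not part of the downloadable template; the dimension labels and the validation rules are my assumptions, while the thresholds mirror the interpretation bands above.

```python
# Map a total scorecard score (12 dimensions, rated 1-5 each) to the
# interpretation bands described in the article. Floors are inclusive.
BANDS = [
    (48, "Ready to launch: start with a pilot."),
    (36, "Addressable gaps: fix 2-3 critical items first."),
    (24, "Significant work needed: start with foundational improvements."),
    (0,  "Premature for AI: focus on basics (data quality, infrastructure)."),
]

def interpret(scores):
    """Sum 12 dimension ratings (1-5 each) and return (total, verdict)."""
    if len(scores) != 12 or not all(1 <= s <= 5 for s in scores.values()):
        raise ValueError("Expected exactly 12 ratings between 1 and 5")
    total = sum(scores.values())
    for floor, verdict in BANDS:
        if total >= floor:
            return total, verdict

# Example: an organization that honestly rates itself 3 on every dimension
total, verdict = interpret({f"dimension_{i}": 3 for i in range(1, 13)})
# total == 36, which lands in the "Addressable gaps" band
```

Averaging each leader's total, rather than scoring by committee, also surfaces where your team disagrees about readiness.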

The 12 Dimensions of AI Readiness

1. Leadership Commitment

What to assess: Does executive leadership actively support AI initiatives with time, attention, and resources?

Scoring guide:

  • 5: CIO/City Manager champions AI, allocates budget, removes barriers
  • 3: Interested but passive support, limited budget allocated
  • 1: Skeptical or absent leadership, no dedicated resources

Why it matters: AI projects require organizational change. Without executive air cover, they stall when encountering resistance.

Red flag: Leadership says "yes" but won't dedicate staff time or budget.


2. Clear Use Case

What to assess: Have you identified specific problems AI will solve, with measurable outcomes?

Scoring guide:

  • 5: Documented use case with clear metrics and ROI projection
  • 3: General idea but fuzzy metrics (e.g., "improve efficiency")
  • 1: Starting with technology and looking for problems

Why it matters: See my article on why AI pilots fail—starting without a clear problem is the #1 mistake.

Red flag: Your business case starts with "AI could..." instead of "We currently spend X hours/dollars on..."


3. Data Availability

What to assess: Do you have accessible, relevant data to train/run AI systems?

Scoring guide:

  • 5: Data is digital, centralized, and readily accessible
  • 3: Data exists but scattered across systems, requires integration
  • 1: Data is primarily paper-based or doesn't exist in usable format

Why it matters: AI needs data. If it's locked in filing cabinets or trapped in legacy systems, you're not ready.

Quick test: Can you export 6 months of relevant data in under 2 hours? If not, score yourself lower.


4. Data Quality

What to assess: Is your data accurate, complete, and well-maintained?

Scoring guide:

  • 5: Regular data quality audits, standardized entry, minimal errors
  • 3: Inconsistent quality, some missing data, sporadic validation
  • 1: Known accuracy issues, significant gaps, no quality standards

Why it matters: Garbage in, garbage out. AI trained on bad data produces bad results.

Reality check: If staff don't trust your current reports, they won't trust AI outputs either.


5. Technical Infrastructure

What to assess: Do you have the IT foundation to support AI tools?

Scoring guide:

  • 5: Cloud-ready, modern APIs, solid security, good uptime
  • 3: Mix of legacy and modern systems, some integration challenges
  • 1: Primarily on-prem legacy systems, limited API access

Why it matters: Many AI tools are cloud-based SaaS. If you can't integrate them with your existing systems, their value will be limited.

Pro tip: Start with standalone AI tools that don't require deep integration, then upgrade infrastructure as you prove value.


6. Technical Skills

What to assess: Does your team have AI/data science skills, or access to them?

Scoring guide:

  • 5: In-house data scientists or AI specialists
  • 3: Technical staff who can learn, plus budget for training/contractors
  • 1: No technical capacity and no plan to build it

Why it matters: Someone needs to evaluate vendors, interpret results, and troubleshoot issues.

Good news: You don't need PhDs. Practical AI implementation needs translators more than researchers.


7. Change Management Capability

What to assess: Can your organization successfully adopt new processes and tools?

Scoring guide:

  • 5: Track record of successful technology adoption, change champions exist
  • 3: Mixed results, some resistance but manageable
  • 1: History of failed rollouts, entrenched "we've always done it this way" culture

Why it matters: Technical success doesn't equal organizational success. User adoption is harder than implementation.

Reference: My research on organizational culture and systems shows that structure matters more than communication.


8. Governance & Policy

What to assess: Do you have policies for AI use, data privacy, and algorithmic decisions?

Scoring guide:

  • 5: Documented AI governance framework, ethical guidelines, privacy policies
  • 3: General IT policies but no AI-specific guidance
  • 1: No relevant policies or governance structure

Why it matters: AI raises new questions about transparency, bias, and accountability. Define guardrails early.

Start here: Adapt existing IT policies. You don't need perfection, just clear principles.


9. Budget & Resources

What to assess: Have you secured funding for both pilot and scale-up phases?

Scoring guide:

  • 5: Multi-year budget allocated, includes pilot + scaling + ongoing costs
  • 3: Pilot budget secured, scaling contingent on results
  • 1: No dedicated budget, hoping to "find money" later

Why it matters: Pilots without scale-up budgets die even when successful.

Real talk: Budget for 3x your pilot cost to scale properly. If you can't, pick a smaller pilot.


10. Vendor/Partner Ecosystem

What to assess: Have you identified potential technology partners or vendors?

Scoring guide:

  • 5: Vetted 3+ vendors, clear evaluation criteria, pilot partner selected
  • 3: Initial research done, aware of options, no formal evaluation yet
  • 1: No vendor research, don't know what's available

Why it matters: Build vs. buy decisions require knowing what's available. Most local governments should buy.

Tip: Talk to peer agencies who've implemented AI. Learn from their vendor experiences.


11. Legal & Compliance Clarity

What to assess: Do you understand legal constraints on AI use in your domain?

Scoring guide:

  • 5: Legal review completed, compliance requirements documented
  • 3: Aware of key issues (privacy, procurement, records), need formal review
  • 1: Haven't considered legal implications

Why it matters: Public sector faces unique constraints around transparency, public records, and procurement.

Action: Brief your legal team early. They're allies, not obstacles (usually).


12. Success Metrics Defined

What to assess: Can you measure whether AI is working?

Scoring guide:

  • 5: Clear KPIs defined, baseline measured, tracking system ready
  • 3: General ideas about metrics but not formalized
  • 1: "We'll know it when we see it" approach

Why it matters: You can't improve what you don't measure. And you can't justify scaling without proof.

Template:

  • Time saved: Currently takes X hours/week, target Y hours/week
  • Cost impact: Currently costs $X/year, target $Y/year
  • Quality improvement: Current error rate X%, target Y%
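The template above is simple enough to compute by hand, but writing it down as a worksheet keeps the arithmetic honest. Here is a hedged sketch; every figure in the example is a hypothetical illustration, not a real benchmark, and the `pilot_metrics` helper is my own naming.

```python
# Hypothetical pilot-metrics worksheet following the template above.
# Inputs: weekly hours before/after, loaded hourly labor cost, and
# error rates (%) before/after. All example figures are assumptions.

def pilot_metrics(hours_before, hours_after, hourly_cost,
                  errors_before, errors_after):
    """Return (weekly hours saved, annualized savings, error-rate drop)."""
    hours_saved = hours_before - hours_after           # hours per week
    annual_savings = hours_saved * hourly_cost * 52    # dollars per year
    error_reduction = errors_before - errors_after     # percentage points
    return hours_saved, annual_savings, error_reduction

# Example: a permit-intake process that takes 20 hrs/week today and a
# target of 12 hrs/week with AI assistance, at $45/hr loaded cost
hours, dollars, errors = pilot_metrics(20, 12, 45.0, 8.0, 3.0)
# hours == 8, dollars == 18720.0, errors == 5.0
```

Measure the "before" numbers now, while you still can: establishing the baseline after the pilot starts is the most common way these metrics fall apart.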

Your Scorecard Action Plan

If You Scored 48-60 (Ready to Launch)

Next steps:

  1. Select your pilot use case using the Goldilocks criteria
  2. Create 90-day pilot plan with clear exit criteria
  3. Form project team with cross-functional representation
  4. Launch pilot and track metrics weekly

Timeline: Start pilot within 30 days.

If You Scored 36-47 (Addressable Gaps)

Next steps:

  1. Identify your 2-3 lowest-scoring dimensions
  2. Create action plans to improve each to at least a "3"
  3. Focus on quick wins (governance policies, vendor research)
  4. Reassess in 60 days

Timeline: Address gaps for 60-90 days before launching pilot.

Priority fixes:

  • Data access/quality issues (work with IT to catalog and clean data)
  • Governance gaps (draft AI use policy, get legal review)
  • Leadership buy-in (create business case, present to executives)

If You Scored 24-35 (Foundation Work Needed)

Next steps:

  1. Don't start an AI pilot yet—you'll likely fail
  2. Focus on foundational improvements:
    • Digitize paper processes
    • Improve data quality and accessibility
    • Build change management capability
    • Upgrade technical infrastructure
  3. Consider hiring a consultant for a readiness assessment
  4. Reassess in 6 months

Timeline: 6-12 months of foundation work before AI pilot.

Silver lining: The work you do now will pay dividends beyond AI (better data, modern systems, change-ready culture).

If You Scored Below 24 (Premature for AI)

Honest assessment: You're not ready for AI initiatives yet, and that's okay.

Better investments right now:

  1. Basic digital transformation (move from paper to digital)
  2. Data governance and quality improvement
  3. Staff training on existing systems
  4. Infrastructure modernization

Timeline: Come back to AI in 12-18 months after foundational work.

Perspective: Many successful AI implementations come from organizations that spent years building solid digital foundations first.


Download the Template

I've created a printable AI Readiness Scorecard with:

  • Scoring worksheet for your leadership team
  • Action plan templates for each dimension
  • Gap analysis matrix
  • Vendor evaluation criteria

Download the free template →


Questions?

Every organization's readiness journey is different. If you're unsure where to start or need help interpreting your score, let's talk.

I offer AI Readiness Assessments for local government agencies—we'll work through this framework together and create your specific action plan.

Get More Insights Like This

Join 1,000+ local government leaders receiving weekly case studies, frameworks, and practical AI guidance.

Subscribe to Newsletter