Most AI projects don’t fail because the technology is bad. They fail because the organization wasn’t ready when they started, and the cost of finding that out the hard way is months of effort and a wary team that won’t want to try again.
The good news is that AI readiness is something you can assess in about thirty minutes, before you spend a dollar. This piece walks you through the ten questions we ask every client during our discovery process, four warning signs that mean you should fix something else first, and a clear way to interpret your answers.
What “AI ready” actually means
AI readiness is not about having a data science team or a budget line for machine learning. For a small or mid-size organization, it means three simple things:
- You have a real problem that’s worth solving
- You have the operational basics in place to support a new tool
- You have the people and decision-making capacity to see a project through
If any of those three legs is missing, even the best AI tool will struggle. If all three are in place, you can almost certainly run a successful pilot in 90 days using off-the-shelf software and a modest budget.
Let’s check.
The ten-question AI readiness self-assessment
Answer these honestly. Score 1 point for each “yes.” We’ll talk about what your total means at the end.
1. Can you name a specific, recurring problem you want to solve?
Not “we want to use AI.” An actual problem. “Our team spends ten hours a week answering the same five customer questions.” “We lose half our after-hours leads because nobody responds until morning.” “Our intake forms take three days to triage.” If you can describe the problem in one sentence with a number in it, score a point.
2. Does that problem happen often enough to matter?
AI is most valuable for tasks that happen dozens or hundreds of times a month. If your candidate problem happens twice a week, the time you’d save isn’t worth the setup cost. Score a point if your problem occurs at least 50 times a month.
3. Do you have the data the AI would need to do its job?
AI can’t answer customer questions if you don’t have written answers somewhere. It can’t summarize meetings if no one records them. It can’t qualify leads if your CRM is empty. Score a point if the inputs the AI would need already exist in some usable form, even if they’re scattered.
4. Is the cost of an imperfect answer low?
Early AI use cases should be tasks where a draft is more useful than a blank page and a human reviews the output. Score a point if a wrong answer would be embarrassing but not catastrophic: drafted emails, not signed contracts. Auto-approving refunds and giving medical advice are not where to start.
5. Is there a documented workflow around the task?
AI is much easier to add to an existing process than to invent one. Score a point if your team already has a defined way of handling the work (even an informal one) that you could describe to a new hire on day one.
6. Can you name the person who will own the project?
Every successful rollout has a human owner: not necessarily a developer, just someone who cares enough to babysit the tool through its first few weeks. Score a point if you can name that person right now and they have at least five hours a week to spare.
7. Does leadership actually want this?
AI projects need air cover. If the executive director, owner, or department head is lukewarm, the project will quietly die the first time it hits friction. Score a point if leadership is genuinely enthusiastic, not just willing.
8. Do you have a realistic budget?
For most first AI projects, a realistic three-month budget is $1,500 to $10,000, covering software, setup, and any outside help. Score a point if you have that available without needing to write a board memo for it. (If you don’t, that’s not a deal-breaker. It just means your first move should be a smaller free or low-cost trial.)
9. Are you willing to run a pilot before you roll out?
The teams that succeed pilot with one group, measure honestly, and only expand if it works. The teams that fail try to deploy organization-wide on day one. Score a point if you’re prepared to spend the first 90 days proving the tool works on a small scale.
10. Can you define what success looks like in numbers?
“Save time” is not a success metric. “Reduce average response time from 8 hours to under 30 minutes” is. Score a point if you can write down a measurable goal for the project before you start.
Scoring your results
Add up your points.
8–10 points: You’re ready. You have a real problem, the data and workflow to support a solution, a named owner, leadership backing, and a clear measure of success. Pick your highest-priority use case and start a 90-day pilot. The AI Implementation Guide walks through how to plan and run it.
5–7 points: You’re almost ready. You have most of what you need, but there are one or two gaps to close before you start. Look at the questions you answered “no” to and treat those as your prep work. Most teams in this range are 30 to 60 days away from being fully ready, and the prep itself is usually quick once you know what to focus on.
Below 5 points: Not yet, and that’s okay. Implementing AI before the foundations are in place almost always wastes money. Spend the next quarter fixing the underlying gaps: define a real problem, document a workflow, identify a project owner, get leadership aligned. Then come back to the assessment. You’ll save yourself a frustrating false start.
Four warning signs you’re not ready
Beyond the scorecard, there are a few patterns that should give you pause. If any of these describe your situation, address them before you start any AI work.
Warning sign 1: you’re shopping for tools before you’ve defined the problem
If your team has been demoing AI platforms for months without ever writing down the specific problem you want to solve, you're in tools-first mode. Stop. Run the four-question discovery exercise from our AI Implementation Guide before you sit through one more sales demo.
Warning sign 2: your data lives in people’s heads
If the knowledge your AI would need is locked inside one or two long-tenured employees and isn’t documented anywhere, the AI won’t have anything to work with. Spend a few weeks capturing that institutional knowledge (even rough notes are better than nothing) before you bring in a tool that needs to read it.
Warning sign 3: your existing workflows are broken
AI doesn't fix broken processes. It scales them: if your customer service triage is chaotic today, AI will give you chaotic triage at higher volume tomorrow. If you suspect a workflow is fundamentally broken, fix the workflow before you automate it.
Warning sign 4: there’s no clear owner
If everyone in the room nods enthusiastically when AI comes up but no one volunteers to actually run the project, it won't happen. AI tools without a named human owner decay into shelfware in about six weeks. If you can't identify that person, treat the staffing gap as the first problem to solve.
What to do if you’re not quite ready
Falling short on a few questions isn’t the end of the road. It usually means you’re 30 to 90 days of focused prep work away from being ready to start. Here’s what to do in the meantime:
- Pick one problem to study. You don’t need a long list. One specific, well-defined problem is enough to start once you’re ready.
- Document the workflow you’re considering automating. Write it down end to end. The act of documenting will surface issues you didn’t realize were there.
- Identify your owner. Have the conversation. Make sure they want the role and have the time. Get it in writing if you need to.
- Get leadership aligned. Walk the executive team through what an AI pilot would look like, what it would cost, and what success would mean. Get explicit buy-in, not vague nodding.
- Audit your data. Where does the relevant information live today? Is it in one system, ten systems, or someone’s head? Clean it up before you ask AI to read it.
Do those five things in a quarter and you’ll come back to this assessment with a much higher score.
Why this matters more than the tools you pick
The single most reliable predictor of AI success in a small organization isn’t the platform, the model, or the budget. It’s whether the organization had its act together before it started. We see the same pattern across every industry we work with, from professional services to nonprofits to home services.
Two businesses can buy the exact same tool, on the exact same day, and end up in completely different places six months later. The one that started with a defined problem, an owner, leadership backing, and a willingness to pilot will have a quietly successful AI workflow saving real time. The one that started with “we should do something with AI” will have an unused subscription and a team that’s now skeptical of the whole idea.
This assessment exists to make sure you’re in the first group.
Your next step
If you scored well on the assessment, your next move is to pick one use case and run a 90-day pilot. Our AI Implementation Guide is the playbook to follow.
If you scored in the middle and want a second opinion on what to fix first (or if you want help running the pilot itself once you're ready), that's exactly what our AI Implementation Consulting service is built to do. Schedule a free strategy call and we'll walk through your specific situation, the gaps you should close, and the realistic next 90 days for your organization. No sales pitch, just a clear conversation about whether AI is the right move for you right now.