TL;DR
- Most AI projects don't deliver real value. Studies show the majority of AI and gen-AI initiatives fail to produce measurable revenue or productivity gains.
- You don't need a PhD to choose good AI ideas. A simple 3-question filter can help you decide what's worth funding and what's just hype.
- Every serious AI idea must clearly hit one of two goals: grow revenue or reduce meaningful cost/risk - ideally both.
- No data, no value. If you don't have usable data as context, even the best model can't help you much.
- If you can't measure before/after, don't do it. AI without clear metrics becomes an expensive science experiment.
Why you need a brutally simple test for AI ideas
If you're an executive or owner of an SMB in the US or Israel, you're probably hearing "We should do something with AI" in almost every leadership meeting.
At the same time, most companies are not seeing real business value from their AI spend. A recent MIT-backed study found that about 95% of generative AI projects fail to show any meaningful financial uplift, despite billions invested.
A BCG report similarly found that only about 5% of companies are truly capturing value from AI; the rest see little or no earnings impact.
The problem isn't the technology. It's that:
- Ideas aren't tied tightly enough to revenue, cost, or risk
- Companies lack usable data to power the AI
- There's no clear way to measure before/after impact
So instead of giving you another 40-page AI strategy, here's a three-question test you can apply to any AI idea in under 10 minutes.
If an idea fails on any of these, you should pause or kill it before you spend serious money.
Question 1: Does this clearly grow revenue or reduce a meaningful cost or risk?
If this works, where does the money show up?
Every AI initiative you consider should make it easy to answer at least one of these:
- Revenue:
  - Will this help us close more deals?
  - Will it increase average deal size or lifetime value?
  - Will it unlock new products/services we can sell?
- Cost / Time:
  - Will it reduce manual work enough that we can handle more volume with the same team?
  - Will it shorten cycle times (sales cycles, ticket resolution, invoice processing)?
- Risk / Complexity:
  - Will it reduce errors that could lead to financial or legal risk?
  - Will it simplify a process that is currently fragile and dependent on one or two key people?
High-performing companies that actually extract value from AI treat it as a tool to hit these business levers, not as a cool demo.
Quick self-check:
- If you had to remove the word "AI" from the slide, would this still be a good business project?
- Can you point to a specific line item in your P&L this idea will affect (revenue, COGS, payroll, bad debt, etc.)?
- Is the potential impact material (for example, ≥5–10% change in that metric), not just a 1–2% nice-to-have?
If you can't answer yes here, the idea is probably interesting but not needle-moving.
Question 2: Do we have real, usable data to feed this?
You've heard the phrase: "AI is only as good as your data." It shows up over and over in research from AWS, MIT, and across the industry. McKinsey's recent State of AI report found that 70% of organizations struggle with data issues like governance, integration, and simply not having enough data to make AI useful.
For SMBs, "data" doesn't mean fancy data lakes. It usually means:
- CRM records (even if incomplete)
- Email threads with leads and customers
- Website and e-commerce analytics
- Support tickets and call logs
- Spreadsheets your team lives in every day
The key questions are:
- Availability:
  - Can we export a meaningful sample (e.g., last 6–12 months) into a spreadsheet within a week?
- Basic quality:
  - Do our records have the basics filled in (amounts, dates, statuses, names)?
  - Would a human reading 50 random records understand what's going on?
If the idea depends on data you wish you had but don't actually have in a usable way, you're not looking at an AI project. You're looking at a data-foundation project first.
Rule of thumb:
If you can't inspect the underlying data in a simple table, don't expect an AI model to magically turn it into value.
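If you want a concrete version of that inspection, a short script will do it. Here's a minimal sketch in Python, assuming a hypothetical export named crm_export.csv with columns like amount, close_date, status, and contact_name; swap in your own file and field names:

```python
# Quick data-quality check on an exported CRM sample.
# Assumes a hypothetical file "crm_export.csv" with columns such as
# "amount", "close_date", "status", and "contact_name" -- adjust to your export.
import pandas as pd

df = pd.read_csv("crm_export.csv")
print(f"Records in sample: {len(df)}")

# How complete are the basics? Report the share of missing values per key column.
key_columns = ["amount", "close_date", "status", "contact_name"]
for col in key_columns:
    if col in df.columns:
        missing_pct = df[col].isna().mean() * 100
        print(f"{col}: {missing_pct:.1f}% missing")
    else:
        print(f"{col}: column not found in export")

# Pull 50 random records for a human read-through -- would someone
# looking at these rows understand what's going on?
sample = df.sample(n=min(50, len(df)), random_state=42)
sample.to_csv("human_review_sample.csv", index=False)
```

If the missing-value percentages are high, or the 50-record sample doesn't make sense to a human reader, you've just found your data-foundation project.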
Question 3: Can we measure before/after impact in 90 days or less?
Even when an idea is tied to value and has data behind it, many companies still fail at the final hurdle: measurement.
Practitioners and analysts consistently recommend treating AI like any other investment: define clear metrics and baselines, then track change.
For SMBs, keep it simple and concrete:
- Sales examples:
  - Average response time to inbound leads
  - Conversion rate from lead → meeting → closed-won
  - Revenue per rep or per marketing dollar
- Operations examples:
  - Tickets resolved per agent per day
  - Average handling time per request
  - Number of errors, disputes, or chargebacks
- Risk / quality examples:
  - Number of compliance exceptions
  - Rate of data-entry mistakes in critical systems
For each AI idea, ask:
- What single metric will we watch?
- What's the baseline?
  - e.g., average 42 hours to respond to a new lead; 12 tickets per agent per day
- What's the target after 60–90 days?
  - e.g., response time under 1 hour; 20 tickets per agent per day
If you cannot define a baseline and target within a one-hour discussion, it's a warning sign. You're probably about to fund a science experiment instead of a business initiative.
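To make that one-hour discussion concrete, it helps to write the metric down in a form that can't be fudged later. A minimal sketch, reusing the illustrative numbers from the examples above (the names and values are placeholders, not recommendations):

```python
# Minimal before/after tracking for an AI pilot metric.
# Metric names and numbers below are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class PilotMetric:
    name: str
    baseline: float
    target: float
    current: float
    higher_is_better: bool = True

    def hit_target(self) -> bool:
        if self.higher_is_better:
            return self.current >= self.target
        return self.current <= self.target

    def change_pct(self) -> float:
        # Percent change from the baseline, signed.
        return (self.current - self.baseline) / self.baseline * 100

metrics = [
    PilotMetric("lead response time (hours)", baseline=42, target=1,
                current=3, higher_is_better=False),
    PilotMetric("tickets per agent per day", baseline=12, target=20, current=17),
]

for m in metrics:
    status = "on target" if m.hit_target() else "below target"
    print(f"{m.name}: {m.change_pct():+.0f}% vs baseline ({status})")
```

The point isn't the code; it's that baseline, target, and current value are pinned down as numbers before the pilot starts.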
Putting it into practice this week
Here's how to use this framework in the real world:
Step 1 – List your AI ideas
Gather your top 3–5 AI ideas currently floating around:
- "AI assistant for customer support"
- "AI for outbound prospecting"
- "AI to summarize internal documents"
- …whatever is on your roadmap or in your inbox
Write them down in a simple table.
Step 2 – Score each idea against the three questions
For each idea, give a score from 1–5 on:
- Value: How clearly does it drive revenue or reduce a meaningful cost/risk?
- Data: How confident are we that we have usable data available within a week?
- Measurement: How easy is it to define a before/after metric for 90 days?
You'll quickly see patterns:
- Some ideas are high-value but blocked by missing data
- Others are easy to try but barely affect revenue, risk, or time
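If you'd rather not eyeball the table, the same scoring takes a few lines of code. A minimal sketch, with made-up ideas and scores purely for illustration:

```python
# Score each AI idea 1-5 on Value, Data, and Measurement, then rank by total.
# Ideas and scores below are illustrative placeholders -- use your own.
ideas = {
    "AI assistant for customer support": {"value": 4, "data": 5, "measurement": 5},
    "AI for outbound prospecting": {"value": 5, "data": 2, "measurement": 4},
    "AI to summarize internal documents": {"value": 2, "data": 4, "measurement": 2},
}

ranked = sorted(ideas.items(), key=lambda kv: sum(kv[1].values()), reverse=True)

for name, scores in ranked:
    total = sum(scores.values())
    # Flag ideas that fail any single question badly, whatever the total.
    flag = " (blocked: low score on one question)" if min(scores.values()) <= 2 else ""
    print(f"{total:>2}/15  {name}{flag}")
```

The flag on a low single score matters: a high total can hide an idea that fails one of the three questions outright.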
Step 3 – Make one decision
By the end of this activity, decide:
- One idea to prioritize for a 60–90 day pilot
- Ideas to park until you fix data foundations
- Ideas to kill because they don't move the needle
A next step that doesn't lock you into anything
If you already have an AI backlog, the next helpful step isn't "buy a platform" or "hire ten data scientists."
A better move is to:
- Use this three-question test with your leadership team
- Shortlist one use case with:
  - Clear value (revenue, cost, or risk)
  - Accessible data
  - Measurable before/after metrics in 90 days
Then, whether you work with your internal team or an external AI partner, you can design a small, low-risk pilot around that one use case.
No big transformation, no hype — just a disciplined way to ensure your next AI project actually moves the needle instead of adding more complexity to your business.
