The 3-5-4 Method: How to Find Your Highest-Impact AI Opportunity in One Morning
Chris Duffy
Nov 17, 2025 • 9 Min Read
The most common reason AI projects fail before they start is not technical and not financial.
It is this: the business chose a tool before it understood its problem.
Someone went to a conference, saw a vendor demo, came back enthusiastic, bought a licence, pointed it at their business, and waited for transformation. When it did not come — or came only partially, only for two of the ten people who were supposed to use it — the conclusion was that "AI doesn't work here."
The conclusion was wrong. The process was wrong.
The tool was probably fine. The problem was that nobody had done the work to understand which specific workflow the tool should address, why that workflow was the right priority, and what "success" would actually look like in measurable terms.
That work is what the 3-5-4 Method does.
Why process first
There is a structural reason why businesses default to tool-first thinking. Vendors are very good at demonstrating capability. They show you the impressive outputs — the polished document, the automated workflow, the conversion improvement. They are showing you what their tool can do. They are not showing you whether your specific problem is the right fit, or whether your team will adopt it, or whether the data quality is sufficient to support it.
Process-first thinking inverts this. You start with the problem. You map it precisely. You decide what should and should not be automated. Then — and only then — you evaluate tools against a specific requirement.
Businesses that follow this sequence achieve significantly higher adoption rates and produce results they can point to. Businesses that skip it end up with capable tools that nobody uses.
The 3-5-4 Method is the framework for the first part: finding and scoping the right problem before you touch any technology.
The Three Signs
The first step is identifying a candidate process. Not all processes are equal candidates for AI. The ones that produce genuine, measurable results share three observable characteristics.
Sign 1: "We do this every day."
Repetitive tasks with minimal variation between instances. Data entry. Daily reports. Standard correspondence. Routine categorisation. The hallmark of this type is that the person doing it could describe the exact steps without thinking — because they have done it hundreds or thousands of times.
These tasks are prime candidates because AI handles repetition well. The training cost is low, the output quality is consistent, and the time saving compounds quickly.
Sign 2: "It takes forever."
Rules-based tasks that are time-consuming but not genuinely complex. Client onboarding. Quotation generation. Compliance checks against a known checklist. Tender preparation using a structured template. The hallmark is that there are clear rules governing the output — it is just that applying those rules takes a significant amount of time.
These tasks are prime candidates because the rules can be encoded. The AI applies them consistently, without the variation that comes from doing a tedious task at 4pm on a Friday.
Sign 3: "We copy-paste this."
Manual data transfer between systems. Email to CRM. Website enquiry to quote template. Spreadsheet to accounting system. The hallmark is that information exists in one place and needs to be reproduced in another — and a human being is currently the mechanism for that transfer.
These tasks are prime candidates because they are pure mechanical work with no judgement content. There is no reason for a human to be doing them, and they represent significant accumulated time at scale.
A process that shows all three signs — repetitive, rules-based, and involving data transfer — is your highest-priority candidate. Start there.
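If it helps to make that triage concrete, here is a minimal sketch, in plain Python with invented process names, of one way to flag candidates against the three signs and rank them:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """A candidate process, flagged against the three signs."""
    name: str
    repetitive: bool      # Sign 1: "We do this every day."
    rules_based: bool     # Sign 2: "It takes forever." (clear rules, just slow)
    data_transfer: bool   # Sign 3: "We copy-paste this."

    def score(self) -> int:
        """Count how many of the three signs the process shows."""
        return sum([self.repetitive, self.rules_based, self.data_transfer])

# Hypothetical candidate processes, for illustration only.
candidates = [
    Candidate("Monthly management report", True, False, False),
    Candidate("Client onboarding pack", True, True, False),
    Candidate("Enquiry-to-quote transfer", True, True, True),
]

# Highest score first: a 3/3 process is where to start.
for c in sorted(candidates, key=lambda c: c.score(), reverse=True):
    print(f"{c.name}: {c.score()}/3 signs")
```

A 3/3 score does not guarantee a good automation; it simply tells you which process to map first.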
The Five Questions
Once you have a candidate process, the next step is mapping it precisely. The purpose of the mapping is to understand the workflow in enough detail to make intelligent automation decisions. Assumptions and approximations at this stage produce poor automation decisions later.
The five questions produce a complete map.
Question 1: What triggers this task?
What starts it? An incoming email? A calendar event? A customer action? A scheduled time? A field update in a system? The trigger defines the entry point for automation. If the trigger is inconsistent or unclear, that is itself important information about the workflow.
Question 2: What are the 4-8 steps in sequence?
Walk through the steps in the order they actually happen in practice, not the order they should happen in theory. Ask the person who does this task, not the person who designed the process. These are often different people.
Question 3: Where are the handoffs?
Where does the workflow move from one person to another, or from one system to another? Handoffs are where delays accumulate, errors occur, and information gets lost. They are also, often, the highest-value points for automation — because automating a handoff removes an entire category of friction.
Question 4: How long does each step actually take?
Measure this. Do not estimate. Ask the person doing the task to time themselves for one week. The actual figures are almost always different from the intuitive guess, often significantly. You cannot calculate ROI without a documented baseline, and you cannot document a baseline without actual measurement.
Question 5: Where is the primary bottleneck or error risk?
Where does the workflow slow down? Where do mistakes happen? Where does the output most frequently need to be corrected or redone? This is the point of maximum leverage — where AI can produce the most impact per unit of complexity.
The output of the five questions is a complete, measured workflow map. It typically takes two to three hours to produce properly — half a morning. The information it contains is the foundation for every subsequent decision.
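To show how little structure the map actually needs, here is a minimal sketch of one way to record it, assuming a plain Python data structure with invented steps and timings; the fields mirror the five questions:

```python
from dataclasses import dataclass

@dataclass
class Step:
    description: str
    owner: str                 # person or system responsible
    minutes_measured: float    # timed over a week, not estimated (Question 4)

@dataclass
class WorkflowMap:
    trigger: str               # Question 1: what starts the task
    steps: list[Step]          # Question 2: the 4-8 steps in the order they really happen
    handoffs: list[str]        # Question 3: person-to-person or system-to-system transfers
    bottleneck: str            # Question 5: where it slows down or errors occur

    def baseline_minutes(self) -> float:
        """Total measured time per instance: the baseline for later ROI."""
        return sum(s.minutes_measured for s in self.steps)

# Hypothetical example, for illustration only.
quote_process = WorkflowMap(
    trigger="Website enquiry form submitted",
    steps=[
        Step("Re-key enquiry details into CRM", "Sales admin", 12),
        Step("Draft quotation from template", "Sales admin", 25),
        Step("Manager reviews pricing", "Sales manager", 10),
        Step("Send quote and log in spreadsheet", "Sales admin", 8),
    ],
    handoffs=["Email inbox to CRM", "Sales admin to Sales manager"],
    bottleneck="Manager review queue; quotes can wait up to two days",
)

print(f"Baseline: {quote_process.baseline_minutes()} minutes per quote")
```

Whether it lives in code, a spreadsheet, or a one-page document matters far less than the fact that every field is filled in from observation rather than memory.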
The Four Decisions
With a mapped workflow, you have enough information to make four critical decisions about what to automate and what to keep human.
Decision 1: What stays human?
Not everything in a workflow should be automated, and attempting to automate judgement-dependent elements typically produces worse outcomes than leaving them human. Strategy belongs here. Decisions that depend on context not captured in the data. Relationships where the human element is part of the value. Exception handling for cases that fall outside the standard pattern.
The test for this decision is: could someone who had never worked in this business, given only the documented inputs, produce a correct output? If yes, the step can be automated. If no — if it requires institutional knowledge, contextual judgement, or relationship awareness — it stays human.
Decision 2: What gets automated?
Everything that passes the test above: calculations, standard format conversions, consistent data entry, information transfer between systems, standard compliance checks, first-draft generation from templates. These are the mechanical elements of the workflow — the ones consuming time without contributing judgement.
Decision 3: What is the 80% case?
No automated system handles every edge case perfectly. Attempting to build one that does creates a system that is expensive to build, complex to maintain, and often less reliable than a simpler one.
The 80% case is the standard scenario — the most common version of the task — that automation will handle completely. The 20% that falls outside that definition routes to a human review process. Defining the boundary clearly, in advance, is what makes the automation reliable.
A useful test: what makes a workflow instance non-standard? If you can list the criteria — unusual client type, non-standard format, regulatory complexity — you can build a routing rule. If you cannot list them, the automation needs more scope work before deployment.
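As a sketch of what such a routing rule might look like once the criteria are listed (the field names and criteria below are the hypothetical ones mentioned above, not a definitive set):

```python
def is_standard(instance: dict) -> bool:
    """Route an instance to automation only if no non-standard criteria apply.

    The criteria and field names are hypothetical; the real list comes from
    the mapping work, not from this sketch.
    """
    non_standard_reasons = [
        instance.get("client_type") == "unusual",
        instance.get("format") not in {"standard"},
        instance.get("regulatory_complexity", False),
    ]
    return not any(non_standard_reasons)

def route(instance: dict) -> str:
    # The 80% case is handled end to end; everything else goes to a person.
    return "automated_pipeline" if is_standard(instance) else "human_review"

print(route({"client_type": "standard", "format": "standard"}))           # automated_pipeline
print(route({"client_type": "standard", "regulatory_complexity": True}))  # human_review
```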
Decision 4: What does success look like in numbers?
Before any tool is deployed, define what you will measure and what the target is. This is the step most frequently skipped, and its absence is why many AI implementations produce outcomes that nobody can confidently assess as good or bad.
The metrics should come directly from the workflow map: time saved per instance, error rate reduction, volume capacity increase. They should be tied to the baseline figures from Question 4. And they should include a minimum acceptable threshold — the point below which the implementation does not justify the investment.
Success defined in advance survives the optimism that surrounds any new deployment. It also gives you something to point to when the pilot is complete.
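A minimal sketch of what "defined in advance" can look like, using invented figures rather than real ones; the baseline comes from the Question 4 measurement and the threshold is the minimum acceptable saving:

```python
# Hypothetical figures for illustration; the real ones come from the
# baseline measurement in Question 4 and the business's own threshold.
baseline_minutes_per_instance = 55
pilot_minutes_per_instance = 12
instances_per_month = 80
minimum_acceptable_saving_hours = 30   # below this, the investment is not justified

saved_hours_per_month = (
    (baseline_minutes_per_instance - pilot_minutes_per_instance)
    * instances_per_month / 60
)

print(f"Time saved: {saved_hours_per_month:.1f} hours/month")
print("Meets threshold" if saved_hours_per_month >= minimum_acceptable_saving_hours
      else "Below threshold: revisit scope before rollout")
```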
A worked example
A manufacturer processes ingredient cost changes twice a year. Each update requires changing prices in a spreadsheet, recalculating margins, updating the website, updating the accounting system, generating a new PDF price list, and emailing the sales team. The process involves multiple staff members and takes several days.
Applying the three signs: it is repetitive (same process twice a year, same steps each time), rules-based (the calculations follow fixed margin rules), and full of manual data transfer (every step involves moving numbers from one system to another). Strong candidate.
Mapping the five questions reveals that the process takes three to four days across multiple staff, involves six separate manual handoffs, and the primary error risk is pricing inconsistency between systems when updates are done in a different order on different days.
The four decisions: strategy and customer relationships stay human; every calculation, format conversion, and system update gets automated; the 80% case is standard ingredient changes (non-standard variations — new product ranges, promotional pricing — route to human review); success is defined as the entire update completing in under 30 minutes with zero inconsistency between systems.
The implementation reduces the process from multiple days to approximately 15 minutes, eliminates all six manual handoffs, and removes the error class of system inconsistency entirely.
That outcome was not produced by an impressive tool. It was produced by process work that happened before a tool was selected. The tool — a straightforward automation connecting the existing systems — became obvious once the map was complete.
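For a sense of how mechanical the automated portion is, here is a minimal sketch of the core margin calculation and the cross-system consistency check; the margin rule, product codes, and figures are invented for illustration:

```python
# Hypothetical margin rule and product data, for illustration only.
MARGIN = 0.35  # fixed margin rule taken from the workflow map

ingredient_costs = {"SKU-001": 4.20, "SKU-002": 7.85}  # new costs from the spreadsheet

def selling_price(cost: float, margin: float = MARGIN) -> float:
    """Apply the fixed margin rule and round to a sellable price."""
    return round(cost / (1 - margin), 2)

new_prices = {sku: selling_price(cost) for sku, cost in ingredient_costs.items()}

# In the real implementation these would be pushed to the website,
# accounting system, and PDF price list by the automation.
website_prices = dict(new_prices)
accounting_prices = dict(new_prices)

# The consistency check that removes the old error class: every system must
# hold exactly the same figures before the update counts as done.
assert website_prices == accounting_prices == new_prices, "Price mismatch between systems"
print(new_prices)
```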
What this is not
The 3-5-4 Method is a discovery and scoping framework. It is not an implementation methodology.
The output is a well-defined, measured AI use case with documented baseline metrics and clear success criteria. That output is what makes a sound implementation possible. It is not the implementation itself.
The typical time to complete the framework for one process is a focused half-day for the mapping, plus a week of baseline measurement before that. For businesses with multiple candidate processes, a more structured discovery process — like the SPARK Assessment — applies the same rigour across the full business systematically, scoring 18 dimensions of readiness and producing a prioritised 90-day roadmap.
But for a business that wants to start somewhere, and start properly: pick the process that shows all three signs. Answer the five questions. Make the four decisions. You will have more clarity about your AI opportunity than most businesses achieve after six months of tool procurement.
If you want to apply the 3-5-4 Method with structured support, or if you want a comprehensive readiness picture before committing to implementation, the SPARK Assessment is the starting point.
Find out more: igniteaisolutions.co.uk
Chris Duffy is the Founder and Chief AI Officer at Ignite AI Solutions, helping UK SMEs implement AI that actually works. With 23 years in UK Defence including Special Forces, he brings security clearance, military execution discipline, and a culture-first methodology to AI transformation. His clients consistently achieve 85%+ adoption rates against an industry average of 35-50%.
Website: igniteaisolutions.co.uk
LinkedIn: linkedin.com/in/christopher-duffy-caio