The Monday Morning Method: Start AI This Week — Without the Pilot Graveyard
Chris Duffy
Nov 10, 2025 • 8 Min Read
The pilot graveyard is real.
Most organisations have one. It is the collection of AI tools that were enthusiastically adopted for six weeks, produced some promising early results, and then quietly fell out of use. Nobody formally abandoned them; they just stopped being talked about. The licence renewals get cancelled six months later.
The graveyard is not populated by bad tools. It is populated by tools that were deployed without a foundation. No documented baseline, so nobody could demonstrate the improvement. No defined use case, so the tool was used inconsistently. No trained champions, so when the early adopters moved on to other priorities, there was nobody to maintain momentum. No governance, so cautious users stayed cautious.
The Monday Morning Method is the four-week sequence that builds the foundation before any tool is deployed.
It is called the Monday Morning Method because that is when these decisions tend to get made. You come back from a conference, or read something that resonates, or have a conversation with a competitor that unsettles you, and on Monday morning you want to do something about AI. This is the plan for that morning.
Why the sequence matters
There is a specific reason why process and scoping work precedes tool selection, and it is not aesthetic. It is practical.
Tools selected without a defined use case get used in the lowest-friction way, which is usually the highest-visibility but lowest-impact way. Teams use the AI assistant to write emails they could write themselves in five minutes. They use the automation tool for a simple task that saves thirty seconds a week. The results are marginal, the enthusiasm fades, and the conclusion — that AI is not transformative for this business — becomes a self-fulfilling belief.
Tools selected against a specific, measured, documented requirement are deployed with purpose. Users know what the tool is for, what success looks like, and what they are measuring. The results are attributable. The confidence is earned.
The sequence produces the second outcome. The temptation to skip to the tools produces the first.
Week 1: Identify
The first week has one output: a clearly identified candidate process for AI deployment.
Not a vague aspiration. Not "we want to use AI for customer service." A specific, named workflow: the process by which incoming service enquiries are categorised, assigned, and responded to. The process by which weekly management reports are assembled. The process by which new product listings are created from supplier data.
The identification filter is the three signs from the 3-5-4 Method.
Look for processes that are repetitive — the same steps, multiple times a week, performed by multiple people. Look for processes that are rules-based but time-consuming — where the output follows a clear pattern but generating it takes significant manual effort. Look for processes with manual data transfer — where information moves from one system to another via a human being.
The candidate process is the one that shows all three signs.
The practical exercise for Week 1: gather the team for 90 minutes. Ask everyone to write down the three tasks that consume the most time in their week. Collect the answers. Look for the processes that appear most frequently, that show the most signs, and that have the clearest measurable outcomes.
That process is your Week 1 output.
If three processes emerge as equally strong candidates, apply one more filter: which one has the clearest before/after measurement? The one you can most easily measure is the right starting point, because a pilot you cannot measure is a pilot you cannot make the case for.
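If it helps to make the shortlist concrete, here is a minimal sketch of how the Week 1 output could be recorded in a short script or spreadsheet. The process names, mention counts, and flags are hypothetical placeholders, not part of the method itself.

```python
# A sketch of the Week 1 shortlist. Names, counts, and flags are
# hypothetical placeholders; replace them with your workshop output.

candidates = [
    {"name": "Service enquiry triage", "mentions": 7,
     "repetitive": True, "rules_based": True, "manual_transfer": True, "measurable": True},
    {"name": "Weekly management report", "mentions": 5,
     "repetitive": True, "rules_based": True, "manual_transfer": True, "measurable": True},
    {"name": "Ad hoc customer emails", "mentions": 4,
     "repetitive": True, "rules_based": False, "manual_transfer": False, "measurable": False},
]

def signs(c):
    # Count how many of the three signs this candidate shows.
    return sum([c["repetitive"], c["rules_based"], c["manual_transfer"]])

# Keep only candidates showing all three signs, then prefer the one that is
# easiest to measure and mentioned most often by the team.
shortlist = [c for c in candidates if signs(c) == 3]
shortlist.sort(key=lambda c: (c["measurable"], c["mentions"]), reverse=True)

for c in shortlist:
    print(f"{c['name']}: {signs(c)}/3 signs, mentioned {c['mentions']} times")
```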
Week 2: Map
The second week has one output: a complete, measured workflow map of your identified process.
This is the most important week in the entire sequence, and the one most often cut short. The temptation is to get to tools. Resist it. The map is what everything else depends on.
Use the five mapping questions from the 3-5-4 Method.
What triggers the process? What are the 4-8 steps, in the order they actually happen? Where are the handoffs between people or systems? How long does each step actually take — not estimated, measured? Where is the primary bottleneck or error risk?
The practical exercise: have the person who actually does this task walk you through it step by step while you observe or record. Then have them do it and time each step. Do this for three to five instances of the process to establish a realistic range. The average of those measurements becomes your baseline.
The baseline is the number you will compare against after deployment. Without it, you cannot calculate ROI. Without ROI, you cannot make the case for the next phase. Without the next phase, the pilot stays a pilot.
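For readers who want the arithmetic laid out, here is a minimal sketch of turning timed instances into a baseline and a rough annual cost of the current state. The timings, weekly volume, and hourly rate are hypothetical; substitute your own measurements.

```python
# A sketch of the Week 2 baseline. The timings, weekly volume, and hourly
# rate below are hypothetical; substitute your own measurements.

observed_minutes = [42, 38, 51, 45]   # minutes for each timed run of the process

baseline_minutes = sum(observed_minutes) / len(observed_minutes)

instances_per_week = 30               # hypothetical volume
hourly_cost = 35.0                    # hypothetical fully loaded cost, GBP

annual_hours = baseline_minutes / 60 * instances_per_week * 52
annual_cost = annual_hours * hourly_cost

print(f"Baseline: {baseline_minutes:.1f} minutes per instance")
print(f"Current state: {annual_hours:.0f} hours and £{annual_cost:,.0f} per year")
```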
Document the map clearly enough that someone who has never done the task could understand the current state. That level of clarity is also what makes the automation specification obvious — you cannot automate what you cannot describe.
Week 3: Decide
The third week has one output: a clear set of four decisions about what to automate, what stays human, and what success means.
These are the four decisions from the 3-5-4 Method.
What stays human? Identify every step in your workflow map that requires judgement, contextual knowledge, relationship awareness, or handling of exceptions. These do not get automated. They get made faster and better by humans whose mechanical work has been removed.
What gets automated? Every step that involves calculation, format conversion, standard data entry, or information transfer between systems. These are the mechanical elements — time-consuming but not judgement-dependent.
What is the 80% case? Define the standard scenario your automation will handle completely. Define the criteria that make a workflow instance non-standard and route it to human review. The clearer this boundary, the more reliable the automation — and the more confidence users will have in it.
What does success look like in numbers? Using your Week 2 baseline measurements, define what you will measure after deployment and what the target is. Time saved per instance. Error rate before and after. Volume capacity increase. These targets should be ambitious enough to justify the investment and conservative enough to be achievable in the pilot window.
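As an illustration only, here is a minimal sketch of recording the Week 3 targets against the Week 2 baseline and checking them against pilot data at the review. All figures are hypothetical placeholders.

```python
# A sketch of the Week 3 success definition: targets set against the Week 2
# baseline, then checked against pilot data at the review. All figures are
# hypothetical placeholders.

baseline = {"minutes_per_instance": 44.0, "error_rate": 0.08}
targets = {"minutes_per_instance": 20.0, "error_rate": 0.03}

# Filled in once the pilot has run; lower is better for both metrics here.
pilot_results = {"minutes_per_instance": 18.5, "error_rate": 0.02}

for metric, target in targets.items():
    before, after = baseline[metric], pilot_results[metric]
    status = "met" if after <= target else "missed"
    print(f"{metric}: {before} -> {after} (target {target}, {status})")
```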
The practical exercise: conduct a 90-minute workshop with the process owner and at least two regular users of the workflow. Walk through each step on the map. Make the four decisions collaboratively. Document the output.
This workshop also serves a change management function. People who were part of the decision about what to automate and what stays human are significantly less likely to resist the resulting system. Involvement creates ownership.
Week 4: Select and deploy
With a mapped process, a documented baseline, and four clear decisions, you are now ready to select a tool.
The tool selection question is no longer "what is the best AI tool?" It is "which tool most directly addresses my documented requirement at an acceptable cost?"
Evaluate every candidate against five vendor questions.
Does it integrate with the systems already in use? A tool that requires extensive integration work before it produces any value extends the payback period significantly and creates technical debt.
Where is data stored, and is it GDPR compliant? For UK businesses, data residency matters — not just for compliance, but for the trust of regulated clients and partners.
Can we extract our data if we change tools? Vendor lock-in is a real risk in a rapidly evolving market. Ensure portability before committing.
What is the total cost — setup, subscription, training, and ongoing support? The licence fee is rarely the largest cost. Understanding the full picture prevents budget surprises at the point of renewal.
Does the vendor provide adoption support? A vendor whose commercial interest extends beyond the sale to actual adoption of their product is a meaningfully better partner than one whose interest ends at the contract signature.
The tool that scores best against your documented requirement is the right tool for this use case. Not the most impressive demo. Not the most recent product launch. The one that fits.
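One way to keep that comparison honest is a simple weighted score against the five questions. The sketch below is illustrative rather than prescriptive: the tool names, weights, and 1-5 scores are hypothetical and should reflect your own documented requirement.

```python
# A sketch of scoring shortlisted tools against the five vendor questions.
# Tool names, weights, and 1-5 scores are hypothetical; set the weights to
# reflect your own documented requirement.

weights = {
    "integration": 0.30,
    "data_residency": 0.20,
    "data_portability": 0.15,
    "total_cost": 0.20,
    "adoption_support": 0.15,
}

tools = {
    "Tool A": {"integration": 4, "data_residency": 5, "data_portability": 3,
               "total_cost": 3, "adoption_support": 4},
    "Tool B": {"integration": 2, "data_residency": 4, "data_portability": 4,
               "total_cost": 5, "adoption_support": 2},
}

def weighted_score(scores):
    # Combine per-question scores using the agreed weights (maximum 5.0).
    return sum(weights[q] * scores[q] for q in weights)

for name, scores in sorted(tools.items(), key=lambda kv: weighted_score(kv[1]), reverse=True):
    print(f"{name}: {weighted_score(scores):.2f} out of 5")
```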
In Week 4, also identify your pilot group and your champions. The pilot group should be large enough to produce meaningful data — typically 8-15 people — and small enough to support intensively. The champions should be people who are already curious about the technology and willing to model usage publicly for their colleagues.
Deploy to the pilot group with proper training — minimum two hours, covering what the tool does, what it does not do, and what governance applies. Launch with a feedback mechanism. Set a review date four weeks out.
What comes next
After four weeks, the Monday Morning Method has produced a deployed pilot with a trained group, a documented baseline, and a clear success measurement framework.
The four weeks after that — weeks 5-8 from the original Monday — produce the data. At the end of week 8, you have enough evidence to make a scale decision: expand to additional teams, iterate on the current deployment, or stop and try a different use case.
That decision is the one that separates the organisations in the 26% that scale AI successfully from the 74% that do not. It is made at week 8, not week 52. It is made on documented evidence, not instinct.
The organisations that follow this sequence — identify, map, decide, deploy — consistently achieve higher adoption rates and faster time to measurable outcomes than those that start with tool procurement. The work in weeks 1-3 is unglamorous. The results in weeks 5-8 are not.
The only question is which Monday you start.
If you would like support applying this framework in your business, or if you want a more comprehensive readiness assessment before committing to a pilot, the SPARK Assessment maps your AI readiness across 18 dimensions in two weeks.
Find out more: igniteaisolutions.co.uk
Chris Duffy is the Founder and Chief AI Officer at Ignite AI Solutions, helping UK SMEs implement AI that actually works. With 23 years in UK Defence including Special Forces, he brings security clearance, military execution discipline, and a culture-first methodology to AI transformation. His clients consistently achieve 85%+ adoption rates against an industry average of 35-50%.
Website: igniteaisolutions.co.uk
LinkedIn: linkedin.com/in/christopher-duffy-caio