Start With the Problem, Not the Technology
The most common mistake firms make when approaching AI automation is leading with the technology rather than the problem. "We want to implement an AI solution" or "we want to use LLMs in our process" are not useful briefs. They put the cart before the horse and tend to lead to projects that look impressive technically but don't solve anything specific.
The most productive conversations start the other way: here is a task we do repeatedly, here is how long it currently takes and who does it, here is what the output needs to look like. That specificity is what allows a consultant to tell you quickly whether automation is technically feasible, what it would cost, and what the realistic time saving would be.
The Information That Actually Matters
When preparing to brief a consultant, focus on collecting the following:
A description of the current manual process
Walk through exactly what someone does today to complete the task. Not at a high level — step by step. If it involves reading documents, which documents, how many, and in what format? If it involves pulling data from multiple sources, which sources, how are they accessed, and what format is the data in? If it involves compiling a report, what does the report look like and who receives it?
The volume and frequency
How often does this task happen? Daily, weekly, monthly, per transaction? How many units of work does each instance involve — how many documents, how many data sources, how many records? Volume and frequency together determine the ROI case, so they are the most important numbers to have.
Who does the work and at what cost
What seniority level is currently doing this task — associate, analyst, paralegal, partner? An approximation of their hourly cost (salary plus overhead, not billing rate) is helpful for calculating the business case. Even a rough figure is useful: "a mid-level associate who costs the firm about £60 per hour" is more useful than nothing.
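The volume, frequency, and hourly-cost figures above combine into a back-of-envelope business case. As a rough sketch (the example figures, the 80% automation fraction, and the `annual_saving` helper are illustrative assumptions, not benchmarks):

```python
# Back-of-envelope ROI sketch for an automation candidate.
# All figures below are illustrative assumptions, not benchmarks.

def annual_saving(instances_per_month: int,
                  hours_per_instance: float,
                  hourly_cost: float,
                  automation_fraction: float = 0.8) -> float:
    """Estimated annual cost saving if automation removes
    `automation_fraction` of the manual effort."""
    monthly_hours = instances_per_month * hours_per_instance
    return monthly_hours * 12 * hourly_cost * automation_fraction

# Example: a roughly weekly task (~4 instances/month) taking 3 hours,
# done by a mid-level associate costing the firm about £60/hour.
saving = annual_saving(instances_per_month=4,
                       hours_per_instance=3.0,
                       hourly_cost=60.0)
print(f"£{saving:,.0f} per year")  # → £6,912 per year
```

Even a crude estimate like this is enough for a consultant to tell you whether the likely build cost is in proportion to the saving.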
What the output needs to look like
What does the finished product look like today? A spreadsheet, a Word document, an entry in a case management system, a structured database? And who uses it — is it internal, or does it go to clients? The output format determines what the automation needs to produce, which affects the build.
Sample materials
If possible, bring examples of the inputs and outputs. A sample document the system would need to process. A copy of the report it would need to produce. Even approximate examples are valuable — they let a consultant assess quickly whether the automation is straightforward or complex, and whether there are edge cases that need handling.
What a Good Scoping Conversation Looks Like
A competent AI automation consultant should be able to tell you, by the end of a 45-minute call, whether your problem is automatable, what the likely approach is, roughly what it would cost to build, and what the realistic time saving would be. If someone cannot give you a directional answer after a single conversation, they either lack experience or your brief is still too vague.
The consultant should ask questions about the inputs (what exactly goes into the process), the outputs (what exactly comes out), the edge cases (what happens when the input is ambiguous or non-standard), and the constraints (data residency requirements, integration with existing systems, budget range). If these questions are not being asked, that is a warning sign.
Evaluating a Proposal
When you receive a proposal, there are a few things worth scrutinising:
- Is the scope specific? A good proposal describes exactly what the system will and will not do. Vague scope is a risk — it means the consultant has flexibility to deliver less than you expected while technically meeting the brief.
- Is the accuracy claim realistic? Any claim of 100% accuracy should be treated with scepticism. A credible proposal will specify expected accuracy on the specific document types involved and describe the validation mechanism — how errors are caught before they reach downstream users.
- What happens when it goes wrong? What is the exception-handling approach for documents the system cannot process confidently? A well-designed system flags uncertainties rather than guessing; the proposal should describe this.
- What does ongoing maintenance look like? Document formats evolve. Regulation changes. Processes change. What is the arrangement for maintaining and updating the system after delivery?
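The exception-handling point can be made concrete. A minimal sketch of the flag-rather-than-guess pattern, in which any result below a confidence cut-off is routed to a human review queue rather than passed downstream (the `Extraction` type, the threshold value, and the example fields are hypothetical, not any specific product's API):

```python
from dataclasses import dataclass

# Hypothetical sketch of "flag, don't guess": extractions the system
# is not confident about go to a human review queue instead of
# flowing straight into downstream output.

CONFIDENCE_THRESHOLD = 0.90  # illustrative cut-off, tuned per project

@dataclass
class Extraction:
    field: str
    value: str
    confidence: float  # 0.0-1.0, as reported by the extraction model

def route(extractions: list[Extraction]) -> tuple[list[Extraction], list[Extraction]]:
    """Split results into auto-accepted vs flagged-for-review."""
    accepted = [e for e in extractions if e.confidence >= CONFIDENCE_THRESHOLD]
    flagged = [e for e in extractions if e.confidence < CONFIDENCE_THRESHOLD]
    return accepted, flagged

results = [
    Extraction("termination_date", "2031-03-14", 0.98),
    Extraction("notice_period", "ambiguous wording", 0.41),
]
accepted, flagged = route(results)
print(len(accepted), len(flagged))  # → 1 1
```

A proposal does not need to include code, but it should describe a mechanism of this shape: a defined confidence measure, a threshold, and a human destination for everything that falls below it.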
What Makes a Project Go Well
In our experience, the projects that succeed most quickly share three characteristics: a specific, bounded scope; a clear owner on the client side who can make decisions about the output format and validation criteria; and real example documents to build and test against. The projects that run into difficulty tend to have a vague or expanding scope, no designated decision-maker, and no real materials to work with until late in the build.
Getting these three things in place before you start is the best investment you can make in a successful automation project.