For knowledge lawyers, IT directors and innovation leads at UK law firms — a stage-by-stage walk through how a document extraction pipeline actually works on legal materials, written so that a non-engineer partner can sense-check the design.
The Core Problem: Legal Documents Are Not Data
Law firms hold enormous amounts of useful information locked inside documents. Commercial contracts, court filings, Companies House filings, leases, regulatory correspondence, witness statements, deal bibles. The information is there — the parties, the financial terms, the key dates, the obligations — but it is buried in prose and formatted pages rather than stored as queryable data.
To do anything systematic with it — analyse it, report on it, schedule it for a partner — someone has to read each document and transfer the relevant data into a spreadsheet, schedule or DD report. On a large matter that is one of the most time-consuming and error-prone tasks in the practice.
Modern AI extraction pipelines solve this. Here is how they work, stage by stage, in a legal context.
Stage 1: Document Ingestion
The first step is getting documents into the system. In a law firm setting they usually arrive in several formats:
- Native PDFs — Generated digitally (e.g. exported from Word). Machine-readable text is already embedded.
- Scanned PDFs — Created by scanning a physical original. These are images; there is no underlying text layer. Common for historic leases and old company books.
- Word documents (.docx) — Generally straightforward to parse, as the XML structure is accessible.
- Images (JPEG, PNG, TIFF) — Scanned documents saved as images rather than PDFs.
- Email and message archives (.msg, .eml) — Common in disputes and investigations work.
The pipeline needs to handle all of these. For native PDFs and Word, text extraction is direct. For scanned documents, an OCR step is required first.
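The routing decision above can be sketched as a simple dispatcher. This is a minimal illustration, not a production design: the handler names are hypothetical, and a real pipeline would also inspect file content, since the extension alone cannot tell a native PDF from a scanned one.

```python
from pathlib import Path

# Map common legal-document extensions to the ingestion handler each needs.
# (Handler names are illustrative placeholders.)
ROUTES = {
    ".pdf": "pdf_text_or_ocr",   # check for an embedded text layer first
    ".docx": "docx_parser",      # XML structure is directly accessible
    ".jpg": "ocr", ".jpeg": "ocr", ".png": "ocr", ".tiff": "ocr",
    ".msg": "email_parser", ".eml": "email_parser",
}

def route(path: str) -> str:
    """Return the name of the ingestion handler for a document."""
    ext = Path(path).suffix.lower()
    try:
        return ROUTES[ext]
    except KeyError:
        raise ValueError(f"Unsupported format: {ext}")
```

In practice the `.pdf` branch then splits again: if the file has an embedded text layer, extract it directly; if not, send it down the OCR path described in Stage 2.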
Stage 2: OCR (Optical Character Recognition)
OCR converts an image of text into machine-readable characters. Modern OCR — Tesseract (open source) or commercial options like AWS Textract or Google Document AI — is highly accurate on clean scans, typically 98–99% character accuracy on good-quality documents.
Accuracy drops on low-quality scans, unusual fonts, handwriting, or documents with complex layouts (tables, multi-column text, signature blocks that overlap with body text — common in old leases). A good pipeline includes pre-processing — deskewing, contrast adjustment, noise reduction — and post-processing to catch common OCR errors.
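Two of those pre-processing steps can be illustrated in miniature. The sketch below works on a plain nested-list greyscale image rather than a real imaging library (in production you would use OpenCV or Pillow, and deskewing as well); the operations are simplified stand-ins for contrast adjustment and noise reduction.

```python
def binarise(image, threshold=128):
    """Contrast step: force each greyscale pixel to pure black (0) or white (255)."""
    return [[0 if px < threshold else 255 for px in row] for row in image]

def median_denoise(image):
    """Noise reduction: replace each interior pixel with the median of its
    3x3 neighbourhood, which removes isolated specks from a scan."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(image[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = window[4]   # median of the 9 neighbourhood values
    return out
```

The point of both steps is the same: OCR engines are far more accurate on clean, high-contrast, speck-free input than on raw scanner output.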
For documents that mix typed and handwritten content (witness statements with manuscript annotations, signed contracts), hybrid approaches are used: OCR for the printed text, and either human review or specialist handwriting recognition for the manuscript portions.
Stage 3: Text Cleaning and Structure Detection
Raw OCR output is not clean text. It contains page numbers, running headers, footers, watermarks, "STRICTLY PRIVATE AND CONFIDENTIAL" stamps, stray characters, and formatting artefacts. Before extraction, the text needs to be cleaned: irrelevant elements removed, paragraphs reassembled (OCR often breaks lines mid-sentence), tables identified and structured.
For complex documents, layout analysis runs at this stage too — identifying which text is in the body, which is in headers and footers, which is in tables, which is in margin notes or annotations. Structure matters for accuracy: a rent figure inside a rent review schedule carries a different significance from the same number in a narrative paragraph.
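The cleaning step can be sketched with a couple of heuristics: strip boilerplate lines and rejoin lines broken mid-sentence. This is deliberately crude — a real pipeline uses layout-aware logic, and the rejoin rule below would wrongly merge genuinely separate paragraphs — but it shows the shape of the work.

```python
import re

# Lines that are artefacts of the page, not content: page numbers,
# confidentiality stamps, and similar running headers/footers.
BOILERPLATE = re.compile(
    r"^\s*(page \d+( of \d+)?|strictly private and confidential)\s*$",
    re.IGNORECASE,
)

def clean_ocr_text(raw: str) -> str:
    """Drop boilerplate lines and rejoin lines broken mid-sentence."""
    lines = [ln for ln in raw.splitlines() if not BOILERPLATE.match(ln)]
    out = []
    for ln in lines:
        ln = ln.strip()
        if not ln:
            continue
        # If the previous line did not end a sentence, treat this line as
        # a continuation (OCR often breaks lines mid-sentence).
        if out and not out[-1].endswith((".", ":", ";")):
            out[-1] += " " + ln
        else:
            out.append(ln)
    return "\n".join(out)
```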
Stage 4: LLM-Based Extraction
This is where the AI does its core work. A large language model — the technology underlying tools like GPT-4 or Claude — is given the cleaned document text alongside a structured prompt that specifies exactly what to extract.
The prompt is engineered for the specific document type. For a commercial lease the model might be instructed to identify and return: landlord, tenant, demised premises, term commencement, expiry, initial annual rent, rent review mechanism, break clause dates and conditions, alienation, user, and any provisions that appear to deviate from a standard institutional form.
The model returns structured output — typically JSON — containing the requested fields and their values. This is not keyword matching or template-based extraction; the model understands context. It can identify that "the term shall commence on the date of this deed" means the start date is the execution date, even though no explicit date appears in that sentence.
Unlike rules-based extraction — which breaks when documents vary from an expected format — LLM extraction handles variation naturally, because the model understands what the text means, not just what it looks like. Useful, given that no two leases drafted by different firms ever look the same.
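The prompt-and-parse step can be sketched as follows. The field names are illustrative, the model call itself is omitted, and a real prompt would be considerably longer — but the pattern of asking for exact JSON keys and then checking the reply contains them is the core of the stage.

```python
import json

# Illustrative field list for a commercial lease (see the prose above).
LEASE_FIELDS = [
    "landlord", "tenant", "demised_premises", "term_commencement",
    "expiry", "initial_annual_rent", "rent_review_mechanism",
    "break_clauses", "non_standard_provisions",
]

def build_prompt(document_text: str) -> str:
    """Assemble a structured extraction prompt for a commercial lease."""
    return (
        "Extract the following fields from the lease below and return "
        "ONLY a JSON object with exactly these keys: "
        + ", ".join(LEASE_FIELDS)
        + ". Use null for any field you cannot find.\n\n"
        + document_text
    )

def parse_response(raw: str) -> dict:
    """Parse the model's JSON reply and check every expected key is present."""
    data = json.loads(raw)
    missing = [f for f in LEASE_FIELDS if f not in data]
    if missing:
        raise ValueError(f"Model reply missing fields: {missing}")
    return data
```

Note that `parse_response` is already the first line of defence described in Stage 5: a reply that is not valid JSON, or that drops a requested field, fails loudly instead of silently producing an incomplete record.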
Stage 5: Validation and Confidence Scoring
LLMs are capable but not infallible. A well-engineered pipeline does not treat every output as correct. Validation steps include:
- Format validation — Is the extracted date in a valid date format? Is the rent figure a number?
- Cross-document consistency checks — If the same party name appears in 50 documents, do all extractions match?
- Confidence flagging — The model can be instructed to indicate when it is uncertain. Those items go to human review rather than passing through automatically.
- Mandatory field checks — If a required field is missing, the document is flagged rather than silently producing an incomplete record.
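The four checks above can be combined into one pass over each extracted record. A minimal sketch, assuming the illustrative lease fields from Stage 4 and a model that self-reports a `confidence` value; any record that returns flags goes to human review.

```python
from datetime import date

def validate_record(record: dict) -> list[str]:
    """Return validation flags for an extracted record.
    An empty list means the record can pass without human review."""
    flags = []
    # Mandatory field checks
    for field in ("tenant", "term_commencement", "initial_annual_rent"):
        if not record.get(field):
            flags.append(f"missing: {field}")
    # Format validation: date must be ISO, rent must be numeric
    try:
        date.fromisoformat(record.get("term_commencement") or "")
    except ValueError:
        flags.append("invalid date: term_commencement")
    try:
        float(record.get("initial_annual_rent") or "")
    except ValueError:
        flags.append("invalid number: initial_annual_rent")
    # Confidence flagging: route the model's own uncertainty to review
    if record.get("confidence") == "low":
        flags.append("low model confidence")
    return flags
```

Cross-document consistency checks run separately, once all records exist, by grouping extractions for the same entity and flagging mismatched values.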
Human review is not eliminated — it is targeted. Instead of an associate reading every document, they review only the flagged items: ones where the model was uncertain, or where validation checks failed. That is a much more efficient use of fee earner time.
Stage 6: Output to Database, Schedule or Matter System
The validated extracted data is written to the output system. In a law firm context that is typically:
- A DD report or red-flag schedule in the firm's house style
- A spreadsheet (Excel) for direct use by the deal team
- An integration with the firm's matter management or document management system (iManage, NetDocuments)
- A structured database for repeatedly-queried data (e.g. a portfolio lease register)
The output format is determined by how the data will be used. For ongoing pipelines where new documents are added regularly, database storage with an internal API is usually the right approach. For one-off transactions, a clean schedule is often sufficient.
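For the database route, the final write can be as simple as the sketch below — SQLite is used purely for illustration, and the schema is a hypothetical cut-down lease register; a production system would sit behind the firm's DMS or an internal API.

```python
import sqlite3

def write_lease_register(records, db_path=":memory:"):
    """Persist validated extractions to a queryable lease register."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS leases (
               tenant TEXT, premises TEXT,
               term_commencement TEXT, annual_rent REAL)"""
    )
    conn.executemany(
        "INSERT INTO leases VALUES "
        "(:tenant, :premises, :term_commencement, :annual_rent)",
        records,
    )
    conn.commit()
    return conn

# Once stored, the data is queryable rather than buried in prose, e.g.:
#   SELECT tenant, premises FROM leases WHERE annual_rent > 100000
```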
What Good Extraction Looks Like
A well-built pipeline is not just technically functional — it is built around the specific documents and use case it needs to serve. Extraction prompts are developed and refined using real examples of the firm's actual documents. Validation rules are designed around what errors would matter most to the partner signing the report. Output format matches what the downstream users actually need.
This is why off-the-shelf document extraction tools often underperform on legal work: they are built to handle any document, which means they are not optimised for yours. A custom pipeline tuned to your firm's document types and your deal-room patterns consistently outperforms generic tools on accuracy and on the relevance of what it extracts. We have built and run extraction pipelines at the volume a global law firm needs; the same architecture transfers cleanly to UK domestic firms.
If your firm is sitting on large document volumes you cannot easily query — historical lease portfolios, recurring DD work, complaints handling, court bundles — extraction is usually a high-value first project. Get a quote if you want to talk through what it would look like.