The challenge
FLSmidth runs a global engineering operation spanning dozens of countries and decades of accumulated messaging history. The company planned an aggressive move to Microsoft 365, but on-premises sat one of the largest Enterprise Vault deployments we'd encountered — 165 terabytes of historical messaging data that had to come across intact.
Prior attempts using native Microsoft tooling had failed. The volume broke assumption after assumption: extraction speed, error tolerance, memory use, resumability. The programme stalled.
What we did
We deployed distributed agents across FLSmidth's data centres to extract data directly from the Enterprise Vault stores — no PST intermediate step, no staging servers duplicating 165TB of data just to move it. Extraction ran in parallel across multiple agent instances, each checkpointing state per-message so that overnight failures never cost ground already covered.
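The resumability described above hinges on checkpointing progress at message granularity, so a crashed or restarted agent re-reads its checkpoint and skips everything already extracted. The sketch below is illustrative only — the class and file layout are assumptions, and the real Enterprise Vault extraction calls are not shown:

```python
import json
import os
import tempfile


class Checkpoint:
    """Per-message progress record so a restarted agent resumes, not restarts.

    Hypothetical sketch: a JSON file of completed message IDs. A production
    agent would use something sturdier (e.g. an embedded database), but the
    resume logic is the same.
    """

    def __init__(self, path):
        self.path = path
        self.done = set()
        if os.path.exists(path):
            with open(path) as f:
                self.done = set(json.load(f))  # resume from prior run

    def mark(self, message_id):
        """Record one message as migrated, writing the file atomically
        so a crash mid-write cannot corrupt the checkpoint."""
        self.done.add(message_id)
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(self.path) or ".")
        with os.fdopen(fd, "w") as f:
            json.dump(sorted(self.done), f)
        os.replace(tmp, self.path)  # atomic on POSIX and Windows

    def pending(self, message_ids):
        """Return only the messages this agent still needs to extract."""
        return [m for m in message_ids if m not in self.done]
```

Because each agent checkpoints independently, the vault stores can be partitioned across many agent instances and an overnight failure on one box costs nothing already marked done elsewhere.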
When agents hit corrupt messages — and at this scale there were thousands — they logged the corruption, skipped the item, and continued. The full reconciliation report at the end accounted for every message: migrated, skipped, or flagged for manual review.
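The skip-and-account pattern above can be sketched in a few lines. Everything here is hypothetical (`CorruptItemError`, the `extract` and `upload` callables stand in for the real vault-store and M365 calls); the point is the invariant the reconciliation report enforces — every message ends up counted as migrated or skipped, never silently lost:

```python
from collections import Counter


class CorruptItemError(Exception):
    """Placeholder for whatever the extraction layer raises on a bad item."""


def migrate_batch(message_ids, extract, upload):
    """Migrate a batch, skipping corrupt items instead of aborting.

    Returns a tally plus the skip log, so the final reconciliation
    report can account for every single message.
    """
    report = Counter()
    skip_log = []
    for msg_id in message_ids:
        try:
            payload = extract(msg_id)
        except CorruptItemError as err:
            # Log the corruption and move on — one bad item must not
            # stall a multi-terabyte run.
            skip_log.append((msg_id, str(err)))
            report["skipped"] += 1
            continue
        upload(msg_id, payload)
        report["migrated"] += 1
    # Reconciliation invariant: nothing falls through the cracks.
    assert report["migrated"] + report["skipped"] == len(message_ids)
    return report, skip_log
```

Items in the skip log can then be routed to manual review, which is how the end-of-run report distinguishes migrated, skipped, and flagged messages.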
The outcome
FLSmidth completed what was, at the time, the largest single Enterprise Vault to Microsoft 365 consolidation we're aware of. Enterprise Vault infrastructure was decommissioned. Historical messages surfaced for the first time in modern M365 search — content that had effectively been invisible for years.
The project became a reference point for what's achievable at scale when the migration engine is built for enterprise fault tolerance from the outset, rather than patched together from tools designed for smaller environments.