AB Volvo

Making operations visible

A shared understanding of workflows, constraints, and improvement opportunities before committing to system changes.

Critical operational workflows were supported by multiple systems, but no one had a complete view of how they worked end to end.

Client
AB Volvo
Role
Senior Digital Consultant

The situation

Within AB Volvo’s Build, Test and Data organisation, critical operational workflows support activities such as prototyping and testing new vehicles and features.

These workflows were supported by multiple systems, including internally developed platforms and surrounding tools. Each system played a role, but no shared view existed of how the full process worked end to end, or how it was experienced by the people doing the work.

Pressure to improve how work moved across systems was growing, and decisions about which platforms to change were already being discussed. Questions around system improvements, replacements, and integrations were being raised, but without a common understanding of the underlying operations or needs.

How I read this

The discussion was often framed in terms of systems. Which platform to evolve, which capabilities were missing, and whether to replace or extend existing solutions.

Those questions were relevant, but they assumed a shared understanding of the workflows those systems were meant to support.

What was actually missing was a clear, shared picture of how work was done across teams, global locations, and systems. Information was fragmented across fleet management, procurement, testing, and warehouse teams in Gothenburg, Lyon, the US, Brazil, and India. Each team held a partial view. Nobody held the whole picture.

Without that visibility, system decisions risked being based on partial views, optimising individual steps rather than the overall flow.

The question was not only how to improve the systems. It was what the systems were actually supporting.

What I did

Mapped workflows across systems and teams

I worked with global stakeholders across functions to understand how work actually moved, from parts procurement and warehouse handling to test vehicle usage, identifying handovers, dependencies, and gaps.

Made fragmentation visible

I brought systems and processes together into a single view, highlighting where information was incomplete, duplicated, or disconnected across the workflow.

Structured the problem before proposing solutions

Rather than starting with system changes, the focus was on defining operational needs, constraints, and desired outcomes that any solution would need to support.

Created a basis for informed decisions

The findings became structured decision material: clear enough for leadership to compare options, weigh trade-offs, and understand both the cost and the potential savings attached to each path.

Linked operations to long-term organisational goals

The findings showed how better platform use and improved data flow would enable the traceability and analytics the organisation needed. Critically, the recommendations showed a path to growth without adding headcount, breaking a pattern in which every previous expansion had required more people.

Outcome

Workflows that had been fragmented across systems became visible as a whole, with a shared understanding of operational processes and the systems supporting them.

  • Stakeholders aligned around how work actually moved across teams and systems.
  • Gaps, inefficiencies, and dependencies became visible across the full workflow, not just within individual areas.
  • Decisions around system evolution and potential replacements could be made based on a structured understanding of needs and trade-offs.

Why this pattern matters

The pressure early on was to move straight to system recommendations. That would have been the faster path, and it was the one being asked for.

Resisting it meant slowing down first. Thirty-plus interviews across teams and locations to map how work actually happened, where it broke down, and what people needed, not what systems they thought they wanted.

From there, senior stakeholders rated the functional areas I had identified and assigned weights to each. That step mattered as much as the interviews. It forced the organisation to make its priorities explicit before any system was evaluated.

Only then did the system analysis follow: existing platforms and benchmarked alternatives rated against a structured set of features, with scores weighted against what the organisation had said it needed. A data-driven answer to a question that could otherwise have been decided by whoever argued loudest.
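The mechanics behind that analysis are a standard weighted decision matrix: stakeholders assign a weight to each functional area, each platform is scored per area, and the weighted sum ranks the options. A minimal sketch, with purely illustrative area names, weights, and scores rather than the actual evaluation data:

```python
# Illustrative weighted-scoring sketch. Area names, weights, and ratings
# are placeholders, not the real evaluation data from the project.

# Stakeholder-assigned weights per functional area (higher = more important)
weights = {"traceability": 5, "integration": 4, "usability": 3}

# Per-platform ratings against the same functional areas (e.g. 1-5 scale)
platforms = {
    "Platform A": {"traceability": 4, "integration": 2, "usability": 3},
    "Platform B": {"traceability": 3, "integration": 5, "usability": 4},
}

def weighted_score(ratings: dict[str, int], weights: dict[str, int]) -> int:
    """Sum each rating multiplied by its area weight."""
    return sum(weights[area] * ratings[area] for area in weights)

# Rank platforms by total weighted score, highest first
ranked = sorted(platforms.items(),
                key=lambda item: weighted_score(item[1], weights),
                reverse=True)
for name, ratings in ranked:
    print(name, weighted_score(ratings, weights))
```

Separating the weighting step from the scoring step is what forces priorities to be made explicit before any platform is compared, which is the point of the process described above.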

Working through something similar?

If any of this sounds familiar, I'm happy to think it through with you. No pitch, just a conversation.

Start a conversation →