
Why don't we trust AI?

Imagine bringing a new employee into an engineering design review. They know the tools, they can model and simulate, and they’ve read all the manuals. But would you trust them to challenge a decision without understanding your company’s standards, your design history, or the judgment that comes from experience?

Probably not. Because in engineering, trust is also built on context and familiarity.

That's the same issue we face with GenAI. Generic models can be powerful, but they don't understand your way of working. Not unless you ground them in your processes, your constraints, and your goals. This is where Retrieval-Augmented Generation (RAG) chatbots can slot in.
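
To make that concrete, here is a minimal sketch of the retrieval step behind a RAG chatbot. Everything in it is illustrative: TF-IDF similarity stands in for a production embedding model, the documents are invented examples, and the final LLM call is left as a placeholder.

```python
# Minimal RAG sketch: retrieve relevant internal passages before prompting a model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical snippets from internal standards and past design reviews.
documents = [
    "DS-104: All pressure vessels require a fatigue analysis at gate review 2.",
    "Lessons learned, Project Neptune: weld inspections missed on early prototypes.",
    "DS-210: Redundant sensors are mandatory for safety-critical control loops.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k internal passages most similar to the question."""
    q_vector = vectorizer.transform([question])
    scores = cosine_similarity(q_vector, doc_vectors)[0]
    ranked = sorted(range(len(documents)), key=lambda i: scores[i], reverse=True)
    return [documents[i] for i in ranked[:k]]

question = "What do our standards say about sensor redundancy?"
context = "\n".join(retrieve(question))

# The retrieved context is injected into the prompt, so the model answers
# from your documents rather than from generic training data alone.
prompt = f"Answer using only this internal context:\n{context}\n\nQuestion: {question}"
print(prompt)  # send `prompt` to your LLM of choice
```

The point of the pattern is that the model never has to be retrained: your standards and lessons learned live in the retrieval index, and the generation step is constrained to what was retrieved.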

Now picture this: your design review includes a GenAI assistant grounded in your internal standards and processes, familiar with your past projects, and aware of industry best practices. It knows the common failure modes, understands your gate reviews, and brings up lessons learned from previous designs.

But here's the key: it doesn't replace the chief engineer or design authority. It supports them.

This is the "human-in-the-loop" model in action. The engineer stays in control, with AI helping surface risks, highlight inconsistencies, and ensure adherence to process.
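
As an illustrative sketch of that loop, consider a review gate where the assistant can only flag findings and the human design authority makes the call. All names here (Finding, assistant_findings, review_gate) are hypothetical, not from any particular framework.

```python
# Human-in-the-loop sketch: the assistant can flag issues, never approve them.
from dataclasses import dataclass

@dataclass
class Finding:
    severity: str   # e.g. "high", "medium", "low"
    message: str

def assistant_findings(design: str) -> list[Finding]:
    """Stand-in for the GenAI assistant surfacing risks against standards."""
    return [Finding("high", "No fatigue analysis attached (see DS-104).")]

def review_gate(design: str, engineer_approves) -> bool:
    findings = assistant_findings(design)
    for f in findings:
        print(f"[{f.severity}] {f.message}")
    # The decision always belongs to the human design authority.
    return engineer_approves(findings)

passed = review_gate("pump housing rev C", engineer_approves=lambda fs: False)
print("Gate passed:", passed)
```

The structural choice is the point: the assistant has no code path that closes the gate on its own.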

It's not about automation for its own sake. It's about making smarter, more transparent decisions, with better tools, while keeping human judgment at the core.

Whether a design will carry people across oceans or save lives in healthcare… in critical design work, trust isn't optional.

Erik Cavan

Applied AI
