But the shortcut came with a catch: he found himself combing through references that led nowhere, uncovering what he later learned were “hallucinations” — fabricated or inaccurate content that remains a persistent flaw in generative AI outputs. Verifying the AI-generated content ended up taking longer than drafting the report without ChatGPT would have.
Wasted time wasn’t the only risk. “Had we chosen to publish these incorrect data in our industry report because we trusted AI blindly, the reputation of our firm would have been at stake,” said Hang.