A global IDC survey of 2,375 IT and business leaders finds that organisations trust GenAI more than any other type of AI, yet few are investing in the guardrails that make AI demonstrably trustworthy.
Key findings
- Trust skews to the new: 48% report complete trust in GenAI, 33% in agentic AI, and 18% in traditional AI. Among organisations investing the least in trustworthy AI, GenAI was rated 200% more trustworthy than traditional machine learning.
- Guardrails lag: While 78% say they fully trust AI, only 40% are investing in governance, explainability, and ethical safeguards.
- ROI link: Organisations prioritising trustworthy AI are 1.6× more likely to report double or greater ROI on AI projects.
- Adoption outpaces assurance: GenAI visibility and use sit at 81%, versus 66% for traditional AI, heightening both risk and ethical questions.
- Main concerns: data privacy (62%), transparency/explainability (57%), and ethical use (56%).
- Quantum curiosity: ~33% are familiar with quantum AI, and 26% report complete trust in it, despite its early stage of real-world use.
What’s holding teams back
- Data foundations: 49% cite weak or fragmented data infrastructure/cloud environments.
- Governance gaps: 44% lack sufficient data-governance processes.
- Skills shortage: 41% report insufficient AI talent.
- Data pain points: accessing relevant sources (58%), privacy/compliance (49%), and data quality (46%).
Why it matters
“Human-like interactivity seems to invite trust regardless of actual reliability,” said Kathy Lange, Research Director, AI & Automation, IDC. “GenAI is trusted, but is it always trustworthy?”
“To build trust, improve implementation success, keep humans in the loop, and empower teams with the right controls,” added Bryan Harris, CTO at SAS.