Do you trust AI's humanlike responses?

ComputerWorld.com reported that “Trust in generative AI (genAI) has surged globally despite gaps in AI safeguards because of its humanlike ability to respond to queries or prompts, according to a new study by IDC.”  The October 1, 2025 report entitled “Employees overtrust humanlike AI, ignoring its flaws” (https://www.computerworld.com/article/4065796/employees-overtrust-humanlike-ai-ignoring-its-flaws.html) included these comments about the IDC study:

Only 40% of organizations invest in “trustworthy AI,” or AI with guardrails. Yet, those investing the least view genAI as 200% more trustworthy than traditional, proven machine learning — despite the latter being more established and having greater reliability and explainability.

“Our research shows a contradiction: that forms of AI with humanlike interactivity and social familiarity seem to encourage the greatest trust, regardless of actual reliability or accuracy,” said Kathy Lange, research director of the AI and Automation Practice at IDC.

IDC’s study, sponsored by SAS, found that organizations that build governance, ethics, and transparency guardrails are 60% more likely to double AI project ROI — highlighting the cost of ignoring responsible AI practices. The global survey of 2,375 IT professionals and line-of-business leaders found that strategic AI use, not just cost-cutting, drives market share and customer gains.

GenAI has rapidly outpaced traditional AI, and as organizations move toward agentic AI, its influence on decision-making — often hidden — will only grow. (AI agents are autonomous programs, often containerized, that make decisions in dynamic environments.)

“What stands out in this research is how rapidly the center of gravity has shifted from traditional machine learning toward generative and agentic AI,” Chris Marshall, IDC’s vice president of Data, Analytics, and AI research wrote in the report.

Yet, without trust, progress stalls, IDC found. AI trust isn’t just ethical — it’s financial. Nearly half of companies face a trust gap, leading to lower ROI. Mature AI adopters invest more in responsible AI and see better results.

While only a quarter of survey respondents have dedicated AI governance teams, most plan to boost investment, especially in ethics training, bias detection, and responsible AI platforms. “Trust is key to unlocking AI’s full value,” IDC said in its report.

What do you think?
