
Don’t blindly trust what AI tells you, Google boss tells BBC


Faisal Islam, economics editor,

Rachel Clun, business reporter, and

Liv McMahon, technology reporter

A young student seen from above interacts with an AI chatbot on a smartphone while studying at a desk (Getty Images)

People should not “blindly trust” everything AI tools tell them, the boss of Google’s parent company Alphabet has told the BBC.

In an exclusive interview, chief executive Sundar Pichai said that AI models are “prone to errors” and urged people to use them alongside other tools.

Mr Pichai said this propensity for error highlighted the importance of having a rich information ecosystem, rather than relying solely on AI technology.

“This is why people also use Google search, and we have other products that are more grounded in providing accurate information.”

However, some experts say big tech firms such as Google should not be inviting users to fact-check their tools’ output, but should focus instead on making their systems more reliable.

While AI tools were helpful “if you want to creatively write something”, Mr Pichai said people “have to learn to use these tools for what they’re good at, and not blindly trust everything they say”.

He told the BBC: “We take pride in the amount of work we put in to give us as accurate information as possible, but the current state-of-the-art AI technology is prone to some errors.”

The company displays disclaimers on its AI tools to let users know the tools can make mistakes.

But this has not shielded it from criticism and concerns over errors made by its own products.

Google’s rollout of AI Overviews summarising its search results was marred by criticism and mockery over some erratic, inaccurate responses.

The tendency of generative AI products, such as chatbots, to relay misleading or false information is a cause of concern among experts.

“We know these systems make up answers, and they make up answers to please us – and that’s a problem,” Gina Neff, professor of responsible AI at Queen Mary University of London, told BBC Radio 4’s Today programme.

“It’s okay if I’m asking ‘what movie should I see next’, it’s quite different if I’m asking really sensitive questions about my health, mental wellbeing, about science, about news,” she said.

She also urged Google to take more responsibility over its AI products and their accuracy, rather than passing that on to consumers.

“The company now is asking to mark their own exam paper while they’re burning down the school,” she said.

‘A new phase’

The tech world has been awaiting the launch of the latest version of Google’s consumer AI model, Gemini 3.0, as Gemini starts to win back market share from ChatGPT.

The company unveiled the model on Tuesday, claiming it would unleash “a new era of intelligence” at the heart of its own products such as its search engine.

In a blog post, it said Gemini 3 boasted industry-leading performance across understanding and responding to different modes of input, such as photo, audio and video, as well as “state-of-the-art” reasoning capabilities.

In May this year, Google began introducing a new “AI Mode” into its search, integrating its Gemini chatbot, which is aimed at giving users the experience of talking to an expert.

At the time, Mr Pichai said the integration of Gemini with search signalled a “new phase of the AI platform shift”.

The move is also part of the tech giant’s bid to remain competitive against AI services such as ChatGPT, which have threatened Google’s online search dominance.

Mr Pichai’s comments on accuracy back up BBC research from earlier this year, which found that AI chatbots inaccurately summarised news stories.

OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini and Perplexity AI were all given content from the BBC website and asked questions about it, and the research found the AI answers contained “significant inaccuracies”.

Broader BBC findings have since suggested that, despite improvements, AI assistants still misrepresent news 45% of the time.

In his interview with the BBC, Mr Pichai said there was some tension between how fast the technology was being developed and how mitigations were built in to prevent potential harmful effects.

For Alphabet, Mr Pichai said managing that tension means being “bold and responsible at the same time”.

“So we are moving fast through this moment. I think our consumers are demanding it,” he said.

The tech giant has also increased its investment in AI security in proportion to its wider investment in AI, Mr Pichai added.

“For example, we are open-sourcing technology which will allow you to detect whether an image is generated by AI,” he said.

Asked about recently uncovered, years-old comments from tech billionaire Elon Musk to OpenAI’s founders expressing fears that DeepMind, now owned by Google, could create an AI “dictatorship”, Mr Pichai said “no one company should own a technology as powerful as AI”.

But he added there were many companies in the AI ecosystem today.

“If there was only one company which was building AI technology and everyone else had to use it, I would be concerned about that too, but we are so far from that scenario right now,” he said.


