TikTok scales back AI-generated video overviews after absurd errors
TikTok has rowed back on an AI feature which incorrectly summarised some videos on the platform, including claiming a celebrity was fruit.

The company's 'AI overviews' recently began appearing beneath content on the platform to describe what a video was showing, or provide more context. While only rolled out to some users in the US and the Philippines, the feature's incorrect and bizarre AI-generated summaries of TikTok content - seen beneath videos of celebrities like platform star Charli D'Amelio - have been shared widely.

According to TikTok, its experimental summaries have been tweaked to only suggest products similar to those shown in videos. The changes were first reported by news outlet Business Insider.

Much like the AI Overviews at the top of most Google search results, TikTok's AI-generated overviews would attempt to sum up the contents of videos for some users when they clicked to see more of a video's caption.

Some examples screenshotted by users and seen by the BBC showed videos on the platform being accurately described, but Business Insider also identified a number of "wildly inaccurate" AI overviews. This included one which saw a video of dancer Charli D'Amelio described as a "collection of various blueberries with different toppings," the publication said.

It saw similarly vague, inaccurate and strange AI-generated summaries on other TikTok videos of celebrities and artists, including Shakira and Olivia Rodrigo. The feature will now only be used to surface information about items in videos, according to TikTok.

It comes as tech firms look to deploy more AI products on their platforms to boost user engagement. However, some such efforts have been met with user backlash, or mockery, when these tools go awry.

Posts reacting to TikTok's testing of AI overviews on its videos first began appearing in January.
But it appears the summaries were made more widely available, with several users and creators highlighting AI-generated descriptions containing absurd mistakes in late April.

A recent example shared on Reddit saw a performance by ballroom dancers Reagan and Juli To described in an AI overview on TikTok as "a person repeatedly striking their head with a rubber chicken".

Other examples shared by TikTok users contained similarly strange descriptions. For instance, AI overviews for two separate videos, neither of which featured violence or tools, said they featured "a person repeatedly striking their head with a hammer".

According to TikTok, users were able to report and provide feedback about AI overviews. But this did not stop some from speculating as to whether the platform was "trolling" its users.

"The new AI Overview is so bad it feels like it has to be a joke," wrote TikTok user and creator Brett Vanderbrook alongside his video. He showed a range of examples where TikTok's AI feature conjured up bizarre descriptions for what was happening in videos - such as a comedy skit described as someone "demonstrating a new, clever technique for cutting through water".

TikTok says it has identified the cause of AI overview errors and inconsistencies, without detailing what this was. But generative AI tools often make things up when responding to users, summarising or generating information, and errors can range from being hilarious to potentially harmful in nature.

Google was widely mocked in 2024 after its AI Overviews results told users to eat rocks and "glue pizza". Apple later faced criticism after an AI tool designed to summarise notifications created false headlines for the BBC News and the New York Times apps. The tech giant suspended the feature, saying it would improve and update it.

Since then AI development has continued, with firms claiming the tech has vastly improved in ability and accuracy, but so-called "hallucinations" persist.
However, ChatGPT-maker OpenAI recently said it identified "goblin" and "gremlin" creeping into its systems' responses - a quirk it believes arose after a tool it trained to have a nerdy persona incentivised mentioning the creatures.

False case law or citations appearing in court filings have meanwhile prompted warnings about AI use in legal settings, with AI errors also reportedly causing issues for some governments.