The Rise and Fall of Google’s AI Feature That Sourced Medical Advice From Amateurs

In the span of roughly a year, Google launched and then quietly discontinued a feature that used AI to crowdsource health advice from online forums. Called “What People Suggest,” it was positioned as a way to give users authentic community insights on health topics. Three people familiar with the decision confirmed it is no longer operational, and the manner of its removal has raised significant transparency concerns.
The feature debuted at Google’s New York health event in spring of last year, where it was championed by Karen DeSalvo, then the company’s chief health officer. DeSalvo described it as a response to users’ genuine desire to hear from others navigating similar health experiences. The AI sorted through online discussions and surfaced relevant, thematic insights alongside links to original sources.
Google’s spokesperson defended the removal as part of a broader simplification of the search interface and firmly denied that safety was a concern. Yet the company’s claim that the change had been publicly disclosed fell apart when the blog post it cited turned out to contain no mention of the feature. “It’s dead,” one insider said of the tool.
Google’s handling of health AI has faced intense scrutiny in recent months. An investigation found that AI Overviews — served to two billion monthly users — had been sharing dangerous health misinformation. Although Google adjusted some AI Overviews in response, health professionals have called for more systemic reform of how AI handles medical information in search results.
Google’s upcoming health event offers another chance to recalibrate its public image on AI health tools. But the quiet death of “What People Suggest” and the inadequate communication around it will serve as a persistent reminder of the risks involved in deploying health AI without adequate safeguards. Moving forward with integrity will require Google to stop burying mistakes and start engaging with them openly.