Bitcoin World 2026-05-14 06:45:12

Campbell Brown, once Meta’s news chief, warns AI is repeating social media’s worst mistakes

Campbell Brown has spent her career chasing accurate information, first as a renowned TV journalist, then as Facebook’s first and only dedicated news chief. Now, watching AI reshape how people consume information, she sees history threatening to repeat itself. This time, she’s not waiting for someone else to fix it.

Her company, Forum AI — which she discussed recently with Bitcoin World’s Tim Fernholz at a StrictlyVC evening in San Francisco — evaluates how foundation models perform on what she calls “high-stakes” topics: geopolitics, mental health, finance, hiring. These are subjects where “there are no clear yes-or-no answers, where it’s murky and nuanced and complex.” The idea is to find the world’s foremost experts, have them architect benchmarks, then train AI judges to evaluate models at scale.

From Facebook to fixing AI

Brown traces the origin of Forum AI, founded 17 months ago in New York, to a specific moment. “I was at Meta when ChatGPT was first released publicly,” she recalled, “and I remember really shortly after realizing this is going to be the funnel through which all information flows. And it’s not very good.” The implications for her own children made the moment feel almost existential. “My kids are going to be really dumb if we don’t figure out how to fix this,” she recalled thinking.

What frustrated her most was that accuracy didn’t seem to be anyone’s priority. Foundation model companies, she said, are “extremely focused on coding and math,” whereas news and information are harder. But harder, she argued, doesn’t mean optional.

What Forum AI found when it tested the leading models

When Forum AI began evaluating the leading models, the findings weren’t encouraging. She cited Gemini pulling from Chinese Communist Party websites “for stories that have nothing to do with China,” and noted a left-leaning political bias across nearly all models.
Subtler failures abound too, she said, including missing context, missing perspectives, and straw-manning arguments without acknowledgment. “There’s a long way to go,” she said. “But I also think that there are some very easy fixes that would vastly improve the outcomes.”

For Forum AI’s geopolitics work, Brown has recruited Niall Ferguson, Fareed Zakaria, former Secretary of State Tony Blinken, former House Speaker Kevin McCarthy, and Anne Neuberger, who led cybersecurity policy in the Biden administration. The goal is to get AI judges to roughly 90% consensus with those human experts, a threshold she says Forum AI has been able to reach.

The lesson from social media that AI is ignoring

Brown spent years at Facebook watching what happens when a platform optimizes for the wrong thing. “We failed at a lot of the things we tried,” she told Fernholz. The fact-checking program she built no longer exists. The lesson, even if social media has turned a blind eye to it, is that optimizing for engagement has been lousy for society and left many people less informed.

Her hope is that AI can break that cycle. “Right now it could go either way,” she said; companies could give users what they want, or they could “give people what’s real and what’s honest and what’s truthful.”

Why enterprise demand might be the unlikely ally

She acknowledged that the idealistic version of that — AI optimizing for truth — might sound naive. But she thinks enterprise may be the unlikely ally here. Businesses using AI for credit decisions, lending, insurance, and hiring care about liability, and “they’re going to want you to optimize for getting it right.” That enterprise demand is also what Forum AI is betting its business on, though turning compliance interest into consistent revenue remains a challenge, particularly given that much of the current market is still satisfied with checkbox audits and standardized benchmarks that Brown considers inadequate.
The compliance landscape, she said, is “a joke.” When New York City passed the first hiring bias law requiring AI audits, the state comptroller found that more than half of the audited companies had violations that went undetected. Real evaluation, she said, requires domain expertise to work through not just known scenarios but edge cases that “can get you into trouble that people don’t think about.” And that work takes time. “Smart generalists aren’t going to cut it.”

The disconnect between Silicon Valley hype and user reality

Brown — whose company last fall raised $3 million led by Lerer Hippeau — is uniquely positioned to describe the disconnect between the AI industry’s self-image and the reality for most users. “You hear from the leaders of the big tech companies, ‘This technology is going to change the world,’ ‘it’s going to put you out of work,’ ‘it’s going to cure cancer,’” she said. “But then to a normal person who’s just using a chatbot to ask basic questions, they’re still getting a lot of slop and wrong answers.”

Trust in AI sits at extraordinarily low levels, and she thinks that skepticism is, in many cases, justified. “The conversation is sort of happening in Silicon Valley around one thing, and a totally different conversation is happening among consumers.”

Conclusion

Campbell Brown’s trajectory from TV news to Meta to founding Forum AI reflects a growing concern that AI, left unchecked, could amplify the same misinformation dynamics that plagued social media. Her approach — using expert-designed benchmarks to hold foundation models accountable — offers a potential path forward, but it depends on whether the industry and regulators are willing to prioritize accuracy over engagement. For now, the gap between what AI companies promise and what users experience remains wide, and Brown is betting that enterprise demand for liability-proof AI will close it.

FAQs

Q1: What is Forum AI?
Forum AI is a startup founded by Campbell Brown that evaluates foundation models on high-stakes topics like geopolitics, mental health, finance, and hiring. It uses expert-designed benchmarks and AI judges to assess accuracy and bias at scale.

Q2: Why does Campbell Brown think AI accuracy is important?

Brown argues that AI chatbots are becoming the primary funnel for information, and if they provide inaccurate or biased answers, it could leave people less informed — repeating the mistakes of social media platforms that optimized for engagement over truth.

Q3: How does Forum AI evaluate AI models?

Forum AI recruits leading domain experts to architect benchmarks, then trains AI judges to evaluate models against those benchmarks. The goal is to reach roughly 90% consensus between AI judges and human experts.
