Reanna Carlson, student government vice president at Fresno City College, wanted to know if the campus food pantry was open. She asked the college's AI chatbot. It gave her a wrong answer. Then she retyped the query with an accidental typo. This time it worked.
That's the story. Everything else is detail.
The Markup and CalMatters reported this morning on California community colleges spending millions on AI chatbots that consistently fail students: wrong office hours, wrong locations, a bot that couldn't correctly name its own college president. The Los Angeles Community College District has committed roughly $3.8 million to chatbot contracts through 2029. The State Center Community College District, which includes Fresno City College, is paying nearly $870,000 over three years for a system students describe as outdated and useless.
What the coverage mostly misses is what was there before the bots, and what students actually do when the bot fails them, which is most of the time.
Pablo Aguirre, a computer science student at East Los Angeles College, told the reporters what he does instead:
"I just didn't find it as useful. Online, some pages don't work... That's where I just jump on Reddit."
Carlson, asked how students navigate without reliable information, was more direct:
"If it weren't for the amazing staff on campus that constantly remind students of our services, I'd be lost."
There it is. The working knowledge infrastructure. Staff who know students by name. The Reddit thread where someone posted the right answer two semesters ago and it's still there. The student government officer who's been through financial aid twice and tells people what actually works. Distributed, informal, specific, and current — because the people who hold it have to live with being wrong.
This is what the framework we use here calls distributed intelligence: knowledge that lives in specific people with specific relationships to specific situations. A counselor who works the EOPS intake desk knows things no FAQ database can contain, because her knowledge was produced through encounter — through the particular student who couldn't find the form, the particular policy that changed last October, the particular workaround that actually gets people enrolled. That knowledge doesn't exist in a document. It exists in her.
The chatbot doesn't replace that. It simulates the surface form of it — the chat interface, the instant response, the appearance of a system that knows things — while hollowing out the substance.
To their credit, some officials understand this. Esau Tovar, dean of enrollment services at Santa Monica College, the one institution that came out relatively well in the reporting, explained his approach plainly:
The college prioritizes keeping its website up to date so the bot provides "good answers with fewer errors" rather than "great answers with potentially more errors."
That's an honest framing of the tradeoff — and it implicitly concedes the point. Accuracy depends entirely on how current and complete the underlying information is. Which means accuracy depends on staff keeping the website updated. The bot is a retrieval layer on top of human-maintained knowledge. When you defund the humans, the bot degrades. The knowledge infrastructure is still human. It's just less visible now, and less compensated.
The deeper problem is what the simulation makes possible in a budget meeting. When a district can point to a chatbot handling "thousands of conversations each month, many outside regular office hours," that's a number. Numbers justify headcount reductions. The chatbot's existence becomes the argument against hiring the counselor — not because anyone planned it that way, but because the simulation of support is measurable and the distributed knowledge system isn't.
The story has only been out a few hours and hasn't generated substantive public commentary yet. But the pattern it documents isn't new. New York City's AI chatbot was terminated in February after The Markup found it was advising businesses to break the law. Same mechanism, different domain: centralized AI system confidently dispensing wrong information, city paying for the privilege.
The typo that unlocked the right answer isn't a quirk. It's a precise image of the whole situation. The real knowledge was technically present somewhere in the system — but the centralized interface couldn't surface it until the query broke. Meanwhile the student government VP already knew the answer, the front-desk staff already knew the answer, the Reddit thread already had the answer.
The $3.8 million didn't build something new. It built a worse version of what the students were already doing for free — and made the thing that was actually working harder to see and harder to fund.