A [philosophical paper](https://link.springer.com/article/10.1007/s10676-024-09775-5) calls ChatGPT “bullshit” in Frankfurt’s sense: its outputs are produced without concern for truth, and the popular language of “hallucination” makes things sound more innocent than they are. This is hard to deny. A language model is trained to predict the next token, not to represent the world faithfully. It will sometimes produce fluent falsehoods. If you put that raw system in front of users as an authority, without evaluation or safeguards, you are effectively deploying a bullshit machine and inviting overtrust.

But as system builders, that is precisely the problem we are trying to solve. In other parts of our digital infrastructure we already rely on tools that have no understanding and no moral agency. A sepsis risk score does not care whether a patient lives. An aircraft autopilot does not care whether passengers arrive safely. These systems are tuned to proxies rather than abstract truth. Used on their own they would be dangerous. Embedded in carefully designed workflows, with training, procedures and monitoring, they transform the world in myriad positive ways. LLMs belong in that same category.

The interesting question is not “does the model care about truth” but “what kind of system are we building around it, and for which decisions.” In practice there is a base model that really is just next-token prediction, an engineered layer that brings in retrieval, verification, constraints and evaluation, and an institutional layer that determines how the whole system is described, governed and audited in the real world.

We should be precise about how the “bullshit” label is used in discussions of AI. A point about how a base model is trained should not be mistaken for a verdict on every system built around it. It is closer to observing that a microscope cannot diagnose cancer on its own than to concluding that microscopes have no place in oncology. The real question is whether we are using the tool in a way that fits the decision it is meant to support.

The Glasgow paper is a useful warning about metaphors and irresponsible deployment. The next step, for those of us working with these tools, is to show how careful system design can be genuinely truth-tracking and accountable for specific problems, instead of letting either hype or the language of “bullshit” do the thinking for us.
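As a small gesture in that direction, here is a minimal sketch of what the engineered layer might look like, assuming a hypothetical retrieve → generate → verify pipeline. The names (`retrieve`, `generate`, `verify_claims`) are placeholders rather than any particular library’s API; the point is the shape of the workflow, with refusal paths where grounding or verification fails, not a production implementation.

```python
"""A minimal sketch of the 'engineered layer' described above.

Everything here is hypothetical: retrieve, generate, and verify_claims stand in
for whatever document index, base model, and checking step a real system uses.
"""

from dataclasses import dataclass
from typing import Callable


@dataclass
class Answer:
    text: str
    sources: list[str]   # the material the answer was grounded in
    verified: bool       # whether the verification step passed


def answer_with_safeguards(
    question: str,
    retrieve: Callable[[str], list[str]],              # e.g. a document search index
    generate: Callable[[str, list[str]], str],         # the base model, prompted with sources
    verify_claims: Callable[[str, list[str]], bool],   # checks the draft against the sources
) -> Answer | None:
    """Wrap next-token prediction in retrieval, verification, and a refusal path."""
    sources = retrieve(question)
    if not sources:
        # No grounding material: refuse rather than let the model improvise.
        return None

    draft = generate(question, sources)

    if not verify_claims(draft, sources):
        # Draft makes claims the sources do not support: hand off to a human
        # instead of presenting the output as authoritative.
        return None

    return Answer(text=draft, sources=sources, verified=True)
```

The design choice that matters is that the base model never answers alone: every output either carries the sources it was checked against or the question goes back to a person.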