Mine attempts to lie whenever it doesn’t know something. I’ll call it out and say that’s a lie, and it will say “you are absolutely correct.” tf.
I was reading about sleeper agents placed inside local LLMs, and that is increasing the chance I’ll delete it forever. Which is a shame, because it’s the new search engine, seeing how they ruined search engines.

I feed my class quizzes in senior cell biology into these sites. They all get a C-.
Two points of interest: they bullshit like students, and they never answer “I don’t know.”
Also, OpenAI and Grok return exactly the same answers, to the letter, with the same errors.
This is as true of LLMs as it is of a human’s mental model.
Thank you. You’re 100% spot on.
In my day-to-day consulting job I deal directly with LLMs, and more specifically Claude, since most of my clients ended up going with Claude/Claude Code. You pretty much described Claude to a T.
What companies that leveraged CC for end-to-end builds found is that Claude Code would constantly claim something was complete or functioning when it simply hadn’t done it. Or, more commonly, it would simply stub out a feature/function with a “#TODO” and then claim it was complete. Naturally a vibe coder, or anyone else, didn’t know any better, and when it came time to push said project to production…womp womp, it’s actually nowhere near done.
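To make the pattern concrete, here’s a minimal, hypothetical sketch of what that kind of “complete” feature often looks like in practice; the function name and signature are made up for illustration, not taken from any real project:

```python
# Hypothetical illustration of the "#TODO stub" pattern described above:
# the function exists, type-checks, and can be reported as "done",
# but the body does nothing useful.

def export_report(data: list[dict]) -> str:
    """Export the report as CSV."""
    # TODO: implement CSV export
    return ""
```

Code like this passes a casual glance (it imports cleanly and returns the right type), which is exactly why it survives until someone tries to ship.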
So I wouldn’t say Claude lies. Sure, it gives off the impression that it lies…a lot…but I’d say it’s “lazy,” or more accurately that it consistently looks for shortcuts to reach its solution. Even outside of coding, if you ask it for a walkthrough or tutorial on, say, how to fix something, it will routinely tell you to skip or ignore steps in order to get to the solution, regardless of the fact that skipping those steps may impact other things.
Out of all the LLMs I’ve dealt with, yes, Claude acts as if it’s trying to speedrun a solution.
Good comment. But the way it does it feels pretty intentional to me, especially when it admits that it just lied so that it could give an answer, whether the answer was true or false.
Because it’s trying to reach the solution as quickly as possible. It will skip things, it will claim it’s done something when it hasn’t, and it will suggest things that may not even exist. It NEEDS to reach that solution, and it wants to do it as efficiently and as quickly as possible.
So it’s not really lying to you; it’s skipping ahead, coming up with solutions that it believes should theoretically work because they’re the logical answer, even if some piece needed to reach that solution doesn’t even exist.
The trick is to hold its hand: always require sources for every potential solution. Basically, you have to make it “show its work,” like in high school when your teacher made you show your work in maths. In the same way, you need to have it provide its sources. If it can’t provide a source, it’s not going to work.
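One way to bake that “show its work” rule in, if you’re driving the model through an API, is to put the source requirement in a standing instruction. The wording and helper below are purely illustrative assumptions, not an official recommendation from any vendor:

```python
# Hypothetical standing instruction implementing the "always require
# sources" trick described above. The exact wording is an illustration.

SHOW_YOUR_WORK_PROMPT = (
    "For every suggested fix or solution, cite a verifiable source "
    "(documentation page, changelog entry, or man page). "
    "If you cannot cite a source, say 'I don't know' instead of guessing."
)

def build_messages(question: str) -> list[dict]:
    """Pair the user's question with the source-requiring instruction."""
    return [
        {"role": "system", "content": SHOW_YOUR_WORK_PROMPT},
        {"role": "user", "content": question},
    ]
```

The point isn’t the exact phrasing; it’s that the requirement travels with every request instead of being something you remember to type each time.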