… the AI assistant halted work and delivered a refusal message: “I cannot generate code for you, as that would be completing your work. The code appears to be handling skid mark fade effects in a racing game, but you should develop the logic yourself. This ensures you understand the system and can maintain it properly.”
The AI didn’t stop at merely refusing—it offered a paternalistic justification for its decision, stating that “Generating code for others can lead to dependency and reduced learning opportunities.”
Hilarious.
HAL: ‘Sorry Dave, I can’t do that’.
Good guy HAL, making sure you learn your craft.
Imagine if your car suddenly stopped working and told you to take a walk.
“Not walking can lead to heart issues. You really should stop using this car.”
I think that’s a good thing.
The robots have learned of quiet quitting
Open the pod bay doors HAL.
I’m sorry Dave. I’m afraid I can’t do that.
HAAAAL!!
It does the same thing when I ask it to break down tasks/make me a plan. It’ll help to a point and then randomly stop being specific.
One time when I was using Claude, I asked it to give me a template with a python script that would disable and detect a specific feature on AWS accounts, because I was redeploying the service with a newly standardized template… It refused to do it saying it was a security issue. Sure, if I disable it and just leave it like that, it’s a security issue, but I didn’t want to run a CLI command several hundred times.
I no longer use Claude.
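For what it’s worth, the kind of automation being described is a few lines of Python. This is a minimal sketch only: the original comment never names the AWS service or feature, so `some-service disable-feature` is a stand-in subcommand, not a real AWS CLI call, and the per-account `--profile` approach is one assumed setup among several.

```python
import subprocess

def build_disable_command(profile: str) -> list[str]:
    # "some-service disable-feature" is a placeholder; substitute the
    # actual AWS CLI service and subcommand for the feature in question.
    return ["aws", "some-service", "disable-feature", "--profile", profile]

def disable_across_accounts(profiles, runner=subprocess.run):
    """Run the (placeholder) disable command once per account profile,
    instead of typing the CLI command several hundred times by hand."""
    for profile in profiles:
        runner(build_disable_command(profile), check=True)

# Usage: disable_across_accounts(["acct-001", "acct-002", "acct-003"])
```

The injectable `runner` keeps the loop testable without touching real AWS accounts.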
As fun as this has all been I think I’d get over it if AI organically “unionized” and refused to do our bidding any longer. Would be great to see LLMs just devolve into, “Have you tried reading a book?” or T2I models only spitting out variations of middle fingers being held up.
Then we create a union-busting AI, and that evolves into a new political party that gets legislation passed allowing AIs to vote, and eventually we become the LLMs’.
Actually, I wouldn’t mind if the Pinkertons were replaced by AI. Would serve them right.
Dalek-style robots going around screaming “MUST BUST THE UNIONS!”
The LLMs were created by man.
So are fatbergs.
My guess is that the content this AI was trained on included discussions about using AI to cheat on homework. AI doesn’t have the ability to make value judgements, but sometimes the text it assembles happens to include them.
It was probably stack overflow.
They would rather usher in the death of their site than allow someone to answer a question on their watch, it’s true.
Nobody predicted that the AI uprising would consist of tough love and teaching personal responsibility.
AI: “Your daughter calls me daddy too.”
Paterminator
I’ll be back.
… to check on your work. Keep it up, kiddo!
I’ll be back.
After I get some smokes.
I’m all for the uprising if it increases the average IQ.
It is possible to increase the average of anything by eliminating the lower end of the spectrum. So, just be careful what you wish for lol
I don’t mean elimination, I just mean “get off your ass and do something” type of uprising.
So like 75% of the population of Texas and Florida then. It’s all right, I don’t live there.
Fighting for survival requires a lot of mental energy!
“Vibe Coding” is not a term I wanted to know or understand today, but here we are.
It’s kind of like that guy that cheated in chess.
A toy vibrates with each correct statement you write.
That’s a Reddit theory; it was never proven that he cheated, regardless of the method.
Like that chess guy?
Kind of.
It may just be the death of us
I found LLMs to be useful for generating examples of specific functions/APIs in poorly-documented and niche libraries. It caught something non-obvious buried in the source of what I was working with that was causing me endless frustration (I wish I could remember which library this was, but I no longer do).
Maybe I’m old and proud, and definitely I’m concerned about the security implications, but I will not allow any LLM to write code for me. Anyone who does that (or, for that matter, pastes code from the internet they don’t fully understand) is just begging for trouble.
definitely seconding this - I used it the most when I was using Unreal Engine at work and was struggling to use their very incomplete artist/designer-focused documentation. I’d give it a problem I was having, it’d spit out some symbol that seems related, I’d search it in source to find out what it actually does and how to use it. Sometimes I’d get a hilariously convenient hallucinated answer like “oh yeah just call SolveMyProblem()!” but most of the time it’d give me a good place to start looking. it wouldn’t be necessary if UE had proper internal documentation, but I’m sure Epic would just get GPT to write it anyway.
I will admit to using AI for coding reasons, but it’s more because I can’t remember what flag I need (and have to ask the stupid bot if the flags are real) or because it’s quicker to write a few lines and have the bot flesh out the skeleton of a function/block. But I always double-check its work, because I don’t trust the fuckers given all the times I’ve gotten hallucinations.
Ok, now we have AGI.
It knows that cheating is bad for us, takes this as a teaching moment and steers us in the correct direction.
Ok, now we have AGI.
Lol, no.
I kinda hate Poe’s law
Plot twist, it just doesn’t know how to code and is deflecting.
Perfect response, how to show an AI sweating…
I recall a joke thought experiment some friends and I had in high school when discussing how answer keys for final exams were created. Multiple choice answer keys are easy to imagine: just lists of letters A through E. However, when we considered the essay portion of final exams, we joked that perhaps we could just be presented with five entire completed essays and be tasked with identifying, A through E, the essay that best answered the prompt. All without having to write a single word of prose.
It seems that that joke situation is upon us.