

If the driver wants to kill himself and drives into a tree at 200kph, the manufacturer is not responsible
Yes that was the argument
u/Thorry84@feddit.nl: AI produces garbage
me: this looks amazing to me; sure, it’s not perfect, but it’s super impressive considering it was made in like 30 minutes
u/Thorry84@feddit.nl: no, it’s garbage, look, I noticed minor things that aren’t correct!
me: fair enough, can you make something better using any other tools besides AI?
u/Thorry84@feddit.nl: fuck you🤬🤬🤬🤬🤬🤬
It’s all good bro
Louis CK, “Everything is Amazing & Nobody is Happy”:
“We live in an amazing, amazing world and it’s wasted on the crappiest generation of spoiled idiots”
Because if he didn’t use the jailbreak, it would give him crisis resources
but even OpenAI admitted that they’re not perfect:
That said, ChatGPT or not, I suspect he wasn’t on the path to a long life, or at least not a happy one:
I think OpenAI could do better in this case; the safeguards have to be strengthened. But the teen clearly had intent and overrode the basic safeguards that were in place, so when they quote things ChatGPT said, I try to keep in mind that his prompts claimed they were for “writing or world-building.”
Tragic all around :(
I do wonder how this scenario would have played out with any other LLM provider as well.