You don’t replace your dish gnome cartridge every 3 years? I was told it was a feature. They get tired.
- 0 Posts
- 5 Comments
Joined 2 years ago
Cake day: June 29th, 2023
You are not logged in. If you use a Fediverse account that is able to follow users, you can follow this user.
That guy gets to say that if you look up “humour” in the encyclopedia there’s a picture of him.
I didn’t see anything about 9/11 this year. Is the fervour dying down in the US, or did I just miss it this year because I wasn’t on reddit?
I’m used to once a year getting all this extremely sincere “never forget” stuff in my feed, along with a bunch of fights about how hypocritical it is.
Yo so lemmy.world isn’t run by tankies? If it is could y’all just ban me now so I don’t invest too much in the instance pls? I got kind of sick of the constant cycle of disappointment seeing how many supposedly leftist subs were taken over and ruined by them on reddit.
Yes, they try to prevent unwanted outputs with filters that prevent the LLM from seeing your input, not by teaching the LLM, because they can’t actually do that, it doesn’t truly learn.
Hypotheticals and such work because the LLM has no capacity to understand context. The idea that “A is inside B”, on a conceptual level, is lost on them. So the idea that a recipe for napalm is the same whether it’s framed within a hypothetical or not is impossible for them to decode. To an LLM, just wrapping the same idea in a new context makes it seem like a different thing.
They don’t have any capacity to self-censor, so telling them not to say something is like telling a human not to think of an elephant. It doesn’t work. You can tell a human not to speak about the elephant, because that’s guarded by our internal filter, but the LLM is more like our internal processes that operate before our filters go to work. There is no separation between “thought” and output (quotes around “thought” because they don’t actually think).
Solving this problem I think means making a conscious agent, which means the people who make these things are incentivised to work towards something that might become conscious. People are already working on something called agentic AI which has an internal self-censor, and to my thinking that’s one of the steps towards a conscious model.