• 2 Posts
  • 30 Comments
Joined 2 years ago
Cake day: July 13th, 2023

  • No, the general public doesn’t know what LLMs are.

We know what it is, so we can have a conversation using the more specific terms.
    But this comic is probably geared towards a larger audience, and that gap is probably one of the things it's joking about.

I personally don't give a rat's ass that OpenAI and other scum are trying to appropriate the term AI for themselves.
    What bothers me about them is that their marketing anthropomorphizes their products, making it almost impossible for people to have rational discussions about the tech without emotion coming into play.


  • That is beyond pedantry.

    That is how language works. Word definitions are literally just informal consensus agreement. Dictionaries are just descriptions of observed usage. Not literally everyone needs to agree on it.
    This isn’t some kind of independent conclusion I came to on my own; I used to think like you appear to, but then I watched some explanations from authors and from professional linguists, and they changed my mind about language prescriptivism.

If you say “AI” in most contexts, more people will know what you mean than if you say “LLM”. If your goal is communication, then by that measure “AI” is “more correct” (but again, correctness isn't even applicable here).


  • You know that things can both harm and benefit you, right? That’s the whole idea behind the idiom “the pros outweigh the cons”.

If someone is making an argument about the cons of a thing, it's insane to expect them to also list off a bunch of unrelated pros, and likewise it's unreasonable to conclude from that omission that they don't believe any pros exist.

I think that LLMs cause significant harm, and we don't have any harm mitigation in place to protect us. In light of the serious potential for widespread harm, the pros (of which there are some) don't really matter until we make serious progress in reducing the potential for harm.

    I shouldn’t need this degree of nuance. People shouldn’t need to get warnings in the form of a short novel full of couched language. I’m not the only person in this conversation, the proponents are already presenting the pros. And people should be able to understand that.

When people were fighting against leaded gasoline, they shouldn't have needed to say “yes, it makes cars more fuel efficient and prevents potentially damaging engine knock, thereby reducing average maintenance costs” every time they spoke about the harms. It would be unreasonable to say that they were harming discourse by not acknowledging the benefits every time they cautioned against its use.

I don't believe that you're making a genuine argument. I believe you're trying to stifle criticism by shifting the responsibility for nuance from its rightful place, in the hands of the people selling and supporting a product with the potential for harm, onto the critics.


  • It’s not a strawman, it’s hyperbole.

    There are serious known harms and we suspect that there are more.
    There are known ethical issues, and there may be more.
    There are few known benefits, but we suspect that there are more.

Do we just knowingly subject untrained people to harm, just to see if there are a few more positive use cases and to make shareholders a bit more money?
    How does their argument differ from that?