Note: this lemmy post was originally titled MIT Study Finds AI Use Reprograms the Brain, Leading to Cognitive Decline and linked to this article, which I cross-posted from this post in !fuck_ai@lemmy.world.
Someone pointed out that the “Science, Public Health Policy and the Law” website which published this click-bait summary of the MIT study is not a reputable publication deserving of traffic, so, 16 hours after posting it I am editing this post (as well as the two other cross-posts I made of it) to link to MIT’s page about the study instead.
The actual paper is here and was previously posted on !fuck_ai@lemmy.world and other lemmy communities here.
Note that the study with its original title got far fewer upvotes than the click-bait summary did 🤡
It's so disturbing. Especially the bit about your brain activity not returning to normal afterwards. And they are teaching kids to use it in elementary schools.
I think they meant it doesn’t return to non-AI-user levels when you do the same task on your own immediately afterwards. But if you keep doing the task on your own for some time, I’d expect it to return to those levels rather fast.
That’s probably true, but it sure can be hard to motivate yourself to do things on your own when that AI dice roll is right there to give you an immediate dopamine hit. I’m starting to see things like vibecoding as being as addictive as gambling.
Personally I don’t use AI because I see all the subtle ways it’s wrong when programming. The more I pay attention to things like AI search results, the more it seems like there’s almost always something misrepresented or subtly incorrect in the output, and for any topic I’m not already fluent in, I likely won’t notice these things until they’re already causing issues.
This “dopamine hit” isn’t a permanent source of happiness. Repeatedly clicking the “randomize” button isn’t going to make you feel constantly high; after three, maybe five hits you’ll start noticing a common pattern that gets old really fast. To do better, you need to come up with ways to declare different structures, establish rulesets and checklists, and make some unique pieces at certain checkpoints yourself, while letting the LLM fill in all the boilerplate around them. That’s more effort, but it also produces more rewarding results.

I like to think about it this way: the LLM produces the best, most generic thing possible for the prompt. Then I look at it, consider which parts I want to be less generic, and reprompt. In programming or scripting, I’m okay with the “best generic thing” that solves the problem I have. If I were writing novels, maybe it’s usable for some kind of top-down writing where you start with a high-level structure, then clarify it step by step down to the lowest level. You can use AI to write around this structure, and if something is too boring or generic, it’s again simply a matter of refining the structure further and expanding one element into multiple more detailed ones.
It’s not any different from eating fast/processed food vs eating healthy.

It warps your expectations.