Even if they were a shill I don’t understand what you imagine achieving by telling them that as if they don’t already know.
Or maybe you’re not actually even talking to them but rather just performing to your imagined audience.
Refusing to reduce complex reality into slogans and clichés since 19XX
I’ve yet to meet a single anti-AI person in real life. I’m starting to think it’s just a loud online minority that, once again, keeps the rest of the people on the left from daring to even admit to using it.


You replied to a wrong comment.


The photos of people that news outlets use reliably reveal their bias. If they don’t like the person they use a bad picture. Once you notice it, it can’t be unseen.


They haven’t claimed to be in possession of AGI, no matter how nicely that would fit your narrative.


It wasn’t from a starlink satellite though.
which the U.S. aerospace company SpaceX later admitted was part of a cargo trunk for its Crew Dragon spacecraft.


I’d say LLMs are not necessarily an indicator that we’re close to AGI, but they’re also not a non-indicator. Certainly more of an indicator of it than the invention of the steam engine was. For narrowly intelligent systems, they’re getting quite advanced. We’re not there yet, but I worry that the moment we actually step into the zone of general intelligence might not be as obvious as one would think.
However, I also don’t think there’s any basis to make the absolute claim that LLMs will never lead there, because nobody could possibly know that with that degree of certainty.
And yeah, there are multiple ways to screw things up even with narrowly intelligent AI - we don’t need AGI for that.


The “AI” that we have now is not actually AI
This is simply just false. We’ve had AI since 1956
AI isn’t any one thing. It’s a broad term used in computer science to refer to any system designed to perform a cognitive task that would normally require human intelligence. The chess opponent on an old Atari console is an AI. It’s an intelligent system - but only narrowly so. That’s called “narrow” or “weak” AI.
It can still have superhuman abilities, but only within the specific task it was built for - like playing chess or generating language.
A large language model like ChatGPT is also narrow AI. It’s exceptionally good at what it was designed to do: generate natural-sounding language. What people expect from it, though, isn’t narrow intelligence - it’s general intelligence. The ability to apply cognitive skills across a wide range of domains the way a human can. That’s something LLMs simply can’t do - at least not yet. Artificial General Intelligence is the end goal for many AI companies, but LLMs are not generally intelligent. However they still fall under the umbrella of AI as a broad category of systems.
Making what we’ve got into actual AI like you said isn’t going to happen, full stop.
I’ve never claimed LLMs will lead to AGI as I stated in the comment you quoted above.


Feel free to help me realize it then, because whatever irony or conflict you’re seeing there, I don’t see.


AI is not something somebody is going to develop in their moms basement. AGI is NOT inevitable.
Plenty of AI systems have already been developed by private individuals on their personal computers. This is not hypothetical. And I’m not claiming that our first AGI will have anything to do with LLMs.
I view AGI as inevitable because it’s the natural end goal of us incrementally improving our AI systems over a long enough period of time. As with all human-created technology, we will keep improving it. It doesn’t matter how slow the process is - as long as we keep heading in that direction, we will eventually reach the destination. The only things that could stop us, as far as I can see, are either destroying ourselves some other way before we get there or substrate independence - meaning general intelligence simply cannot be created without our biological wetware. I however see no reason to assume that, since human brains are made of matter just like computers are and I don’t think there’s anything supernatural about intelligence.


Nobody could possibly know. That’s why I make no claims about the timeline.


I’m just going to ignore your completely uncalled-for smug and dismissive tone and note that at no point have I suggested LLMs will lead to AGI.
Thank you for your contribution to making this platform a worse place for everyone.


The way I see it:
I genuinely see no solution to this. I can only hope things turn out well, or at the very least that it doesn’t happen during my lifetime. The genie isn’t going back into the bottle.


Have the comments here read the article?
You serious? Of course not - but they did see the letters “AI” in the title.
It is still a first party web interface.
uBlock Origin and SponsorBlock work just fine. When it comes to AI content, just ignore it and it’ll disappear from your recommendations.
For virtually everything wrong with YouTube, there’s an add-on to fix it or a behavior to change. So many people can’t help but watch short-form content, for example, and then they complain their feed is all shorts - I wonder why.


No idea. I did it myself with a dremel.


That’s one way to deal with the traffic: just make your state such a shitty place to live that everyone moves elsewhere.


Mine has my name permanently engraved on it, as is the case with most of my possessions. I love the horror on people’s faces when they hear you’ve intentionally “damaged” something valuable.
Stock items have no character. Customize everything.
I haven’t claimed it isn’t real. I haven’t claimed it’s stopping me from talking about it. I haven’t said I love AI. I’m not pretending people are on my side and I haven’t claimed to be on the left politically.