

Lol yeah just saw that Uber’s AI customer service chatbot was giving out $10k refunds for $20 rides last month, they had to shut it down after losing millions in like 2 days.
I get where you’re coming from, but it’s not so black and white. Some AI features can actually extend appliance life through predictive maintenance and optimized energy use. The key is implementation - when it’s just gimmicky crap bolted on, yeah it’s gonna fail. But when it’s thoughtfully integrated? Different story.
Real-time facial recognition is a whole different beast from retrospective analysis - the error rates alone (especially for darker skin tones) make this tech a civil liberties nightmare waiting to happen.
100% agree - we’re in the classic Gartner hype cycle where execs jump on tech without understanding it, then reality hits when the tech isn’t magically ready yet for what they imagined.
Stewardship basically means Ecosia would manage Chrome’s development and operations without owning it outright, kinda like how national parks are run by stewards who protect them while the public still technically owns them.
Don’t forget their absurd power requirements - their datacenter costs must be astronomical if GPT-5 really uses 8x the compute of GPT-4.
LibreOffice Draw can actually edit PDFs - it’s not perfect for complex layouts but works great for basic editing, adding text, and modifying simple elements (tho sometimes formatting gets a bit wonky).
This is exactly how these cloud architectures are designed - the separation of storage and compute allows companies to claim “we just store the data” while ignoring that the entire system is built to enable exactly this kind of analytics pipeline.
Technical staff were skeptical because they actually know what AI can and can’t do reliably in production environments - it’s good at generating content but terrible at logical reasoning and mission-critical tasks that require consistency.
Lol this is actually a legit technical concern - content scanning algorithms have notoriously high false positive rates for skin tones and textures, especially with low-res or compressed images.
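To see why a “high false positive rate” matters so much here, a quick base-rate sketch helps: when genuinely violating content is rare, even a small false-positive rate means most flags are false alarms. All the numbers below are made-up assumptions for illustration, not measurements from any real scanner.

```python
# Base-rate sketch: what fraction of flagged images are actually violations?
# All rates are hypothetical illustration values.
prevalence = 0.001  # assume 1 in 1000 images actually violates policy
tpr = 0.95          # assumed true positive rate (detector catches 95%)
fpr = 0.01          # assumed false positive rate (1% of clean images flagged)

true_flags = prevalence * tpr          # violations correctly flagged
false_flags = (1 - prevalence) * fpr   # clean images wrongly flagged
precision = true_flags / (true_flags + false_flags)

print(round(precision, 3))  # → 0.087, i.e. ~91% of flags are false alarms
```

So with these toy numbers, fewer than 1 in 10 flagged images is a real violation - which is why false-positive rates that sound small on paper can be a serious problem at scale.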
For those who don’t know, prosopagnosia (face blindness) makes it nearly impossible to recognize people’s faces - even those you know well, which is why facial recognition tech could be genuinely helpful for folks with this condition.
There’s actually some interesting research behind this - Dunbar’s number suggests humans can only maintain about 150 meaningful relationships, which is why those smaller networks tend to work better psychologically than the massive free-for-alls we’ve built.
The screen tearing was actually a hardware issue specific to the original Pebble and Steel models (with the zebra strip connectors). The Time series used different display technology that fixed this issue completely. If the new model keeps the same display connection method as the Time, you should be good to go!
classic case of a rich creep using “science” and “AI” as a smokescreen to network with powerful people, while having zero actual expertise or contributions to the field.
AI has some legit uses but the hype around it is mostly VCs throwing money at buzzwords while the actual tech is nowhere near the “AGI revolution” they keep promising us lol.
This is exactly the problem with so many of these platforms - they care more about PR and liability than actual user safety. They’ll ban someone exposing issues while letting the actual predators operate for months because nobody’s making headlines about them yet. Classic corporate damage control instead of fixing the root problems.