

Jesus, how the heck is this called “sideloading is so easy on an iPhone”?
That’s a nightmare procedure, and completely unnecessary.
Obviously Apple makes sideloading as hard as possible.
You probably didn’t do it on purpose, but you made a comparison on Apple’s terms, thus implicitly privileging Apple.
The last thing Apple needs is us privileging it.
Openness isn’t just a nice to have. It is essential.
The difference between general purpose computing and gatekept walled garden computing is night and day.
Identifying the devs is not in Google’s “need to know.” Google sells, or helps to sell, a general-purpose open device where it is on us to exploit that device however we will.
Now Google wants to switch to a walled garden, moderated development model.
If Google promises it won’t use those dev IDs to moderate development, their promise is only worth the wind it moves and the sound it makes.
Not a solution to our problem, but this is a crumb in our favor.
Bias is inevitable.
It makes no sense to work to eliminate it.
It’s like trying to breed a new species of humans that are void of preferences. An impossible task. If such a breeding process somehow succeeded, the resulting product would not be human anyway.
If anyone claims to be unbiased themselves, or claims freedom from bias for someone else, even just implying that someone somewhere is unbiased, I immediately know bullshit is afoot. Such claims are not always knowingly malicious, but are always detrimental to my interests should I start foolishly believing them.
If the problem we want to solve is how to consolidate wealth and power in a few private hands the fastest, the unregulated free market is the solution.
The owners of the AI are centering their own personal interests.
Who here thinks businesses and CEOs exist to serve the public interest? I have a bridge to sell you.
If we want businesses and CEOs to serve the public interest ahead of their personal interests, we have to FORCE them to.
This problem is so bad it even has a name: the “principal-agent problem.” CEOs and other execs routinely steal from even their “own” publicly traded companies, but it is hardly ever litigated because it is hard to prove without violating the CEOs’ privacy. The most obvious method is kickbacks: they get under-the-table payments from contractor companies when deciding which contractor should get the contract.
The business world is rife with scum and villainy. If we ever want some guardrails around business practices we must grab the CEOs by their genitals, because their word on anything is worth only the sound it makes.
CEOs need to see jail time, and capital punishment in states that allow it.
Instead we lionize these psychopaths and call them “business leaders”. We brought all this on ourselves by uncritically believing the businesses’ own way of describing themselves.
“We will build one concentration camp less than the Republicans!”
Signed, Democrats.
Elon turned Grok into Mecha-Hitler.
Trump is telling the Smithsonian museum to ignore slavery, or to portray it as a positive.
The domestic appetite for propaganda is huge. Prager U is American.
Let’s not center foreign countries when we have so much work to do at home.
I have seen some people talk like that, and it strikes me as a religion. There’s euphoria, zeal, hope. To them AGI is coming to usher in heaven on earth. The Singularity is like the Rapture.
Sam Altman is one of the preachers of this religion.
How indeed. It’s probably a multi-factor phenomenon that requires an anthropological study for a serious answer. (Good luck trying to get the necessary access to study them.) My guess for one factor is that they have more money than they know what to do with.
As an assist to an actual oncologist, only.
I can see AI as a tool in some contexts, doing some specific tasks better than an unassisted person.
But as a replacement for people, AI is a dud. I would rather be alone than have an AI girlfriend. And yes, I am taking trauma and personal and cultural baggage into account. An LLM is also a product of our culture for the most part, so it will have our baggage anyway. But even if, in principle, it could be trained to not have certain kinds of baggage, I would still rather deal with a person, save for the simplest and lowest-stakes interactions.
If we want better people, we need to enfranchise them and remove most paywalls from the world. Right now the world, instead of being inviting, is bristling with physical, cultural, and virtual fences, saying to us, “you don’t belong and aren’t welcome in 99.99% of the space, and the other 0.01% will cost you.” Housing, for now, is only a privilege. In a world like that it’s a miracle people are as decent as they are. If we want better people we have to deliberately, on purpose, choose broad-based human flourishing as a policy objective, and be ruthless toward any enemies of said objective. No amnesty for the billionaires and wannabe billionaires. Instead they are trying to shove AI/LLMs and virtual worlds down our throats as replacements for an actually decent and inviting world.
A narrow purpose AI trained to recognize tumor growths early is the kind of AI that makes sense to me.
The “owners” of our world want us to be passengers, not drivers. They own the carousel, and we rent our rides.
They say we have no skin in the game. Truth is, SKIN is ALL we have in this game. We must have assets in the game as a birthright to make it worth playing in good faith. If most are landless and assetless, sorry, the game sucks. That means until we get rules that protect all of our interests, as opposed to protecting massive wealth accumulations at everyone’s expense, we will ignore the rules, the norms, decorum, civility, etc.
If the hoarders break the social contract repeatedly, like they have since 2008, it takes people some time to internalize and digest what it means for none of us to be bound by a social contract. Once people catch on, there will be hell to pay.
Of course the power dynamics cannot ever be eliminated (either by breeding or enculturation) from the interpersonal relationships.
Instead, power can be regulated and managed, to maximize distributed decisionmaking, and to protect those decisionmakers who could not or would not protect themselves.
In a free for all, feudalism will always result. The strong and the willing will rule over the weak and the unwilling.
There have to be limits to the power dynamics. Those limits will have to be enforced to protect the vulnerable, the gullible, and the unwilling (those who have the capability to exercise power, but refuse by choice), etc. This requires advanced democratic governance with a very strong government.
Doing away with the government is just a speedrun toward technofeudalism.
Working to create a protected space that selects for distributed decisionmaking is the actual project. That’s an actually sane, worthwhile and achievable goal.
When democratic governance withers what fills the power vacuum is feudalism.
Technofeudalism is feudalism with computers.
Ironically, to create a space that selects for and protects distributed decisionmaking (the desire of most sane anarchists), you need a strong government!
My use case for AI is to get it to tell me water-to-cereal ratios, like for rice, oatmeal, cornmeal. If there is a mistake, I can easily control for it, and it’s a decent enough starting point.
That said, I am just being lazy by avoiding taking my own notes. I can easily make my own list of water-to-cereal ratios to hang on the fridge (see the sketch below).
I was fine before the AI.
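Since the “list on the fridge” is the whole how-to here, a minimal sketch of it as a script, just to make the point that no AI is needed. The names and ratio values below are my own illustrative, commonly cited starting points, not anything from this thread:

```python
# Illustrative "fridge list" of water-to-cereal ratios.
# The numbers are approximate, commonly cited starting points
# (cups of water per cup of dry grain); adjust to taste.
WATER_PER_CUP = {
    "white rice": 2.0,
    "oatmeal": 2.0,
    "cornmeal": 4.0,
}

def water_needed(grain: str, cups_of_grain: float) -> float:
    """Cups of water for a given amount of dry grain."""
    return WATER_PER_CUP[grain] * cups_of_grain

if __name__ == "__main__":
    for grain, ratio in sorted(WATER_PER_CUP.items()):
        print(f"{grain}: {ratio:g} cups of water per cup of grain")
    print(f"1.5 cups of oatmeal needs {water_needed('oatmeal', 1.5):g} cups of water")
```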
The biggest customers of AI are the billionaires who can’t hire enough people for their technofeudalist/surveillance-capitalism agenda. The billionaires (wannabe aristocrats) know that machines have no morals, no bottom lines, no scruples, don’t leak info to the press, don’t complain, don’t demand time off or to work from home, etc.
AI makes the perfect fascist.
They sell AI like it’s a benefit to us all, but it ain’t that. It’s a benefit to the billionaires who think they own our world.
AI is used for censorship, surveillance pricing, activism/protest analysis, making firing decisions, making kill decisions in battle, etc. It’s a nightmare fuel under our system of absurd wealth concentration.
Fuck AI.
Corporations are people too, friend.
/s