AmbitiousProcess (they/them)

  • 0 Posts
  • 14 Comments
Joined 3 months ago
Cake day: June 6th, 2025


  • I doubt that’s the case, currently.

    Right now, there’s a lot of genuine competition in the AI space, so companies are actually trying to outcompete one another for market share. It’s only once users are locked into a particular service that deliberate enshittification begins, with the goal of extracting more money, whether by charging more for tokens, or by doing what Google did when it deliberately made search quality worse so people would see more ads (“What are you gonna do, go to Bing?”).

    By contrast, if ChatGPT sucks, you can locally host a model, use one from Anthropic or Perplexity, or use any number of interfaces for open-source (or at least source-available) models like Deepseek, Llama, or Qwen.

    It’s only once industry consolidation really starts taking place that we’ll see deliberate measures to make people spend more on tokens, or monetization schemes like injecting ads into responses.



    Just checked the contributor’s page: the crawled privacy policy being referenced is listed as 4 months out of date, but the policy on Nebula’s website hasn’t changed since Aug 31, 2023, so I think TOSDR might be a little bugged and just doesn’t have all of the current policy’s points available for contributors to tag. The current privacy policy is much lengthier, to cover local state privacy regulations, the scope of what they now offer, etc.

    Still, it’s all pretty boilerplate, and nothing about it is really out of the ordinary or super harmful. Extremely basic attribution might be used if you click through to Nebula from an ad, in which case they might share a non-identifying hashed ID with that company. They collect aggregate statistics to measure the impact of marketing campaigns, they sometimes email you, and they collect the kind of device data most web servers would log by default. All very standard.

    If they update any part of the policy about how they collect/use/share your data, they’ll notify you.

    They even explicitly say to not provide them with info on your race/politics/religion/health/biometrics/genetics/criminality or union membership. You are given an explicit right to delete your account regardless of local privacy laws, and they give you a single email to contact specifically regarding any requests related to the privacy policy.

    None of this is crazy, and I have no clue why artyom would call it a “shithole” based on that.


    Except for these people, it almost definitely is. They have staff, an office, inventory to manage, etc. Most YouTubers nowadays aren’t just operating on their own, and thus have financial expenses beyond paying themselves for their own labor, expenses that can’t be sustained if their revenue stream goes away, or even just takes a large enough hit.

    It’s unfortunate, but that’s just how a lot of the content creation industry works right now, especially on YouTube.


    It’s also just generally easier for first-time users to get started with. For anyone curious, their little “feeds” of communities, which you can follow in one go by topic, are super handy.

    For example, if I subscribe to the activismplus feed, I automatically subscribe to communities like antiwork, solarpunk, socialism, leftism, anarchism, unions, antifascism, human rights, left urbanism, etc., from a number of different instances all at once.

    For a first-time user, it’s easier to pick a topic they’re interested in and automatically be following all the relevant communities across most instances, rather than subscribing to communities one-by-one over a very long period of time.
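    To sketch the idea, here’s a hypothetical example of what a feed amounts to under the hood: one name mapping to many communities across instances, subscribed to in a single pass. The community names and the subscribe() helper are illustrative placeholders, not Lemmy’s actual API.

    ```python
    # Hypothetical sketch of a "feed": one topic name bundling many
    # communities across instances. All names below are placeholders.
    FEEDS = {
        "activismplus": [
            "antiwork@lemmy.example",
            "solarpunk@slrpnk.example",
            "unions@lemmy.example",
        ],
    }

    def subscribe(community: str) -> None:
        # Stand-in for whatever real API call performs the subscription.
        print(f"subscribed to {community}")

    def follow_feed(feed_name: str) -> None:
        """Subscribe to every community bundled under one feed."""
        for community in FEEDS.get(feed_name, []):
            subscribe(community)

    follow_feed("activismplus")  # subscribes to all three at once
    ```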



  • The MSDS for the filament I use says that it doesn’t contain any PBT/vPvB substance or endocrine disruptors. I presume that means it’s likely fine, at least for the brand I use.

    The only two ingredients are PLA and calcium carbonate, which is also found in eggshells and some vegetables, and is coincidentally a common additive to composting piles, where it can help eliminate pathogens.

    I also think the overall amount of pigment entering the environment from something like this will be quite low compared to practically any other contaminant that enters the waste stream from people throwing random things in the bin because they just don’t know what’s compostable.

    There’s also the fact that there are probably larger overall harms from all the microplastics persisting in a landfill than from the material being broken down almost entirely into its basic components (like lactic acid) in a composting facility, with only a minute amount of residual contamination. It’s not perfect, but it’s probably better than leaving all those microplastics floating around for decades, if not centuries, depending on the environment.



    This paper estimates the emissions of roughly a 1 kg spool of PLA filament at 3.10 kg of CO2e (the paper’s estimates are done by filament length, not weight, but the weight works out to about 1 kg).

    The model used to print the alleged ghost gun is the FMDA 19.2 by “the Gatalog,” which, when I load it into my slicer, shows an estimated 55 g of filament used at 15% infill, and 94 g with 100% solid infill, for an estimated 0.1705-0.2914 kg of CO2e for the printed parts. (This doesn’t include any support material, which varies with print positioning.)

    There’s no easy way to determine how much of that could theoretically end up as microplastics though. As for the metal parts, I have no clue lmao, I don’t care to estimate it that much.
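    For anyone who wants to reproduce the arithmetic, here’s a minimal sketch: per-print emissions scale linearly with filament mass, using the paper’s roughly 3.10 kg CO2e per 1 kg of PLA and the slicer’s gram estimates quoted above.

    ```python
    # Minimal sketch of the arithmetic above: per-print CO2e scales
    # linearly with filament mass (3.10 kg CO2e per kg of PLA, per the
    # cited paper). Gram figures come from the slicer estimates above.
    CO2E_PER_KG_FILAMENT = 3.10  # kg CO2e per kg of PLA filament

    def print_emissions_kg(filament_grams: float) -> float:
        """Estimated kg of CO2e for a print using the given grams of filament."""
        return (filament_grams / 1000.0) * CO2E_PER_KG_FILAMENT

    print(print_emissions_kg(55))  # ~0.1705 kg CO2e at 15% infill
    print(print_emissions_kg(94))  # ~0.2914 kg CO2e at 100% infill
    ```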


    From what I’ve seen, at the bare minimum, it will break down completely back into its plant-derived components faster than other plastics could hope to break down into anything non-dangerous to the environment. And even if it does break down into microplastics quicker, I’d rather have something like that, which can then later break down further, than something that slowly leaches microplastics into the environment for the next few centuries and doesn’t really break down into anything much less dangerous past that point.

    To cite some interesting points from the paper you referenced:

    The biodegradation of polylactic acid occurs in two main steps: fragmentation and mineralization. […] which can be biotic or abiotic. For instance, biotic hydrolysis involves microorganisms and/or enzymes, whereas abiotic hydrolysis involves mechanical weathering.

    This means it can break down via multiple mechanisms, with or without the presence of any microbes, but only given specific environmental circumstances, which is why it doesn’t work well in aquatic environments, as previously mentioned. However, some of it does still break down there, and if it later exits that aquatic environment, other processes can begin to break down what remains.

    The authors concluded that polylactic acid and its blends are similar to non-biodegradable plastics in terms of biodegradation in aquatic environment.

    [They] proposed that low temperatures along with low bacterial density make the sea water unsuitable for the biodegradation of polylactic acid.

    However, on the microplastics point: while they do state that it degrades into microplastics quickly, the overall quantity of microplastics produced is actually lower than for other common plastics.

    The authors reported that polylactic acid forms almost 18 times fewer microplastics as compared to the petroleum-based plastic, polypropylene.

    They do still mention that it will likely have many negative effects on marine life, though, even given that. Surely we’ll stop dumping plastics in the ocean now, for the good of the planet! Or not, because profits matter more, am I right?

    From another study, it seems that soil with certain combinations of bacteria, at temperatures regularly found in nature, could mineralize about 24% of PLA in 150 days, which is pretty damn good compared to how long it would take non-bioplastics to do the same (see the rough extrapolation at the end of this comment).

    And of course, when put into dedicated composting facilities that can reach high temperatures, PLA can be composted extremely effectively. And this is just regular PLA we’re talking about, not things like cPLA, which can be 100% composted in regular composting facilities within 2-4 months. (Coincidentally, most biodegradable utensils are now made of cPLA.)

    I wouldn’t be surprised if we start seeing even more compostable variants of filament made specifically for 3D printers as distribution and manufacturing of the material becomes more cost-effective and widespread. I was able to find cPLA filament at a reasonable price from a simple search, and there’s even a biodegradable flexible filament made of oyster powder as an alternative to TPU, which is 100% compostable (though it’s about 4-8x the price of regular TPU per gram as of now).

    None of this discounts any of the current environmental impacts of 3D printing materials, of course, but a lot of PLA can already be almost entirely, if not actually entirely, composted in local municipal composting facilities, and even more compostable alternatives exist today.

    I compost my failed or no-longer-needed PLA prints, and my city even explicitly states to put it in my compost bin, as it’s supported by our composting system.
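    As a back-of-the-envelope extrapolation of that 24%-in-150-days figure: if you assume simple first-order decay (a big assumption on my part; the study doesn’t fit this model, and real mineralization kinetics are messier), the implied half-life works out to roughly a year.

    ```python
    import math

    # Back-of-the-envelope sketch: treat the study's "24% mineralized in
    # 150 days" as simple first-order decay. This model is an assumption,
    # not something the study itself claims.
    fraction_remaining = 1 - 0.24            # after 150 days
    k = -math.log(fraction_remaining) / 150  # per-day rate constant
    half_life_days = math.log(2) / k

    print(f"rate constant k = {k:.5f} per day")            # ~0.00183
    print(f"implied half-life = {half_life_days:.0f} days")  # ~379 days
    ```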


    That’s not an extension cable, but an adapter, so it’s not a problem in this case. It converts the signal from an audio jack into something that can pass through USB-C, rather than simply extending a USB-C cable. It could almost certainly handle any amount of power and data an audio jack would pass through it, no problem, even if it were a USB-C to USB-C extension cable rather than an adapter.

    The problem arises when someone chains a higher-spec USB-C setup through a lower-spec USB-C extension cable, such as using a 240W charger with an extension cable in the middle that’s only rated for 120W. In that case, more power would pass through than the lower-spec cable could handle, and it would overheat.

    The amount of data and power from an audio jack is simply too small to overwhelm practically any USB-C cable or adapter that exists, thus it’s not an issue.
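    As a toy illustration of that failure mode (the wattages below are made up, not from a real spec table): the negotiated power has to stay at or below the rating of every segment in the chain.

    ```python
    # Toy sketch of the failure mode described above: the negotiated power
    # must not exceed the rating of ANY segment in the chain. Wattages are
    # illustrative placeholders, not real spec values.
    def chain_is_safe(negotiated_watts: float, segment_ratings: list[float]) -> bool:
        """True if every cable/adapter segment can carry the negotiated power."""
        return all(negotiated_watts <= rating for rating in segment_ratings)

    # A 240 W charger through a 240 W cable plus a 120 W extension: unsafe.
    print(chain_is_safe(240, [240, 120]))  # False -> the extension would overheat

    # An audio jack passes only a tiny amount of power; any USB-C part is fine.
    print(chain_is_safe(0.1, [60]))  # True
    ```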


    Most of these AI crawlers are run by major corporations operating out of datacenters with known IP ranges, which is why they do IP range blocks. That’s why, in Codeberg’s response, they mention that the crawling stopped once they fixed the configuration issue that had only been blocking those IP ranges on non-Anubis routes.

    For example, OpenAI publishes a list of IP ranges that their crawlers can come from, and also displays user agents for each bot.

    Perplexity also publishes IP ranges, but Cloudflare later found them bypassing no-crawl directives with undeclared crawlers. They did use different IPs, but not from “shady apps.” Instead, they would simply rotate ASNs and request a new IP.

    The reason they do this is that it’s still legal for them to do so. Rotating ASNs, and IPs within an ASN, is not a crime. Maliciously using apps installed on people’s devices to route network traffic they’re unaware of, however, is. It would also carry much higher latency, and could even expose them to man-in-the-middle attacks, which they clearly don’t want.
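    For a rough sketch of what blocking by published IP range looks like in practice (the CIDR ranges below are reserved documentation addresses, not any vendor’s real published list):

    ```python
    # Minimal sketch of IP-range blocking as described above. The CIDRs
    # here are reserved documentation ranges (RFC 5737), standing in for
    # whatever ranges a crawler operator actually publishes.
    import ipaddress

    BLOCKED_RANGES = [
        ipaddress.ip_network("203.0.113.0/24"),
        ipaddress.ip_network("198.51.100.0/24"),
    ]

    def is_blocked(client_ip: str) -> bool:
        """True if the client IP falls inside any known crawler range."""
        addr = ipaddress.ip_address(client_ip)
        return any(addr in net for net in BLOCKED_RANGES)

    print(is_blocked("203.0.113.42"))  # True: inside a listed range
    print(is_blocked("192.0.2.10"))    # False: outside all listed ranges
    ```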


    While true to a degree, I think the fact is that AI is just much more complex than a knife, and clearly comes with perverse incentives that cause people to use it “wrong” more often than not.

    Sure, you can use a knife to cook just as you can use a knife to kill. But while society encourages cooking and legally and morally discourages murder, it also encourages any shortcut that gets you to an end goal for the sake of profit, without caring about personal growth or about the overall state of the world if everyone takes that same shortcut. And AI technology is designed with the intent to be that shortcut rather than just a tool.

    The reason people use AI in so many damaging ways is not just that the tool can be used that way and some people don’t care about others; it’s that the tool is made with the intention of offloading your cognitive burden, doing things for you, and creating what can be used as a final product.

    Imagine if generative AI models for image generation could only fill in colors on line art, nothing more. The scope of the harm they could cause would be very limited, because you’d always need line art of the final product, which requires human labor. That would prevent a lot of slop content from people not even willing to do that much, and the tool would be tailored as an assistant for artists rather than an entire creation tool for anyone.

    Contrast that with GenAI models that can generate entire images, or even videos: they come with the explicit premise and design of creating the final content, with all the line art, colors, shading, etc., from just a prompt. This directly encourages slop content, because getting one to only do something like coloring in lines would require a much more complex setup to prevent it from simply creating the entire end product on its own.

    We can even see how the cultural shifts around AI happened in line with how UX changed for AI tools. The original design for OpenAI’s models was on “OpenAI Playground,” where you’d have this large box with a bunch of sliders you could tweak, and the model would just continue the previous sentence you typed if you didn’t word it like a conversation. It was designed to look like a tool, a research demo, and a mindless machine.

    Then, they released ChatGPT, and made it look more like a chat, and almost immediately, people began to humanize it, treating it as its own entity, a sort of semi-conscious figure, because it was “chatting” with them in an interface similar to how they might text with a friend.

    And now, ChatGPT’s homepage is presented as just a simple search box, and lo and behold, the marketing has shifted to positioning ChatGPT not as a companion but as a research tool (e.g. “deep research”), and people have begun treating it as a source of truth rather than just a thing talking to them.

    And even for models where there is extreme complexity in how you could manipulate them, and many use cases they could serve, interfaces are made as sleek and minimalistic as possible, hiding away any ability you might have to influence the result with real, human creativity.

    The tools might not be “evil” on their own, but when interfaces are designed the way they are, marketing speak is used the way it is, and the profit motive incentivizes using them in the laziest way possible, bad outcomes are not just a side effect; they are a result of the design.