• 11 Posts
  • 21 Comments
Joined 1 year ago
Cake day: October 14th, 2024








  • My take: if Rossman came out swinging as an anti-corporate revolutionary, his ideas wouldn’t have wide appeal right now, since many people still think the problem is just “bad” mega-corporations. So instead, he’s arguing for less-shitty tech corporations as a first step (symbolized by Clippy, a mascot of a less-intrusive software age), rather than “destroy all tech corpos now.” No, Microsoft wasn’t good then, but it was less awful.

    If his video were starkly anti-capitalist, it would not have reached 2.5 million people, and I’d say getting that many people to start thinking about rejecting invasive software is a great step in the right direction, as opposed to ideological purism that would only resonate with those who already agree. The need for these baby steps is frustrating for those who already see the big picture, but a few chats with my coworkers quickly reveal how shockingly little some people have actually thought about the sins of big tech.



  • DegenerateSupreme@lemmy.zip to Fuck AI@lemmy.world • On Exceptions
    7 months ago

    I’d say the main ethical concern at this time, regardless of harmless use cases, is the abysmal environmental impact necessary to power centralized, commercial AI models. Refer to situations like the one in Texas. A person’s use of models like ChatGPT, however small, contributes to the demand for this architecture that requires incomprehensible amounts of water, while much of the world does not have enough. In classic fashion, the U.S. government is years behind on accepting what’s wrong, allowing these companies to ruin communities behind a veil of hyped-up marketing about “innovation” and beating China at another dick-measuring contest.

    The other concern is that ChatGPT’s ability to write your Python code for data modeling is built on the hard work of programmers who will not see a cent for their contribution to the model’s training. As the adage goes, “AI allows wealth to access talent, while preventing talent from accessing wealth.” But since a ridiculous amount of data goes into these models, it’s an amorphous ethical issue that’s understandably difficult for us to contend with, because our brains struggle to comprehend so many levels of abstraction. How harmed is each individual programmer or artist? That approach ends up being meaningless, so you have to regard it more as a class-action lawsuit, where tens of thousands have been deprived as a whole.

    By my measure, this AI bubble will collapse like a dying star within the next year, because the companies have no path to profitability. I hope that shifts AI development away from these environmentally destructive practices, and that we’ll eventually see legislation requiring model training to be ethically sourced (Adobe is already getting ahead of the curve on this).

    As for what you can do instead, people have been running local Deepseek R1 models since earlier this year, so you could follow a guide to set one up.
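    As a rough sketch of what that setup looks like, assuming you use Ollama as the local runner (the model tag and size below are illustrative; pick whichever distilled variant fits your RAM/VRAM):

    ```shell
    # Install Ollama (official installer script for Linux/macOS)
    curl -fsSL https://ollama.com/install.sh | sh

    # Pull a distilled DeepSeek-R1 model small enough for consumer hardware
    ollama pull deepseek-r1:7b

    # Chat with it entirely on your own machine; no data leaves your computer
    ollama run deepseek-r1:7b
    ```

    Everything runs locally, so the environmental and data-scraping concerns above are reduced to whatever your own hardware draws.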


  • DegenerateSupreme@lemmy.zip to Fuck Cars@lemmy.world • No words
    8 months ago

    The shift to these ridiculously large trucks is partly a consequence of the poorly implemented Obama-era fuel economy regulations. The targets were determined by wheelbase and tread width, which disincentivized manufacturers from making mid- or small-sized trucks: the bigger the footprint, the laxer the fuel-economy requirement. Larger vehicles also ease constraints on engineers, who no longer have to struggle to fit everything into a small body. Once large trucks became the default offering, they morphed into the annoying cultural “status” symbol we know today.

    Anyway, I have a Mazda MX-5 Miata and I love my tiny car.



  • I’m in complete agreement with this perspective, but rarely do I see discussions like this address the sticking point centrists and conservatives get hung up on: they don’t believe this is “theft.”

    When I told my coworker about the historic productivity-to-wages gap, she argued (paraphrasing), “Could it not be that gap is reflective of the CEOs innovating ways to make their workers increasingly productive, while the value of those workers’ labor hasn’t actually increased, therefore explaining why the minds behind those innovations deserve the wealth?”

    This conversation will go nowhere if we keep throwing around terms like “wage theft” while skipping step one: making the moral argument for why it is theft in the first place.


  • To say “that feeling” of indignation (at the letter’s inclusion in a gallery) is the same as anything else that makes him roll his eyes is reductionist. We regard things as stupid for different reasons; they’re not all the “same feeling.” As others have said, the artist’s intentionality in presenting something is part of its message. So the indignation he felt about a piece being put in a gallery is part of that piece’s effect on him, born from the artist’s choices. That feeling is different from hearing a moron say something dumb and thinking it’s stupid.

    Intentionality is the key. Case in point, “language evolves” is a silly thing to say after a mistake, but many subcultures start misspelling things on purpose, and that intentionality is how language evolves.





  • I feel conflicted. On one hand, people can regulate themselves, and Facebook becoming a bigoted cesspit may bring more people to a moderated Fediverse.

    On the other hand, these major platforms having such user monopoly and influence can cause unfettered hate speech to breed violence.

    I’m conflicted about the idea that an insidious for-profit megacorporation should be expected to uphold a moral responsibility to prevent violence; their failure to do so might be a necessary wake-up call that ultimately strips them of that problematic influence. Thoughts?




  • The viability of crafting endgame gear has felt in flux since release. I think having craftable BiS gear—with fully customized, guaranteed perks at a very steep resource cost—would be fine mechanically, considering perk variety would necessitate multiple sets for different applications.

    The only issue with a steep resource cost would be accessibility. Naturally, the largest companies are going to do their best to lock down all the farming spots necessary for BiS crafting. Right now, the costs are high yet attainable by smaller companies or individual players; if, however, a large company decides to outfit its entire regiment in craftable BiS with massive costs, smaller groups might not be able to access those resources for months at a time due to round-the-clock farming.

    This is all the more reason I’d be a fan of randomly-spawning resource locations, within a set area, in addition to the static spawns. A bit of random chance could help even the playing field between large companies and small ones seeking to harvest resources.











  • Agreed. The problem is that so many (including in this thread) argue that training AI models is no different than training humans—that a human brain inspired by what it sees is functionally the same thing.

    My response to why there is still an ethical difference revolves around two arguments: scale, and profession.

    Scale: AI models’ sheer image output makes them a threat to artists where other human artists are not. One artist clearly profiting off another’s style can still be inspiration, and even part of the former’s path toward their own style; however, the functional equivalent of ten thousand artists doing the same is something else entirely. The art is produced at a scale that could drown out the original artist’s work, without which such image generation wouldn’t be possible in the first place.

    Profession: Those profiting from AI art, which relies on unpaid scraping of artists’ work for data sets, are not themselves artists. They are programmers, engineers, and the CEOs and stakeholders who can even afford the ridiculous capital necessary to utilize this technology at scale in the first place. The idea that this is just a “continuation of the chain of inspiration from which all artists benefit” is nonsense.

    As the popular adage goes nowadays, “AI models allow wealth to access skill while forbidding skill to access wealth.”