

CEO complains that his blatant circular investment infinite money glitch isn’t convincing people that these perpetually unprofitable businesses are actually worth further investment.


Like, there’s no way in hell these files haven’t been doctored, right? Months of obfuscation and deflection, and then suddenly Trump’s fine to sign their release? There’s no way.


Why the hell did The Guardian include comment from an Amazon spokesperson? "Nuh uh, that's not true." No fucking duh that's their response.


Got a secondhand Pixel phone and installed GrapheneOS. I love it.


I’m just now starting my degree in software engineering. I’m 31. I’d gotten comfortable enough with Linux that I wanted to try NixOS to avoid having my system get borked again (in my case, KDE Plasma started having shell crashes at login).
If I were only using NixOS to run a basic computer setup? Sure, no problem. But if I want to rice and customize it? No, I wasn’t ready.


I know it’s not for everyone, but my Light Phone III arrives soon and tech headlines of late aren’t making me regret my choice.


My take: if Rossman came out swinging as an anti-corporate revolutionary, his ideas wouldn’t have wide appeal right now, since many people still think the problem is just “bad” mega-corporations. So instead, he’s arguing for less-shitty tech corporations as a first step (symbolized by Clippy, a mascot of a less-intrusive software age), rather than “destroy all tech corpos now.” No, Microsoft wasn’t good then, but they were less awful.
If his video were starkly anti-capitalist, it would not have reached 2.5 million people, and I’d say getting that many people to start thinking about rejecting invasive software is a great step in the right direction, as opposed to ideological purism that would only resonate with those who already agree. The need for these baby steps is frustrating for those who already see the big picture, but a few chats with my coworkers quickly reveal how shockingly little some people have actually thought about the sins of big tech.


I’m so confused by this common sentiment in the community. I’ve been gaming on Arch / NixOS for the past several months with an NVIDIA card after I switched earlier this year. Basically no issues.
Meanwhile, my buddy converted to Manjaro, and has a Radeon. He’s been having awful issues. Several of the games he plays crash constantly, especially if they are multiplayer. He tried switching to openSUSE recently; no real improvements.
I wanted to buy AMD for my eventual next card, but now I’m terrified of doing so, and deeply confused why everyone says AMD is better for Linux.


I’d say the main ethical concern at this time, regardless of harmless use cases, is the abysmal environmental cost of powering centralized, commercial AI models. Refer to situations like the one in Texas. A person’s use of models like ChatGPT, however small, contributes to the demand for an architecture that requires incomprehensible amounts of water, while much of the world does not have enough. In classic fashion, the U.S. government is years behind on accepting what’s wrong, allowing these companies to ruin communities behind a veil of hyped-up marketing about “innovation” and beating China at another dick-measuring contest.
The other concern is that ChatGPT’s ability to write your Python code for data modeling is built on the hard work of programmers who will not see a cent for their contribution to the model’s training. As the adage goes, “AI allows wealth to access talent, while preventing talent from accessing wealth.” But since a ridiculous amount of data goes into these models, it’s an amorphous ethical issue that’s understandably difficult for us to contend with, because our brains struggle to comprehend so many levels of abstraction. How harmed is each individual programmer or artist? That approach ends up being meaningless, so you have to regard it more as a class-action lawsuit, where tens of thousands have been deprived as a whole.
By my measure, this AI bubble will collapse like a dying star in the next year, because the companies have no path to profitability. I hope that shifts AI development away from these environmentally-destructive practices, and eventually we’ll see legislation requiring model training to be ethically sourced (Adobe is already getting ahead of the curve on this).
As for what you can do instead: people have been running local DeepSeek R1 models since earlier this year, so you could follow a guide to set one up.


The shift to these ridiculously large trucks is partially a consequence of the poorly implemented Obama-era fuel economy regulations. The standards were based on vehicle footprint (wheelbase times track width), which disincentivized manufacturers from making mid- or small-sized trucks: the bigger they made them, the less restricted they were by fuel economy targets. Larger vehicles also ease constraints on engineers, who don’t have to struggle to fit a lot into a small body. Once large trucks became the default offering, they morphed into the annoying cultural “status” symbol we know today.
Anyway, I have an MX-5 Miata and I love my tiny car.

Precisely this. Leftist rhetoric about wages is often framed for other leftists, without addressing the core arguments underpinning centrist and conservative views on why the rich “deserve” their wealth. People say “theft” without making arguments for why our definition of theft needs to change.

I’m in complete agreement with this perspective, but rarely do I see discussions like this address the sticking point centrists and conservatives get hung up on: they don’t believe this is “theft.”
When I told my coworker about the historic productivity-to-wages gap, she argued (paraphrasing), “Could it not be that gap is reflective of the CEOs innovating ways to make their workers increasingly productive, while the value of those workers’ labor hasn’t actually increased, therefore explaining why the minds behind those innovations deserve the wealth?”
This conversation will go nowhere if we keep throwing around terms like “wage theft” while skipping step one: making the moral argument for why it actually is theft.


To say “that feeling” of indignation (at the letter’s inclusion in a gallery) is the same as other things that make him roll his eyes is reductionist. We regard things as stupid for different reasons; they’re not all the “same feeling.” As others have said, the artist’s intentionality in presenting something is part of its message. So the indignation he felt about a piece being put in a gallery is part of that piece’s effect on him, born from the artist’s choices. That feeling is different from hearing a moron say something dumb and thinking it’s stupid.
Intentionality is the key. Case in point, “language evolves” is a silly thing to say after a mistake, but many subcultures start misspelling things on purpose, and that intentionality is how language evolves.
I’m really glad I’m in the lower left because that’s my favorite of the faces.


I just gave up and pre-ordered the Light Phone 3. Anytime I truly need a mobile app, I can just use an old iPhone and a WiFi connection.


I feel conflicted. On one hand, people can regulate themselves, and Facebook becoming a bigoted cesspit may bring more people to a moderated Fediverse.
On the other hand, these major platforms having such user monopoly and influence can cause unfettered hate speech to breed violence.
I’m conflicted about the idea that an insidious for-profit megacorporation should be expected to uphold a moral responsibility to prevent violence; their failure to do so might be a necessary wake-up call that ultimately strips them of that problematic influence. Thoughts?


I was naive to think they’d try easing into this stuff, but — perhaps fortunately for public outrage and taking action — they are being loud and clear about it. Really just no subtlety whatsoever to the fascist horror.


Viability of crafting endgame gear has felt like it’s been in flux since release. I think having craftable BiS gear—with fully-customized and guaranteed perks at a very steep resource cost—would be fine mechanically, considering perk variety would necessitate multiple sets for different applications.
The only issue with a steep resource cost would be accessibility. Naturally, the largest companies are going to do their best to lock down all farming spots necessary for BiS crafting. Right now, the costs are high yet attainable by smaller companies or individual players; if, however, a large company decides they’re going to outfit their entire regiment in craftable BiS with massive costs, smaller groups might not be able to access those resources for months at a time due to round-the-clock farming.
This is all the more reason I’d be a fan of randomly-spawning resource locations, within a set area, in addition to the static spawns. A bit of random chance could help even the playing field between large companies and small ones seeking to harvest resources.


Agreed. The problem is that so many (including in this thread) argue that training AI models is no different than training humans—that a human brain inspired by what it sees is functionally the same thing.
My response to why there is still an ethical difference revolves around two arguments: scale, and profession.
Scale: AI models’ sheer image output makes them a threat to artists where other human artists are not. One artist clearly profiting off another’s style can still be inspiration, and even part of the former’s path toward their own style; however, the functional equivalent of ten thousand artists doing the same is something else entirely. The art is produced at a scale that could drown out the original artist’s work, without which such image generation wouldn’t be possible in the first place.
Profession: Those profiting from AI art, which relies on unpaid scraping of artists’ work for data sets, are not themselves artists. They are programmers, engineers, and the CEOs and stakeholders who can even afford the ridiculous capital necessary to utilize this technology at scale in the first place. The idea that this is just a “continuation of the chain of inspiration from which all artists benefit” is nonsense.
As the popular adage goes nowadays, “AI models allow wealth to access skill while forbidding skill to access wealth.”
I find it surreal and profound that there is now a form of cybercrime that is, literally, using poetic maledictions. The line between technology and classic depictions of magic blurs yet further.