• 2 Posts
  • 940 Comments
Joined 1 year ago
Cake day: February 5th, 2025

  • LLM, unlikely. ML, probably

    ML has already demonstrated tremendous capability increases in automated machines, starting with postal letter sorters decades ago and proceeding through ever more advanced (and still limited, occasionally flawed, like people) image recognition.

    LLMs put more of a “natural language interface” on things, turning phone trees into something less infuriating to use and ultimately more helpful.

    LLMs, which are too costly to train and run

    That’s a matter of application

    inherently too unreliable for safety-critical or health-critical use

    Yeah, although I can see LLMs being helpful as a front end. In addition to the traditional checklist systems used for safety regulation, medical Dx, and other guidance, an LLM can provide (and, for me, has provided) targeted, if incomplete and sometimes flawed, insights into material it reviews - improving the human review process as an adjunct tool, not a replacement for the human reviewer.

    too flaky for any use requiring auditability

    Definitely. Mostly I have been using LLM-generated code to create deterministic processes that can be verified as correct - it’s pretty good at that. I could write the same code myself, but the AI agent/LLM can write that kind of (simple) program 5x-10x faster for 10% of the “brain fatigue,” letting me focus on the real problems we’re trying to solve. Having those deterministic tools in turn makes review and evaluation of large spreadsheets a more thorough and less labor-intensive process. People make mistakes too, and when you hand them (for this morning’s example) a spreadsheet with 2000 rows and 30 columns to “evaluate” - beyond people’s “context window capacity” as well - we need tools that focus on the important 50 lines and 8 columns without missing the occasional rare important datapoint…
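
    A deterministic filter of the kind I mean might look like this - a minimal sketch, where the column names (`id`, `status`, `amount`) and the review criteria are hypothetical stand-ins for whatever the real spreadsheet uses:

```python
import csv

# Hypothetical: the handful of columns the reviewer actually needs to see.
KEY_COLUMNS = ["id", "status", "amount"]

def flag_rows(path, amount_threshold=10000.0):
    """Deterministically select the rows worth human review:
    keep only the key columns, and flag any row whose 'amount'
    exceeds the threshold or whose 'status' is 'error'."""
    flagged = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["status"] == "error" or float(row["amount"]) > amount_threshold:
                flagged.append({col: row[col] for col in KEY_COLUMNS})
    return flagged
```

    Because this is plain, deterministic code, a human can verify it once and then trust its output on every 2000-row sheet - unlike asking an LLM to “evaluate” the sheet directly.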

    So far, with LLMs, the game ain’t worth the candle,

    The better modern models, in roughly the past 10 months, have turned a corner for some computer programming tasks, and over those 10 months they have improved rather significantly. It’s not the panacea revolution that a lot of breathless journalists describe, but it’s a better tool for assisting in the creation of simple programs (and simple components of larger programs) than anything I have used in the previous 45 years. Over those same 10 months, the complexity and size of the programs the LLMs can effectively handle has roughly tripled, in my estimation, for my applications.

    even without considering the enormous environmental damage caused by their supporting infrastructure.

    When it’s used for worthless garbage (as most of it seems to be today), I agree with this evaluation. Focused on good use cases, though, the power and environmental impacts range from trivial to positive: where AI agents/LLMs are saving human labor, remember that human labor and its supporting infrastructure have an enormous environmental impact too.

  • Subpoena + publicity = uninsurable. And when you work for a low-profit endeavor, your “damages” are limited to the money you might have made were you insurable - at least, that’s how the courts measure it and how lawyers decide whether to take the case. OpenAI would probably gladly lose a case and pay whatever income The Midas Project lost as a result of OpenAI’s actions. Profit isn’t the point of The Midas Project; reporting what is happening in the industry is, and that mission has been effectively thwarted by the uninsurable status.



  • Odds are you weren’t on the “targeted list”.

    If you don’t know, you’re probably auto updating.

    If you updated or installed after roughly June 2025, the safe thing to do is uninstall, then download from the new (theoretically more secure) website and install the new (theoretically more secure) 8.9.1.

    If you were pwned by an update during later 2025, they could disguise just about anything in your Notepad++ and its associated files - make it look perfectly normal, make it act perfectly normal, but have their own malware on your system doing… whatever it is they want it to do.

    I understand one of the things they were doing was running a proxy to carry traffic through your system, so if you see a lot of unexpected network activity (under Windoze, how can you tell?) you may have been compromised. But that’s not the only thing they could have done; nobody has really analyzed the attack yet, and even after they do, you might have gotten a “special” payload that the analysis team didn’t see…



  • Exactly how hard would it be to place a “cork in the hole” to render the cavity unusable? If (big if) overpopulation becomes a problem, it’s pretty easy - these days - to develop and maintain a database of most of these swift cavities, survey them from a distance to see whether they are corked, and adjust the number of corks as appropriate to address current population trends.
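
    The adjustment step is mechanical enough to sketch. Everything here (the registry shape, the function name) is hypothetical illustration, not a real system:

```python
def adjust_corks(cavities, target_open):
    """Given a registry mapping cavity id -> corked (bool) and a desired
    number of open (usable) cavities, cork or uncork just enough entries
    to hit the target. Returns the ids that were changed."""
    open_ids = [cid for cid, corked in sorted(cavities.items()) if not corked]
    corked_ids = [cid for cid, corked in sorted(cavities.items()) if corked]
    changed = []
    if len(open_ids) > target_open:
        # Too many usable cavities: cork the excess.
        for cid in open_ids[target_open:]:
            cavities[cid] = True
            changed.append(cid)
    else:
        # Too few usable cavities: uncork as many as needed.
        for cid in corked_ids[: target_open - len(open_ids)]:
            cavities[cid] = False
            changed.append(cid)
    return changed
```

    The point is just that the policy knob (how many cavities stay open) is trivially adjustable once the database exists.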

    I get that you don’t like the approach - but it’s a solid one, which is what works best for swifts’ nests: solid structures.