Off-and-on trying out an account over at @tal@oleo.cafe due to scraping bots bogging down lemmy.today to the point of near-unusability.

  • 0 Posts
  • 276 Comments
Joined 2 years ago
Cake day: October 4th, 2023

  • tal@lemmy.today to Technology@beehaw.org · Move Over, ChatGPT · 3 days ago

    In all fairness, while this is a particularly bad case, environment variables are often very difficult to modify safely at runtime in a process, yet very convenient as a way to cram extra parameters into a library, and that combination has led a lot of human programmers who should know better to create problems like this too.

    IIRC, setting the timezone for some of the POSIX time APIs on Linux has the same problem, and that’s a system library. And SDL and, IIRC, some of the Linux 3D graphics stack have used this as a way to pass parameters out-of-band to libraries, which becomes a problem when programs start dicking with them at runtime. I remember reading an article from someone working on Linux gaming who had been running into this: various programs and libraries for games would setenv() to fiddle with these variables, and the races associated with that were responsible for a substantial number of the crashes that they’d seen.

    setenv() is not thread-safe or signal-safe. In general, reading environment variables in a program is fine, but messing with them in very many situations is not.

    searches

    Yeah, the first thing I see is someone talking about how its lack of thread-safety is a problem for TZ, which is the time thing that’s been a pain for me a couple times in the past.

    https://news.ycombinator.com/item?id=38342642

    Back on your issue:

    Claude, being very smart and very good at drawing a straight line between two points, wrote code that took the authentication token from the HTTP request header, modified the process’s environment variables, then called the library

    for the uninitiated - a process’s environment variables are global. and HTTP servers are famously pretty good at dealing with multiple requests at once.

    Note also that a number of webservers used to fork to handle requests — and I’m sure that there are still some now that do so, though it’s certainly not the highest-performance way to do things — and in that situation, this code could avoid problems.

    searches

    It sounds like Apache used to and apparently still can do this:

    https://old.reddit.com/r/PHP/comments/102vqa2/why_does_apache_spew_a_new_process_for_each/

    But it does highlight one of the “LLMs don’t have a broad, deep understanding of the world, and that creates problems for coding” issues that people have talked about. Like, part of what someone is doing when writing software is identifying situations where behavior isn’t defined and clarifying that, either by asking for requirements to be updated or by looking out-of-band to understand what’s appropriate. An LLM that works by looking at what’s commonly done in its training set just isn’t in a good position to do that, and that’s kinda a fundamental limitation.

    I’m pretty sure that the general case of writing software is AI-hard, where the “AI” referred to by the term is an artificial general intelligence that incorporates a lot of knowledge about the world. That is, you can probably make an AI that writes software, but it won’t be just an LLM of the “generative AI” sort that we have now.

    There might be ways that you could incorporate an LLM into a system that can write software itself. But I don’t think that it’s going to be just a raw “rely on an LLM taking in a human-language set of requirements and spitting out code”. There are just things that that approach can’t handle reasonably.



  • I have never used Arch. And it may not be worthwhile for OP. But I am pretty confident that I could get that thing working again.

    Boot into a rescue live distro from USB, mount the Arch root somewhere, bind-mount /sys, /proc, and /dev from the host onto the Arch root, and then chroot to a bash on the Arch root. At that point you’re basically in the Arch environment and should be able to do package management, have DKMS work, etc.
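    Roughly, that procedure looks like this; the device name and mount point below are placeholders, so substitute your actual Arch root partition and whatever path you like:

```shell
# From the live USB environment, as root. /dev/sdX2 and /mnt/arch
# are example names, not your real ones.
mount /dev/sdX2 /mnt/arch
mount --bind /dev  /mnt/arch/dev
mount --bind /proc /mnt/arch/proc
mount --bind /sys  /mnt/arch/sys
chroot /mnt/arch /bin/bash
# Now effectively "inside" the Arch install: pacman, mkinitcpio,
# DKMS rebuilds, etc. should work. Exit and unmount when done.
```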








  • The fanboying to the point of blinders is maddening to deal with among Linux users.

    Alien who has arrived on Earth: “I’ve heard that you humans drive motor vehicles to get around. I should get a motor vehicle. Could someone tell me the best type to get?”

    Human A: “You want a Prius.”

    Human B: “No, that’s for tree-hugging, probably-homosexual hippies. You need a proper truck, a Ford.”

    Human C: “Actually, Ford trucks are trash, what you need is a Chevy truck.”



  • Frankly, the right answer is that pretty much any non-specialized distribution (e.g. don’t use OpenWrt, a Linux distribution designed specifically for very small embedded devices) will probably work fine. That doesn’t mean that they all work the same way, but a lot of the differences are around things that honestly aren’t that big a deal for most potential end users. Basically, nobody has used more than at most a couple of the distros out there sufficiently to really come up to speed on their differences anyway. Most end users can adapt to a given packaging system, don’t care about the init system, aren’t radically affected by mutability/immutability, can get by with different update schedules, etc. In general, people tend to just recommend what they themselves use. The major Linux software packages out there are packaged for the major distros.

    I linked to a timeline of Linux distros in this thread. My own recommendation is to use an established distro, one that has been around for some years (which, statistically, indicates that it’s got staying power; there are some flash-in-the-pan projects where people discover that doing a Linux distro is a larger undertaking than they want).

    I use Debian, myself. I could give a long list of justifications why, but honestly, it’s probably not worth your time. There are people who perfectly happily use Fedora or Ubuntu or Arch or Gentoo or Mint or whatever. A lot of the differences that most end users are going to see come down to defaults. Like, there are people in this thread fighting over distro choice because of their preferred desktop environment. But Debian can run KDE or GNOME or Cinnamon or Xfce or whatever, offers a choice of default in the installer, and any of them (or several at once) can be added after the initial installation. You wouldn’t judge a car good or bad based on how the thermostat happens to be set when it leaves the dealer.




  • The present-day Linux kernel tree (upstream, not the Debian folks) actually has a make target to build a Debian kernel package (make bindeb-pkg) straight out of git, so you can pretty readily get a packaged kernel from the kernel git repo, as long as you can come up with a viable build config for it (probably starting from a recent Debian kernel’s config). I have run Debian-packaged kernels built that way before; it’s an option if you want to play on the really bleeding edge.
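    As a sketch of that workflow (assuming you’re on Debian with a packaged kernel installed, so there’s a config to seed from):

```shell
# Build Debian kernel packages straight from the upstream kernel tree.
git clone --depth 1 https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
cd linux
cp /boot/config-"$(uname -r)" .config   # seed from the running kernel's config
make olddefconfig                       # accept defaults for any new options
make -j"$(nproc)" bindeb-pkg            # emits ../linux-image-*.deb and friends
sudo dpkg -i ../linux-image-*.deb
```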



  • Multiple partitions or a single one. LVM-managed or not. Block-level encrypted partitions or not. Do you want your swap on a dedicated partition or in a swap file, and do you want it encrypted?

    If you decide that you want a multiple-partition installation and then let the installer do the partitioning, Debian’s installer still creates a 100 MB /boot partition, which is woefully inadequate for present-day kernels as Debian packages them. Something closer to 1 GB is appropriate.


  • tal@lemmy.today to linuxmemes@lemmy.world · I love choice. I hate choosing. · 18 days ago

    Exactly. One is a package format and/or local package utility, and the other is a frontend to do downloads and updates for that local package utility.

    Should be “rpm or dpkg” — assuming that we’re excluding the other options — and then if someone chooses RPM, you can start talking about the frontend:

    https://en.wikipedia.org/wiki/RPM_Package_Manager

    Front ends

    Several front-ends to RPM ease the process of obtaining and installing RPMs from repositories and help in resolving their dependencies. These include:

    • yum used in Fedora Linux, CentOS 5 and above, Red Hat Enterprise Linux 5 and above, Scientific Linux, Yellow Dog Linux and Oracle Linux
    • DNF, introduced in Fedora Linux 18 (default since 22), Red Hat Enterprise Linux 8, AlmaLinux 8, and CentOS Linux 8.
    • up2date used in Red Hat Enterprise Linux, CentOS 3 and 4, and Oracle Linux
    • Zypper used in Mer (and thus Sailfish OS), MeeGo,[16] openSUSE and SUSE Linux Enterprise
    • urpmi used in Mandriva Linux, ROSA Linux and Mageia
    • apt-rpm, a port of Debian’s Advanced Packaging Tool (APT) used in Ark Linux,[17] PCLinuxOS and ALT Linux
    • Smart Package Manager, used in Unity Linux, available for many distributions including Fedora Linux.
    • rpmquery, a command-line utility available in (for example) Red Hat Enterprise Linux
    • libzypp, for Sailfish OS

    Then for dpkg, you can choose from among aptitude, apt, apt-get/apt-cache/etc., graphical frontend options like Synaptic that one may want to use in parallel with the terminal-based frontends, and so on.
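    To illustrate the split, the same low-level-tool/frontend relationship exists on both sides (“foo” here is a stand-in package name, not a real package):

```shell
# RPM side: rpm is the local package tool, dnf a repo-aware frontend.
rpm -i foo-1.0.rpm     # installs one local file, won't fetch dependencies
dnf install foo        # resolves dependencies and downloads from repositories

# dpkg side: dpkg is the local package tool, apt a repo-aware frontend.
dpkg -i foo_1.0.deb    # installs one local file, won't fetch dependencies
apt install foo        # resolves dependencies and downloads from repositories
```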



  • Clearly there’s an unwarranted assumption baked into this comic that one needs a desktop environment. I have my non-headless Linux systems set up to run the emptty display manager on the Linux console, which then launches the Sway compositor, with no desktop environment at all, when I want to log into a graphical environment. That’s my favorite option. Let’s not impose an artificially-restricted set of choices, here. :-)