• 4 Posts
  • 44 Comments
Joined 1 year ago
Cake day: August 10th, 2023

  • Some software is so complex and difficult that Debian does not maintain it on their own, and instead follows the upstream release cycle.

    Browsers are one such example, and as you’ve discovered for me, Thunderbird is probably another.

    Also, please do not recommend Testing for daily usage. It does not receive critical security updates in a timely manner, including for things that would affect desktop users. Use Stable, Sid, or another distro. Testing is for testing Debian ONLY, and by using Debian Testing, you are losing the advantage of the immediate security fixes you would get from literally any other distro.


  • Wish I could transcend into declarativity but the thread’s nix survivor ratio is grim

    Yeah lol.

    I will say that, for my server, I decided to use Kubernetes + Flux CD for declarativity. My entire Kubernetes “state” is declared in a git repo, which is the popular, industry-standard approach for this, called GitOps. It makes it very easy to add an app, since it’s just a matter of adding a folder + some new config files. And unlike Nix, Kubernetes and Flux are very well documented, with a lot of tooling as well. Nix doesn’t really have a working LSP or good code autocomplete, but with Kubernetes I can just start typing in a YAML file, hit tab, and it spits out the template for me. Code autocompletion with Kubernetes feels much closer to other, more mature ecosystems.
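
    For a rough idea of what “adding an app” looks like in that setup (the app name, folder layout, and Kustomization name below are made up for illustration, not my actual repo):

        # a new app is just a new folder + config files in the GitOps repo
        mkdir -p apps/whoami
        $EDITOR apps/whoami/deployment.yaml        # ordinary Kubernetes YAML, tab-completed in the editor
        $EDITOR apps/whoami/kustomization.yaml     # lists deployment.yaml as a resource
        git add apps/whoami && git commit -m "add whoami" && git push
        # Flux notices the new commit and applies it to the cluster; you can nudge it along with:
        flux reconcile kustomization apps --with-source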

    It’s not as declarative as Nix though. There are gaps, like the fact that OCI container images can theoretically shift underneath you if you pin tags instead of digests, plus some other nitpicks. But declarativity is a spectrum, and outside of scientific scenarios (think simulations where having identical versions, hardware, runtime, etc. is very important), I think many non-NixOS solutions are declarative enough.
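
    To make the drift point concrete: a tag like nginx:1.25 can be re-pushed and silently change, while a digest reference can’t (the image name here is just an example):

        # resolve the tag to the content digest it currently points at
        docker pull nginx:1.25
        docker inspect --format '{{index .RepoDigests 0}}' nginx:1.25
        # prints something like nginx@sha256:<digest>; referencing that in your
        # manifests instead of the tag pins the exact image contents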


  • Advice online seemed like i needed to basically create a nix flake for the app. I still havent gotten it installed because i have no idea what nix flakes are.

    So, the problem is that flakes are technically an “experimental” feature, and thus are not allowed to be included as a primary solution in the official documentation. But basically everybody uses flakes, so it leads to this crazy documentation split, and is a big part of why documentation on Nix is so poor.

    Some stuff can only be done with flakes, some stuff only without them, and you have to figure out which is which on your own, while also dealing with the poor documentation for either.

    The advice you received was wrong. You could also use a combination of a default.nix file and a shell.nix file to create a package and a development environment for your app. But the documentation is so poor that it’s unlikely you will learn this, and figuring out how to do it on your own is, again, a massive time sink.
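
    For what it’s worth, a minimal non-flake dev environment is just a shell.nix at the root of the project; the package names below are placeholders for whatever your app actually needs:

        # shell.nix at the project root declares the environment, roughly:
        #   { pkgs ? import <nixpkgs> {} }:
        #   pkgs.mkShell { packages = [ pkgs.python3 pkgs.git ]; }
        # entering it is then just:
        nix-shell    # fetches those packages and drops you into a shell with them available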


  • So, I use Arch, but I don’t use the AUR at all. Instead, I use nixpkgs to get stuff (admittedly only like 3 packages) not in the Arch repos.

    The main reason for this is the quality of AUR packages. Although I don’t really fear a malicious package, I do remember hearing about a package that moved a user’s /bin to /opt during the install phase.

    Something like that is literally impossible with Nix, due to the way that applications aren’t really installed to the system. But nixpkgs also requires some level of vetting of package quality, which is also nice.

    I also use nix for managing all my development environments. For example, my blog’s GitHub repo has a few nix files at its root, and you should just be able to type nix-shell in that folder, and then you will get an identical environment to mine.
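
    Roughly what that looks like in practice (the repo URL is a placeholder, and hello is just a stand-in package):

        # one-off packages from nixpkgs on a non-NixOS distro
        nix-env -iA nixpkgs.hello
        readlink -f "$(command -v hello)"    # resolves to somewhere under /nix/store, not /usr
        # reproducing a project's dev environment
        git clone https://github.com/someuser/some-blog    # placeholder URL
        cd some-blog && nix-shell    # reads the nix files at the repo root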

    declarative rollbackable immutability sounds really freakin’ AWESOME

    I have BTRFS snapshots set up, and with grub-btrfs, I can even boot from them and revert to an older kernel (my /boot is stored on BTRFS).
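
    In case it’s useful, the manual version of that is roughly the following (the /.snapshots path is just an example location; tools like snapper can automate this):

        # take a read-only snapshot of the root subvolume
        sudo btrfs subvolume snapshot -r / /.snapshots/root-$(date +%Y%m%d)
        # regenerate the GRUB menu so grub-btrfs adds a boot entry for the snapshot
        sudo grub-mkconfig -o /boot/grub/grub.cfg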

    However, I have given up on NixOS, for many reasons. The documentation is very poor, and making my whole OS reproducible, rather than just my development environments, is more complexity than it’s worth. In addition to that, there are also issues with running certain apps that expect to see a normal Filesystem Hierarchy Standard layout, which NixOS does not provide. Although you can work around this with stuff like steam-run or creating a fake FHS using nix, I would rather not play that game.

    But, considering I installed some stuff in an Ubuntu 22 distrobox recently, because that is what VS Code and Unity officially provide repos for, maybe this doesn’t really matter. You can probably use distrobox on NixOS, but I’ve seen issues about GPU acceleration with distrobox (and other non-nix apps) there as well.

    EDIT: I lied, I use the chaotic aur for some things.



  • Yes. Firstly, it’s about release cycles. CentOS Stream is a rolling-release distro (although it rolls very, very slowly). What this means is that there isn’t a true guarantee of application/ABI/API compatibility between the current version of CentOS Stream and future versions.

    In contrast to this, CentOS 8 and previous releases were complete clones of Red Hat Enterprise Linux, a stable-release distro. During the 10-year lifecycle of each RHEL release, there was a guarantee that certain application/ABI/API compatibility would not change, which is how stability is defined in the Linux/software world.

    CentOS 8 was a free alternative for institutions unwilling or unable to pay for RHEL’s stable releases. But with the death of CentOS, an alternative was needed. Alma Linux, Rocky Linux, and Scientific Linux (designed for labs and universities) were rebuilds of RHEL. This meant that they would take RHEL’s open source code, recompile it, and distribute it in a way that guaranteed application/ABI/API compatibility with RHEL, for the same lifecycle as the corresponding RHEL release.

    So Alma Linux and Rocky Linux fill that gap… but recently, Red Hat said they are adjusting policies to make it much harder for people to make rebuilds (likely targeting Oracle Linux, which is a RHEL rebuild), and this change may affect Alma and Rocky as well.

    Rocky said they were going to keep bug-for-bug compatibility, like they used to, but Alma says they are going to do something different. Although they still intend to be ABI compatible, Alma has decided to make some changes to the base system, such as reimplementing and continuing to support things that Red Hat decided to drop from RHEL. One example of this is SPICE, which is a graphics protocol used for low-latency display of virtual machines. It had many use cases, and I am very excited to see it back in a distro in the Red Hat ecosystem.



  • I honestly don’t know how this could turn out.

    It could be an amazing change that results in much more progress for hardware acceleration on guests of various types in kvm (since that is what vmware is good at)…

    Or it could mean that they are dropping that feature from vmware altogether.

    Regardless, I like this change because it means I would be able to run vmware machines and libvirt kvm machines at the same time, at least when I am forced to use vmware workstation.

    I also dislike proprietary software in general, so I think less proprietary software and more FOSS is a good thing.



  • I disagree, because they are not the same thing.

    Immutable means a read-only root.

    Atomic means that updates are done in a snapshotted manner somehow. It usually means that updates are all or nothing: if an update fails, your system is not left in a half-working state, but instead is reverted to the last working state.

    I create a btrfs snapshot before updates on my Arch Linux system. This is atomic, but not immutable.*

    There is also “image based”, which distros like ublue (immutable, atomic) are, but NixOS (also immutable and atomic) is not.

    *only really before big updates tbh, but I know some people do configure snapshots before all updates.
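
    The manual version of that snapshot-before-update workflow, for anyone curious (the path is an example; tools like snap-pac can automate it via pacman hooks):

        # snapshot root right before the update
        sudo btrfs subvolume snapshot -r / "/.snapshots/pre-update-$(date +%s)"
        sudo pacman -Syu
        # if the update breaks something, boot the snapshot (e.g. via grub-btrfs) and roll back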







  • As someone who has used both Arch and Debian: neither has more or fewer bugs than the other.

    Debian has the same bugs over the lifetime of a stable release, and Arch has changing bugs (like a new set every update lol).

    Yes, Arch is going to get a lot more features, but it comes at the cost of “instability”. Which is not so much a lack of reliability as how much the software changes. I remember a Firefox bug that caused a crash when I tried to drag bookmarks around in my bookmarks bar; it lasted for about a week, then it went away.

    The idea behind projects like Debian is that, for an entity that needs stability, you can simply work around the bugs, since you always know what and where they are. (Well, the actual intent is that entities write patches and submit them to Debian to fix the bugs, but no one does that.)

    Another thing: Debian Stable has more up-to-date packages than Ubuntu 20.04 and Ubuntu 22.04. This happens because Ubuntu “freezes” a snapshot of Sid, and those packages then don’t get major updates for a long while. So often, the latest Debian Stable has newer packages than the older Ubuntu releases.


  • Termux recently got moved off of the Play Store (kinda), and is now only available on F-Droid/GitHub, because Google was further locking down what they allow on their store.

    And in addition to that, they recently added a restriction in later versions of Android: a “Child process limit”. Although this limit used to not be there, when enabled it prevents users from truly running arbitrary Linux programs, like via Termux.

    Although the child process limit can still be disabled in developer options, it doesn’t bode well for how flexible base Android will be in the future, since many times corpos like Google move stuff into the “secret” options before eventually removing that dial altogether.
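
    For reference, the workaround I’ve seen people pass around is below, though the setting name is from memory and apparently varies between Android versions, so treat it as a starting point rather than gospel:

        # over adb, with USB debugging enabled
        adb shell "settings put global settings_enable_monitor_phantom_procs false"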

    TLDR: Termux has been, and is a thing… for now.

    Also, I want to shout out winlator. It uses a linux proot, similar to termux, and has box64 and wine inside that proot that people can use to play games. I tested it with Gungeon, and it even has controller support and good performance, which is really impressive.



  • And before you start whining - again - about how you are fixing bugs, let me remind you about the build failures you had on big-endian machines because your patches had gotten ZERO testing outside your tree.

    As far as I know, the Linux Foundation does not provide testing infrastructure to its developers. Instead, corporations are expected to use their massive resources to test patches across a variety of cases before contributing them.

    Yes, I think Kent is in the wrong here. Yes, I think Kent should find a sponsor or something to help him with testing and making his development more stable (stable in the sense of fewer changes over time, rather than stable as in reliable).

    But I kinda dislike how the Linux Foundation has a sort of… corporate-centric development model. It results in friction with individual developers, as shown here.

    Of all the people Linus has chewed out over the years, I always wonder how many were independent developers with few resources, trying to figure things out on their own. I’ve always considered trying to learn to contribute, but the Linux kernel is massive. Combined with the programming pieces I would have to learn, as well as the infrastructure and ecosystem (mailing list, patch system, etc.), it feels really infeasible to get into without some kind of mentor or dedicated teacher.


  • So I don’t know how much you know about the shell, but the way the Linux command line works is that there is a set of variables, called environment variables, which dictate some behavior of the shell. For example, the $PATH variable lists which directories to search through when you try to execute a program in your shell.

    The documentation you linked wants you to create a custom shell variable, called SCALE_PATH, pointing to the folder that contains the compiled Scale binaries/programs you want to run.

    This command: export PATH="${SCALE_PATH}/bin:$PATH"

    temporarily edits your PATH variable to add that folder with the Scale programs to your path, enabling you to execute them from your shell by name.
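
    So in practice it’s something like this (the ~/scale path and binary name are just examples; use wherever you actually unpacked Scale):

        export SCALE_PATH="$HOME/scale"          # example install location
        export PATH="${SCALE_PATH}/bin:$PATH"    # prepend its bin folder to the search path
        command -v some-scale-binary             # hypothetical name; should now resolve
        echo "$PATH"                             # the scale bin folder shows up first

    Note that export only affects the current shell session; to make it permanent, you would add those two lines to something like your ~/.bashrc.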