

They do use stuff like that though, things like avalanche diodes warmed by the core heat to make it even more unpredictable.
But sometimes things don’t work the way they’re supposed to.


There must be a half dozen cheap ways to generate true random numbers.
The problem isn’t generating random data, it’s ensuring it’s “high quality”. (It’s all statistical checks; you can’t know ahead of time what random numbers should look like, otherwise they’re not random.)
That’s the problem the AMD chips seem to have: that function is failing and letting through low-quality data it should otherwise reject.
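For a flavour of what those statistical checks look like, here’s a toy “monobit” frequency test, sketched in Rust; the buffer and the 1% tolerance are made up for illustration, and real hardware-RNG health tests are considerably more elaborate.

```rust
// Toy "monobit" check: a healthy random source should produce roughly equal
// numbers of 0 and 1 bits. The 1% tolerance is arbitrary, purely illustrative.
fn looks_random(buf: &[u8]) -> bool {
    let ones: u32 = buf.iter().map(|b| b.count_ones()).sum();
    let total_bits = (buf.len() * 8) as f64;
    let ratio = f64::from(ones) / total_bits;
    (ratio - 0.5).abs() < 0.01
}

fn main() {
    // A "stuck" source that only ever returns zeros, like a failed RNG might.
    let stuck = vec![0u8; 4096];
    println!("stuck source passes: {}", looks_random(&stuck)); // prints: false
}
```

Tests like this can only ever reject output that’s obviously bad; they can’t prove the rest is truly random.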


Because it’s not about the files anymore, it’s the free space on the disk you care about (Or rather, the filesystem metadata describing it, the free-space bitmap in the case of exFAT)
If the files are highly fragmented and spread out, then the empty space around the files is also broken up and spread around, which makes it harder for a filesystem to efficiently store new stuff as it now has to break up and pack new file data into the gaps.
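A toy illustration of that (the bitmap here is made up and far simpler than exFAT’s real on-disk free-space bitmap): both layouts below contain the same amount of free space, but only the tidy one can take a new 4-block file as a single contiguous extent.

```rust
// Toy free-space bitmap: true = free block, false = in use.
fn largest_free_run(bitmap: &[bool]) -> usize {
    let (mut best, mut run) = (0, 0);
    for &free in bitmap {
        run = if free { run + 1 } else { 0 };
        best = best.max(run);
    }
    best
}

fn main() {
    let tidy       = [false, false, false, false, true, true, true, true];
    let fragmented = [true, false, true, false, true, false, true, false];
    // Both have 4 free blocks, but very different largest contiguous runs.
    println!("tidy: largest free run = {}", largest_free_run(&tidy));             // 4
    println!("fragmented: largest free run = {}", largest_free_run(&fragmented)); // 1
}
```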
better compression (btrfs compression doesn’t work on extents smaller than 128KiB, which excludes the majority of potentially-compressible data on MANY systems)
Well straight away that’s wrong.
I also don’t get the complaint that if you create a confusing subvolume layout, it results in a confusing subvolume layout. Don’t do that then.


Sudo is worth redoing regardless of language.
Or move away from it entirely, e.g. to something like doas, which OpenBSD migrated to a decade ago.


And unsurprisingly, a majority of the comments on that post are complaining about systemd.


It’s real, but probably not an issue in practice.
If it does actually turn out to pose a problem, then just disable secure boot on those systems, not like it’s really securing anything at that point.
Edit: I did learn from this thread today though that ZSH has it set to where you can just type part of what you’re looking for then hit up to do the same thing. Neat!
Fish too, it’s fantastic.


AMD has its own mix of issues with Vulkan between RADV (Mesa), AMDVLK, and AMD’s proprietary driver, on a per-game basis at times.
Good news, they’re going away. AMD is focusing entirely on Mesa now.


One of the main things, I think, is that how memory is laid out is different somehow? So every memory access needs extra clock cycles to accomplish on standard arm64.
It’s down to “memory ordering”: as different cores interact with RAM, there are rules governing how those cores see changes made by other cores. ARM systems are “weak”, so they rely on developers to be explicit about the sharing, while x86’s “Total Store Order” is considered “strong” and relies on the hardware to disentangle it all, so software can make assumptions and play fast and loose.
You can do software emulation of strong memory ordering on a weak system, but it’s slow. What Apple did was provide a hardware implementation of strong ordering in their ARM chips, and Rosetta enables that when running x86 code, so users don’t encounter that slowdown.
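As a rough sketch of what “being explicit about the sharing” means in practice (Rust, with made-up values): the Release/Acquire pair below is what forces a weakly ordered core to make the data write visible before the flag, a guarantee that x86’s TSO effectively gives you for plain stores and loads anyway.

```rust
use std::sync::atomic::{AtomicBool, AtomicU64, Ordering};
use std::thread;

static DATA: AtomicU64 = AtomicU64::new(0);
static READY: AtomicBool = AtomicBool::new(false);

fn main() {
    let producer = thread::spawn(|| {
        DATA.store(42, Ordering::Relaxed);    // write the payload
        READY.store(true, Ordering::Release); // publish: the data write can't be reordered past this
    });
    let consumer = thread::spawn(|| {
        while !READY.load(Ordering::Acquire) {} // wait for the publish
        // The Acquire/Release pairing guarantees we see 42 here, even on a
        // weakly ordered ARM core; with Relaxed on the flag it wouldn't be guaranteed.
        assert_eq!(DATA.load(Ordering::Relaxed), 42);
    });
    producer.join().unwrap();
    consumer.join().unwrap();
}
```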


It’s a peaceful life.


Well that’s disappointing.


The funny thing is that for the longest time Intel actually had the majority share of GPUs, just by counting the ones embedded in motherboards of laptops and the like. No idea if that’s still the case, or if Nvidia or AMD has been eating into it with their new models (e.g. what powers the Steam Deck)
They’ve tried to break into the discrete market a few times, most recently with their Arc cards, but the way they approach things is just so odd. It’s like they assume the first attempt will be a smash hit and dominate, and when it doesn’t they just flounder? The Arc cards launched to a lot of fanfare and then there was just silence and delays from Intel.


Bad management, bad luck, and usual market stuff. They’re going to do anything to cut costs.
Their R&D for new fab work is falling behind competitors (technically better doesn’t matter if nobody is buying it), they’ve had a bunch of bad CPU releases with hardware failures, and they’ve got next to no market presence in GPUs, which are currently making money hand over fist (mostly for dumb AI reasons, which is going to bite Nvidia hard when the bubble pops, because their new datacenter hardware is hyper-tuned for LLMs at the expense of general compute, unlike AMD’s).


And the reason you’ll want to do this is that it exposes FS mounts in the service dependency tree, so e.g. you can delay starting PostgreSQL until after you’ve mounted the network share that it’s using as a backing store, while letting unrelated tasks start concurrently.
If all you want to do is pass some special mount flags (e.g. x-systemd.automount) then fstab is the way, after all it’s still systemd that’s parsing and managing it.
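A rough sketch of both approaches (the server name and paths are made up): systemd-fstab-generator turns the fstab line into a mount unit behind the scenes, and a service drop-in can then order itself after that mount with RequiresMountsFor=.

```
# /etc/fstab: mount the share lazily, on first access
nas.example.com:/export/pgdata  /srv/pgdata  nfs  noauto,x-systemd.automount,_netdev  0  0

# /etc/systemd/system/postgresql.service.d/mount.conf (drop-in)
[Unit]
RequiresMountsFor=/srv/pgdata
```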


e.g. one monitor is 96dpi, and the other is 192dpi, moving a window from one monitor to the other shouldn’t result in the window becoming a different physical size, and it should render at a natural resolution on both (i.e. scaling it to half size for display on the 96dpi monitor doesn’t count)


Ideally they’d be set to not be running unless they’re actively needed.


Those chips not supporting RVA23 isn’t super surprising, they were released in 2023 while RVA23 was only ratified in 2024.
Ubuntu requiring RVA23, however, does surprise me (I admit I didn’t read the article); that seems premature, but I suppose it’s a good baseline going forward. Last time I looked, barely any of the chips supported the V extension, and those that did mostly only supported the incompatible pre-standard version.


They had the “Steam Machine”, but effectively nobody bought it. Maybe now with the Deck people would be more open to it, who knows.