

Going by the store page, the frame is using UFS, aka a hardwired SSD.


Yeah, it’s got 256GB or 1TB of internal storage, so you can just use the microSD card to move games between devices, e.g. from the Deck to the Frame.


Valve uses SDL for their own games, so this stuff would have been worked on internally and developed alongside the hardware itself.
But that’s the benefit of open source in the end: when done well, everybody wins. Valve gets to ensure that any game using SDL can function perfectly with their hardware (Deck, Controller, and Frame), any devs using SDL in their games know they get first-party hardware support, and gamers get the benefit of both.
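For anyone who hasn’t touched it, this is roughly what that support looks like from the game-dev side. A minimal sketch using SDL2’s game controller API (standard SDL2 calls, nothing Valve-specific; the controller mapping database SDL ships is what makes a Deck or Steam Controller show up like any other pad):

```c
/* Sketch: enumerate controllers the way a game using SDL2 might. */
#include <SDL2/SDL.h>
#include <stdio.h>

int main(void) {
    if (SDL_Init(SDL_INIT_GAMECONTROLLER) != 0) {
        fprintf(stderr, "SDL_Init failed: %s\n", SDL_GetError());
        return 1;
    }
    for (int i = 0; i < SDL_NumJoysticks(); i++) {
        if (!SDL_IsGameController(i))
            continue;                       /* skip devices with no known mapping */
        SDL_GameController *pad = SDL_GameControllerOpen(i);
        if (pad) {
            printf("Found controller: %s\n", SDL_GameControllerName(pad));
            SDL_GameControllerClose(pad);
        }
    }
    SDL_Quit();
    return 0;
}
```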


These days a Steam console would be much more attractive.
And you’re right, I want one.


They had the “Steam Machine”, but effectively nobody bought it. Maybe now with the Deck people would be more open to it, who knows.
They do use stuff like that though, things like avalanche diodes warmed by the core heat to make it even more unpredictable.
But sometimes things don’t work the way they’re supposed to.
There must be a half dozen cheap ways to generate true random numbers.
The problem isn’t generating random data, it’s ensuring it’s “high quality”. (It’s all statistical checks; you can’t know ahead of time what random numbers should look like, otherwise they wouldn’t be random.)
That’s the problem the AMD chips seem to have: that check is failing and letting through low-quality data it should otherwise reject.
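To make the “statistical checks” point concrete, here’s a minimal sketch of the kind of sanity test involved: a crude monobit count in C. It’s illustrative only (the threshold is made up, and real validation like the kernel’s or NIST’s test suites is far more involved), but it shows how a source that starts returning constant output would get flagged while plausibly random data sails through:

```c
/* Crude monobit check: reject a buffer whose share of 1-bits is
 * implausibly far from 50%. It can catch an obviously broken source
 * (e.g. one stuck returning all zeros or all ones), but it cannot
 * prove the data is actually random. */
#include <stdint.h>
#include <stdlib.h>
#include <stdio.h>

static int monobit_ok(const uint8_t *buf, size_t len) {
    size_t ones = 0;
    for (size_t i = 0; i < len; i++)
        ones += __builtin_popcount(buf[i]);
    size_t total = len * 8;
    /* Arbitrary 46%..54% acceptance band, purely for illustration. */
    return ones * 100 > total * 46 && ones * 100 < total * 54;
}

int main(void) {
    uint8_t sample[512];
    for (size_t i = 0; i < sizeof sample; i++)
        sample[i] = (uint8_t)rand();        /* stand-in for a hardware source */
    printf("sample %s the monobit check\n",
           monobit_ok(sample, sizeof sample) ? "passes" : "fails");
    return 0;
}
```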


Because it’s not about the files anymore, it’s the free space on the disk you care about (or rather, the filesystem metadata describing it: the free-space bitmap in the case of exFAT).
If the files are highly fragmented and spread out, then the empty space around the files is also broken up and spread around, which makes it harder for a filesystem to efficiently store new stuff as it now has to break up and pack new file data into the gaps.
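As a toy illustration, here’s a hedged C sketch over a simplified free-space bitmap (exFAT’s real allocation bitmap uses 1 = allocated; this flips it to 1 = free for readability). Two volumes with the same amount of free space can have very different largest contiguous runs, and it’s that largest run that decides whether new data can be stored in one piece:

```c
/* Find the longest contiguous run of free clusters in a toy bitmap. */
#include <stdint.h>
#include <stdio.h>
#include <stddef.h>

static size_t longest_free_run(const uint8_t *bitmap, size_t clusters) {
    size_t best = 0, run = 0;
    for (size_t i = 0; i < clusters; i++) {
        int free_cluster = (bitmap[i / 8] >> (i % 8)) & 1;
        run = free_cluster ? run + 1 : 0;
        if (run > best)
            best = run;
    }
    return best;
}

int main(void) {
    /* 24 clusters, 12 free in both cases, laid out differently. */
    uint8_t contiguous[3] = { 0x00, 0xF0, 0xFF };   /* free space in one block  */
    uint8_t fragmented[3] = { 0xAA, 0xAA, 0xAA };   /* free clusters interleaved */
    printf("contiguous: longest run = %zu clusters\n", longest_free_run(contiguous, 24));
    printf("fragmented: longest run = %zu clusters\n", longest_free_run(fragmented, 24));
    return 0;
}
```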
better compression (btrfs compression doesn’t work on extents smaller than 128KiB, which excludes the majority of potentially-compressible data on MANY systems)
Well, straight away that’s wrong.
I also don’t get the complaint that if you create a confusing subvolume layout, it results in a confusing subvolume layout. Don’t do that then.


Sudo is worth redoing regardless of language.
Or move away from it entirely, e.g. to something like doas, which OpenBSD migrated to a decade ago.
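A big part of the appeal is how little configuration doas needs. A minimal sketch of an /etc/doas.conf, assuming a “wheel” admin group (the group name, and whether the persist option is available, varies by distro and build; the alice/reboot rule is purely illustrative):

```
# Let members of the wheel group run commands as root,
# caching credentials for a while like sudo does.
permit persist :wheel

# Illustrative: let one user run a single command without a password.
permit nopass alice cmd reboot
```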


And unsurprisingly, a majority of the comments on that post are complaining about systemd.


It’s real, but probably not an issue in practice.
If it does actually turn out to pose a problem, then just disable secure boot on those systems, not like it’s really securing anything at that point.
Edit: I did learn from this thread today, though, that ZSH can be set up so you just type part of what you’re looking for and then hit up to do the same thing. Neat!
Fish too, it’s fantastic.
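For anyone wanting that in zsh, one common way to wire it up in ~/.zshrc is the bundled up-line-or-beginning-search widgets; this is a sketch, and the arrow-key escape codes (“^[[A”/“^[[B”) can differ between terminals:

```
# Type a prefix, then press Up/Down to search history for it.
autoload -Uz up-line-or-beginning-search down-line-or-beginning-search
zle -N up-line-or-beginning-search
zle -N down-line-or-beginning-search
bindkey "^[[A" up-line-or-beginning-search
bindkey "^[[B" down-line-or-beginning-search
```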


AMD has its own mix of issues with Vulkan between RADV (mesa), AMDVLK, and AMD’s proprietary driver on a per-game basis at times.
Good news: they’re going away. AMD is focusing entirely on Mesa now.


One of the main things, I think, is that how memory is laid out is different somehow? So every memory access needs extra clock cycles to accomplish on standard arm64.
It’s down to “memory ordering”: as different cores interact with RAM, there are rules that govern how those cores see changes made by other cores. ARM systems are “weak”, so they rely on developers to be explicit about the sharing, while x86’s “Total Store Order” is considered “strong” and relies on the hardware to disentangle it all so software can make assumptions and play fast and loose.
You can do software emulation of strong memory ordering on a weak system, but it’s slow. What Apple did was provide a hardware implementation of strong ordering in their ARM chips, and Rosetta enables that when running x86 code, so users don’t encounter that slowdown.
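A hedged C11 sketch of what “being explicit about the sharing” means in practice: the message-passing pattern below is correct everywhere with release/acquire ordering. Swap those for memory_order_relaxed and it tends to still work on x86 because TSO won’t reorder the stores (though the compiler is still free to), but on weakly-ordered ARM the consumer can see ready set while data is still stale, which is exactly the gap Rosetta’s TSO mode papers over for translated code:

```c
/* Classic producer/consumer handoff with C11 atomics. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <threads.h>

static int data = 0;
static atomic_bool ready = false;

static int producer(void *arg) {
    (void)arg;
    data = 42;                                             /* plain write */
    atomic_store_explicit(&ready, true, memory_order_release);
    return 0;
}

static int consumer(void *arg) {
    (void)arg;
    while (!atomic_load_explicit(&ready, memory_order_acquire))
        ;                                                  /* spin until published */
    printf("data = %d\n", data);                           /* guaranteed to see 42 */
    return 0;
}

int main(void) {
    thrd_t p, c;
    thrd_create(&p, producer, NULL);
    thrd_create(&c, consumer, NULL);
    thrd_join(p, NULL);
    thrd_join(c, NULL);
    return 0;
}
```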
It’s a peaceful life.


Well that’s disappointing.


The funny thing is that for the longest time Intel actually had the majority share of GPUs, just by counting the ones embedded in motherboards of laptops and the like. No idea if that’s still the case, or if Nvidia or AMD has been eating into it with their new models (e.g. what powers the Steam Deck).
They’ve tried to break into the discrete market a few times, most recently with their Arc cards, but the way they approach things is just so odd. It’s like they assume the first attempt will be a smash hit and dominate, and when it doesn’t they just flounder? The Arc cards launched to a lot of fanfare and then there was just silence and delays from Intel.


Bad management, bad luck, and the usual market stuff. They’re going to do anything to cut costs.
Their R&D for new fab work is falling behind competitors (technically better doesn’t matter if nobody is buying it), they’ve had a bunch of bad CPU releases with hardware failures, and they’ve got next to no market presence with GPUs, which are currently making money hand over fist (mostly for dumb AI reasons, which is going to bite Nvidia hard when the bubble pops, because their new datacenter hardware is hyper-tuned for LLMs at the expense of general compute, unlike AMD’s).
Yeah, but that’s still not a lot of data; LTR/RTL shouldn’t vary within a given script, so the values will be shared over an entire range of characters.
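That “shared over an entire range” idea is basically a range table. A hedged sketch in C with a tiny hand-picked set of ranges (illustrative only, nowhere near a complete or authoritative Unicode bidi table; real implementations typically build tries from the same per-range data), showing how one entry can cover thousands of code points:

```c
/* Store one direction value per run of code points, binary-search on lookup. */
#include <stdio.h>

typedef enum { DIR_LTR, DIR_RTL } direction;

typedef struct {
    unsigned int first, last;   /* inclusive code point range */
    direction dir;
} dir_range;

/* A handful of well-known blocks, just to show the shape of the data. */
static const dir_range ranges[] = {
    { 0x0041, 0x024F, DIR_LTR },  /* Latin */
    { 0x0370, 0x03FF, DIR_LTR },  /* Greek */
    { 0x0590, 0x05FF, DIR_RTL },  /* Hebrew */
    { 0x0600, 0x06FF, DIR_RTL },  /* Arabic */
};

static direction lookup(unsigned int cp) {
    int lo = 0, hi = (int)(sizeof ranges / sizeof ranges[0]) - 1;
    while (lo <= hi) {
        int mid = (lo + hi) / 2;
        if (cp < ranges[mid].first)      hi = mid - 1;
        else if (cp > ranges[mid].last)  lo = mid + 1;
        else                             return ranges[mid].dir;
    }
    return DIR_LTR;                      /* default for anything not listed */
}

int main(void) {
    printf("U+05D0: %s\n", lookup(0x05D0) == DIR_RTL ? "RTL" : "LTR");
    printf("U+0061: %s\n", lookup(0x0061) == DIR_RTL ? "RTL" : "LTR");
    return 0;
}
```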