spoiler

made you look

  • 0 Posts
  • 49 Comments
Joined 1 year ago
Cake day: July 27th, 2024

  • One of the main things, I think, is that how memory is laid out is different somehow? So every memory access needs extra clock cycles to accomplish on standard arm64

    It’s down to “memory ordering”: as different cores interact with RAM, there are rules that govern how each core sees changes made by the others. ARM’s model is “weak”, so it relies on developers to be explicit about the sharing, while x86’s “Total Store Order” is considered “strong” and relies on the hardware to disentangle it all, so software can make assumptions and play fast and loose.

    You can do software emulation of strong memory ordering on a weak system, but it’s slow. What Apple did was provide a hardware implementation of strong ordering in their ARM chips, and Rosetta enables that when running x86 code, so users don’t encounter that slowdown.
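    To make “be explicit about the sharing” concrete, here’s a minimal C++ sketch (the variable names are made up): on a weakly ordered ARM core, the release/acquire pair below is what guarantees the reader sees `data = 42` once it sees `ready`; on x86, TSO gives ordinary stores that ordering for free, which is why naively translated x86 code can break on ARM.

    ```cpp
    #include <atomic>
    #include <cassert>
    #include <iostream>
    #include <thread>

    int data = 0;                   // plain, non-atomic shared variable
    std::atomic<bool> ready{false}; // publication flag

    void writer() {
        data = 42; // ordinary store
        // Release: all writes above must become visible before `ready` does.
        ready.store(true, std::memory_order_release);
    }

    void reader() {
        // Acquire: once we observe `ready`, we also see the writes before it.
        while (!ready.load(std::memory_order_acquire)) {
        }
        assert(data == 42); // guaranteed by the release/acquire pairing
    }

    int main() {
        std::thread w(writer), r(reader);
        w.join();
        r.join();
        std::cout << data << '\n';
    }
    ```

    On x86 the compiler emits plain loads and stores for this; on ARM it emits the extra ordering instructions, which is the cost Rosetta’s hardware TSO mode avoids paying on every single access.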




  • The funny thing is that for the longest time Intel actually had the majority share of GPUs, just by counting the ones embedded in motherboards of laptops and the like. No idea if that’s still the case, or if Nvidia or AMD have been eating into it with their new models (e.g. what powers the Steam Deck).

    They’ve tried to break into the discrete market a few times, most recently with their Arc cards, but the way they approach things is just so odd. It’s like they assume the first attempt will be a smash hit and dominate, and when it doesn’t they just flounder? The Arc cards launched to a lot of fanfare and then there was just silence and delays from Intel.


  • Bad management, bad luck, and usual market stuff. They’re going to do anything to cut costs.

    Their R&D on new fab processes is falling behind competitors (being technically better doesn’t matter if nobody is buying it), they’ve had a string of bad CPU releases with hardware failures, and they have next to no market presence in GPUs, which are currently making money hand over fist. (Mostly for dumb AI reasons, which is going to bite Nvidia hard when the bubble pops, because their new datacenter hardware is hyper-tuned for LLMs at the expense of general compute, unlike AMD’s.)


  • And the reason you’ll want to do this is that it exposes FS mounts in the service dependency tree, so e.g. you can delay starting PostgreSQL until after you’ve mounted the network share that it’s using as a backing store, while letting unrelated tasks start concurrently.

    If all you want to do is pass some special mount flags (e.g. x-systemd.automount), then fstab is the way to go; after all, it’s still systemd that’s parsing and managing it.
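    A hypothetical sketch of both approaches together (the hostname, export, and paths are made up): an fstab line like `nas:/export/pgdata  /var/lib/postgresql  nfs  x-systemd.automount,_netdev  0  0` is turned into mount/automount units by systemd’s fstab generator, and a drop-in then hooks PostgreSQL into that dependency tree:

    ```ini
    # /etc/systemd/system/postgresql.service.d/mounts.conf (hypothetical)
    # RequiresMountsFor= pulls in, and orders the service after, the mount
    # units covering the given path, so PostgreSQL won't start until the
    # network share is mounted; unrelated units still start concurrently.
    [Unit]
    RequiresMountsFor=/var/lib/postgresql
    ```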


  • Then there are accessibility functions, which Wayland breaks almost by design by denying apps access to each other. Even something as simple as an on-screen keyboard becomes nearly impossible to implement.

    That’s a side effect of just dumping everything into X11: once you switch away from it, you lose all the random kitchen-sink warts it grew over the years.

    Like, an on-screen keyboard shouldn’t be fiddling with a display protocol to fake keyboard inputs; it should be using the actual OS input layer to emulate them (then it would also work with devices that read input directly rather than going via X11). Same with accessibility: there’s a reason other OSes use separate communication channels with their own protocols.