

One liter?
That’s like, barely enough for a brief hike. Minimum 20 liters a head.
“The future ain’t what it used to be.”
-Yogi Berra




It depends what you mean by real time, but GOES is pretty sick:
https://www.star.nesdis.noaa.gov/GOES/sector.php?sat=G18&sector=psw&refresh=true
Updates every five minutes.
Then of course there's MODIS, VIIRS, Landsat, and others. Those don't have a nice viewer like this, though, and yes, there is some processing involved, but if you grab them through GEE you can get some "preprocessed" results.
Edit (I think this will work?):
This image should stay updated to the “live” view of the western GOES satellite:

Yeah but I really need to update the kernel


You have been nominated the Sheriff of Lemmy.


I would be genuinely shocked to find a US police officer here.


The guy's been a hate-mongering fixture on the right for around a decade. Far more prominent than Shapiro.


A 30-year-old vendetta against Microsoft.
I mean who doesn’t?


It’s like e-waste recycling, but cooler.


So, could you add additional old hardware to this “pool” to use as needed?
Pretty much. You just take any old scraps of hardware, throw Proxmox on it, and it does the rest. Then you just follow the menu and click through to say how many cores you want, how much RAM, how much disk storage, and which image, like it’s a Taco Bell menu.
In a few seconds you’ve got a new machine spun up with its own IP, which you can remote into from the original machine, or really, any other machine on the network.
I only got into it seriously a few months ago (I’ve self hosted in the past, but it was a PITA), and I was gobsmacked at how easy it was.
Check out this guide: https://www.youtube.com/watch?v=zngSuqCM4d8
You could literally be up and running in 15 minutes.


Think of it more as a “distro” bazooka.
It’s Linux, yes. But what it allows you to do is take old machines that you might have left retired and create “pools” of compute resources, onto which you can deploy whatever image floats your boat, on any size machine you like.
Say, for example, like me, you have an old System76 Serval. It’s got a good processor (6 cores), 32 GB of DDR4 RAM, and a 2070. I haven’t used it as a daily driver since 2022.
I put Proxmox on it and stuck it next to my NAS, close to my router. Then I took 2 cores and 6 GB, installed Ubuntu LTS on it, and then installed Coolify. Using Coolify I spun up Jellyfin, some Home Assistant stuff, and a Postgres for one of my work projects.
Then I took the other 4 cores and 28 GB of RAM and put Pop!_OS on it to use as a development machine for that work project. It can stay alive as long as the project is going, and when I’m finished, I can give that compute back to the pool and redeploy it.
It also makes it incredibly easy and fast to test different versions of Linux (I think you can do Windows from an image this way too, but don’t quote me).
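For what it’s worth, the click-through menu above maps onto Proxmox’s `qm` CLI; here’s a rough sketch of the same allocation (the VM ID, storage name, and ISO path are just examples for your own setup):

```shell
# Carve a VM out of the pool: 2 cores, 6 GB RAM, a 32 GB disk,
# bridged networking so it gets its own IP on the LAN
# (ID 101, "local-lvm", and the ISO path are placeholders)
qm create 101 --name coolify-host \
  --cores 2 --memory 6144 \
  --net0 virtio,bridge=vmbr0 \
  --scsi0 local-lvm:32 \
  --cdrom local:iso/ubuntu-24.04-live-server-amd64.iso

# Boot it; from here you can remote in from any machine on the network
qm start 101

# When the project wraps up, hand the compute back to the pool
qm stop 101 && qm destroy 101
```

The web UI does exactly this under the hood, so anything you click through can also be scripted.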


Have you tried proxmox yet?
Here’s all the parts of a meme. Put them together yourself.


Yeah, that’s why I was wondering if their machine was fairly new. I’ve found consistently better driver support over time. I’m genuinely surprised that it’s an older machine and still having these issues.


Basically yes.
But thankfully they are equal-opportunity ass and suck on all platforms, not just Linux. I’m on Bazzite rn because I couldn’t get the Bluetooth on either Fedora OR Ubuntu to work at full speed. Granted, my machine is very new, but I’m still getting that occasional issue where a BT device connects and the whole system lags.


Is it a pretty new machine?
Aww and I just graduated to “that weird guy”


I’ll give it a shot, but tbh, it’s been a bit of a slog. I’m on the new Z13, the 128gb variant.
I can’t find an “it just works” variant where both Ollama and ROCm play nice with the hardware AND the MediaTek card works correctly. Either I’m able to self-host full-size LLMs (and do the rest of my ML work) OR I get fully functional Wi-Fi.
I’ve got the whole install process for Ollama + ROCm + Open WebUI all set on Ubuntu, but the Wi-Fi card is barely getting 20 Mbps. Access to ROCm (and I assume it will be the same in PyTorch) is buttery smooth, though, and I can run medium models at hundreds of tokens per second locally.
When I throw on Bazzite I’m hitting 350 Mbps down, but it doesn’t seem like it’s got the right ROCm/driver/kernel/Ollama combo, because I’m not even able to get 5 tps.
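For anyone fighting the same tradeoff, the numbers above came from rough checks along these lines (the model name and iperf3 server IP are just examples; `rocminfo` ships with the ROCm stack):

```shell
# Confirm ROCm actually enumerates the GPU
rocminfo | grep -i gfx

# Ollama's --verbose flag prints an eval rate (tokens/s) after each response
ollama run llama3 --verbose "hello"

# Rough Wi-Fi throughput check against another LAN machine running `iperf3 -s`
# (192.168.1.10 is a placeholder for that machine's address)
iperf3 -c 192.168.1.10
```

If `rocminfo` shows nothing, Ollama is almost certainly falling back to CPU, which would explain single-digit tps.
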
Trump’s cabinet probably: