The AMD graphics driver is reputedly the biggest that mainstream Linux users will encounter, approaching six million lines of code.
That does seem a bit … excessive.
🅸 🅰🅼 🆃🅷🅴 🅻🅰🆆.
𝕽𝖚𝖆𝖎𝖉𝖍𝖗𝖎𝖌𝖍 𝖋𝖊𝖆𝖙𝖍𝖊𝖗𝖘𝖙𝖔𝖓𝖊𝖍𝖆𝖚𝖌𝖍
I agree; it probably didn’t occur to them. But it was a fairly common job in IT in the ’90s. Not a career or job description, maybe, but a duty you got saddled with.
I can see that, although TBH I almost never have to “admin” EndeavourOS. I just upgrade every once in a while.
Most important to me is being able to find and install whatever software I want, and I have a strong preference that it either be installed in my ~, or be managed by the package manager. I really dislike sideloading software globally. And Arch does this better than most. The AUR is massive, and packages are trivial to write and install in the rare event something isn’t in the AUR.
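To give a sense of how trivial: a complete package recipe can be this short. Everything below (the package name, URL, and tarball) is a made-up placeholder for illustration, not a real package.

```shell
# Hypothetical minimal PKGBUILD; name, URL, and source are placeholders.
pkgname=hello-tool
pkgver=1.0.0
pkgrel=1
pkgdesc="Illustration of how small a package recipe can be"
arch=('x86_64')
url="https://example.com/hello-tool"
license=('MIT')
source=("$url/releases/hello-tool-$pkgver.tar.gz")
sha256sums=('SKIP')

build() {
  cd "hello-tool-$pkgver"
  make
}

package() {
  cd "hello-tool-$pkgver"
  make DESTDIR="$pkgdir" install
}
```

Drop that next to the source tarball, run `makepkg -si`, and the result is built and installed under pacman’s management rather than sideloaded.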
What they really got wrong was the clothing: so much anime/hentai.
This is a techno-goth.
Base Arch can be fussy, but that’s because there’s a lot to set up, so many opportunities to forget things and only discover them later.
I ran Artix on a laptop for about a year; that was a constant PITA, although I still value their goals.
But EndeavourOS has been an entirely different matter. It’s a “just works” Arch derivative.
I had so many fewer problems with Arch that I went through the effort to convert my 3 personal cloud servers from Debian to it. I went through a lot of work to replace the default Mint on an ODroid with Arch, and it’s been so much better. I put EndeavourOS on the last two non-servers I installed. So, yes, I personally find it far more reliable and easier to work with than Ubuntu, Debian, or Mint.
That said, I had dad install Mint on a new computer he bought because I had to do it over the phone and he never, ever, upgrades his packages, and almost never installs anything. If all I’m going to do is install it once and then never change anything, Mint is easier. But for a normal use case where I’m regularly updating and installing software, Arch is far easier and more reliable.
A studio should be able to afford a good LTO tape drive for at least one backup copy; LTO tapes last over 30 years and suffer far less from random bitrot than spinning disks. Just pay someone to spend a month duplicating the entire archive every couple of decades. And every decade you can also consolidate a bunch of tapes, since capacity has kept increasing; 18 TB tapes are now available. In cost per byte, tape is always far cheaper than disk.
They could have done that with the drives, but today you’d have to go find an ATA IDE or old SCSI card (if you’re lucky) that’ll work on a modern motherboard.
But I’d guess their problem is more not having a process for maintaining the archives than the technology. Duplicating and consolidating hard drives once a year would have been relatively cheap, and as long as they verified checksums and kept duplicates, HDs would have been fine too.
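A sketch of what that yearly verification pass could look like with plain coreutils; the archive paths here are hypothetical.

```shell
# When the archive is written: record a checksum manifest.
# (Paths are hypothetical.)
cd /mnt/archive
find . -type f -print0 | xargs -0 sha256sum > /mnt/manifests/archive.sha256

# At every yearly duplication: verify the copy against the manifest.
# Any file that has rotted since the manifest was written gets flagged.
sha256sum --check --quiet /mnt/manifests/archive.sha256
```

That, plus keeping at least two copies of everything, is most of a maintenance process: a flagged file on one copy gets replaced from the duplicate that still verifies.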
Github is full of lists of things. There are even several lists of “awesome lists”. Far more than I can list here, and it starts to get painfully recursive (an awesome list of awesome lists of awesome lists?). Just search for “awesome list” on github; some live outside of github, so you could search for “awesome list” in Ecosia, or DDG, or whatever you use.
The Energy Star program had been around for about 15 years at that point, and, for computers, was almost exclusively limited to monitors. In 2009, the Energy Star specification was version 4.0, released in 2006. In that specification, the EPA’s objective was to get 40% of the computers on the market to have power management capabilities by 2010 – only 40%, by the year after Bitcoin was introduced. Intel’s 2009 TCO-driven upgrade cycle document mentions power management, but power use isn’t included in any of the TCO metrics.
All of the focus on low-power processing units in 2009 was for mobile devices and DSPs. Computer-oriented energy saving at the time was focused on process, e.g. manually powering down computers or using suspend and hibernation; there was very little CPU clock scaling available for desktop computers – you turned them off to save power. DVFS didn’t become widely available – or effective – until 2006, and a study published in 2009 (again, the same year Bitcoin was introduced) found that “only 20% of initiatives had measurable targets.”
So, yes: technically, there were people thinking about these sorts of things, but it wasn’t a common consumer consideration, and the tools for power management were crude: your desktop was on and consuming power – always the same amount of power – or it was off. And people did power down their computers to save energy. But, like I said, if your desktop was on, it was consuming the same amount of energy whether you were running a miner or weren’t. There was a motto at the time bandied about by SETI@home, that your computer was using energy anyway, so you might as well do science with the spare CPU cycles. That was the mindset of most people who had computers at the time.
Shucks, I switched from screen to tmux over a decade ago, simply because (a) screen wasn’t a ubiquitous tool, and (b) tmux was superior in almost every way. I haven’t encountered screen in the wild in years.
a) is still important; I like and use ripgrep and fd, but grep and find are still useful because they’re always installed, everywhere that’s even halfway POSIX. ripgrep and fd aren’t everywhere - e.g. BusyBox. But screen isn’t in that core toolset, so there was little reason to hang on to it.
I agree, it’s a great idea. The limitation is that the maximum amount of data a QR code can hold is about 3 kB (2,953 bytes of binary data at version 40 with the lowest error correction). Assuming the encoded data is compressed - either mostly ASCII and compressed, or a custom serialization format referencing external lookup tables - you could reasonably fit around 10 kB of debugging information into a single QR code. That’s not too shabby.
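A quick sanity check of that estimate with gzip; the repetitive “crash log” below is invented for illustration, and 2,953 bytes is the version-40 binary-mode QR capacity.

```shell
# Fabricate ~10 kB of repetitive, crash-dump-style ASCII.
yes 'panic: oops at 0xdeadbeef r0=0 r1=1 r2=2' | head -c 10240 > /tmp/crash.log

# Repetitive ASCII compresses far below the 2953-byte QR limit.
gzip -9 -c /tmp/crash.log | wc -c
```

Register dumps and stack traces are exactly this kind of repetitive text, which is why the 10 kB-into-3 kB estimate is reasonable; random binary data, by contrast, wouldn’t compress at all.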
I may know someone who used an fmovies site to watch something last night, so some are still up.
Dead.
Pirate streaming is a Hydra. Cut off one head, two more grow.
This is very true; that’s just plain Capitalism, and the government takes advantage of that through simply asking for the data.
It’s a great reason to never use MS or Apple software.
I’m stuck on Android, which is no better, at least until someone sells a phone that is reasonably usable as a reliable daily driver. So, I assume everything going through my phone is surveilled. It’s the price I pay for not wanting to limit myself to a dumb phone; a minimalist phone that would allow me to use a P2P encrypted chat client would be sufficient; I’d even accept Signal, although I’m not a fan. But phones like the Light Phone are just too dumb, and none provide any sort of encrypted chat. Linux-based phones (or, a phone-oriented Linux distro) are almost there, though, and I’m ready to jump when one gets a decent review.
Sure. If anyone is willing to put in that effort; I’m not going to audit all that code.
Does Deepin have its own package sources? B/c if so, you’d also have to audit all of the third-party packages for trojans, too.
The difference is that laws in China require companies doing business in China to provide the Chinese government with the means to access all data crossing Chinese borders or involving persons of interest. You can read China’s DSL yourself. And considering that nearly every executive of any significant Chinese company also holds an office of some sort in the Chinese government, a vast number of Chinese nationals are considered “persons of interest” to the national security of China and therefore fall under the DSL’s purview.
Any company building or selling software in China has to provide the Chinese government with access to data collected in China, or outside of China if it involves persons of interest for national security. Like I said, find the DSL and read it yourself, or read an InfoSec analysis of it from a company you trust - you don’t have to take my word for it.
This immediately puts Chinese software into a different category of risk than non-Chinese software. Of course, the US could twist arms to get companies to put backdoors in software. But it’s a false equivalency to say that they’re the same. When the US does it, they have to do it covertly, and there’s always the risk of a leak. When Chinese companies do it, they’re doing it because Chinese data laws require them to.
We’re gonna put creatives out of work, we’re gonna sell a unified product to replace them, and we’re gonna use their own labor to build their replacements.
Yes, but: it’s short sighted, and wrong. Until we have a sea change in the LLM/AGI space, “creatives” will be needed for seed data. LLMs that are recursively trained on their own output degrade and produce worse output over time.
The “yes” part is that companies are looking to stop paying people for their work, while still hoping that Creative Commons types keep posting online for free harvesting.
the practice of deliberately wasting enormous amounts of energy for the purpose of being able to prove that you’ve wasted enormous amounts of energy.
C’mon, that’s being disingenuous. Back when Bitcoin was released, nobody was giving a thought to computer energy use. A consequence of proof-of-work is wasted energy, but the focus on low-power modalities and throttling has developed only in the intervening years. The prevailing paradigm at the time was, “your C/GPU is going to be burning energy anyway, you may as well do something with it.”
It was a poor design decision, but it wasn’t a malicious one like you make it sound. You may as well accuse the inventors of the internal combustion engine of designing it for the express purpose of creating pollution.
And then this article showed up today on EFF’s site; things are getting worse, not better 😠 🏴☠️
The best proof of advancements in the field of AI is Zuckerberg himself. He looks more and more like a real human every time I see a new picture of him.
Compiling has never been the hard part. The challenge is making it through the entire configuration menu system before succumbing to the urge to gouge your own eyes out with blunt sticks.
Once that’s done, kick off `make` and take a long break; it’ll be compiled by the time you get back to it. I hear build times are getting longer with the Rust parts, though, so do it soon, before you need mainframe access to get a compile within your lifetime.
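For reference, the usual flow looks something like this; exact targets vary by distro and tree, so treat it as a sketch rather than a recipe.

```shell
make menuconfig            # the eye-gouging part
make -j"$(nproc)"          # the long break; use every core you have
sudo make modules_install  # install modules under /lib/modules
sudo make install          # install the kernel image and update the bootloader
```

The `-j"$(nproc)"` is what keeps the break merely “long” instead of geological: it runs one compile job per CPU core.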