A software developer and Linux nerd, living in Germany. I’m usually a chill dude but my online persona doesn’t always reflect my true personality. Take what I say with a grain of salt, I usually try to be nice and give good advice, though.

I’m into Free Software, self-hosting, microcontrollers and electronics, freedom, privacy and the usual stuff. And a few select other random things, too.

  • 1 Post
  • 182 Comments
Joined 11 months ago
Cake day: June 25th, 2024



  • hendrik@palaver.p3x.de to Linux@lemmy.ml · How to backup around 200 DVD · edited 2 days ago

    I think the best bet to preserve them as-is would be dd, or ddrescue if there are read errors. You might be able to write a small shell script to automate things: open the tray, read a filename from the user, close the tray, rip the disc, then repeat. That way you notice the open tray, change disks, enter the title, hit enter and come back 10 minutes later. It obviously takes something like 20 days if you do 10 per day. And you’re looking at roughly 1 TB of storage if they’re single-layer DVDs (about 4.7 GB each).
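    Something like the following, written as a Python sketch rather than a shell script and purely illustrative; the drive path, output directory and ddrescue options are assumptions you’d adjust for your setup:

    ```python
    #!/usr/bin/env python3
    # Minimal sketch of the rip loop described above: open the tray, ask for a
    # title, close the tray, image the disc with ddrescue, repeat.
    # Assumes a drive at /dev/sr0 and the 'eject' and 'ddrescue' tools installed.
    import subprocess
    from pathlib import Path

    DRIVE = "/dev/sr0"            # adjust to your optical drive
    OUTDIR = Path("dvd-backups")  # where the images end up
    OUTDIR.mkdir(exist_ok=True)

    while True:
        subprocess.run(["eject", DRIVE])          # open the tray
        title = input("Disc title (empty to quit): ").strip()
        if not title:
            break
        subprocess.run(["eject", "-t", DRIVE])    # close the tray
        image = OUTDIR / f"{title}.iso"
        mapfile = OUTDIR / f"{title}.map"         # ddrescue map file, lets you resume
        # ddrescue copes with read errors; plain dd would also do for healthy discs
        subprocess.run(["ddrescue", "-b", "2048", DRIVE, str(image), str(mapfile)])
        print(f"Done: {image}")
    ```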


  • Is an SSD’s cache even about wear? I mean wear only happens on write operations, and I would expect an SSD to apply writes as fast as possible, since piling up work (a filled write cache) means additional latency and less performance on the next, larger write operation, along with a few minor issues like possible data loss on (power) failure.
    And on read, a cache on the wrong side of the bottleneck doesn’t do that much. An SSD has pretty much random access to all of its memory; it’s not like it has to wait for a mechanical head to move into position on the platter before data becomes available?!

    But I haven’t looked this up; I might be wrong. What I usually do is make sure a computer has enough RAM and that it’s used properly. That will also cache data and avoid unnecessary transfers. And RAM is orders of magnitude faster; you can get gigabytes of it for a few tens of dollars… Though adding RAM might not be easy on the more recent Thinkpads. I’ve noticed that for some years now they come with at most one RAM slot, sometimes none, with the memory soldered on.
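    If you want to check whether your RAM actually gets used as file cache, Linux exposes that in /proc/meminfo. A tiny sketch, no special libraries, field names as documented by the kernel:

    ```python
    # How much RAM is Linux currently using as file cache? Read /proc/meminfo
    # (values are reported in kB) and sum the Cached and Buffers fields.
    fields = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":")
            fields[key] = int(value.strip().split()[0])  # drop the "kB" unit

    total_gib = fields["MemTotal"] / 1024 / 1024
    cached_gib = (fields["Cached"] + fields["Buffers"]) / 1024 / 1024
    print(f"RAM total: {total_gib:.1f} GiB, used as cache/buffers: {cached_gib:.1f} GiB")
    ```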


  • I think the SATA connection might be the bottleneck, with its maximum throughput of about 600 MB/s. So for that use case you don’t need to worry about the SSD’s speed and cache; it won’t be able to reach its full performance through the SATA link. But I don’t know how exactly you plan to repurpose it later. Maybe skip the adapter if it’s expensive, buy a cheap SATA SSD now and a new, fast PCIe one in a few years once you get a new computer.
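    For reference, the ~600 MB/s figure follows from SATA III’s 6 Gbit/s line rate and its 8b/10b encoding; a quick back-of-the-envelope:

    ```python
    # SATA III signals at 6 Gbit/s, but 8b/10b line coding means only 8 of
    # every 10 bits on the wire carry payload data.
    line_rate_bits = 6_000_000_000          # 6 Gbit/s
    payload_bits = line_rate_bits * 8 / 10  # after 8b/10b encoding
    payload_bytes = payload_bits / 8
    print(f"Usable SATA III bandwidth: ~{payload_bytes / 1e6:.0f} MB/s")  # ~600 MB/s
    ```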


  • Yeah, sure. No offense. I mean humans differ as well. I’ve got friends who will talk about a subject they’ve read some article about, tell me a lot of facts, and I rarely see them make any mistakes or confuse things. And then I’ve got friends who like to talk a lot, and I’d better check where they picked that up.
    I think I’m somewhere in the middle. I definitely make mistakes. But sometimes my brain manages to store where I picked something up, and whether that was speculation, opinion or fact, along with the information itself. I’ve had professors who could quote information verbatim and tell you roughly where, and in which book, to find it.

    With AI I’m currently very cautious. I’ve seen lots of confabulated summaries and made-up facts, and if it’s designed to, it’ll present them in a professional tone. I’m not opposed to AI, but I’m not a big fan of some of its applications either. I just think it’s still very far away from what I’ve seen some humans do.


  • I think the difference is that humans are sometimes aware of it. A human will likely say, “I don’t know what Kanye West did in 2018”, while the AI is very likely to make something up. And also, in contrast to a human, it will likely be phrased like a Wikipedia article. You can often look a human in the eyes and tell whether they’re telling the truth, lying, or are uncertain. Not always, and we also say untrue things, but I think the hallucinations are different in several ways.


  • I’m not a machine learning expert at all. But I’d say we’re not set on the transformer architecture. Maybe invent a different architecture which isn’t subject to that? Or factor it in specifically. Isn’t the way we currently train LLM base models just to feed in all the text we can get, from Wikipedia and research papers to all the fictional books on Anna’s Archive and weird Reddit and internet talk? I wouldn’t be surprised if they make things up, since we train them on factual information, fiction and creative writing without any distinction… Maybe we should add something to the architecture to make it aware of the factuality of text, and guide that. Or: I skimmed some papers a year or so ago where they had a look at the activations. Maybe do some more research into which parts of an LLM are concerned with “creativity” or “factuality” and expose that to the user. Or study how hallucinations work internally and then try to isolate them so they can be handled accordingly.
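    To illustrate the probing idea (not how the papers I skimmed actually did it): a toy sketch that trains a linear probe on hidden states of a small open model to separate factual-sounding from creative-sounding text. The model choice, labels and mini dataset are all just placeholders; real probing work uses much larger labelled corpora.

    ```python
    # Toy linear probe on LLM activations: embed a few sentences with GPT-2,
    # then fit a logistic regression to tell "factual" from "creative" text.
    import torch
    from transformers import AutoModel, AutoTokenizer
    from sklearn.linear_model import LogisticRegression

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModel.from_pretrained("gpt2", output_hidden_states=True)
    model.eval()

    texts = [
        ("Water boils at 100 degrees Celsius at sea level.", 1),    # factual
        ("The Berlin Wall fell in 1989.", 1),                       # factual
        ("The dragon folded the moonlight into a paper boat.", 0),  # creative
        ("Her laughter tasted like forgotten summers.", 0),         # creative
    ]

    def embed(text):
        # Mean-pool the last hidden layer into one vector per text.
        inputs = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**inputs).hidden_states[-1]  # shape: (1, tokens, dim)
        return hidden.mean(dim=1).squeeze(0).numpy()

    X = [embed(t) for t, _ in texts]
    y = [label for _, label in texts]

    probe = LogisticRegression(max_iter=1000).fit(X, y)
    print(probe.predict([embed("Mount Everest is the highest mountain on Earth.")]))
    ```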



  • Yeah, seems we’re on the same page, then. Because I occasionally get into that situation. People ask me stuff and I’ll tell them there is software XY or Linux distribution XY which does exactly what they’re looking for, but it’s owned by a company known for making problematic business decisions, so I wouldn’t recommend using it before giving some thought to whether that’s going to get in the way of their use case… Or I’ll tell them about some software project and simultaneously say I can’t endorse it due to the political stance or behaviour of the devs/maintainers… This has happened to me a few times with niche projects, Android distributions and Fediverse projects. I won’t walk around advertising them, but instead only give a complete picture of the situation on request.

    And I do the same in other parts of my life… I try to boycott clothes from a particularly bad sweatshop, even if they fit and suit me well… I won’t buy tasty food if it’s from Nestlé or the Coca-Cola Company… Though those are on a different level of “bad” than this one. Just saying toxic things on the internet isn’t the same as supporting child labor, slavery and stealing poor people’s water supply.

    My current device is a Dell laptop I got second hand.




  • How about we just tell the truth as it is? I mean, in your analogy: would you recommend a faulty car with the same words you’d choose for a very nice one? Would you hide that the manufacturer does problematic things? The way you phrase it has indeed some things in common with recommending a Tesla these days; generally, people don’t keep their mouths shut about who manufactures them. So no, I don’t think speaking the truth is babysitting at all… But of course you also don’t hide the fact that Hyprland exists and whether it’s any good. I’d advocate for just stating the facts. As an added bonus, everyone can then go ahead and make that decision themselves.
    I personally wouldn’t buy a Swasticar. I have fewer objections to using Hyprland. But I always try to give out this kind of info as well if someone asks me about software, because I think it matters whether a project is healthy, has a nice community, etc.
    I think the comparison with driving cars falls a bit short, since we’re not recommending that people shouldn’t use any desktop at all. It’s fine to use one. And it’s also fine to drive a car; you should just be aware of the consequences. And in fact I think it’d be beneficial if we drove fewer cars, for several reasons.


  • hendrik@palaver.p3x.de to Linux@lemmy.ml · SSH managers on Linux? · edited 4 days ago

    Uh, I just type ssh or rsync into the terminal and that’s it. It’s a manageable number of computers/servers I connect to, so I can remember their names. Regular ssh stores all the keys and custom ports/IPs in its config. What’s the advantage of using some manager?
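    For example, a ~/.ssh/config along these lines already covers aliases, users, ports and keys (the host names, addresses and key paths here are made up):

    ```
    # ~/.ssh/config -- hosts, addresses, users and key paths are placeholders
    Host homeserver
        HostName 192.168.1.10
        User hendrik
        IdentityFile ~/.ssh/id_ed25519

    Host vps
        HostName example.org
        Port 2222
        User deploy
        IdentityFile ~/.ssh/vps_key
    ```

    After that, `ssh vps` or `rsync -av ./ vps:/srv/` picks up the stored settings, since rsync goes over ssh anyway.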




  • Uh, I don’t have a good answer for that, but I’d give them something like Linux Mint anyway. That way they can look stuff up, watch tutorials and don’t have a super niche thing running. Or give them one of the popular gaming distros, if gaming is the point.

    Idk. Gnome feels very much like Android to me, and KDE follows similar design patterns to Windows. And kids and teenagers tend to figure out all the things they want to, if they have the motivation to do so.


  • hendrik@palaver.p3x.de to Linux@lemmy.ml · In regard to Hyprland and Fascism · edited 5 days ago

    And in addition to that: it’s also kind of a big deal that they get an audience. The more people use the project, the bigger the audience. They’ll get a Discord and people will join because of the project, people will start reading their blog because of the attention the software brings… People will maintain and package the software, or use it, or contribute to it, directly resulting in interactions with the group which develops the project. That’s a direct consequence of the project getting attention, and “promoting” is one way of drawing attention.


  • Yeah, you’re right. I didn’t want to write a long essay, but I thought about recommending Grok. In my experience it tries to bullshit people a bit more than other services do, but the tone is different. I found that deep down it has the same bias towards positivity, though; in my opinion it’s just hidden behind a slapped-on facade. Ultimately it’s similar to slapping a prompt onto ChatGPT, except that Musk may also have baked it into the fine-tuning step.

    I think there are two sides to the coin. The AI is the same either way: it’ll give you somewhere between 50% and 99% correct answers and lie to you the rest of the time, since it’s only an AI. If you make it more appealing to you, you’re more likely to believe both the correct things it generates and the lies. Whether that’s a good or a bad thing really depends on what you’re doing. It’s arguably bad if it phrases misinformation to sound like a Wikipedia article; it might be better to make it sound personal, so that once people anthropomorphize it, they don’t switch off their brains. But this is a fundamental limitation of today’s AI: it can do both fact and fiction, and it’ll blur the lines. Yet in order to use it, you can’t simultaneously hate reading its output.
    I also like that we can change the character. I’m just a bit wary of the whole concept, so I try to use it more to spark my creativity and less to answer questions about facts. I also have some custom prompts in place so it does things the way I like. Most of the time I’ll tell it that it’s a professional author who wants to help me (an amateur) with my texts and ideas; that way it gives more opinions rather than trying to be factual. And when I use it for coding some tech demos, I’ll use it as is.
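    As an illustration, such a custom prompt can be as simple as a system message when going through an API; the wording and model name below are just examples, not my actual setup:

    ```python
    # Sketch of a custom prompt via the OpenAI Python SDK: a system message that
    # makes the model act as an author giving opinions rather than an encyclopedia.
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    SYSTEM_PROMPT = (
        "You are a professional author helping an amateur writer. "
        "Give opinions and concrete suggestions on their texts and ideas "
        "instead of trying to be an encyclopedia."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": "Here's my draft opening paragraph: ..."},
        ],
    )
    print(response.choices[0].message.content)
    ```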


  • I’d have to agree: don’t ask ChatGPT why it has changed its tone. It’s almost certain that this is a made-up answer, and you (and everyone who reads this) will end up stupider than before.

    But ChatGPT has always had a certain tone. Before this change, it sounded very patronizing to me, and it’d always counterbalance everything. Since the early days it has told me, you have to look at this side, but also look at that side. And it’d be critical of my emails and say I can’t be blunt but have to phrase them in a nicer way…

    So yeah, the answer is likely known to the scientists/engineers who do the fine-tuning or preference optimization. Companies like OpenAI tune and improve their products all the time. Maybe they found out people don’t like the sometimes patronizing tone, and now they’re going for something like “Her”. Idk.

    Ultimately, I don’t think this change accomplishes anything. Now it’ll sound more factual, yet the answers have about the same degree of factuality; they’re just phrased differently. So if you like that better, good. But either way, you’re likely to keep asking it questions, let it do the thinking and become less of an independent thinker yourself. What it said about critical thinking is correct, but it applies to all AI regardless of its tone. You’ll get those negative effects with your preferred tone of speaking as well.