• 0 Posts
  • 35 Comments
Joined 1 year ago
Cake day: June 12th, 2023

  • Back in the day with dial-up internet, man pages, readmes and other included documentation were pretty much the only way to learn anything, as the www was in its very early stages. And still, ‘man <whatever>’ is way faster than trying to search for the same information over the web. Today at work I needed the man page for setfacl (since I still don’t remember every command’s parameters) and I found out that the WSL2 Debian on my office workstation doesn’t ship the ‘man’ command out of the box, and I was more than mildly annoyed that I had to search for that.

    Of course today it was just an alt+tab to the browser, a new tab and a few seconds for results (which most likely consumed enough bandwidth that on dial-up it would’ve taken several hours to download), but it was annoying enough that I’ll spend some time on Monday fixing this on my laptop.
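    In case anyone else hits this, the fix on a stripped-down Debian/WSL2 image should be along these lines (package names are the standard Debian ones, but check your release):

```shell
# 'man' itself lives in the man-db package; 'manpages' adds the base
# documentation set that minimal images leave out
sudo apt update
sudo apt install -y man-db manpages

# after which this works offline again:
man setfacl
```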


  • IsoKiero@sopuli.xyz to Linux@lemmy.ml · Man pages maintenance suspended
    13 days ago

    I mean that the product made here is not the website, and I can well understand that the developer has no interest in spending time on it, as it’s not beneficial to the actual project he’s been working on. And I can also understand that he doesn’t want to receive donations from individuals, as that would bring in even more work to manage, which is time spent away from the project. A single sponsor with clearly agreed boundaries is far simpler to manage.




  • IsoKiero@sopuli.xyz to Linux@lemmy.ml · The Insecurity of Debian
    15 days ago

    The threat model seems a bit like fearmongering. Sure, if your container gets breached and the attacker can (on some occasions) break out of it, it’s a big deal. But how likely is that, really? And even if it happened, isn’t the data in the containers far more valuable than the base infrastructure under it in almost all cases?

    I’m not arguing against the SELinux/AppArmor comparison. SELinux can be more secure, assuming it’s configured properly, but there are quite a few steps in hardening the system before that. And as others have mentioned, neither of those is really widely adopted, and I’d argue that when you design your setup properly from the ground up you really don’t need either, at least unless the breach happens through some obscure 0-day or other bug.

    For the majority of data leaks and other breaches that’s almost never the reason. If your CRM or ecommerce software has a bug (or a misconfiguration, or a ton of other options) which allows dumping everyone’s data out of the database, SELinux wouldn’t save you.

    Security is hard indeed, but that’s a bit of an odd corner to look at it from, and it doesn’t have anything to do with Debian or RHEL.


  • If I had to guess, I’d say that e1000 cards are pretty well supported on every public distribution/kernel without any extra modules, but I don’t have any around to verify it. At least on this Ubuntu I can’t find any e1000-related firmware package or anything else, so I’d guess it’s supported out of the box.

    For ifconfig: if you omit ‘-a’ it doesn’t show interfaces that are down, so maybe that’s the obvious thing you’re missing? It should show up in NetworkManager (or any other graphical tool, as well as nmcli and other CLI alternatives), but as you’re going the manual route I assume you’re not running any. mii-tool should pick it up on the command line too.

    And if it’s not that simple, there seems to be at least something around the internet if you search for ‘NVM checksum is not valid’ and ‘e1000e’, specifically related to Dell, but I didn’t go down that path too deep.
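    A few generic checks along those lines (nothing here is specific to your machine):

```shell
# ifconfig without -a hides interfaces that are down; this lists everything:
ip -br link show

# which kernel driver, if any, claimed the card
# ('|| true' because grep finds nothing on boxes without a PCI NIC):
lspci -nnk | grep -iA3 ethernet || true

# the kernel log usually explains why e1000e refused the card:
dmesg | grep -iE 'e1000|NVM checksum' || true
```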


  • A part of it is because the technology, especially a decade or so ago, had restrictions. Like with ADSL, which often (or always) couldn’t support higher upload speeds due to the end-user hardware, and the same goes for 4/5G today: your cellphone just doesn’t have the power to transmit as fast/far as the tower access point.

    But with wired connections, especially fibre/coax, that doesn’t apply and money comes into play. ISPs pay for the bandwidth to the ‘next step’ on the network. Your ‘last mile’ ISP buys some amount of traffic from the ‘state-wide operator’ (kind of; it depends heavily on where you live, but the analogy should work anyway) and that’s where the “upload” and “download” traffic starts to play a part. I’m not an expert by any stretch here, so take this with a spoonful of salt, but the traffic inside your ISP’s network, going through their own hardware, doesn’t cost ‘anything’ (electricity for the switches/routers and their maintenance excluded as a cost of doing business), but once you push an additional 10 Gbps to the neighboring ISP it requires resources to manage.

    And that’s (at least here) where the asymmetric connections play a part. Let’s say that you have a 1 Gbps connection to youtube/netflix/whatever. The original source needs to pay their network for the bandwidth for your stream to go through in order to give a decent user experience. But the traffic from your ISP out to the network is far less; a blunt analogy would be that your computer sends a request, ‘show me the latest Mr. Beast video’, and the youtube server says ‘sure, here’s a few gigabits of video’.

    Now, everyone pays for the ‘next step’ connection by the actual amount of data consumed (as their hardware needs to have the capacity to take the load). On your generic home user profile, the amount downloaded (and going through your network) is vastly bigger than the traffic going out of it. That way your last-mile ISP can negotiate with the ‘upstream’ operator to have the capacity to take 10 Gbps in (which is essentially free once the hardware is purchased) while you only send 1 Gbps out, so the ‘upstream’ operator needs a lot less capacity going through their network ‘the other way’.

    So, as the link speed and the amount of traffic are billed separately, it’s way more profitable to offer 1 Gbps down and 100 Mbps up to the home user. This is all of course a gross simplification, and in the real world things are vastly more complex, with caching servers, multiple connections to other networks and so on, but at the end of the day every bit you transfer has a price. If you mostly sink in the data your users want, and what your users push back upstream is significantly less, there’s money to be made in that imbalance, and that’s why your connection might be asymmetric.
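    A toy calculation of that imbalance (all numbers invented for illustration):

```shell
subscribers=1000
down_mbps=1000   # sold downstream per subscriber
up_mbps=100      # sold upstream per subscriber
oversub=50       # assumed oversubscription ratio

# peak transit capacity the last-mile ISP must buy in each direction:
echo "in:  $(( subscribers * down_mbps / oversub )) Mbps"
echo "out: $(( subscribers * up_mbps / oversub )) Mbps"
```

    With symmetric 1 Gbps both ways, the outbound commitment would be ten times larger for the same subscriber base, which is exactly the capacity the ISP avoids paying for.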




  • IsoKiero@sopuli.xyz to Linux@lemmy.ml · 33 years ago...
    25 days ago

    I read Linus’s book several years ago, and based on that flimsy knowledge in the back of my head, I don’t think Linus was really competing with anyone at the time. Hurd was around, but it was (and still is) coming soon™ to widespread use, and things with AT&T and BSD were “a bit” complex at the time.

    BSD obviously brought a ton of stuff to the table which Linux greatly benefited from, and their stance on FOSS shouldn’t go without appreciation, but assuming my history knowledge isn’t too badly flawed, BSD and Linux weren’t direct competitors. They started to gain traction around the same time (regardless of BSD’s much longer history) and they grew stronger together instead of competing with each other.

    A ton of us owe our current corporate lives to the people who built the stepping stones before us, and Linus is no different. Obviously I personally owe Linus a ton for enabling my current status at the office, but the whole thing wouldn’t have been possible without the people who came before him. RMS and the GNU movement play a big part in that, but an equally big part is played by a ton of other people.

    I’m not an expert by any stretch on the history of Linux/Unix, but I’m glad that the people preceding my career did what they did. Covering all the bases on the topic would require a ton more than I can spit out on a platform like this; I’m just happy that we have the FOSS movement at all instead of everything being a walled garden today.


  • IsoKiero@sopuli.xyz to Linux@lemmy.ml · 33 years ago...
    25 days ago

    That kind of depends on how you define FOSS. The way we think of it today was in very early stages back in 1991, and the original source was distributed as free, both as in speech and as in beer, but commercial use was prohibited, so strictly speaking it doesn’t qualify as FOSS (as we understand it today). About a year later Linux was released under the GPL, and the rest is history.

    Public domain code, the academic world sharing source code and things like that predate both Linux and GNU by a few decades, and even the Free Software Foundation came 5-6 years before Linux, but Linux itself has been pretty much as free as it is today from the start. The GPL, GNU, the FSF and all the things Stallman created or was a part of (regardless of his conflicting personality) just created a set of rules on how to play this game, pretty much before any game or rules for it existed.

    Minix was a commercial thing from the start, Linux wasn’t, and things just got refined along the way. You are of course correct that the first release of Linux wasn’t strictly speaking FOSS, but the whole ‘FOSS’ mentality and its rules weren’t really a thing back then either.

    There’s of course an academic debate to be had for days on which came first, which rules whoever obeyed and which release counts as FOSS or not, but for all intents and purposes, Linux was free software from the start and the competition was not.



  • Linux, so even benchmarking software is near impossible unless you’re writing software which is able to leverage the specific unique features of Linux which make it more optimized.

    True. I have no doubt that you could set up a Linux system to calculate pi to 10 million digits (or something similar) more power-efficiently than a Windows-based system, but that would include compiling your own kernel leaving out everything unnecessary for that particular system, shutting down a ton of daemons commonly run on a typical desktop, and so on, wasting far more power on the testing than you could ever save. And it might not even be faster, just less power hungry. But no matter what, that would be far, far away from any real-world scenario and instead a competition to build hardware and software to do that very specific thing with as little power as possible.


  • An interesting thought indeed, but I highly doubt the difference is anything you could measure, and there are a ton of contributing factors, like what kind of services are running on a given host. So, in order to get a reasonable comparison you should run multiple different programs with pretty much identical usage patterns on both operating systems to get any kind of comparable results.

    Also, hardware support plays a big part. A laptop with dual GPUs and “perfect” driver support on Windows would absolutely wipe the floor with a Linux that couldn’t switch GPUs on the fly (I don’t know how well that scenario is supported on Linux today). Same with multi-core CPUs and their efficient usage, but there I think the operating system plays a much smaller role.

    However, changes in hardware, like ARM CPUs, would make a huge difference globally, and at least traditionally that’s the part where Linux shines on compatibility, and why Macs run on batteries for longer. But in reality, if we could squeeze more out of our CPU cycles globally to do stuff more efficiently, we’d just throw more stuff at them and still consume more power.

    Back when cellphones (and other rechargeable things) became mainstream, their chargers were so inefficient that unplugging them actually made sense, but today our USB bricks consume next to nothing when they’re idle, so it doesn’t really matter.


  • I can weigh in with my small experience with their hardware. Back in the day we used quite a lot of it for VPN clients, firewalls and things like that in small-ish offices at work, and I’ve been running my router for five-ish years without any hiccups, with one SFP port and 8x 1 Gbps copper, on a 1/1 Gbps upstream (through SFP).

    I still have a bunch of old hardware gathering dust in a bin from when we ran them at the office (around 2010-2014, give or take a few years) and all of them still work. Granted, an old 100 Mbps router isn’t that useful today, but I still occasionally use them in my homelab for testing/verifying my ideas.

    My current home office reaches around 30°C in the summer, but that hasn’t been an issue at all. And their pricing is pretty decent. The unit I have isn’t available anymore, but the vendor claimed it can push up to 7.5 Gbps through, and the price was something around 120€.

    That being said, I don’t have that much experience with them (only a handful of models, and none of them were pushed too hard), but personally I’d pick anything from Mikrotik over Zyxel/D-Link/TP-Link.


  • I haven’t paid too much attention to what Lenovo is doing lately, but at some point they brought L-series ThinkPad-branded laptops to the market which were pretty much garbage. At least here, local stores sold the first L-series models as ‘thinkpad-grade laptops at consumer pricing’, and they were just bad on all fronts; the L-series was just Lenovo competing with the a*-brands for their share of the sub-300€ (or whatever it was at the time) laptops from your equivalent of Walmart, riding on a brand they didn’t build.

    Thankfully that died out pretty soon, and the Think* brand is still somewhat as strong with the T/W/X models as it used to be when IBM ran the business. Of course they’ve had their own issues too. USB-C docks were garbage from every vendor when they started to appear, and people at the office still curse at ThinkPads for various firmware/hardware/whatever issues, but in my experience it’s been the same road for all the big players: Dell had pretty decent sales/support going around 2010(ish), but their hardware had plenty of problems; HP had pretty good pricing for their hardware a bit later, but massive issues with firmware; and so on.

    I’ve been pretty happy with the ThinkPads I’ve had since the R50 (bought brand new, if I recall correctly), and they’ve been available on the second-hand market here ever since. But that’s just personal experience; I’ve never been in charge of buying hundreds of anything for an IT department at work.



  • Lenovo makes consumer crap under their own brand, and they have the Think line of products from Big Blue; the latter is pretty much comparable to all the other big players (Dell, HP, Fujitsu…) in the desktop/laptop market. Each has their own annoyances and fuckups, and in general if you ask three IT professionals which brand to buy, you’ll get 4-6 answers.

    Personally, if I’m looking for a laptop I’ll go for a pre-leased, refurbished ThinkPad. I currently have a T465, and for my wife I got a pretty decent T-something from the office for peanuts.


  • I assume you don’t intend to copy the files but to use them from a remote host? As security is a concern, I suppose we’re talking about traffic over a public network, where (if I’m not mistaken) Kerberos with NFS provides only authentication, not encryption. You can obviously tunnel NFS over SSH or a VPN, and I’m pretty sure you can create a Kerberos ticket which stores credentials locally for longer periods of time and/or reads them from a file.

    SSH/VPN obviously causes some overhead, but they also provide encryption over the public network. If this is something run on a LAN, I wouldn’t worry too much about encrypting the traffic, and on my own network I wouldn’t worry too much about authentication either. Maybe separate the NFS server onto its own VLAN, or firewall it heavily.
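    For the SSH route, a sketch of the tunnel (hostnames and paths are made up; note that the server’s export typically needs the ‘insecure’ option, since the tunneled connection reaches the server from an unprivileged port):

```shell
# forward a local port to the NFS port on the server
# (NFSv4 needs only 2049/tcp, so a single forward is enough):
ssh -f -N -L 2049:127.0.0.1:2049 user@nfs-server.example.com

# then mount through the tunnel:
sudo mount -t nfs4 -o port=2049,proto=tcp 127.0.0.1:/export /mnt/secure
```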


  • I don’t think there exists a proper alternative even in the commercial sector.

    There is a handful of vendors, and they indeed monitor a ton more than just viruses. The solution we’re running at the office monitors pretty much all kinds of logs (DNS, DHCP, authentication, network traffic…) and it can lock down clients which misbehave badly enough. For example, every time I change a hosts file (for a legitimate reason) on my own laptop, I get a question from the security team asking whether that was intended. And it combines logs/data gathered from different systems to identify potential threats and problematic hosts, which is why our fleet feeds in data from all kinds of devices.

    I haven’t seen that many different solutions which do this, but the few I’ve worked with are a bit hit or miss with Linux. The current solution has a funny feature where it breaks dpkg if the server doesn’t have certain things installed (which are not dependencies of the package itself). And they eat up a pretty decent chunk of CPU cycles and RAM while running. But apparently someone has done the math and decided that it’s worth the additional capacity; it’s above my pay grade, so I just install whatever I’m told to.