  • really affects performance that much

    Depending on the exact flags, some workloads will be faster, some will be identical, and some will be slower. Compiler optimization is dark magic that depends on a ton of factors: you can’t just assume that going from -O2 to -O3 will improve performance, because what the optimizer actually does depends on the underlying code. That’s why, for the most part, everyone suggests you stop at -O2; the further up the curve you go, the more likely you are to hit unexpected behavior.

    And we’re talking low single-digit performance improvements at best; unless you’re running benchmarks 24/7, you’d never notice the difference in real-world use.

    Disclaimer: some workloads will show different performance uplifts, but we’re talking Firefox, KDE, and games here, per the OP’s comments.

    Also, they do default to a different scheduler, which is almost certainly why anyone using it will notice it feels “faster”, but it’s mainlined in the kernel, so it’s not like you can’t use it anywhere else.
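
    If you want to see it for yourself, the test is simple (gcc shown, clang takes the same flags; app.c is just a stand-in for whatever you’re actually building):

        gcc -O2 -o app-o2 app.c
        gcc -O3 -o app-o3 app.c
        time ./app-o2
        time ./app-o3

    Whether -O3 wins, loses, or ties depends entirely on what’s in app.c, which is the whole point.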

  • two commands: dd and resize2fs, assuming you’re using ext4 and not something more exotic.

    one makes a block-level copy of one device to another like so: dd if=/dev/source-drive of=/dev/destination-drive

    the other is used to resize the filesystem from whatever size it was to whatever size you tell it (or the whole disk; I’d have to go read a manpage since it’s been a bit)

    the dd is completely safe. the resize2fs command can break things, but you’d still have the data on the original drive, so you could always start over if it does - i’d unplug the source drive before you start doing any expansion stuff.
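
    rough sketch of the whole thing, partition-to-partition (sdX1/sdY1 are placeholders - triple-check them with lsblk first, because dd will cheerfully overwrite whatever you point it at):

        sudo dd if=/dev/sdX1 of=/dev/sdY1 bs=4M status=progress
        sudo e2fsck -f /dev/sdY1     # resize2fs wants a clean fsck first
        sudo resize2fs /dev/sdY1     # with no size given, it grows to fill the partition

    if you clone the whole disk instead, the partition table comes along at its old size, so you’d need to grow the partition (fdisk, parted, whatever) before resize2fs has room to expand.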

  • sudo smartctl -a /dev/yourssd

    You’re looking for the Media_Wearout_Indicator, which is a percentage that starts at 100% and counts down to 0%, with 0% meaning no more spare sectors are available and the drive is thus “failed”. A very important note, though: a drive at 0% won’t always result in data loss.

    Unless you have the shittiest SSD I’ve ever heard of or seen, it’ll almost certainly just go read-only, and all your data will still be there; you just won’t be able to write anything more to the drive.

    Also, you’ll probably be interested in the Total_LBAs_Written attribute, which can (usually) be converted to gigabytes and tells you how much data has been written to the drive.
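
    If you just want those two attributes, grep gets you there (attribute names vary by vendor, so yours may be spelled slightly differently; the 512-byte sector size below is the usual convention for this attribute, not a guarantee):

        sudo smartctl -a /dev/yourssd | grep -E 'Media_Wearout_Indicator|Total_LBAs_Written'
        # TB written ≈ Total_LBAs_Written × 512 / 10^12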


  • As a FunFact™, you’re more likely to have the SSD controller die than the flash wear out at this point.

    Even really cheap SSDs are rated for hundreds and hundreds of TB written these days, and on a normal consumer workload that’s years and years of expected lifespan.

    Even the cheap SSDs in my home server have been fine: they’re pushing 5 years on this specific build, with about 200 TBW on the drives, and they’re still claiming 90% life left.

    At that rate, I’ll be dead well before those drives fail, lol.
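
    For the curious, the rough math on that claim (wear isn’t perfectly linear, so treat this as a ballpark):

        200 TBW / 5 years ≈ 40 TB/year
        200 TBW ≈ 10% of life used → ~2,000 TBW implied total endurance
        (2,000 - 200) TBW / 40 TB/year ≈ 45 more years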