• 0 Posts
  • 17 Comments
Joined 1 year ago
Cake day: July 18th, 2023


  • Not really. The problem with FOSS licensing is that it was too altruistic, built on the belief that if enough users and corporations depended on the code, the community would collectively do the work necessary to maintain the project. Instead, capitalism chose to exploit FOSS as free labor, mostly without any reciprocal investment. Corporations raise an enormous number of issues and consume a large amount of FOSS developer time, without paying their own staff to fix the bugs they need resolved in the software their products depend on. At that point the FOSS developer is no longer a FOSS developer; they're the unpaid slave labor of a corporation. Sure, FOSS devs could just ignore external inputs, but that's not easy to do when you've invested years of your life in a project. Exploiting kindness may be legal, but it should never be justified or tolerated.

    Sure, FOSS licenses legally permit that kind of use, but just because homeless shelters allow anyone to eat their food and sleep in their beds, that doesn't make the rich man who exploits that charity ethically or morally justified. A rich man who takes that charity (i.e. free labor) and offers nothing in return is a scummy dog cunt; there are no two ways about it. The presence of lecherous parasites can destroy the entire charity; they can mean the difference between sustainability and burnout.

    FOSS should always be free for all personal, free, and non-profit use, but once someone in the chain starts depending on FOSS to generate income and profit, some of that profit should always be reinvested in those dependencies. That's the lesson FOSS is now learning: to reject the exploitation and greed of lecherous parasites.

  • WhatAmLemmy@lemmy.world to Asklemmy@lemmy.ml · Search engines down?
    edited 4 months ago

    I was thinking about this and imagined the federated servers handling the index db, the search algorithms, and search requests, while leveraging each user's browser/compute to do the actual web crawling/scraping/indexing; the server simply performs CRUD operations to move the processed data from clients into the index db. This approach would target the core reason search engines fail (the cost of scraping and processing billions of sites), reduce the cost of hosting a search server, and spread the expense across the user base.
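
    A minimal sketch of what the client-side piece might look like, assuming a hypothetical /api/index endpoint on the federated server (all names here are illustrative, not a real API):

    ```typescript
    // Hypothetical client-side worker: fetch an assigned page, reduce it to a
    // term-frequency map, and hand the processed result to the federated
    // server, which only has to upsert it into the index db.
    async function crawlAndIndex(url: string, server: string): Promise<void> {
      const html = await (await fetch(url)).text();

      // Strip tags and tokenize; a real crawler would also extract links,
      // respect robots.txt, and deduplicate content.
      const text = html
        .replace(/<script[\s\S]*?<\/script>/gi, " ")
        .replace(/<[^>]+>/g, " ");
      const terms = new Map<string, number>();
      for (const word of text.toLowerCase().split(/\W+/)) {
        if (word.length > 2) terms.set(word, (terms.get(word) ?? 0) + 1);
      }

      // The server never fetches the page itself; it just receives the
      // already-processed term vector (a plain CRUD write, not a crawl).
      await fetch(`${server}/api/index`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ url, terms: Object.fromEntries(terms) }),
      });
    }
    ```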

    It may also have the added benefit of hindering surveillance capitalism, thanks to the sea of junk requests coming from every client, especially if the crawler requests were made from the same browser (which would obviously need to be isolated from the user's own data, extensions, queries, etc.). The federated servers would also probably need to operate as lighthouses that orchestrate which domains and IP ranges to crawl, and efficiently distribute the workload across client machines.
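
    And a rough sketch of how a lighthouse might hand out work, again with made-up endpoints (crawlAndIndex is the client worker sketched above):

    ```typescript
    // Hypothetical client work loop: pull a batch of URLs from the lighthouse,
    // crawl them in an isolated context (no cookies, extensions, or user
    // history), then report the batch complete so the server can track
    // coverage of domains and IP ranges.
    interface CrawlAssignment {
      batchId: string;
      urls: string[];
    }

    async function workLoop(server: string): Promise<void> {
      while (true) {
        const assignment: CrawlAssignment =
          await (await fetch(`${server}/api/assignments/next`)).json();

        // Crawl the batch concurrently; allSettled so one dead site
        // doesn't abort the whole assignment.
        await Promise.allSettled(
          assignment.urls.map((url) => crawlAndIndex(url, server)),
        );

        await fetch(`${server}/api/assignments/${assignment.batchId}/complete`, {
          method: "POST",
        });
      }
    }
    ```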


  • WhatAmLemmy@lemmy.world to Linux@lemmy.ml · *Permanently Deleted*
    edited 4 months ago

    Should … Should we tell OP that nobody understands all of any moderately large codebase, especially the sub-dependencies … or that even the thousands of developers who wrote most of that code don’t understand how their own code works anymore?

    I could read the same book every year and still wouldn't remember most of the minor events on my deathbed. That doesn't mean I won't remember the key components that make up the story. Coding is like that, except the minor events and key components can be rewritten or removed by someone else between readings.


  • a) why the fuck would they go to that effort for a filthy commoner like yourself, and b) what are the chances that 0.01% of recoverable data contains anything useful!?!

    Nobody is gonna bother doing advanced forensics on 2nd-hand storage, digging into megabytes of reallocated sectors on the off chance they find something financially exploitable. That's a level of paranoia no real-world data supports.

    My example applies to storage devices which don't default to encryption (most non-OS external storage). It's analogous to changing your existing encrypted disk's password to a random-ass unrecoverable throwaway.


  • For all average user requirements that just involve backups, PII docs, your sex vids, etc. (i.e. not someone who could be persecuted, prosecuted, or murdered for their data), your best bet other than physical destruction is to encrypt every usable bit on the drive.

    1. Download VeraCrypt.
    2. Format the SSD as exFAT.
    3. Create a new VeraCrypt volume on the mounted exFAT partition that uses 100% of the available space (any format).
    4. Open a notepad and type out a long random-ass throwaway password, e.g. $-963,;@82??/@;!3?$.&$-,fysnvefeianbsTak62064$@/lsjgegelwidvwggagabanskhbwugVg, copy it, then close/delete it without saving (or generate one with a script; see the sketch at the end of this comment).
    5. Paste that password for the new VeraCrypt volume and follow the prompts until it starts encrypting your SSD. It'll take a while, as it encrypts every available bit one by one.

    Even if VeraCrypt hits a free-space error at the end of the task, the job is done. Maybe not 100%, but 99.99+% of the space on the SSD is overwritten with indecipherable gibberish. Maybe advanced forensics could recover some bits, but a) why the fuck would they go to that effort for a filthy commoner like yourself, and b) what are the chances that 0.01% of recoverable data contains anything useful!?! You don't really need to bother destroying the header encryption key (as Apple and Android products do when you wipe a device), because you don't know the password and there isn't a chance in hell that you or anyone else is gonna guess it, nor brute force it.
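
    If you'd rather not trust keyboard mashing for step 4, here's a sketch of generating the throwaway programmatically (runs in any modern browser console; the function name is mine):

    ```typescript
    // Generate a long random password from a CSPRNG, so it never touches a
    // notepad at all. ~6.4 bits of entropy per character; 64 chars is far
    // beyond anything brute-forceable.
    function throwawayPassword(length = 64): string {
      const charset =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz" +
        "0123456789!@#$%^&*()-_=+[]{};:,.?/";
      const bytes = new Uint8Array(length);
      crypto.getRandomValues(bytes); // cryptographically secure randomness
      // Note: the modulo introduces a tiny bias, which is irrelevant here
      // since the password is thrown away and never has to be guessed.
      return Array.from(bytes, (b) => charset[b % charset.length]).join("");
    }
    ```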