• 0 Posts
  • 22 Comments
Joined 1 year ago
Cake day: June 30th, 2023


  • The output for a given input cannot be independently calculated as far as I know, particularly when random seeds are part of the input.

    The system gives a probability distribution for the next word based on the prompt, and that distribution will always be the same for a given input. That meets the definition of deterministic. You might choose to add non-deterministic rng to the input or output, but that would be a choice, not something inherent to how LLMs work; random ‘seeds’ are normally used precisely to make rng deterministically repeatable. I’m not sure what you mean by “independently” calculated. You can calculate the output if you have the model weights, and you likely can’t if you don’t, but that doesn’t affect whether it is deterministic.
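    The point above can be sketched in a few lines. This is a toy stand-in, not a real model: `next_token_distribution` is a hypothetical forward pass that hashes the prompt into a fixed distribution, standing in for the deterministic mapping from input to next-token probabilities, and `sample` shows that even the sampling step is repeatable once the seed is fixed.

```python
import hashlib
import random

VOCAB = ["yes", "no", "maybe"]

def next_token_distribution(prompt: str) -> list[float]:
    """Toy stand-in for a model forward pass: same prompt -> same distribution."""
    h = hashlib.sha256(prompt.encode()).digest()
    weights = [b + 1 for b in h[:len(VOCAB)]]  # derive positive weights from the hash
    total = sum(weights)
    return [w / total for w in weights]

def sample(prompt: str, seed: int) -> str:
    """Seeded sampling: fully repeatable for a given (prompt, seed) pair."""
    probs = next_token_distribution(prompt)
    rng = random.Random(seed)  # the only randomness, and it is seeded
    return rng.choices(VOCAB, weights=probs, k=1)[0]

# The distribution itself never varies for a given input:
assert next_token_distribution("hi") == next_token_distribution("hi")
# And with a seeded rng, the sampled output is repeatable too:
assert sample("hi", seed=42) == sample("hi", seed=42)
```

    Swapping `random.Random(seed)` for an unseeded source is exactly the optional non-deterministic layer described above; the distribution underneath stays fixed either way.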

    The “so what” is that trying to prevent certain outputs based on moral judgements isn’t possible. It wouldn’t really be possible even if you could get in there with code and change things, unless you could write code for morality; it’s doubly impossible given that you can’t.

    The impossibility of defining morality in precise terms, or even coming to an agreement on what correct moral judgment even is, obviously doesn’t preclude all potentially useful efforts to apply it. For instance since there is a general consensus that people being electrocuted is bad, electrical cables normally are made with their conductive parts encased in non-conductive material, a practice that is successful in reducing how often people get electrocuted. Why would that sort of thing be uniquely impossible for LLMs? Just because they are logic processing systems that are more grown than engineered? Because they are sort of anthropomorphic but aren’t really people? The reasoning doesn’t follow. What people are complaining about here is that AI companies are not making these efforts a priority, and it’s a valid complaint because it isn’t the case that these systems are going to be the same amount of dangerous no matter how they are made or used.




  • If you have enough other investments to be comfortable, don’t especially want to change your retirement timeline, etc., and your wife is fine with it, I’d keep it as a potential hedge against a depression that crashes the value of index funds. I would not split it between whichever small crypto projects can sell you a convincing narrative that they have ‘moon potential’, when your financial circumstances mean you don’t really need that anyway and that is specifically what would open you up to the ‘scamminess’ of crypto.


  • There has been significant growth of crypto as currency, particularly in the developing world with the use of USD-pegged stablecoins. It remains the only practical solution for making online transactions privately, or when alternatives have been censored, potential pitfalls notwithstanding.

    Centralized control is a threat, but it’s one that is taken seriously, and by practical metrics crypto has been largely successful in defending its integrity here. Other related value measures worth looking at are credible neutrality, permissionlessness, and trustlessness, also areas where it continues to succeed. If you submit a valid transaction to a major blockchain, it’s getting included, even if powerful people would rather it wasn’t. Transactions that are illegal as per US sanctions are treated more or less equally to any other. Miners and stakers are not taking control of the money printer dial for their own enrichment. And there’s reason to think this will continue, because in a lot of ways control is a liability, and giving it up is valuable: “CEO of Bitcoin” is not a sane title to aspire to, because it would make you the responsible party and a valid target for all sorts of legal threats and obligations, and just holding it would destroy the value of what you control.

    That said, as for the economic salvation of the median person that OP seems to be talking about, it was never going to do that on its own; no one who thinks honestly about it would promise that, and anyone who did is full of shit. It’s just a new type of p2p money with some cool properties, and that obviously isn’t going to be enough to fix the mess that is the world’s economic and political systems.




  • The person who predicted a 70% chance of AI doom is Daniel Kokotajlo, who quit OpenAI because it was not taking this seriously enough. The quote you have there is a statement by OpenAI, not by Kokotajlo; this is all explicit in the article. The idea that this guy is motivated by trying to do marketing for OpenAI is just wrong: the article links to some of his extensive commentary, where he is advocating for more government oversight specifically of OpenAI and other big companies, instead of the favorable regulations that company is pushing for. The idea that his belief in existential risk is disingenuous also doesn’t make sense; it’s clear that he and other people concerned about this take it very seriously.