• 0 Posts
  • 199 Comments
Joined 2 years ago
Cake day: June 4th, 2023


  • I’m running into this problem in a little web app I wrote for myself. If the tooltip text is selectable and you try to select the hoverable text, it’ll sometimes also select the tooltip text. It’s annoying when you’re trying to copy something. Just not annoying enough to fix yet.

    So you have to choose which one is more valuable to make available to the user. I think relative time is more useful, since I’m more concerned with how recently something was posted, and I don’t want to math it out in my head every time.
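
    For what it’s worth, the usual compromise is to show the relative time in the UI and keep the absolute timestamp in the tooltip. A minimal sketch of the relative-time side in Python (the function name and unit cutoffs are just illustrative, not from any particular library):

    ```python
    def relative_time(seconds_ago: int) -> str:
        """Render an age in seconds as a human-friendly relative time."""
        units = [("year", 31536000), ("month", 2592000), ("day", 86400),
                 ("hour", 3600), ("minute", 60), ("second", 1)]
        for name, size in units:
            if seconds_ago >= size:
                n = seconds_ago // size
                # Pluralize only when the count isn't exactly 1.
                return f"{n} {name}{'s' if n != 1 else ''} ago"
        return "just now"

    print(relative_time(7200))  # → 2 hours ago
    ```

    The absolute timestamp can then go in the element’s hover text, so copying the visible text never drags the tooltip along with it.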



  • I understand the concern, but I don’t think you’re asking the right question. I would consider goldfish to be sentient, but I’m not afraid of goldfish. I don’t consider the giant robotic arms used in manufacturing to be sentient, yet I wouldn’t feel safe going anywhere near them while they’re powered on. What you should be concerned about is alignment, which is the term used to describe how closely an AI agent’s goals match up with those of humans. And you should also be concerned about other humans: even if the AI shares its operators’ goals, you still want to make sure the humans it’s aligned with aren’t malevolent.

    Is sentient AI a “goal” that any researchers are currently working toward?

    It’s possible that someone out there is trying to do it, but in academic settings, if you even hint at sentience, you’re going to get laughed out of the room.





  • Consider this example:
    You have a road that forks and joins up again. You need to reach the end of this road, and you have a vehicle that takes you there without your input. At the fork, it flips a coin and takes either the left fork or the right fork depending on the result. This agent is therefore stochastic. But no matter which it chooses, it ends up at the same place at the same time. Do you consider this to be automation?
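
    A toy sketch of that vehicle in Python (the names are made up for illustration): the coin flip makes the agent stochastic, but because both forks rejoin, the outcome is identical on every run.

    ```python
    import random

    def drive(seed=None):
        """Drive the forked road: a coin flip picks the branch,
        but both branches rejoin, so the destination is fixed."""
        rng = random.Random(seed)
        fork = rng.choice(["left fork", "right fork"])  # the stochastic step
        path = ["start", fork, "junction", "end"]
        return path[-1]  # always "end", regardless of the coin

    # Different seeds give different coin flips, same destination:
    assert all(drive(s) == "end" for s in range(10))
    ```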