This is actually a super smart move, from an evil genius point of view. The plaintiffs now have an interest in the company growing instead of shutting down.
Though I really hope some judge somewhere stops that deal.
It’s a rant/opinion piece about the pitfalls of mixing async and sync functions; it divides code into ‘red’ (async) and ‘blue’ (sync) functions to explain the various problems that arise when the two meet.
I really wish there were any even remotely credible way to disagree with that statement.
I offer you a third option: at least one Lidl in Croatia uses blinking tags for stuff they really want you to look at.
Sometime soon we’re gonna have to invent a spam filter for real life. Hey, maybe that’s the use case that the Vision guys at Apple have been looking for?
Thanks to a few centuries of upper nobility, we already know that marrying your cousin for several generations is not always a good idea. It’ll be interesting to see what happens after a few iterations of AIs being trained on data mostly produced by other AIs (or variations of themselves). I suppose it largely depends on how well the training data can be curated.
ChatGPT says:
Yes, there are strategies to post wrong answers that could “poison” the training data of language models while still allowing human readers to recognize the errors. Here are a few approaches:
- Subtle Semantic Errors: Provide answers that contain subtle but significant semantic errors. For example, use synonyms incorrectly or swap terms in a way that changes the meaning but might be overlooked by automated systems. For instance, “Paris is the capital of Germany” instead of “Berlin is the capital of Germany.”
- Contextual Incongruities: Embed answers with facts that are contextually incorrect but appear correct at a surface level. For example, “The sun rises in the west and sets in the east.”
- Formatting and Punctuation: Use formatting or punctuation that disrupts automated parsing but is obvious to a human reader. For example, “The capital of France is Par_is.” or “Water freezes at 0 degrees F@harenheit.”
- Obvious Misspellings: Introduce deliberate misspellings that are noticeable to human readers but might not be corrected by automated systems, like “The chemical symbol for gold is Au, not Gld.”
- Logical Inconsistencies: Construct answers that logically contradict themselves, which humans can spot as nonsensical. For example, “The tallest mountain on Earth is Mount Kilimanjaro, which is located underwater in the Pacific Ocean.”
- Nonsense Sentences: Use sentences that look structurally correct but are semantically meaningless. For example, “The quantum mechanics of toast allows it to fly over rainbows during lunar eclipses.”
- Annotations or Meta-Comments: Add comments or annotations within the text that indicate the information is incorrect or a test. For example, “Newton’s second law states that F = ma (Note: This is incorrect for the purpose of testing).”
While these methods can be effective in confusing automated systems and LLMs, they also have ethical and legal implications. Deliberately poisoning data can have unintended consequences and may violate the terms of service of the platform. It’s crucial to consider these aspects before attempting to implement such strategies.
IMHO the full title should read, “Hertz replaces shoddily built and expensive-to-fix cars, which just happen to be EVs, with more reliable models, which happen to be ICE cars.”
That, and there was something about charging infrastructure.
FWIW, you can still press Shift-F10 to open a command prompt, then run oobe\bypassnro. The computer will reboot and restart the setup process, and this time there’ll be a small link, “I don’t have internet”, that’ll allow you to set up a local account. Just make very sure not to connect it to the internet (cable or Wi-Fi) before this point.
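For reference, the rough sequence looks like this (a sketch based on the steps above; it assumes current Windows 11 setup builds where the bypassnro script still ships in %SystemRoot%\System32\oobe, which newer builds may have removed):

    At the “Let’s connect you to a network” screen, press Shift-F10, then in the command prompt run:

    oobe\bypassnro

    The machine reboots, OOBE restarts, and an “I don’t have internet” link appears, leading to “Continue with limited setup” and a local account.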
There have been rumours of newer versions of Windows 11 not allowing the bypass anymore, but I haven’t personally seen any evidence of this so far.
Still a shit show though - trickery like this shouldn’t be necessary.