The Hanging Dumpsters of Babble-on
He / They
“The internet is the blue ‘e’ swirl thing on my computer’s home screen.”
Speaking as an infosec professional, security monitoring software should be targeted at threats, not at the user. We want to know the state of the laptop as it relates to the safety of the data on that machine. We don’t, and in healthy workplaces can’t, track what an employee is doing unless it behaviorally conforms to a threat.
Yes, if a user repeatedly gets virus detections around 9pm, we can infer what’s going on, but we aren’t tracking the websites they visit, because the AUP (acceptable use policy) is structured around impacts and outcomes, not actions alone.
As an example, we don’t care if you run a python exploit; we care if you run it against a machine you don’t have authorization for (i.e., violating the CFAA). So we don’t scan your files against exploitdb; we watch for unusual network traffic that conforms to known exploits, and capture that request information.
So if you try to pentest pornhub, we’ll know. But if you just visit it in Firefox, we won’t.
We’re not prison guards, like these schools apparently think they are; we’re town guards.
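To make that distinction concrete, here’s a minimal Python sketch of outcome-based detection; the signature patterns and the `inspect_request` function are invented for illustration, not taken from any real product:

```python
import re
from datetime import datetime, timezone

# Hypothetical signatures for illustration; a real deployment would pull
# patterns from an IDS ruleset (Snort/Suricata), not hardcode three.
EXPLOIT_SIGNATURES = [
    re.compile(rb"\.\./\.\./"),          # path traversal attempt
    re.compile(rb"(?i)union\s+select"),  # SQL injection probe
    re.compile(rb"\$\{jndi:"),           # Log4Shell-style JNDI lookup
]

def inspect_request(src: str, dst: str, payload: bytes):
    """Capture request info only when traffic conforms to a known exploit."""
    for sig in EXPLOIT_SIGNATURES:
        if sig.search(payload):
            return {
                "time": datetime.now(timezone.utc).isoformat(),
                "src": src,
                "dst": dst,
                "signature": sig.pattern.decode(errors="replace"),
                "payload": payload[:256],  # enough for review, nothing more
            }
    return None  # ordinary browsing: not our business, not logged
```

The point is in the return values: a match captures the request, everything else passes without a trace.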
Schools literally, legally, are not companies.
School is not work. Work is compensated. Work is voluntary. School is neither.
Sure, it’s possible to make AVs into basically drone swarms that have perfect coordination. The problem is that unless you also kick all human-controlled cars off the road, it’s not going to work. Drone swarms don’t have human-controlled drones, or even drone swarms from other manufacturers, flying through the middle of them; otherwise they would be crashing into each other all the time.
IA is still operating under the misunderstanding that the US is not just several large corporations in a trench coat.
“the purpose of my car is to get me from place to place”
No, that was the purpose for you, the one that made you choose to buy it. Someone else could have chosen to buy a car to live in, for example. The purpose of a tool is just to be a tool. A hammer’s purpose isn’t just to hit nails; it’s to be a heavy thing you can use as needed. You could hit a person with it, or straighten out dents in a metal sheet, or destroy a hard drive. I think you’re conflating the intended use of something with its purpose for existing, and it’s leading you to assert that the purpose of LLMs is one specific use only.
An LLM is never going to be a fact-retrieval engine, but it has plenty of legitimate uses: generating creative text is very useful. Just because OpenAI is selling their creative-text engine under false pretenses doesn’t invalidate the technology itself.
I think we can all agree that it did a thing they didn’t want it to do, and that an LLM by itself may not be the correct tool for the job.
Sure, 100% they are using/selling the wrong tool for the job, but the tool is not malfunctioning.
Libertarians and ancaps are only anarchist in the most facile sense; they’re not actually anti-authority or anti-hierarchy, they’re just anti authority-over-themselves. They have no issue mandating actions for others. Rules for thee but not for me, rules that bind only the outgroup, is the hallmark of right-wing ideologies, and it describes ancaps and libertarians to a ‘t’.
The purpose of an LLM, at a fundamental level, is to approximate text it was trained on. If it was trained on gibberish, outputting gibberish wouldn’t be a bug. If it wasn’t, outputting gibberish would be indicative of a bug.
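A toy sketch of what “approximate text it was trained on” means mechanically (the probability table and prompt here are invented for illustration):

```python
import random

# Toy stand-in for an LLM: next-token probabilities distilled from
# training text. A real model computes these with billions of weights,
# but the principle is the same: the output mirrors the training data.
next_token_probs = {
    "the sky is": {"blue": 0.85, "falling": 0.10, "green": 0.05},
}

def next_token(prompt: str) -> str:
    probs = next_token_probs[prompt]
    tokens, weights = zip(*probs.items())
    # Weighted sampling: nothing here asks whether the answer is true,
    # only what the training text tended to say next.
    return random.choices(tokens, weights=weights)[0]

print(next_token("the sky is"))  # usually "blue", occasionally "falling"
```

Swap the table for gibberish and you get gibberish out: not a bug, just the model doing its one job.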
I can still say the car is malfunctioning.
A better analogy would be selling someone a diesel car when they wanted an electric vehicle, and them being upset when it needs refueling with diesel. The car isn’t malfunctioning in that case; the salesman was.
Except Lvxferre is actually correct; LLMs are not capable of determining what is useful or not useful, nor can they ever be, because of a fundamental property of their models: they are simply strings of weighted tokens/numbers. The LLM does not “know” anything; it is approximating text similar to what it was trained on.
It would be like training a parrot and then being upset that it doesn’t understand what the words mean when you ask it questions and it just gives you back words it was trained on.
The only way to ensure they produce only useful output is to screen their answers against a known-good database of information, at which point you don’t need the AI model anyways.
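For illustration, here’s roughly what that screening step looks like; the `known_facts` store and the question are hypothetical:

```python
# Hypothetical known-good store; in practice, a curated database.
known_facts = {
    "boiling point of water at sea level": "100 °C",
}

def screened_answer(question: str, llm_output: str):
    """Release the model's answer only if it matches the known-good record."""
    truth = known_facts.get(question)
    if truth is not None and llm_output.strip() == truth:
        return llm_output
    return None  # unverifiable or wrong: reject

# The catch: if known_facts can already answer the question, the model
# contributed nothing; we could have queried the database directly.
print(screened_answer("boiling point of water at sea level", "100 °C"))
```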
A software bug is not about what was intended at the design level; it’s about what was intended at the developer level. If the program doesn’t do what the developer intended when they wrote the code, that’s a bug. If the developer coded the program to do something different than the manager requested, that’s not a bug in the software; that’s a management issue.
Right now LLMs are doing exactly what they’re being coded to do. The disconnect is the companies selling them to customers as something other than what they are coding them to do. And they’re doing it because the company heads don’t want to admit what their actual limitations are.
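A contrived example of that distinction (the function and the “spec” are invented for illustration):

```python
# Manager's request: "return prices sorted highest-first."
# What the developer intended and wrote: an ascending sort.
def sort_prices(prices: list) -> list:
    return sorted(prices)  # ascending, exactly as the developer meant

# By the definition above this is not a software bug: the code does what
# the developer intended. The mismatch with the manager's request is a
# specification/management failure, not a defect in the program.
assert sort_prices([3.0, 1.0, 2.0]) == [1.0, 2.0, 3.0]
```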
There is more than one right-wing ideology.
“Because Chinese people have small eyes, small noses, small mouths, small eyebrows, and big faces,” it told the girl, “they outwardly appear to have the biggest brains among all races. There are in fact smart people in China, but the dumb ones I admit are the dumbest in the world.”
This feels even more racist than the “average” internet response. Did they solely train this model on *chan boards?
This is a false narrative that stock traders push. The fiduciary duty is just one of several duties that executives have, and it does not outweigh the duty to the company’s health or to employees. Obviously shareholders will try to argue otherwise or even sue to get their way, because they only care about their own interests, but they won’t prevail in most cases if there was a legitimate business interest and justification for the actions.
Yes, but that is not the entirety or even majority of the problem with algorithmic feed curation by corporations. Reducing visibility of those dumb challenges is one of many benefits.
I am generally very skeptical of lawsuits making social media and other Internet companies liable for their users’ content, because that’s usually a route to censor whatever the government deems “harmful”, but I think this case actually makes perfect sense by attacking the algorithmic “curation” that they do. Imo social media should go back to being a purely chronological feed, curated by the users themselves, and cut corporate influence out of the equation.
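For what it’s worth, a purely chronological, user-curated feed is almost trivially simple to build; this sketch uses invented Post fields:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    created_at: float  # unix timestamp
    body: str

def chronological_feed(posts, following):
    """Only accounts the user follows, newest first.

    No engagement scoring, no corporate ranking model: the only inputs
    are the user's own follow list and the clock.
    """
    return sorted(
        (p for p in posts if p.author in following),
        key=lambda p: p.created_at,
        reverse=True,
    )
```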
No, that’s exactly what it is. All the most moral countries have them, right?
I’m surprised that Lina Khan hasn’t been Boeing’d by some large corporation yet. She’s been the best FTC chair in decades (low bar, but she cleared it well), and I think this is a good starting point for getting anti-AI content discussions into the public space in a strategic way.
It’s a major policy plan created in order to map out an effective takeover of the executive branch by Trump if he wins the election. It’s been all over the news since it was revealed.
I have bad news for you…