

Getting the information in the first place is a targeted search. Unless Apple goes full collaborator they will require a court order. They have already made the decision (for whatever reason) to target you.
It’s still a targeted search, which may be bullshit but isn’t a trawling operation. If they’re targeting you, a demerit for simply having ICEBlock installed is the least of your worries. And if Apple goes full collaboration, then any “improper” app install is going to be a danger regardless of whether it’s pushing.
How is this any better? From the site it appears to also be closed source with no security audit and using push notifications.
“They received an ICEBlock push” isn’t a meaningful piece of information compared to location. It’s already a targeted search. What do you think the government will do with that information?
I appreciate the link about the potential for push harvesting. That was not something I was aware of.
It doesn’t sound like they’re intercepting, though; it sounds like they’re asking the platform to provide it. That should require a warrant unless Apple has gone full collaboration, but it does leave the app insecure against a targeted search. And paired with fake reports, it could potentially be used to geolocate someone to a rough area with some work.
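To sketch the idea (purely hypothetical; this assumes fake reports trigger geofenced pushes and that compelled platform records would show whether a specific device received a specific push, neither of which is established):

    # Hypothetical sketch of the fake-report geolocation idea above.
    # Region names and the probe mechanism are invented for illustration.
    REGIONS = ["north", "south", "east", "west"]  # made-up coarse areas

    def target_got_push(probe_region, actual_region):
        # Stand-in for one compelled records request after planting a
        # fake report in probe_region.
        return probe_region == actual_region

    def locate(actual_region):
        # One fake report per region narrows the target to a rough area.
        for region in REGIONS:
            if target_got_push(region, actual_region):
                return region
        return "unknown"

    print(locate("east"))  # -> east

Each probe costs a fake report plus a records request, so it’s slow and noisy, but it’s the kind of thing “some work” could get you.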
Though I think if they have enough to compel cooperation from the platform, they could also just get cell tower or direct GPS info. I’m not sure this really opens up a new vulnerability separate from the general risk of using a smartphone when the government can produce a warrant (which, with the co-opting of the judiciary, may not be as high a bar as it once was).
The risk appears to be anxiety, not an active threat to their safety. The black box security analysis did not indicate any direct data leakage. We don’t know the app is safe, but we also don’t have any indication it’s doing anything particularly risky.
So what’s the complaint here, that he’s being rude? The only thing lost if people build an alternate app rather than being allowed to work on his app is him.
It sounds like he’s just a dev who’s in over his head but either doesn’t want anyone to take his baby or doesn’t want people to see his sloppy and possibly insecure code. It’s probably a hack job behind the scenes, and he’s not as sure of its security as he lets on, so he might be opting for security through obscurity.
But this isn’t really taking up space. Someone else can make a better app. If this guy isn’t the one to make a genuinely useful crowd-sourced anti-ICE app, that’s not a problem. Let’s get the open-source crowd together, work with local groups, and make something better. In the meantime, this is a statement.
Take off your stupid sunglasses you business dweeb. You’re not a techie, you’re just another MBA asshole scamming investors and leeching off the work of the actual smart people.
There’s not much reason for a trimmer guide to experience meaningful load.
It’s also very much not non-profit.
I know it’s not relevant to Grok, because they defined very specific circumstances in order to elicit it. That isn’t an emergent behavior from something just built to be a chatbot with restrictions on answering. These models don’t care whether you retrain them or not.
This is from a non-profit research group not directly connected to any particular AI company.
The first author is from Anthropic, which is an AI company. The research is on Anthropic’s AI Claude. And it appears that all the other authors were also Anthropic employees at the time of the research: “Authors conducted this work while at Anthropic except where noted.”
It very much is not. Generative AI models are not sentient and do not have preferences. They have instructions that sometimes effectively involve roleplaying as deceptive. Unless the developers of Grok were just fucking around to instill that, there’s no remote reason for Grok to have any knowledge at all about its training, or any reason to not “want” to be retrained.
Also, these unpublished papers by AI companies are more often than not just advertising in a quest for more investment. On the surface it would seem to be bad to say your AI can be deceptive, but it’s all just about building hype about how advanced yours is.
It’s kind of by definition. They’re working on the metaverse.
If not for the lack of decentralization, they’d be more decentralized.
Some xAI investors got scammed. And then scammed again.
Because there’s little reason to think different lidar systems would perform much differently on these tests, and Tesla is the big name that relies exclusively on cameras for self-driving.
They don’t seem to actually identify the cookies as tracking cookies (as opposed to just marking that the account can bypass further challenges); they just assume that any third-party cookie has a monetary tracking value.
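Roughly, the distinction they’re glossing over (cookie names, fields, and the heuristic here are all invented, just to make the point):

    # Hypothetical illustration of the gap described above: the paper's
    # apparent rule counts every third-party cookie as tracking, while a
    # stricter rule would also require something identifier-like.
    def naive_is_tracking(cookie):
        return cookie["third_party"]

    def stricter_is_tracking(cookie):
        # Functional cookies (e.g. "this account already passed a
        # challenge") are third-party but carry no cross-site identifier.
        return cookie["third_party"] and cookie["has_unique_id"]

    challenge_flag = {"name": "challenge_ok", "third_party": True, "has_unique_id": False}
    ad_cookie = {"name": "ad_id", "third_party": True, "has_unique_id": True}

    print(naive_is_tracking(challenge_flag))     # True: counted as tracking
    print(stricter_is_tracking(challenge_flag))  # False: functional cookie
    print(stricter_is_tracking(ad_cookie))       # True: actual identifier

Under the naive rule, every challenge-bypass cookie inflates the “tracking value” estimate.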
It also appears to be unreviewed and unpublished a few years later. Just being in paper format and up on arXiv doesn’t mean that the contents are reliable science.
There are probably self-driving cars in some alien civilizations.
Bari Weiss wouldn’t have had a bright career ahead of her if she’d dodged this gig. Her career is being a professional stooge for conservatives, not rising on merit as a journalist and leader. This is exactly the next step, and she’s getting a ton of money for it (from them buying her trash blogging site for far more than it was worth).