Future chips not affected by THIS cpu bug yet.
That gave me the idea to toss a coconut or two into the bags this year. I’ll reserve those for the “kids” that are obviously too old for this stuff.
All the star trek. This is not negotiable.
Bi men, for men.
That’s SOC 2, the audit/compliance framework. In this context, SOC means Security Operations Center.
It’s one of the better EDR (Endpoint Detection and Response) tools on the market. For enterprises, it can suck down tons of system activity from every endpoint and provide alerting for security teams.
For detection, when I say “tons of data”, I mean it: background logs covering network activity, filesystem activity, command-line info, service info and actions, and much more, for every endpoint in an organization.
The response component can block execution of apps or completely isolate an endpoint if it is compromised, only allowing access by security staff.
The fact that Crowdstrike can (kind of) handle that much data, still run rule checks against it, and also provide SOC services makes them a common choice for enterprises.
The problem is that EDR tools need to run at the kernel level (or at a very high permission level) to be able to read that type of data and also block it. This increases the risk of catastrophic problems if specific drivers are blocked by another kind of anti-malware service.
When you look at how EDR tools function, there is little difference between them and well written malware.
Crowdstrike recently became the choice for many companies that got fucked over by Broadcom buying VMWare. VMWare owned a competing tool, Carbon Black, which became subject to the same Broadcom fuckery, so more companies scrambled over to Crowdstrike.
I hope that was enough of a summary.
Same. I support AI completely as a tool to solve specific problems and that is about it. What is really cool is that AI libraries and such got a massive, much-needed development boost, so plebs like me can code simple ANN apps in Python with little skill. Documentation has improved 100x and hardware support is fairly good.
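For example (just my own toy sketch, using PyTorch as the library and XOR as the classic hello-world problem, not anything from a specific project):

```python
import torch
from torch import nn

# Toy feed-forward net learning XOR -- the classic "hello world" of ANNs.
x = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])

model = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 1), nn.Sigmoid())
optimizer = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.BCELoss()

for _ in range(2000):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

print(model(x).round())  # expect roughly [[0], [1], [1], [0]]
```

That’s the whole thing, which is kind of the point.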
LinkedIn seems to be an interesting indicator of where tech is in its hype cycle. It went from 100% AI-awesome-everything about 2 months ago to almost zero posts and ads about it now. I suppose most of the vaporware AI products are imploding…
Of course, algorithmic feeds are a thing, so your experience might be different.
FYI, you can download your photos in bulk with Google Takeout, but you need to have enough space in Google Drive to do it. (Takeout zips up all your photos and will drop 10GB chunks in Drive.)
I was doing something similar to you recently. I downloaded all my photos and de-duped by generating MD5 hashes for all the pictures that were downloaded. (I was moving all of my photos to a local NAS, so it wasn’t quite what you are doing.)
If your dupes are byte-identical (so their MD5 hashes match), that might work for you, but it’s hard to say.
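Something along these lines is all it took for me (a rough sketch; the "takeout_photos" folder name is just a placeholder, and it only catches byte-identical copies):

```python
import hashlib
from collections import defaultdict
from pathlib import Path

# Group every file under a folder by the MD5 of its contents;
# any hash that maps to more than one path is a set of duplicates.
def find_duplicates(root):
    by_hash = defaultdict(list)
    for path in Path(root).rglob("*"):
        if path.is_file():
            # read_bytes loads each file fully into memory; fine for photos,
            # use chunked reads for huge files.
            by_hash[hashlib.md5(path.read_bytes()).hexdigest()].append(path)
    return {h: paths for h, paths in by_hash.items() if len(paths) > 1}

for digest, paths in find_duplicates("takeout_photos").items():
    print(digest)
    for p in paths:
        print("  ", p)
```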
I am going to need your 50 point summary of those obvious points in the longest form possible by this afternoon so I can be completely convinced that I have already made up my mind in the correct way. Thanks.
It’s been around for a while. It’s the fluff and the parlor tricks that need to die. AI has never been magic and it’s still a long way off before it’s actually intelligent.
Yeah, I would think memory as well due to the screen artifacts in that low res mode. (That depends on how x86 memory is mapped these days, I suppose.)
I am curious as to why they would offload any AI tasks to another chip. I just did a super quick search for upscaling models on GitHub (https://github.com/marcan/cl-waifu2x/tree/master/models) and they are tiny as far as AI models go.
It’s the rendering bit that takes all the complex maths, and if that is reduced, that would leave plenty of room for running a baby AI. Granted, the method I linked to was only doing 29k pixels per second, but they said it wasn’t GPU optimized. (FSR4 is going to be fully GPU optimized, I am sure of it.)
If the rendered image is only 85% of a 4K frame’s pixels, that leaves ~1.2 million pixels (15% of ~8.3 million) for the upscaler to compute, and it still seems plausible to keep everything on the GPU.
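Back-of-the-envelope (assuming “85%” means the share of the 4K frame’s pixels rendered natively):

```python
# Rough pixel budget for upscaling a single 4K frame.
width, height = 3840, 2160
total = width * height            # ~8.29 million pixels in a 4K frame
rendered = int(total * 0.85)      # pixels the GPU renders directly
to_fill = total - rendered        # ~1.24 million pixels left for the upscaler
print(to_fill, to_fill * 60)      # per frame, and per second at 60 fps (~75 million)
```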
With all of that blurted out, is FSR4’s AI going to be offloaded to something else? It seems like there would be significant technical challenges in creating another data bus that would also have to sync with memory and the GPU, offloading AI compute at speeds that didn’t risk creating additional lag. (I am just hypothesizing, btw.)