It peaked at 4.05% in March. Over the last two months it dipped just below 4% as the Unknown category grew. In June the reverse happened, so 4.04% seems to be the real current share of Linux on the desktop, with desktop clients being read properly rather than spoofed.
Where do they get data from?
From cookies.
What kind? Personally I wouldn’t trust any kind with raisins.
From analytic cookies.
What about peanut butter? Or are you more of a salted chocolate kind of person?
They get the data from user-agent strings, so it may not be 100% accurate.
User agent strings are frozen these days, at least in Chrome. They still carry the browser’s major version and the OS name, but Windows will always report Windows 10, Android will always report Android 10, macOS will always report 10.15.7, and Linux is just “Linux x86_64”: https://www.chromium.org/updates/ua-reduction/
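To make the freezing concrete, here’s a small sketch that pulls the platform section out of some illustrative frozen Chrome UA strings (the example strings are assumptions based on the ua-reduction plan linked above; real strings differ mainly in the browser’s major version):

```python
import re

# Illustrative frozen Chrome UA strings -- only the major version really varies.
frozen_uas = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
    "(KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 "
    "(KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 "
    "(KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36",
]

def platform_token(ua: str) -> str:
    """Return the first parenthesized section of a UA string (the platform part)."""
    match = re.search(r"\(([^)]*)\)", ua)
    return match.group(1) if match else ""

for ua in frozen_uas:
    print(platform_token(ua))
# Windows NT 10.0; Win64; x64
# Macintosh; Intel Mac OS X 10_15_7
# X11; Linux x86_64
```

Note that no matter what Windows or macOS version the user actually runs, the platform token stays pinned to those frozen values, which is exactly why stats sites can’t get finer-grained OS data from the UA any more.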
User agent strings are essentially deprecated and nobody should be using them any more. They’ve been replaced by User-Agent Client Hints, where the site can request the data it needs, and some high-entropy things (i.e. fields that vary a lot between users) can prompt the user for permission to share them first.
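With Client Hints, the server opts in via an `Accept-CH` response header, and the browser then sends headers like `Sec-CH-UA` as comma-separated brand/version pairs. As a rough illustration, here’s a simplified parser for that header shape (a real implementation should follow RFC 8941 structured fields rather than this naive string splitting):

```python
def parse_sec_ch_ua(header: str) -> dict[str, str]:
    """Parse a Sec-CH-UA style header ('"Brand";v="124", ...') into a
    {brand: version} dict. Simplified sketch, not RFC 8941 compliant."""
    brands = {}
    for item in header.split(","):
        name_part, _, version_part = item.strip().partition(";")
        brand = name_part.strip().strip('"')
        version = version_part.strip().removeprefix("v=").strip('"')
        brands[brand] = version
    return brands

print(parse_sec_ch_ua('"Chromium";v="124", "Google Chrome";v="124"'))
# {'Chromium': '124', 'Google Chrome': '124'}
```

The low-entropy brand list comes for free; anything higher-entropy (full version, platform version, device model) has to be explicitly requested, which is the privacy improvement over the old always-on UA string.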
Oh nice. Google once again deciding for the entire Internet what it should be using and forcing it down everyone’s throats.
User agents were commonly used for the wrong reasons: fingerprinting, sites that block particular browsers rather than using proper feature detection, etc., so I’m glad to see them slowly going away.
Shit started hitting the fan when everyone started faking Netscape’s “Mozilla” user agent. Then came “KHTML, like Gecko”, and after that every fork kept the originating name in the string and extended it.
There’s a good explanation about that here: https://webaim.org/blog/user-agent-string-history/
The issue is that a lot of sites used the user-agent to determine if the browser supported particular features (e.g. show a fancy version of a site if the user is using Netscape, otherwise show a basic version for Mosaic, lynx, etc.). New browsers had to pretend to be the good old browsers to get the good versions of sites.
This is why getting rid of the user agent is a good thing. Sniffing the UA is a mess.
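The feature-detection point generalizes beyond browsers: in browser JS it’s the familiar `if ('serviceWorker' in navigator)` pattern instead of UA parsing, and the same idea can be sketched in Python by contrasting version sniffing with asking the runtime directly (using `math.isqrt`, added in Python 3.8, as the example capability):

```python
import math
import sys

# Version sniffing (the UA-sniffing equivalent): fragile, because it keys
# off an identifier rather than the capability itself.
def has_isqrt_by_version() -> bool:
    return sys.version_info >= (3, 8)  # isqrt landed in 3.8

# Feature detection: ask the runtime whether the function actually exists.
# This keeps working on forks, backports, and versions that didn't exist
# when the check was written.
def has_isqrt() -> bool:
    return hasattr(math, "isqrt")
```

The sniffing version silently lies whenever the version string and the actual capability diverge, which is exactly what happened with browsers pretending to be Netscape.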
@Tixanou @Madiator2011 Plus they’re basing this on a sample of websites, so it’s like polls: it’s meant to be representative but can’t be 100% accurate.