• 0 Posts
  • 30 Comments
Joined 2 years ago
Cake day: August 8th, 2023




  • Senal@programming.devtoAsklemmy@lemmy.mlWhat's a Tankie?
    3 months ago

    That is my matrix username

    Ah, makes sense.

    I respond to someone, probably exploring communism, asking about a term, with an emphasis on the deleting of certain posts spreading misinformation, which might misguide the person asking the question into some kind of vaushist “leftism” or turn them off from exploring marxism. The specific posts spreading misinformation are making a very accusatory claim used by western imperialists to make a government look bad, which in a less fortunate country that is just developing could be used to build support for a coup to install a puppet government. Whether you support that claim, which is objectively false (https://tankie.tube/w/p/kFZ2joQah4kmt2KSpzPHtb?playlistPosition=6&resume=true <-- is an entertaining starter with sources), is irrelevant when people think those spreading such disinformation are some kind of heroes.

    That also makes sense, mostly. i disagree with some of it on a logical-principle level, but i really don’t have a personal horse in any of the political parts; i also don’t know/care enough to get one.

    All the things you said might be true, or they all might be false; i suspect they’re all subjective enough to be context dependent. i also suspect we aren’t going to agree on the difference between subjective and objective, which is my main disagreement with the statement as a whole.

    My main point was, there were answers that are now deleted, that is provably true.

    The subjective accuracy of those answers isn’t really the point and no claim was made on that aspect.

    Also, the implied /s for “mysterious” didn’t land and that’s on me.



  • Senal@programming.devtoAsklemmy@lemmy.mlWhat's a Tankie?
    3 months ago

    @Williama:Genzedong

    I’m not sure what this means, is this a reference i’m supposed to know?

    Come on lib send me the Tiananmen Square video of tanks doing the things you claim they do. @Williama:Genzedong

    Not sure if this is aimed at me, but i haven’t claimed anything to do with tanks, at any point, ever.

    Some answers haven’t “disappeared for mysterious reasons”,

    That’s fair, i meant “mysterious reasons” in a less factual and more sarcastic way, but i can see how that might not have come across.

    it’s for spreading misinformation.

    That’s subjective, which is what that whole thread is about, no?

    I wasn’t really emphasizing the subjectivity of the claims so much as just pointing out that answers had been removed and might be found in the modlog.

    You seem to have a strong opinion on this, i do not.

    If you disagree then come on, send me a video of the “horrendous crimes committed by China in Tiananmen Square”

    I’m sure you can search for whatever videos you need, i haven’t made any claims i would need to provide video evidence for.

    I won’t be providing evidence of positions i haven’t taken or claims i haven’t made, that would be silly.

    I fully consent. @Williama:Genzedong.

    Still not sure what this reference is.

    Surely at least one of the “victims of the massacre” would have recorded something the “ruthless military regime” and their oh so very “despicable acts of massacre”.

    See the above section about there being no claims or positions taken.

    If you want to imagine i’ve sent you proof of this imaginary claim i’ve made so you can be upset in your imagination, feel free.

    If you and other libs are annoyed that the devs are “tankies”, then go back to reddit.

    See above re: claims that never happened





  • TL;DR;

    It’s weird to be upset at people for having personal boundaries/morals/ethics.

    Using “purity test” like a pejorative, because using a more accurate term makes your argument sound bad, is a bad faith approach.


    You say “purity tests” like it’s some sovcit term imbued with magical powers, like DEI or woke.

    Mentally replace it with “personal ethics and morals” and you might see how some of those arguments are really just people having boundaries.

    An example of what i mean.

    This is the biggest issue with niche communities: purity tests.

    They can’t unite under one goal and have productive discussions. They are more focused on being correct (their interpretation of correct) and shutting out the incorrect than getting closer to a goal. Sometimes incorrect can be as little as choosing the wrong utility and other times it can be much bigger but they all spark the same amount of ire.

    vs

    This is the biggest issue with niche communities: personal ethics and morals.

    They can’t unite under one goal and have productive discussions. They are more focused on being correct (their interpretation of correct) and shutting out the incorrect than getting closer to a goal. Sometimes incorrect can be as little as choosing the wrong utility and other times it can be much bigger but they all spark the same amount of ire.

    See how the rest of that statement sounds without the bad faith, magic-word interpretation ?

    I’m not expecting any good faith arguments in response, so don’t worry, this was a just-in-case kind of thing.





  • I don’t disagree in principle.

    Let’s take your scenario of not voting for fascist-lite as a means to fight against Full-Fat fascist.

    In the current American system (the greatest and most functional system /s), not voting effectively gives the vote to the eventual victor (that’s reductive, but you know what I mean).

    Assuming the BigFash win, the choice of inaction would be more impactful than the action of voting for DietFash.

    On a relative scale and depending on how you feel about fascism I suppose.

    So yes, the participation and outcome matter, but the effect isn’t always equal.

    Inactively participating in the rise of the GrandMasterFash would be the cost of feeling good about not actively voting for the LesserFash.

    Ultimately it’s shit choices all around, but that’s the point of the lesser of two evils, right?





  • TL;DR;

    You asked why it mattered if it’s LLM-generated or not; i provided examples where it does matter. Nothing you’ve said in your reply seems to refute that, so I’ll just assume we’ve agreed on this point.

    The rest of this reply is just me replying to your additional arguments.


    Ok, so you’re suggesting that people are submitting kernel patches that somehow modify the architecture of the kernel/its components, that the new architecture is very complex and hard to analyze, that those architectural changes are part of the roadmap and are not rejected right away, and that those big, complex architectural-level patches are submitted with high frequency. Somehow I doubt all of it.

    I mean, i didn’t say any of that but feel free to doubt a position you just made up.

    I think the slop patches are small fixes suggested by some AI code analysis tools.

    There’s no reason to believe that LLM usage is limited to small patches.

    that architectural and complex changes are part of a well-defined roadmap and don’t come out of nowhere, and that code that doesn’t follow conventions is easily spotted and rejected.

    In a well-maintained project, sure, ish, but let’s just say you’re right about the plan/roadmap phase.

    The spotting and rejection you mentioned are now significantly more time- and resource-consuming for the reasons i stated in the previous reply.

    Also, when i used the word architecturally i was referring to the logical domain of the patch and the things it interacts with; i wasn’t implying that LLMs would get a chance at re-architecting an entire project as large as the Linux kernel.

    At least i’d hope not.

    The linked article talks only about marking the code as AI generated (IMHO useless but harmless) and increasing volume of AI slop patches.

    I’m not sure of the usefulness of this kind of marking in practice, but i can tell you a way in which it might be useful.

    The way you need to go about evaluating LLM-generated code vs human-written code can be different.

    And before you get on your high horse, I’m not saying we shouldn’t be doing a good job reviewing in general; of course we should.

    Review and testing resources are limited in most practical settings; we should focus on using them as efficiently as possible.

    There are tools specifically geared towards evaluating LLM-generated code for specific mistakes; this marking would enable more efficient usage/allocation of review resources over and above the baseline code-quality tests.
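As a minimal sketch of how such a marking could be used in practice — the tag name, queue names, and thresholds below are all invented for illustration, not anything the kernel actually does:

```python
# Hypothetical triage step: patches carrying an "ai-generated" marker
# get routed through extra LLM-specific checks before human review,
# so limited reviewer time goes where the failure modes differ.

def triage_patch(patch: dict) -> str:
    """Return the review queue a patch should land in."""
    tags = patch.get("tags", [])
    files_changed = patch.get("files_changed", 0)

    if "ai-generated" in tags:
        # LLM output tends to look plausible, so run the extra
        # static-analysis / logic-review pass first.
        return "llm-extra-checks"
    if files_changed > 20:
        # Large patches go to senior reviewers regardless of origin.
        return "senior-review"
    return "standard-review"

patches = [
    {"id": 1, "tags": ["ai-generated"], "files_changed": 3},
    {"id": 2, "tags": [], "files_changed": 25},
    {"id": 3, "tags": [], "files_changed": 2},
]
print([triage_patch(p) for p in patches])
# ['llm-extra-checks', 'senior-review', 'standard-review']
```

The point isn’t the specific rules; it’s that the marker gives the pipeline a cheap signal to allocate scarce review effort on.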

    The idea that maintainers spend time analyzing complex LLM generated code submitted by random amateurs looking for possible architectural bugs sounds like a fantasy to me

    Which is clear from your answers; if you don’t understand how pull-request review works in practice, you’re going to struggle to make a coherent argument that requires that understanding.

    To answer the statement directly: there’s sometimes no efficient way to tell which patches are from amateurs, even without LLMs.

    The issue isn’t even limited to amateurs; i would like to assume a competent dev of any skill level wouldn’t submit patches they don’t understand, but that’s just not always the case.

    and again, think architecture with a ‘little a’ rather than a ‘big A’.

    Logical flow and domain understanding in a relatively limited scope, rather than system-wide structural change.

    The difference between tactics and strategy.


  • Volume and Moderation.

    Generating slop is significantly quicker.

    You get an increase in the volume of people pushing slop, which then has to be reviewed. In addition to the increase in submissions, you also get an increase in the fidelity/general complexity of the submissions.

    Reviewing a PR generated by LLMs used by amateurs is more involved than an equivalent PR written directly by said amateurs.

    Straight up coding mistakes aren’t most of the issue, it’s the complex architectural and logical bugs that are going to be the problems.

    Stuff that’s functional but logically/architecturally unsound is much harder to spot, and it’s significantly easier to generate these kinds of issues with an LLM than to write them out by hand.
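An invented illustration of “functional but logically unsound”: the interval-merge helper below is a toy example (not from any real patch) that passes the obvious happy-path check, yet a one-character boundary mistake makes it silently wrong for touching intervals — exactly the kind of plausible-looking bug that is cheap to generate and expensive to catch in review:

```python
# Toy example: looks reasonable and passes the happy path,
# but is logically unsound at a boundary.

def merge_ranges(ranges):
    """Merge overlapping (start, end) intervals."""
    out = []
    for start, end in sorted(ranges):
        # BUG: '<' should be '<=', so touching intervals like
        # (1, 5) and (5, 9) are treated as disjoint.
        if out and start < out[-1][1]:
            out[-1][1] = max(out[-1][1], end)
        else:
            out.append([start, end])
    return [tuple(r) for r in out]

print(merge_ranges([(1, 5), (3, 9)]))  # [(1, 9)] - looks correct
print(merge_ranges([(1, 5), (5, 9)]))  # [(1, 5), (5, 9)] - should be [(1, 9)]
```

No linter flags this, the code runs, and a skim of the diff reads as sensible; only a reviewer reasoning about the domain logic catches it.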

    If someone submits bullshit AI generated code he will be ignored in the future.

    Like this, for example: a seemingly reasonable, functional argument that is logically unsound, in that it focuses on a narrow “happy path” and ignores where the actual issues are.

    1. To get to the stage where you can block this person, you need to review the code first and identify if there is an issue.

    Doing this for LLM generated code takes longer, on average.

    2. It’s also now possible for less skilled people to generate a higher volume of code that looks more reasonable, which increases the total number of reviews needed.

    So the existing process of reviewing people and code is now multiple times more difficult and resource-consuming.

    Which is generally what people want addressed.

    Can LLMs help? Possibly.

    Are there issues that are going to become a large resource problem if we don’t actually address them? Yes.