• Eggymatrix@sh.itjust.works · 21 hours ago

I think we are now in a phase where the bugs that are findable by AI will be reported en masse, and there will be a period of patching them and working through the queue. After that we will end up with better software overall, which is what Linus predicted a couple of months ago.

That said, there still needs to be a penalty for crap reports, because those are still coming in, making us lose time on what is functionally just spam.

  • Jo Miran@lemmy.ml · 1 day ago

    Infosec professional for almost 30 years here. I can confirm that the latest iterations of AI models are finding high quality bugs and vulnerabilities in the code we work with. If Daniel has access to Mythos, I suspect his experience would be even more shocking.

    The problem I have is that the AI tools can find bugs faster than they can be patched, which is eventually going to prompt companies to use AI to patch bugs found by AI. Before long, no living being will be able to make heads or tails out of the code we run. Just my 2¢.

    • utopiah@lemmy.ml · 10 hours ago

      > AI tools can find bugs faster than they can be patched

      Not a security expert, but wasn’t that already the case? Even before AI, backlogs already held more bugs, security-related or not, than could be patched. That’s precisely why there are metrics like severity.
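      A minimal sketch of what severity-based triage looks like in practice: when the backlog outgrows the patching capacity, you sort by a severity score and work from the top. The report IDs, summaries, and CVSS-like scores below are invented for illustration, not from any real tracker.

      ```python
      # Hypothetical bug backlog: each report carries a CVSS-like severity
      # score (0.0 = trivial, 10.0 = critical). All data here is made up.
      backlog = [
          {"id": "BUG-101", "summary": "heap overflow in parser", "cvss": 9.8},
          {"id": "BUG-102", "summary": "typo in error message", "cvss": 0.0},
          {"id": "BUG-103", "summary": "auth bypass on admin API", "cvss": 8.1},
          {"id": "BUG-104", "summary": "DoS via crafted header", "cvss": 5.3},
      ]

      # Patch queue: highest severity first, ties broken by report id.
      # This is the whole point of severity metrics: the queue can grow
      # faster than it shrinks, as long as the worst bugs get fixed first.
      queue = sorted(backlog, key=lambda r: (-r["cvss"], r["id"]))

      for report in queue:
          print(f'{report["id"]} (CVSS {report["cvss"]}): {report["summary"]}')
      ```

      The interesting property is that this scales with any inflow rate: AI-generated reports just make the tail of the queue longer, while the head stays the set of things actually worth patching now.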

    • cecilkorik@piefed.ca · 1 day ago

      > no living being will be able to make heads or tails out of the code we run.

      Which is fine, because somebody will just vibe code a replacement when it gets too unwieldy and then we’ll start the cycle of unmaintainability all over again. Welcome to the era of disposable, limited-use software.

      While you’re all working on dealing with that, don’t mind me, I’m just going to be over here admiring all this artisanal, hand-crafted software running in a carefully arranged and manually curated legacy virtual machine with loving attention to detail and thoughtful Feng Shui, where it will be safe and protected from the horrors of the open internet until someday NetWatch finally fires up the blackwall to protect us.

    • kibiz0r@midwest.social · 1 day ago

      Perpetual loop of “bounty encourages bad reports”, “canceled bounty”, “bug reports improve”, “bounty comes back”, “bounty encourages bad reports”…

      • thingsiplay@lemmy.ml · 1 day ago

        A bounty also encourages good reports. So your argument is that the bounty program is the reason reports were bad lately? I don’t think that is the reason, and bringing it back will not make things that much worse again.

          • thingsiplay@lemmy.ml · 1 day ago

            It wasn’t like that before AI. Since the rise of AI, quality went down, and not only for bounties. So this is not a problem with bounties. Now that the quality of reports has gone back up and is not much of an issue anymore, we can assume it will not be an issue when bounties come back.

            In short, bounties are not the cause of the low quality reports.

    • ffhein@lemmy.world · 1 day ago

      If they are getting valid findings with high quality reports from AI tools already, why would they do that?