• Scrubbles@poptalk.scrubbles.tech · 63 points · 21 hours ago

    Generative AI continues to be a solution looking for a problem.

    It has a very strong future in certain applications, but the current range of viable applications is extremely limited compared to what CEOs think it is.

    • TranquilTurbulence@lemmy.zip · 2 points · edited · 6 hours ago

      The marketing department seems to have highly inflated expectations, so I think this is mostly a marketing thing. They are painting a picture of something that doesn’t exist yet. The same thing happened with Siri and Google Assistant when they were released. People expected these tools to be capable of doing the kinds of things that modern LLMs are now beginning to approach, so the amount of disappointment was pretty brutal.

      But is it a solution looking for a problem? If you need to generate pictures for a horror-themed book, image AIs can get the job done properly. You don’t even need to request mutated abominations, because the AI gravitates towards those anyway. If you need to write a pointless self-help book that meets the page count without conveying any actual information, LLMs are the way to go.

      These are all pretty niche examples, but people should understand that all of the AIs we have today are narrow AIs. The scope is about as narrow as the tip of a Torx screwdriver. Try it on a different screw and you’ll see what I mean. Use it as a hammer, and you’ll gain a full understanding of the nature of this problem.

      If you have exceedingly low expectations, LLMs can actually do some trivial tasks very well. If you have hardly any experience in writing or programming, an LLM will prove to be an impressive tool. If you are a professional though, you’ll notice pretty quickly that the LLM is only good for the simplest tasks, while you still need to do everything that is even a little bit demanding.

    • bobs_monkey@lemm.ee · 12 points · 19 hours ago

      And at this point, we can safely assume these business leaders will use it for their own personal enrichment at the expense of the rest of humanity, and for that reason alone I’m happy to avoid it wherever possible. Don’t feed the beast, as it were.

  • Bronzebeard@lemm.ee · 28 points · 16 hours ago

    What dumbass is letting AI post articles without human oversight? That’s like rule one of what not to do.

  • along_the_road@beehaw.org (OP) · 20 points · 21 hours ago

    A major journalism body has urged Apple to scrap its new generative AI feature after it created a misleading headline about a high-profile killing in the United States.

    The BBC made a complaint to the US tech giant after Apple Intelligence, which uses artificial intelligence (AI) to summarise and group together notifications, falsely created a headline about murder suspect Luigi Mangione.

    The AI-powered summary falsely made it appear that BBC News had published an article claiming Mangione, the man accused of the murder of healthcare insurance CEO Brian Thompson in New York, had shot himself. He has not.

    Now, the group Reporters Without Borders has called on Apple to remove the technology. Apple has made no comment.

      • DdCno1@beehaw.org · 4 points · 8 hours ago

        It’s been that way since at least the first Trump administration. The sanewashing continues.

  • Evil_Shrubbery@lemm.ee · 5 points · edited · 6 hours ago

    It was a headline that generated engagement, which means revenue, so no, they’re def not axing it; people just can’t find out it was false until a certain amount of time has passed (a few days).

    The AI works according to their prime directive, which is m-m-m-money.