My short response. Yes.

  • tyo_ukko@sopuli.xyz · 1 day ago

    No. The movies get it all wrong. There won’t be terminators and rogue AIs.

    What there will be is AI slop everywhere. AI news sites already produce hallucinated articles, which other AIs then cite and ingest as training data. Soon you won’t be able to believe anything you read online, and fact-checking will be basically impossible.

    • unwarlikeExtortion@lemmy.ml · edited · 15 hours ago

      Soon you won’t be able to believe anything you read online.

      That’s a bit too blanket of a statement.

      There are, always were, and always will be reputable sources. Online or in print. Written or not.

      What AI will do is increase the amount of slop disproportionately. What it won’t do is suddenly make the real, actual, reputable sources magically disappear. Finding them may become harder, but people will find a way - as they always do. New search engines, curated indexes of sites. Maybe even something wholly novel.

      .gov domains will be as reputable as the administration makes them - with or without AI.

      Wikipedia, so widely hated in academia, has been shown to be at least as factual as Encyclopedia Britannica. It may have a harder time dealing with spam than before, but it mostly won’t be fazed.

      Your local TV station will spout the same disinformation (or not) - with or without AI.

      Using AI (or not) is a management-level decision. What use of AI is or isn’t allowed is as well.

      AI, while undeniably a gamechanger, isn’t as big a gamechanger as it’s often sold to be, and the parallels between the AI bubble and the dot-com bubble are staggering, so bear with me for a bit:

      Was dot-com (the advent of the corporate worldwide Internet) a gamechanger? Yes.

      Did it hurt the publishing industry? Yes.

      But is the publishing industry dead? No.

      Swap in “AI” for the dot-com Internet and “credible content” for the publishing industry, and you have your boring but realistic answer.

      Books still exist. They may not be as popular, but they’re still a thing. CDs and vinyl as well. Not ubiquitous, but definitely chugging along just fine. Why should “credible content” die, when the disruption AI causes to the intellectual supply chain is so much smaller than the dot-com shift from needing an entire large-scale printing setup to needing a single computer and an Internet line?

    • pilferjinx@piefed.social · 1 day ago

      Unless we have a bot that’s dedicated to tracing the origin of online information and can roughly evaluate its accuracy against real events.

    • Lunatique@lemmy.ml (OP) · 1 day ago

      I agree with the slop part, but you can’t say the movies get it all wrong when we haven’t yet reached the point where that can be proven or disproven.

      • deadcade@lemmy.deadca.de · 20 hours ago

        Movies like Terminator have “AGI”, or artificial general intelligence. We had to come up with a new term for it after LLM companies kept claiming they had “AI”. Technically speaking, large language models fall under machine learning, but they are limited to predicting language and text, and will never be able to “think” with concepts or adapt in real time to new situations.
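
        To make “predicting the next token” concrete, here is a toy sketch of the generation loop in Python. The lookup table is purely illustrative and stands in for the neural network a real LLM uses, but the loop has the same shape: look at the context, pick a likely next token, repeat.

        ```python
        # Toy next-token predictor. A real LLM replaces this lookup table
        # with a neural network conditioned on thousands of prior tokens,
        # but the generation loop works the same way.
        import random

        TOY_MODEL = {
            "the": {"cat": 0.5, "dog": 0.3, "weather": 0.2},
            "cat": {"sat": 0.7, "ran": 0.3},
            "dog": {"ran": 0.6, "sat": 0.4},
            "sat": {"down": 1.0},
            "ran": {"away": 1.0},
        }

        def generate(prompt: str, max_tokens: int = 5) -> str:
            tokens = prompt.split()
            for _ in range(max_tokens):
                dist = TOY_MODEL.get(tokens[-1])
                if dist is None:  # no learned continuation: stop
                    break
                # Sample the next token in proportion to its probability.
                words, weights = zip(*dist.items())
                tokens.append(random.choices(words, weights=weights)[0])
            return " ".join(tokens)

        print(generate("the"))  # e.g. "the cat sat down"
        ```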

        Take chess, for example. We have Stockfish (and other engines) that far outperform any human. Can these chess engines “think”? Can they reason? Adapt to new situations? Clearly not: adding a new piece with different rules would require Stockfish to be retrained from scratch, while humans can take their existing knowledge and adapt it to the new situation. Also look at LLMs attempting to play chess. They can “predict the next token” as they were designed to, but nothing more. They have been trained on enough chess notation that their output is usually valid notation, but they have no concept of what chess even is, so they will spit out nearly random moves, often without following the rules.
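
        You can check that failure mode directly by validating the model’s moves against a rules engine. A minimal sketch, assuming the python-chess library (pip install chess); fake_llm_move() is a hypothetical stand-in for a real LLM call, rigged to fail the way LLMs tend to in practice:

        ```python
        # Count how often a "move generator" emits well-formed but illegal moves.
        # fake_llm_move() is a hypothetical stand-in for querying a real LLM.
        import random
        import chess

        def fake_llm_move(board: chess.Board) -> str:
            """Usually returns a legal move in SAN, but sometimes emits
            plausible-looking notation that is illegal in this position."""
            if random.random() < 0.2:
                return "Qh5"  # valid-looking SAN, often not playable here
            return board.san(random.choice(list(board.legal_moves)))

        board = chess.Board()
        illegal = 0
        for _ in range(60):
            san = fake_llm_move(board)
            try:
                board.push_san(san)  # raises ValueError if the SAN is illegal here
            except ValueError:
                illegal += 1         # parsed as chess notation, but broke the rules
                continue
            if board.is_game_over():
                break

        print(f"plies played: {board.ply()}, illegal attempts: {illegal}")
        ```

        The point of the try/except is that the failure isn’t malformed text - the notation parses fine - it’s that the generator has no internal board state, so the move often doesn’t fit the position.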

        LLMs are effectively the same concept as chess engines. We just put googly eyes on the software, and now tons of people are worried about AI taking over the world. Current LLMs and generative AI do pose risks - overwhelming amounts of slop and misinformation, which could affect human cultural development, and humans deciding to give an LLM external influence over real systems, which could have major impact - but it’s nowhere near Terminator-style AGI. For that to happen, humans would have to figure out a new way of thinking about machine learning, and there would have to be several orders of magnitude more computing resources for it.

        Since the classification for “AI” will probably include “AGI”, there will (hopefully) be legal barriers in place by the time anyone develops actual AGI. The computing resources problem is also gradual: an AGI does not simply “transfer itself onto a smartphone” in the real world (or an airplane, a car, you name it). It will exist in a massive datacenter, and its power can be shut off. If AGI does get created and causes a massive incident, it will likely be during this time, which would cause whatever real-world entity created it to realize there should be safeguards.

        So to answer your question: no, the movies did not “get it right”. They are exaggerated fantasies of what someone thinks could happen if some rules of our current reality were changed. Artwork like that can pose interesting questions, but when it tries to “predict the future”, it often gets things wrong in ways that change the answers to any questions asked about that future.

      • BlueSquid0741@lemmy.sdf.org · 1 day ago

        The movies depict actual AI. That is, machines/software that is sentient and can think and act for itself.

        The future is going to be more of the shit we have now - LLMs / “guessing software”.

        But also, why ask the question if you think the answer can’t be given yet?