I know current learning models work a little like neurons, but why not just make a sim that works exactly like how we understand neurons to work?

  • Phanatik@kbin.social · 6 months ago

    I mainly disagree with the final statement, on the basis that LLMs are just more advanced predictive text algorithms. The way they’ve been set up, with a chat box where you’re interacting directly with something that attempts human-like responses, creates the misconception that the thing you’re talking to is more intelligent than it actually is. It gives off a strong appearance of intelligence, but at the end of the day it predicts the next word in a sentence based on what was said previously, and it doesn’t do that good a job of comprehending what exactly it’s telling you. It’s very confident when it gives responses, which also means that when it’s wrong, it delivers the incorrect response just as confidently.
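
    To make the “predicts the next word” part concrete, here’s a rough sketch of the idea. Everything in it (the tiny vocabulary, the scores) is made up for illustration; a real LLM does this over tens of thousands of tokens, with scores produced by a huge neural network:

```python
import math
import random

# Hypothetical context and made-up scores (logits) for each candidate
# next token. In a real model these come from the network itself.
context = "the cat"
logits = {"the": 0.2, "cat": 0.5, "sat": 3.1, "on": 0.4, "mat": 0.9, ".": 0.1}

# Softmax: turn the raw scores into a probability distribution.
total = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / total for tok, v in logits.items()}

# Greedy decoding picks the single most likely next token...
print(max(probs, key=probs.get))  # -> "sat"

# ...while sampling from the distribution is why answers vary between runs.
print(random.choices(list(probs), weights=list(probs.values()), k=1)[0])
```

    Generating a whole reply is just this step repeated, with each chosen token appended to the context before the next prediction.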

    • rtfm_modular@lemmy.world · 6 months ago

      Talk to anyone who consumes Fox News daily and you’ll get incorrect predictive text generated quite confidently. You might likewise deny their intelligence, and even their humanity, given the fallacies they uphold.

      I also think intelligence is a gradient—is an ant intelligent? What about a dog? Chimp? Who gets to draw the line?

      It may very well be a very complex predictive text generator that hallucinates, but I’m concerned that calling it only that minimizes its capabilities, for better or worse. Its ability to maintain context, and its plasticity in reasoning and changing its responses, point to something more, even if we’re at an early stage.

      • Phanatik@kbin.social · 6 months ago

        What you’re alluding to is the Turing test, and it hasn’t been proven that any LLM would pass it. At this moment, there are people who have failed the inverse Turing test, i.e. failed to ascertain whether what they’re speaking to is a machine or a human. The latter can be done, and has been done, by things less complex than LLMs, and isn’t proof of an LLM’s capabilities over more rudimentary chatbots.

        You’re also suggesting that this framing minimises the complexity of its outputs. My determination is that what we’re getting is the limit of what it can achieve. You’d have to prove that any allusion to higher intelligence can’t be attributed either to coercion by the user or to the model simply hallucinating an imitation of artificial intelligence as depicted in media.

        There are elements of the model that are very fascinating, like how it organises language into these contextual buckets, but this is still a predictive model. Understanding that certain words appear near each other in certain contexts is hardly intelligence; it’s a sophisticated machine learning algorithm.
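
        As a rough illustration of those “contextual buckets”: if you just count which words show up near which other words in a tiny made-up corpus, words used in similar contexts end up with similar count vectors. Real models learn dense embeddings rather than raw counts, but the intuition is the same:

```python
import math
from collections import defaultdict

# Tiny made-up corpus, purely for illustration.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]

# Count co-occurrences within a small window around each word.
window = 2
cooc = defaultdict(lambda: defaultdict(int))
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        for j in range(max(0, i - window), min(len(words), i + window + 1)):
            if j != i:
                cooc[w][words[j]] += 1

def cosine(a, b):
    # Similarity of two words' context-count vectors.
    dot = sum(a[k] * b.get(k, 0) for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# "cat" and "dog" land in a similar bucket because their contexts overlap.
print(cosine(cooc["cat"], cooc["dog"]))
print(cosine(cooc["cat"], cooc["rug"]))
```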

        • rtfm_modular@lemmy.world · 6 months ago

          All fair points, and I don’t deny predictive text generation is at the core of what’s happening. I think it’s fair to say that most people hear “predictive text” and picture the suggested words in a text message, when it’s more than that.
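
          Right, the phone-keyboard version is basically a lookup on the last word or two, whereas an LLM conditions on the entire preceding context at once. A toy bigram model (made-up training text, purely illustrative) shows how little the keyboard-style approach actually sees:

```python
from collections import Counter, defaultdict

# Made-up training text for a toy "phone keyboard" predictor.
text = "the cat sat on the mat and the cat ate the fish"
words = text.split()

# A bigram model: the suggestion depends only on the single previous word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    bigrams[prev][nxt] += 1

def suggest(prev_word):
    # "the cat sat on the" and "I dropped the" get identical suggestions,
    # because everything before the last word is invisible to the model.
    return [w for w, _ in bigrams[prev_word].most_common(3)]

print(suggest("the"))
```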

          I also don’t think Turing tests are particularly useful long term, because humans are so fallible. We too hallucinate all the time, holding convictions based on false memories. Getting an AI to show what seems like an emotional response, or uncertainty, or confusion in a Turing test is a great way to trick people.

          The algorithm is already a black box, as are the mechanics of our own intelligence. We have no idea where the ceiling is for this technology yet. This debate quickly turns into the ontological and epistemological discussion about what it means to be intelligent: if the AI’s predictive text generation is complex enough that you simply cannot tell the difference, then is there a meaningful difference? What if we are just insanely complex algorithms?

          I also don’t trust that what the market sees in AI products is indicative of the current limits. AGI isn’t here yet, but LLMs are a scary big step in that direction.

          Pragmatically, I will maintain that AI is a different form of intelligence, because I think that framing is a shortcut to better discussions about policy and how we want this tech in our lives. I would gladly welcome the news that tells me I’m wrong.