• eveninghere@beehaw.org
    8 months ago

    The title should have been something like “We all knew that AIs today can’t be used for legal advice, and we did a routine demonstration to show why”.

    It’s not like the chatbot looked for ways the user could break laws, which would be news.

    Anyone who uses ChatGPT daily knows this. This generation of chatbot can’t work within all the details of NYC-specific exceptions when they run counter to the laws elsewhere, which dominate the training data.

    • Kwakigra@beehaw.org
      8 months ago

      Unfortunately we can’t assume that everyone knows this, not even the executives actively replacing their workers with LLMs that are in no way prepared to replace any human worker.

      This chatbot is being presented by the City of New York on an official channel. It would be reasonable for someone wanting to know whether their plan is legal to ask this official-looking service for clarification, just as they might have asked a phone representative, who would have directed them to someone who actually knew. This LLM may have been intended to replace that representative, who is clearly still necessary. With apparent authority, the text generated by the City of New York’s LLM advised the requester to break the law.

      You and I know that it’s foolish to trust an LLM to provide correct information, but there is such a marketing push for these things that they can replace workers that many may actually act according to the marketing rather than test the technology for themselves, especially if they are a tech-illiterate manager of some kind responsible for cutting costs. I don’t think the person that put the LLM on the City of New York website thought it would be a threat to the rule of law, and now everyone with the ability to do this in every city needs to know that the tech isn’t there yet and it’s a very bad idea.