New study finds bots and fraud farms responsible for 73% of web traffic

  • thelastknowngod@lemm.ee · 11 months ago
    Wonder what the engineering solution to this could look like…

    Thinking something like a zero trust model being required for all web requests… Like the target address would need to receive a validated identity token from some third party but that token couldn’t contain identifying info about the requester. Likewise, the validating third party would need to verify the identity of the requester without having knowledge of the target address.
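    That split — the validator verifies the requester without learning the target, and the target verifies the token without learning the requester — is roughly what blind signatures give you (it's the idea behind Privacy Pass). A toy sketch with textbook-sized RSA numbers, purely to show the shape of the protocol; nothing here is production crypto, and all names are illustrative:

```python
import random

# Toy RSA parameters (textbook-sized, NOT secure). A real deployment
# would use large keys or an elliptic-curve construction.
P, Q = 61, 53
N = P * Q     # public modulus (validator publishes N, E)
E = 17        # public exponent
D = 2753      # private exponent (validator keeps secret)

def client_blind(token: int) -> tuple[int, int]:
    """Client blinds its token so the validator can't link it later."""
    while True:
        r = random.randrange(2, N)
        try:
            pow(r, -1, N)  # ensure r is invertible mod N
            break
        except ValueError:
            continue
    blinded = (token * pow(r, E, N)) % N
    return blinded, r

def validator_sign(blinded: int) -> int:
    """Validator signs blindly: it can verify the *person* out of band,
    but never sees the unblinded token or the target address."""
    return pow(blinded, D, N)

def client_unblind(blind_sig: int, r: int) -> int:
    """Client removes the blinding factor, leaving an ordinary signature."""
    return (blind_sig * pow(r, -1, N)) % N

def site_verify(token: int, sig: int) -> bool:
    """Target site checks the signature without learning who was validated."""
    return pow(sig, E, N) == token % N

# Flow: pick a token, get it blind-signed, present it to the site.
token = 42
blinded, r = client_blind(token)
sig = client_unblind(validator_sign(blinded), r)
print(site_verify(token, sig))  # True
```

    The unlinkability comes from `r`: the validator only ever sees `token * r^E`, which is uniformly random from its point of view, so it can't match a signing session to a token later presented to a site.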

    Then that raises more questions, like who we would all be comfortable trusting as a verifier, and what data we would use for that validation. The validation system and the data used to validate would need to be provided for free, too, to account for low-income people — so no subscription services or hardware MFA keys. Also, who counts as an identity to be validated?

    What do enforcement mechanisms look like if this does get built? Are the validators entirely passive or do they actively participate in the process? Like do we have rate limits imposed by the validation engine or do we just leave that to the target address/organization to impose themselves? What happens if someone is banned from a site? Does the site notify the validators to drop requests earlier in the lifetime of a request? Do individuals get a lower request quota than corporations? Would you have to form a company just to prototype a new tool/product?
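    Wherever the rate limit ends up living — validation engine or target site — the mechanism itself is well understood; a token bucket is the usual shape. A minimal sketch (class name, rates, and quotas are all made up for illustration):

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: `rate` tokens refill per second,
    up to `capacity`. A per-identity quota could be a bucket like this
    keyed by the validated identity token."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill proportionally to elapsed time, then spend one token.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# e.g. 5 requests/second sustained, bursts of up to 10
bucket = TokenBucket(rate=5, capacity=10)
```

    Questions like lower quotas for individuals than corporations would then just be different `rate`/`capacity` pairs per identity class — the hard part is the policy, not the mechanism.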

    If someone seriously wanted to work on this I’d jump on the opportunity to work with them. It sounds like a fascinating project.

    • Rodeo@lemmy.ca · 11 months ago
      It’s called Google’s “Web Environment Integrity API” and it’s a horrifically bad idea.