• bacon_pdp@lemmy.world · 3 days ago

    I agree that current technology is extremely unlikely to achieve general intelligence, but my point was that we should never try to achieve AGI; it is not worth the risk until after we solve the alignment problem.

    • chobeat@lemmy.ml (OP) · 2 days ago

      “Alignment problem” is what CEOs use as a distraction to deflect responsibility from their grift and frame the issue as a purely technical problem. That’s another phrase that makes you lose any credibility.