Interpersonal and socio-cultural alignment
In a comment on "Alignment", Sniffnoy wrote:
At least as far as I'm aware, the application of "alignment" to AI comes from Eliezer Yudkowsky or at least someone in his circles. He used to speak of "friendly AI" and "unfriendly AI". However, the meaning of these terms was fairly different from the plain meaning, which confused people. So at some point he switched to talking about "aligned" or "unaligned" AI.
This is certainly true — see e.g. Yudkowsky's 2016 essay "The AI alignment problem: why it is hard, and where to start".
However, an (almost?) exactly parallel usage was established in the sociological literature, more than half a century earlier, as discussed in Randall Stokes and John Hewitt, "Aligning actions", American Sociological Review (1976).