The concept of “alignment” in AI and ML refers to the goal of making machine learning models act in accordance with human values and preferences in order to avoid potential risks. However, the term has been criticized for its vague definition, and the human-computer interaction (HCI) research community has had little involvement in alignment work. While HCI can offer valuable insights on alignment, there is a disconnect between what HCI researchers can contribute and what the ML research community perceives as necessary.
