Elon Musk recently expressed concern over bias in emerging AI tools such as OpenAI's ChatGPT and Google's Bard, and proposed creating a "maximum truth-seeking AI" as an alternative. Musk has already established a new AI firm, X.AI, and has reportedly begun hiring AI staff away from OpenAI and Google parent company Alphabet.

AI bias also has implications for cybersecurity risk. In March, Musk and hundreds of other tech leaders, ethicists, and academics signed an open letter urging organizations involved in AI research and development to pause their work for at least six months, giving policymakers an opportunity to put guardrails around the use of the technology.

Suzanna Hicks, data strategist and scientist at the International Association of Privacy Professionals (IAPP), says the fundamental problem with bias in AI and machine learning is that it stems from human decision-making.