Leading voices in the field of artificial intelligence (AI) have raised concerns about the potential for AI to become so powerful that humans lose control of it. However, there is a more fundamental problem with AI systems: they do not help decision-makers understand causation or uncertainty, and they create incentives to collect huge amounts of data, which brings risks to security, privacy, legality and ethics. AI systems excel at interpolation, that is, predicting or filling in the gaps between known values, but they do not generate knowledge or insight. To make better decisions, decision-makers need to understand the causal effects of the actions they are considering and how much confidence can be placed in the predictions they are given.
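
To make the distinction concrete, here is a minimal sketch in Python; the data, the cubic-polynomial model and the bootstrap procedure are all illustrative assumptions rather than anything from a specific system. A simple curve fit predicts confidently between the points it was trained on, but its uncertainty balloons once it must extrapolate beyond them, and nothing in the fit says anything about what would happen if we intervened on the inputs.

```python
# Sketch: an interpolating model gives tight predictions inside its training
# range and poor, highly uncertain ones outside it, and it carries no causal
# information either way. All numbers below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: noisy observations of an unknown relationship,
# collected only on the interval [0, 5].
x_train = np.linspace(0.0, 5.0, 40)
y_train = np.sin(x_train) + rng.normal(scale=0.1, size=x_train.size)

def fit_and_predict(x, y, x_new, degree=3):
    """Fit a cubic polynomial (a stand-in for any black-box predictor)."""
    coefs = np.polyfit(x, y, deg=degree)
    return np.polyval(coefs, x_new)

# One query inside the training range (interpolation) and one outside it
# (extrapolation).
x_new = np.array([2.5, 8.0])

# Bootstrap: refit on resampled data to attach a rough spread to the predictions.
boot_preds = np.array([
    fit_and_predict(x_train[idx], y_train[idx], x_new)
    for idx in (rng.integers(0, x_train.size, x_train.size) for _ in range(500))
])

mean = boot_preds.mean(axis=0)
low, high = np.percentile(boot_preds, [2.5, 97.5], axis=0)

for xi, m, lo, hi in zip(x_new, mean, low, high):
    tag = "interpolation" if 0.0 <= xi <= 5.0 else "extrapolation"
    print(f"x={xi:.1f} ({tag}): prediction {m:+.2f}, 95% interval [{lo:+.2f}, {hi:+.2f}]")

# The interval is narrow at x = 2.5 but much wider at x = 8.0, and in neither
# case does the fitted model tell us what the outcome would be if we acted on x.
```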