Machine learning-based fraud decision engines are sometimes viewed as mysterious black boxes that offer little insight into why a decision was made on a login or a transaction. At Sift, we invest in clear decision explainability, which provides clarity for fraud analysts and accuracy for risk strategists. For analysts, explanations point them toward the signals they should examine and weigh during a review. For risk strategists, explanations make decision accuracy measurable and surface the details of fraud attacks, so they can account for the increase in declines needed to mitigate an attack.
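To make the idea concrete, here is a minimal, hypothetical sketch of what a decision explanation might look like: alongside a risk score, the engine surfaces the signals that contributed most to the decision, giving an analyst a starting point for review. The signal names, weights, and the `explain_decision` helper are illustrative assumptions, not Sift's actual model internals; in a real system the contributions might come from per-feature attributions such as SHAP values.

```python
def explain_decision(score, contributions, top_k=3):
    """Return the score plus the top_k signals ranked by absolute contribution."""
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return {
        "score": score,
        "top_signals": [
            {"signal": name, "contribution": round(weight, 3)}
            for name, weight in ranked[:top_k]
        ],
    }

# Hypothetical example: a login scored 87, with illustrative per-signal
# contributions (positive values push toward "risky").
decision = explain_decision(
    score=87,
    contributions={
        "new_device": 0.42,
        "ip_geo_mismatch": 0.31,
        "velocity_last_hour": 0.18,
        "email_age_days": -0.07,
    },
)
print(decision["top_signals"][0]["signal"])  # the highest-contributing signal
```

An explanation in this shape serves both audiences described above: an analyst sees which signals to verify first, and a risk strategist can aggregate top signals across declined events to characterize an attack.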
