IT organisations are applying artificial intelligence and machine learning (AI/ML) technology to network management, but network engineers remain skeptical because of the potential for mistakes. A survey of 250 IT professionals found that 96% have experienced false or mistaken insights and recommendations from AI/ML tools, and 20% cited cultural resistance and distrust from the network team as a roadblock to adoption. Explainable AI, an academic concept now being embraced by commercial AI providers to help build trust in these solutions, is a subdiscipline of AI research focused on making AI models more transparent and interpretable, so that users can understand why the system made a particular decision.
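To illustrate the idea of interpretability mentioned above, here is a minimal sketch, not drawn from the article, of how a network-management tool might surface the reasoning behind an automated decision. It assumes hypothetical feature names (packet loss, latency, CPU load) and synthetic data, and uses a shallow decision tree purely because its rules can be printed in human-readable form:

```python
# Hypothetical illustration of "explainable" network anomaly detection:
# a shallow decision tree whose learned rules can be shown to an engineer.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(42)

# Synthetic telemetry: [packet_loss_pct, latency_ms, cpu_load_pct]
feature_names = ["packet_loss_pct", "latency_ms", "cpu_load_pct"]
normal = rng.normal(loc=[0.5, 20.0, 40.0], scale=[0.3, 5.0, 10.0], size=(200, 3))
anomalous = rng.normal(loc=[4.0, 90.0, 85.0], scale=[1.0, 15.0, 8.0], size=(50, 3))

X = np.vstack([normal, anomalous])
y = np.array([0] * len(normal) + [1] * len(anomalous))  # 1 = anomaly

# A depth-limited tree keeps the learned rules short enough to read.
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Instead of only emitting a verdict, print the rules the model applies,
# so an engineer can see *why* a sample would be flagged.
print(export_text(model, feature_names=feature_names))

sample = [[3.8, 95.0, 88.0]]
print("Flagged as anomaly:", bool(model.predict(sample)[0]))
```

The point of the sketch is the `export_text` step: the tool's recommendation is accompanied by the decision path that produced it, which is the kind of transparency explainable AI aims to provide.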