October 5, 2022

Michael is the CEO and co-founder of Virtual. He is a data scientist and businessman with a background in finance and physics.

It’s no secret that AI has a trust issue. Part of the problem is the name itself: the word “artificial” has baggage. Just think of artificial colors, dyes, sweeteners, plants, and light. But our trust issues with AI go beyond naming. There has been a lot of hype about the technological magic of AI and how it will change everything for business, and very little attention to how people actually make decisions. Much of human trust is built on direct experience of how others think and how reliable they are, yet we’ve been asking people to accept AI technology on faith.

If we want organizations to make decisions based on predictive analytics that carry huge implications for their business, public safety, or someone’s health, AI has to deliver more than accuracy. We need to give people at the top and bottom of the org chart ongoing proof that its conclusions are trustworthy. To make business decision makers confident in AI, we need to make these technologies more understandable, provide clear information about how AI arrives at its conclusions, and offer user-friendly guidance on the steps to take.

Here are three basic elements of trustworthy AI, along with tips on how to bridge the gap between data science and business users.

1. The ability to show relationships within data with better AI-generated visuals

Most data problems are complex and involve many different dimensions. It’s not unusual for a data problem to have 50 to 100 dimensions, but most can be reduced to the five to 10 variables that really matter for what you’re trying to predict: in this case, engine failures or other issues that lead to unscheduled repairs.

Working with this many dimensions quickly becomes complex and time-consuming. Even when a problem boils down to only 10 variables, how do you visualize the relationships between those 10 variables and the outcome you care about?

If you wanted to visualize these variables in pairs using traditional 2D plots, you would have to display 45 different charts (there are 45 possible pairs among 10 variables), which makes it difficult for the human brain to piece together a meaningful picture. And to show the relationships between all pairs of 100 variables with traditional 2D plots, you would need 4,950 of them!
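The arithmetic behind those counts is simply the number of ways to pick two variables out of n. A quick check in Python:

```python
from math import comb

# Each pair of variables needs its own 2D scatter plot.
for n in (10, 100):
    print(f"{n} variables -> {comb(n, 2)} pairwise plots")
# 10 variables -> 45 pairwise plots
# 100 variables -> 4950 pairwise plots
```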

AI can find the key relationships in data and determine the best way to represent them visually. In 3D, nuances become apparent through the size, color, transparency, and clustering of points within your data set; in 2D, those nuances are lost. You don’t have to leave out some metrics because they don’t fit on the X/Y axes or would clutter the presentation (a sure way to oversimplify and miss something important or unexpected). Instead, you let AI find and display relationships in a way that helps you understand all of your key data, using advanced 3D graphs.
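As a minimal illustration of the idea (not any particular platform’s output), here is a matplotlib sketch that packs five variables into one 3D view using position, size, color, and transparency; the engine-health variables are made-up placeholders:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n = 200
# Hypothetical engine data: three spatial axes plus two extra visual channels.
temp, vibration, hours = rng.normal(size=(3, n))
risk = rng.random(n)        # encoded as color
load = rng.random(n) * 100  # encoded as marker size

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
sc = ax.scatter(temp, vibration, hours, c=risk, s=load,
                alpha=0.6, cmap="viridis")
ax.set_xlabel("temperature")
ax.set_ylabel("vibration")
ax.set_zlabel("run hours")
fig.colorbar(sc, label="failure risk")
plt.show()
```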

2. AI must explain what it does and why

Built-in AI algorithms should generate plain-English explanations or annotations that walk through, step by step, what the AI has found in the data. Interpretability is a big part of the confidence gap in advanced analytics. Explainable AI means you can clearly describe the AI model being used, the potential biases within it, and how the data behind a recommended course of action was formed. It identifies the features that matter to a decision and explains exactly why the algorithm considers those features important.
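One widely used way to surface which features drive a model, in the spirit described above, is permutation importance: shuffle one feature at a time and measure how much the model’s accuracy suffers. A minimal scikit-learn sketch on a stock dataset:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# A large accuracy drop when a feature is shuffled means the model
# genuinely relies on it -- exactly what a plain-English annotation
# should report back to the business user.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")
```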

When users have this transparency during data pre-processing, exploration, prediction, and prescription, they can fully understand what’s going on inside AI-based recommendations. They are told the likelihood of the desired outcome and how confident the model is in its prediction. They can tell if there is bad, missing, skewed, or outdated data.
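Both kinds of transparency are easy to surface in code. A hedged sketch of a basic data-quality audit (the file and column names are assumptions):

```python
import pandas as pd

df = pd.read_csv("maintenance_logs.csv")  # hypothetical input data

# Missing data: fraction of empty values per column, worst first.
print(df.isna().mean().sort_values(ascending=False).head())

# Outdated data: rows not updated in the past year.
df["last_updated"] = pd.to_datetime(df["last_updated"])
cutoff = pd.Timestamp.now() - pd.Timedelta(days=365)
print(f"{(df['last_updated'] < cutoff).sum()} stale rows of {len(df)}")
```

As for confidence, a fitted classifier’s `model.predict_proba(case)` returns the likelihood it assigns to each outcome, which is the number worth showing alongside every recommendation.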

It is important for everyone, from subject matter experts to data scientists to the legal department to the C-suite, to be able to understand your AI model, to ensure that it is relevant to the real world and applied as intended.

3. Facilitate decision-making and action

Back up AI-powered recommendations with clear explanations in simple language. It is not enough to make general recommendations; managers want specific evidence. Today’s AI solutions can automatically choose the graph that tells the story and pair it with a short narrative, in language that doesn’t require a math degree to understand.
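The narrative piece can be as simple as a template that turns a model finding into a sentence; a toy sketch (the finding itself is invented for illustration):

```python
def narrate(feature: str, change: str, pct: float, risk_pct: float) -> str:
    """Turn one model finding into a plain-language sentence."""
    return (f"Engines whose {feature} {change} by more than {pct:.0f}% "
            f"had a {risk_pct:.0f}% higher rate of unscheduled repairs.")

print(narrate("vibration level", "rose", 15, 40))
```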

AI should also allow end users to easily run different scenarios to help determine the right course of action and what to do next.
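A what-if scenario can be as simple as re-scoring a case with one input changed. A sketch assuming a fitted classifier with `predict_proba` (the feature name is a placeholder):

```python
import pandas as pd

def what_if(model, case: pd.DataFrame, feature: str, new_value) -> float:
    """Predicted probability of the bad outcome if `feature` were `new_value`."""
    scenario = case.copy()
    scenario[feature] = new_value
    return float(model.predict_proba(scenario)[0, 1])

# Example: "What happens to failure risk if we cut the duty cycle to 60%?"
# baseline = float(model.predict_proba(case)[0, 1])
# adjusted = what_if(model, case, "duty_cycle", 0.6)
```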

The AI platform should connect seamlessly with other systems and become part of users’ daily workflows. This is how a modern interactive AI platform makes data science actionable.
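One common integration pattern is to expose the model behind a small web service that other systems can call; a minimal Flask sketch (the model file and field names are assumptions):

```python
import pandas as pd
from flask import Flask, jsonify, request
from joblib import load

app = Flask(__name__)
model = load("failure_model.joblib")  # hypothetical pre-trained classifier

@app.post("/score")
def score():
    # Another system (say, a maintenance scheduler) posts one record as
    # JSON and gets back a risk estimate it can act on immediately.
    record = request.get_json()
    risk = model.predict_proba(pd.DataFrame([record]))[0, 1]
    return jsonify({"failure_risk": round(float(risk), 3)})

if __name__ == "__main__":
    app.run()
```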

There are plenty of wow factors in artificial intelligence. Platforms are now VR-capable, allowing remote users to stand inside a data set, touch it, and collaborate with others in a virtual room. They can handle myriad types of data without code: numeric, categorical (e.g., gender, residence status, education level), and unstructured (emails, PowerPoint decks, survey answers, social media posts, call center transcripts, etc.). The business impact can be huge.
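Under the hood, handling mixed data types usually means routing each column through the right transformer; a scikit-learn sketch with placeholder column names:

```python
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

preprocess = ColumnTransformer([
    ("numeric", StandardScaler(), ["age", "tenure_months"]),
    ("categorical", OneHotEncoder(handle_unknown="ignore"),
     ["gender", "residence_status", "education_level"]),
    ("text", TfidfVectorizer(), "survey_answer"),  # one free-text column
])

pipeline = make_pipeline(preprocess, LogisticRegression(max_iter=1000))
# pipeline.fit(df.drop(columns="target"), df["target"])
```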

But most AI projects fail. AI doesn’t really take off in an organization until it’s genuinely available to people outside the data science department. Research has found that giving people some control over algorithms, by letting them tweak them slightly, can build more confidence in AI predictions and increase the likelihood that users will act on them in the future.
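One lightweight way to hand users that control is an adjustable decision threshold instead of a hard-coded 0.5; a sketch under the same assumptions as the earlier examples:

```python
import numpy as np

def classify(model, X, threshold: float = 0.5) -> np.ndarray:
    """Flag cases whose predicted risk clears a user-chosen bar."""
    return (model.predict_proba(X)[:, 1] >= threshold).astype(int)

# A planner drowning in alerts can raise the bar themselves:
# flags = classify(model, fleet_data, threshold=0.8)
```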

So if you want AI to gain traction, give key stakeholders the opportunity to combine AI insights with their own areas of expertise. Give more people the chance to collaborate with other team members on complex data problems, with tools that make it possible. By letting everyone see what’s behind AI at every stage of its life cycle, you’ll end up with an AI model people can trust.

