How to manage risk as AI spreads across your organization

Sign up now to get your free virtual ticket to the Code-Less/No-Code Summit on November 9, where you'll hear from executives at ServiceNow, Credit Karma, Stitch Fix, Appian and more.

As AI spreads throughout the enterprise, organizations are struggling to balance its benefits against its risks. AI has been embedded in a wide range of tools, from IT infrastructure management to DevOps software to CRM suites, but most of those tools have been adopted without an AI risk-mitigation strategy in place.

Of course, it's important to remember that the potential benefits of AI often outweigh the risks, which is also why so many organizations skip risk assessment in the first place.

Many organizations have achieved serious breakthroughs that would not have been possible without AI. For example, AI is being deployed across the healthcare industry for everything from robot-assisted surgery to reducing medication dosage errors to streamlining administrative processes. GE Aviation relies on AI to build digital models that better predict when parts will fail, and, of course, there are many ways AI is being used to save money, such as letting AI take restaurant orders.

That’s the good side of AI.



Now, let's look at the bad and the ugly.

The bad and the ugly of AI: bias, safety issues and robot warfare

AI risks are as diverse as the many use cases their proponents tout, but three areas have proven particularly worrisome: bias, safety and warfare. Let's look at each of these issues separately.


Bias

While HR departments initially thought AI could be used to eliminate bias in hiring, the opposite has happened. Models built on data carrying implicit bias end up baking that bias against women and minorities into the algorithm.

For example, Amazon had to scrap an AI-powered automated resume screener because it filtered out female candidates. Similarly, when Microsoft used tweets to train a chatbot to interact with Twitter users, it created a monster. As one CBS News headline put it, "Microsoft shuts down AI chatbot after it turned into a Nazi."

These problems seem inevitable in hindsight, but if market leaders like Microsoft and Google can make these mistakes, so can your business. In Amazon's case, the AI was trained on resumes that came predominantly from male candidates. With the Microsoft chatbot, one positive thing you can say about the experiment is that at least they didn't use 8chan as training data. If you spend five minutes scrolling through the malice on Twitter, you'll understand what a terrible idea it would be to use that dataset to train anything.

Safety issues

Uber, Toyota, GM, Google and Tesla, among others, have been racing to make fleets of self-driving cars a reality. Unfortunately, the more researchers experiment with self-driving cars, the further the fully self-driving vision recedes.

In 2016, the first death attributed to a self-driving system occurred in Florida. According to the National Highway Traffic Safety Administration, a Tesla on Autopilot failed to stop for a tractor-trailer making a left turn at an intersection. The Tesla struck the big rig, killing the driver.

This is just one entry on a long list of mistakes made by autonomous vehicles. Uber's self-driving cars reportedly failed to account for jaywalking pedestrians. A Google-powered Lexus sideswiped a city bus in Silicon Valley, and in April a partially autonomous TuSimple semi truck veered into a concrete center divider on I-10 near Tucson, AZ, because the driver hadn't properly restarted the autonomous driving system, causing the truck to execute outdated commands.

In fact, federal regulators report that self-driving cars were linked to nearly 400 crashes on US roads in less than a year (July 1, 2021 through May 15, 2022). Six people died in those 392 crashes and five were seriously injured.

Fog of War

If self-driving car crashes aren't enough of a safety concern, consider autonomous weapons.

AI-powered autonomous drones are now making life-or-death decisions on the battlefield, and the risks associated with possible mistakes are compounding and controversial. According to a United Nations report, in 2020 an autonomous Turkish-made quadcopter drone attacked retreating Libyan fighters without any human intervention.

Militaries around the world are weighing a wide range of applications for autonomous vehicles, from combat to naval transport to flying in formation with piloted fighter aircraft. Even when not actively hunting enemies, autonomous military vehicles can still make some of the same deadly mistakes as self-driving cars.

Seven steps to mitigating AI risk across the enterprise

For a typical business, your risks won't be as frightening as killer drones, but even a simple mistake that causes a product failure or leads to a lawsuit can make you miserable.

To better mitigate risk as AI spreads across your organization, consider these seven steps:

Start with early adopters

First, look at the places where AI has already gained a foothold. Find out what's working and build on that foundation. From there, you can develop a basic implementation pattern that different departments can follow. Keep in mind, however, that any AI adoption plan and implementation patterns you develop will need support across the organization to work effectively.

Locate the appropriate beachhead

Most organizations will want to start small with their AI strategy, piloting the plan in one or two departments. A good place to start is where risk is already a primary concern, such as Governance, Risk and Compliance (GRC) and Regulatory Change Management (RCM).

GRC is essential to understanding the many threats to your business in a hypercompetitive market, and RCM is essential to keeping your organization in compliance with the laws of the many jurisdictions you operate in. Each practice also relies on manual, labor-intensive, ever-changing processes.

With GRC, AI can handle complex tasks, such as kick-starting the process of defining ambiguous concepts like "risk culture," or collecting publicly available data from competitors to help guide new product development in a way that doesn't run afoul of the law.

In RCM, delegating tasks like managing regulatory change and monitoring daily enforcement actions can give your compliance experts back almost a third of their workday to perform higher-value tasks.

Mapping processes with experts

AI can only track processes that you can map in detail. If AI is going to impact a particular role, make sure those stakeholders are involved in the planning stages. Too often, developers plow ahead without enough input from the end users who will ultimately accept or reject these tools.

Focus on processes and workflows that bog down your professionals

Look for processes that are repetitive, manual, error-prone, and can be tedious for the humans performing them. Logistics, sales and marketing, and R&D are all areas of repetitive work that can be delegated to AI. AI can improve business outcomes in these areas by improving efficiency and reducing errors.

Thoroughly check your dataset

Cambridge University researchers recently studied 400 AI models related to COVID-19 and found that every one of them had fatal flaws. The errors fall into two general categories: models trained on datasets too small to be of value, and models with limited information disclosure, which leads to various biases.

Small datasets are not the only data problem that can corrupt a model. Public datasets may come from invalid sources. For example, Zillow introduced a new feature last year that used its Zestimate AI to make cash offers on homes in a fraction of the time it normally takes. The Zestimate algorithm ended up generating thousands of offers based on flawed Home Mortgage Disclosure Act data, which ultimately prompted Zillow to offer a $1 million prize to improve the model.
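Before trusting a dataset like the ones above, it helps to run even a basic automated audit. The sketch below is a hypothetical illustration, not an industry standard: it flags a labeled dataset that is too small or too imbalanced to train a trustworthy model, and the threshold values are illustrative assumptions.

```python
# Hypothetical pre-training audit: flag datasets that are too small
# or too imbalanced to trust. Thresholds are illustrative assumptions.
from collections import Counter

MIN_SAMPLES = 1000  # below this, results are unlikely to generalize
MAX_SKEW = 0.8      # no single class should dominate the labels

def audit_dataset(labels):
    """Return a list of human-readable warnings for a labeled dataset."""
    warnings = []
    if len(labels) < MIN_SAMPLES:
        warnings.append(f"too small: {len(labels)} samples < {MIN_SAMPLES}")
    counts = Counter(labels)
    top_class, top_count = counts.most_common(1)[0]
    if top_count / len(labels) > MAX_SKEW:
        share = top_count / len(labels)
        warnings.append(f"imbalanced: '{top_class}' is {share:.0%} of labels")
    return warnings

# A skewed toy dataset: 95 "approved" vs. 5 "rejected" decisions.
labels = ["approved"] * 95 + ["rejected"] * 5
for warning in audit_dataset(labels):
    print("WARNING:", warning)
```

A real audit would go much further (provenance checks, feature drift, demographic coverage), but even a check this simple would have caught the "too small to be of value" category the Cambridge researchers identified.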

Choose the right AI model

As AI models evolve, only a small group of them is fully autonomous. In most cases, however, AI models benefit greatly from active human (or better yet, expert) input. "Supervised AI" relies on humans to guide machine learning rather than letting the algorithms figure everything out on their own.

For most knowledge work, supervised AI will be required to meet your goals. But for complex, specialized jobs, supervised AI alone still won't get you as far as most organizations would like. To level up and unlock the true value of your data, AI needs not just supervision, but expert input.

The expert-in-the-loop (EITL) model can be used to tackle large problems or those that require expert human judgment. For example, EITL AI has been used to discover new polymers, improve aircraft safety, and even help law enforcement plan for how to deal with automated vehicles.
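One common way to wire a human or expert into the loop is confidence-based routing: the model acts on its own only when it is confident, and escalates everything else to a person. The sketch below is an illustration of that pattern under assumed names; the `route_prediction` function, the labels, and the 0.90 threshold are all hypothetical, not part of any particular product.

```python
# Illustrative expert-in-the-loop routing: model outputs below a
# confidence threshold are queued for human review instead of being
# acted on automatically. Threshold and labels are hypothetical.
CONFIDENCE_THRESHOLD = 0.90

def route_prediction(label, confidence):
    """Decide whether to auto-apply a model's output or escalate it."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return ("auto", label)       # safe to act without review
    return ("expert_review", label)  # a human expert makes the call

# Example: triaging compliance classifications from a model.
decisions = [("compliant", 0.97), ("violation", 0.62), ("compliant", 0.91)]
for label, confidence in decisions:
    print(route_prediction(label, confidence))
```

The design choice that matters is the threshold: set it too low and risky calls slip through automatically; set it too high and your experts drown in review work, erasing the efficiency gains.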

Start small but dream big

Be sure to test AI-driven processes thoroughly, and keep testing them. Once you've worked out the problems, you will have a plan to scale AI across your organization based on patterns you have tested and proven in specific areas, such as GRC and RCM.

Kayvan Alikhani is a co-founder and product leader. Kayvan previously led the Identity Strategy team at RSA, and was co-founder and CEO of PassBan (acquired by RSA).


Welcome to the VentureBeat community!

DataDecisionMakers is a place where professionals, including technical people who work with data, can share data-related insights and innovations.

If you want to read about cutting-edge ideas and updates, best practices, and the future of data and data technology, join us at DataDecisionMakers.

You might even consider contributing an article of your own!

Read more from DataDecisionMakers


