The California Privacy Rights Act (CPRA), the Virginia Consumer Data Protection Act (VCDPA), Canada's Consumer Privacy Protection Act (CPPA) and many other international regulations all mark significant improvements in the data privacy space over the past few years. Under these laws, businesses can face serious consequences for mishandling consumer data.
For example, in addition to the penalties specified for a data breach, laws like the CCPA allow consumers to hold businesses directly responsible for data breaches under a private right of action.
While these regulations certainly provide remedies for the misuse of consumer data, they are not enough — and may never be — to protect disadvantaged communities. Roughly three-fourths of online households fear for their digital security and privacy, and many of those concerns are concentrated among underserved populations.
Disadvantaged groups are often negatively impacted by technology and can be in great danger when automated decision-making tools such as artificial intelligence (AI) and machine learning (ML) encode biases against them or when their data is misused. AI technologies have even been shown to perpetuate discrimination in tenant screening, lending, hiring and more.
Demographic bias in AI and ML tools is quite common, as design review processes fundamentally lack the diversity of people needed to ensure prototypes are inclusive for everyone. Tech companies must rethink their current approaches to using AI and ML to ensure they don't negatively impact underserved communities. This article will explore why diversity must play an important role in data privacy and how companies can create more ethical and inclusive technologies.
Threats faced by vulnerable groups
Underserved communities are exposed to significant risks when sharing their data online, and unfortunately, data privacy laws cannot protect them from overt discrimination. Even if current regulations were as inclusive as possible, there are still ways these populations could be harmed. For example, data brokers can collect and sell an individual's geographic location to groups targeting protesters. Information about an individual's participation in a rally or demonstration can be used in a number of intrusive, unethical and potentially illegal ways.
While this scenario is hypothetical, there have been numerous real-world instances of similar situations. A 2020 research report detailed the data security and privacy risks LGBTQ people face on dating apps. The reported threats included blatant state surveillance, monitoring through facial recognition, and app data shared with advertisers and data brokers. Minorities have always been vulnerable to such risks, but proactive companies can help reduce them.
Lack of diversity in automated tools
While much progress has been made in diversifying the tech industry over the past few years, a fundamental shift is needed to minimize lingering bias in AI and ML algorithms. In fact, 66.1% of data scientists are reported to be white and almost 80% are male, highlighting a severe lack of diversity among AI teams. As a result, AI algorithms are trained based on the views and knowledge of the teams that build them.
AI algorithms that are not trained to recognize certain groups of people can do significant damage. For example, the American Civil Liberties Union (ACLU) released research in 2018 showing that Amazon's facial recognition software, "Rekognition," falsely matched 28 members of the United States Congress with mugshot photos. Nearly 40% of those false matches were people of color, even though they make up only about 20% of Congress. To prevent future instances of AI bias, businesses need to rework their design review processes to ensure they are inclusive for everyone.
Comprehensive design review process
There may be no single solution for reducing bias, but there are ways organizations can improve their design review processes. Here are four simple ways tech organizations can reduce bias in their products.
1. Ask challenging questions
Developing a list of questions to ask and answer during the design review process is one of the most effective methods for creating a more inclusive prototype. These questions can help AI teams identify problems they haven't thought of before.
Essential questions include whether the dataset being used contains enough data to prevent specific types of bias, and whether the team is running tests to determine the quality of that data. Asking and answering difficult questions allows data scientists to improve their prototypes by determining whether they need to review more data or bring a third-party expert into the design review process.
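One way to make the "does the dataset have enough data to prevent specific types of bias" question concrete is to check how each demographic group is represented before training. The sketch below is a minimal, hypothetical illustration — the function name, the `group` field and the 10% threshold are assumptions for the example, not a standard from any particular fairness toolkit.

```python
from collections import Counter

def check_group_balance(records, group_key, min_share=0.10):
    """Return the share of each group that falls below min_share --
    a crude proxy for representation gaps in a training dataset."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()
            if n / total < min_share}

# Toy dataset: 90 samples from group "A", only 10 from group "B"
data = [{"group": "A"}] * 90 + [{"group": "B"}] * 10
print(check_group_balance(data, "group", min_share=0.15))
# Group "B" is flagged because it makes up only 10% of the data
```

A check like this only surfaces missing representation; it says nothing about label quality or proxy variables, which still require human review.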
2. Hire a privacy expert
Like other compliance specialists, privacy experts were initially seen as innovation bottlenecks. However, as more and more data regulations have been introduced in recent years, privacy officers have become a core component of the C-suite.
In-house privacy experts are needed to serve as advisors during the design review process. They can provide unbiased opinions on prototypes, help raise tough questions that data scientists haven't thought of before, and help create products that are inclusive, safe and secure.
3. Take advantage of diverse voices
Organizations can bring in diverse voices and perspectives by expanding their recruitment efforts to include candidates from a wide range of demographics and backgrounds. These efforts should extend to the C-suite and the board of directors, as those leaders can act as representatives for employees and customers who may not otherwise have a say.
Increasing diversity and inclusivity in the workforce will create more room for innovation and creativity. Research shows that racially diverse companies are 35% more likely to outperform their competitors, while organizations with gender-diverse executive teams earn 21% higher profits than their competitors.
4. Implement diversity, equity and inclusion (DE&I) training
At the core of any diverse and inclusive organization is a DE&I program. Conducting employee education seminars on privacy, AI bias and ethics can help staff understand why they should care about DE&I initiatives. Currently, only 32% of businesses are implementing DE&I training programs for employees. It is clear that DE&I initiatives need to become a higher priority for real change to take effect within an organization, as well as in its products.
The future of ethical AI tools
While some organizations are on their way to creating safer and more secure tools, others still need to make major improvements to create products that are as unbiased as possible. By incorporating the above recommendations into their design review processes, they will not only be a few steps closer to creating inclusive and ethical products, but can also boost their innovation and digital transformation efforts. Technology can bring great benefits to society, but every business will have to work hard to make this a reality.
Veronica Torres is worldwide legal and privacy advisor at Jumio.
Welcome to the VentureBeat community!
DataDecisionMakers is a place where professionals, including technical people who work with data, can share data-related insights and innovations.
If you want to read about cutting-edge ideas and updates, best practices, and the future of data and data technology, join us at DataDecisionMakers.
You might even consider contributing an article of your own!