Is technology keeping up with diversity, specifically in AI?



With advancements in artificial intelligence (AI), the conversation around diversity has seemingly quieted down, even as AI itself has accelerated. Many believe that AI can solve all kinds of problems, including hiring diversity. Unfortunately, this is a dangerous assumption. The fact is, AI is only as unbiased as the data it is trained on. If the data contains biases, AI will replicate them, potentially harming people from marginalized communities. This data "input" doesn't come only from the person programming the AI; AI also learns from past sources like the internet and from algorithms built by observing the behavior of everyone in a system. Historically, AI was built from data collected at times when diversity was absent or underrepresented. That's why the human touch must be heavily integrated into any process where AI is present. For example, facial recognition systems have been shown to have higher error rates for people with darker skin tones, a clear sign that we need both diverse data sets and diverse development teams.
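To make the "biased input leads to biased output" idea concrete, here is a deliberately tiny, hypothetical sketch (not any real hiring system): a "model" that learns nothing more than each group's historical hire rate will faithfully reproduce whatever bias sits in its training data. The group names and numbers are invented for illustration.

```python
# Hypothetical historical data: (group, hired) pairs skewed against group B.
history = [("A", 1), ("A", 1), ("A", 1), ("A", 0),
           ("B", 0), ("B", 0), ("B", 1), ("B", 0)]

def train(data):
    """Learn each group's past hire rate -- the biased 'input'."""
    rates = {}
    for group in {g for g, _ in data}:
        outcomes = [h for g, h in data if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

def predict(rates, group):
    """Recommend a candidate only if their group's past hire rate is high."""
    return rates[group] >= 0.5

model = train(history)
print(model)                # {'A': 0.75, 'B': 0.25}
print(predict(model, "A"))  # True  -- biased input, biased output
print(predict(model, "B"))  # False -- the past bias is now automated
```

No step of this code is malicious, and nobody typed "prefer group A" anywhere; the unfairness arrives entirely through the data. Real systems are far more complex, but the failure mode is the same shape.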


I was amazed to see how quickly AI can learn from others, and not in a good way! John Oliver has a great example of AI being only as unbiased as the people who created it. He argues that "bias input leads to bias output," explaining that in theory AI should eliminate bias, but it is only as smart as those who made it. You can see the clip here.

It's valuable for organizations to prioritize diversity in their AI development teams, ensuring diverse perspectives and experiences throughout the development process. But AI shouldn't be treated as the solution to diversity, inclusion, and equity. In hiring, AI tools can automate certain processes, such as resume screening and candidate communication, saving hiring managers time and effort. But when it's hard even for humans to screen resumes without missing a "diamond in the rough," how can we expect AI not to do the same? When it comes to ensuring diversity in hiring, human oversight is crucial. Again, AI tools are only as unbiased as the data they are trained on, and if that data is biased, AI will perpetuate it. That's why it is essential to have a diverse team of human recruiters who can provide oversight and keep the hiring process fair and inclusive.

As AI continues to evolve and become more integrated into our daily lives, it is essential that we keep having open and honest conversations about the importance of diversity. We must recognize that diversity is not just a buzzword but a necessary component of equitable and inclusive technology.
