Artificial Intelligence (AI), one of the most significant technologies humans have developed to date, has the power to accelerate progress in healthcare, education, media, and job training, as well as physical and mental health. However, AI cannot ultimately improve any of these areas if the data supporting it encodes biases of race, gender, and ethnicity. These technological advancements will disproportionately harm marginalized populations if the algorithms governing the age of technology are untrustworthy. To overcome these data problems, there must be continual effort to review and rectify incoming information before it is used as a foundation for societal improvement.
AI refers to the ability of systems to emulate human intellect and behavior, and its reach is rapidly expanding. However, many professionals are torn over the benefits of AI's extensive use in today's world. Advocates maintain that these systems will never surpass human capabilities, while opponents fear that AI will reduce job availability, concentrate scientific power, or widen the disparity between developed and developing countries. Both positions, however, become less relevant if the results of AI algorithms are simply inaccurate.
There are various types of algorithmic analysis. Machine learning (ML) is a technique that allows machines to learn patterns from data without being explicitly programmed for each task. ML models improve over time because the correlations they capture are strengthened each time they are retrained on new data. ML algorithms fall into two broad categories: supervised and unsupervised. Supervised ML learns to recognize patterns from labeled data, while unsupervised ML works with raw, unlabeled data in which the patterns are not known in advance.
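The distinction can be made concrete with a minimal sketch. The toy one-dimensional data and the helper names below are invented for illustration; no ML library is assumed. The supervised learner is told the labels up front, while the unsupervised learner (a tiny k-means) must discover the two groups on its own.

```python
# Minimal sketch contrasting supervised and unsupervised ML on toy 1-D data.
# Function names and data are hypothetical, chosen only for illustration.

def supervised_fit(points, labels):
    """Supervised: learn the mean of each labeled class from labeled data."""
    groups = {}
    for x, y in zip(points, labels):
        groups.setdefault(y, []).append(x)
    return {label: sum(xs) / len(xs) for label, xs in groups.items()}

def predict(centers, x):
    """Assign x to the class whose learned center is nearest."""
    return min(centers, key=lambda label: abs(centers[label] - x))

def unsupervised_fit(points, k=2, iters=10):
    """Unsupervised: k-means on unlabeled data discovers the groups itself."""
    centers = points[:k]  # naive initialization from the first k points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in points:
            nearest = min(range(k), key=lambda i: abs(centers[i] - x))
            clusters[nearest].append(x)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

# Supervised: labels are provided with the data.
model = supervised_fit([1.0, 1.2, 5.0, 5.3], ["low", "low", "high", "high"])
print(predict(model, 4.8))  # nearest learned center is "high"

# Unsupervised: the same data with no labels; two clusters emerge.
print(sorted(unsupervised_fit([1.0, 1.2, 5.0, 5.3])))
```

Note that the "refresh" the paragraph describes corresponds to re-running the fit on newly collected data, which is also precisely where biased inputs re-enter the model.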
Economist and author Richard R. Nelson asks in his 1977 book, “The Moon and the Ghetto,” why a society capable of solving problems as complex as the moon landing cannot solve the social problems of poor and marginalized communities; the same dichotomy now confronts AI and its ML algorithms. Racism's impact on job opportunities, quality of healthcare received, education, and access to affordable housing (especially through redlining policies) is of great concern, and if the next great advancement cannot eliminate it, then the world's state of discrimination will be prolonged. One area of particular concern is the criminal justice system.
Facial recognition technology (FRT), one prominent application of AI, uses digital images to identify human faces. Its use in identifying suspects for the criminal justice system, and occasionally in human trafficking cases, has been controversial. The accuracy of FRT is contingent on the quality of the images it is given, and although FRT has been deployed across the world, its results fluctuate. A system may produce false negatives, allowing a suspect to escape, or false positives, such as identifying an innocent person of color as a criminal.
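The concern is not just that errors occur, but that error rates can differ across demographic groups. The sketch below shows how an audit might tabulate per-group false-positive and false-negative rates; the records, group names, and outcomes are entirely invented illustrative data, not results from any real FRT system.

```python
# Hypothetical audit sketch: per-group error rates for a face-matching
# system. All records below are invented for illustration.

def error_rates(records):
    """records: list of (group, actual_match, predicted_match) tuples."""
    stats = {}
    for group, actual, predicted in records:
        s = stats.setdefault(group, {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
        if actual:
            s["pos"] += 1
            if not predicted:
                s["fn"] += 1  # false negative: a true match was missed
        else:
            s["neg"] += 1
            if predicted:
                s["fp"] += 1  # false positive: an innocent person flagged
    return {g: {"fpr": s["fp"] / s["neg"], "fnr": s["fn"] / s["pos"]}
            for g, s in stats.items()}

# Invented data: group B suffers higher error rates than group A.
records = [
    ("A", True, True), ("A", True, True), ("A", False, False), ("A", False, False),
    ("B", True, True), ("B", True, False), ("B", False, True), ("B", False, False),
]
print(error_rates(records))
```

An aggregate accuracy figure would hide exactly this disparity, which is why per-group breakdowns are the standard way such systems are audited.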
On a similar note, any government using AI in predictive policing, which takes data about the frequency, location, and type of crime to anticipate which neighborhoods need more active surveillance, must be cautious: if minority groups are stopped and questioned without legitimate cause, those stops become records that feed biased data back into the model, which in turn directs still more surveillance at the same communities. Before these AI models can replace human judgment, they must be independently reappraised for the safety and equity of individuals.
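This feedback loop can be shown with a toy simulation. Everything here is invented for illustration: two neighborhoods with the same true crime rate, where patrols are sent wherever recorded crime is highest and only patrolled areas generate new records.

```python
# Toy simulation of the feedback-loop risk in predictive policing.
# Both neighborhoods have the SAME true crime rate; only the initial
# recorded data differ slightly. All numbers are invented.

TRUE_RATE = 10    # actual incidents per period, identical in both areas
DISCOVERY = 0.1   # fraction of incidents recorded per patrol unit
PATROLS = 10      # patrol units, all sent to the predicted "hot spot"

recorded = {"north": 5.0, "south": 6.0}  # slight initial data imbalance
for period in range(20):
    hot_spot = max(recorded, key=recorded.get)   # model: patrol the top area
    # Only the patrolled area produces new crime records.
    recorded[hot_spot] += TRUE_RATE * DISCOVERY * PATROLS

print(recorded)  # {'north': 5.0, 'south': 206.0}
```

A one-incident head start in the data becomes a runaway gap, even though the underlying behavior of the two neighborhoods is identical; this is the biased data collection the paragraph warns about.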
Much of the world's data is tainted by bias. The dangers associated with such bias must be continually surveyed if they are ever to be eliminated entirely, and people must remain on constant lookout for discrimination that infiltrates data collections. Although AI ethicists have introduced several solutions, they still emphasize using AI only as a supplemental tool until these programs no longer prioritize certain demographics over others.
Image courtesy of Wikimedia Commons