Evolving to a more equitable AI


The pandemic that erupted around the world last year has cast a cold, hard light on many things: varying levels of preparedness to respond; collective attitudes toward health, technology, and science; and vast financial and social inequalities. As the world continues to navigate the covid-19 health crisis, and some places begin a gradual return to work, school, travel, and recreation, it is critical to resolve the competing priorities of protecting public health equitably while ensuring privacy.

The extended crisis has led to rapid changes in work and social behavior, as well as an increased reliance on technology. It is now more critical than ever that companies, governments, and society exercise care in how they apply technology and handle personal information. The expanded and rapid adoption of artificial intelligence (AI) demonstrates how adaptive technologies can intersect with people and social institutions in potentially risky or inequitable ways.

“Our relationship with technology is in a state of critical post-pandemic change,” said Yoav Schlesinger, head of AI ethics at Salesforce. “There is a negotiation process under way among people, business, government, and technology; how data flows among all of those parties will be renegotiated in a new social data contract.”

AI in action

As the covid-19 crisis emerged in early 2020, scientists called on AI to support a variety of medical uses, such as identifying potential drug candidates for vaccines or treatments, helping detect potential covid-19 symptoms, and allocating scarce resources like intensive-care-unit beds and ventilators. Specifically, they leaned on the analytical power of AI-augmented systems to speed the development of vaccines and treatments.

While advanced data analytics tools can help extract insights from massive amounts of data, the outcome has not always been more equitable. In fact, AI-driven tools and the data sets they work with can perpetuate inherent bias or systemic inequity. Throughout the pandemic, agencies such as the Centers for Disease Control and Prevention and the World Health Organization have gathered tremendous amounts of data, but the data doesn’t necessarily accurately represent populations that have been disproportionately and negatively affected, including black, brown, and indigenous people; nor do some of the diagnostic advances they have made, according to Schlesinger.

For example, biometric devices like the Fitbit or Apple Watch show promise in their ability to detect potential covid-19 symptoms, such as changes in temperature or oxygen saturation. Yet those analyses rely on frequently flawed or limited data sets and can introduce bias or inequity that disproportionately affects vulnerable people and communities.

“There is some research showing that green LED light has a more difficult time reading pulse and oxygen saturation on darker skin tones,” says Schlesinger, referring to the semiconductor light source. “So it might not do an equally good job of catching covid symptoms for those with black and brown skin.”
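One way to surface this kind of disparity is to evaluate a symptom detector’s error rates separately for each group rather than in aggregate, where the gap can disappear into an average. The Python sketch below is a minimal, hypothetical illustration of that check, not any device maker’s actual method; the field names and sample records are invented for the example.

```python
# Hypothetical sketch: measure a symptom detector's false-negative rate per
# subgroup. Aggregate accuracy can hide the disparity described above; a
# per-group breakdown makes it visible. Field names are assumptions.

def false_negative_rates(examples):
    """examples: dicts with 'skin_tone', 'has_covid' (truth), 'flagged' (model output)."""
    rates = {}
    for tone in sorted({e["skin_tone"] for e in examples}):
        positives = [e for e in examples if e["skin_tone"] == tone and e["has_covid"]]
        missed = sum(1 for e in positives if not e["flagged"])
        rates[tone] = missed / len(positives) if positives else None
    return rates

if __name__ == "__main__":
    sample = [
        {"skin_tone": "lighter", "has_covid": True, "flagged": True},
        {"skin_tone": "lighter", "has_covid": True, "flagged": True},
        {"skin_tone": "darker", "has_covid": True, "flagged": False},
        {"skin_tone": "darker", "has_covid": True, "flagged": True},
    ]
    # Unequal rates across groups signal exactly the kind of bias at issue.
    print(false_negative_rates(sample))  # {'darker': 0.5, 'lighter': 0.0}
```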

AI has shown great effectiveness in helping analyze enormous data sets. A team at the Viterbi School of Engineering at the University of Southern California developed an AI framework to help analyze covid-19 vaccine candidates. After identifying 26 potential candidates, it narrowed the field to the 11 most likely to succeed. The data source for the analysis is the Immune Epitope Database, which includes more than 600,000 contagion determinants from more than 3,600 species.

Other researchers from Viterbi applied AI to decipher cultural codes more accurately and better understand the social norms that guide ethnic and racial group behavior. That can have a significant impact on how a population fares during a crisis such as a pandemic, owing to religious ceremonies, traditions, and other practices that can accelerate the spread of the virus.

Lead scientists Kristina Lerman and Fred Morstatter based their research on Moral Foundations Theory, which outlines the “intuitive ethics” that shape a culture’s moral values, such as caring, fairness, loyalty, and authority, helping to inform individual and group behavior.

“Our goal is to create a framework that allows us to understand the dynamics that drive the decision-making process of a culture at a deeper level,” Morstatter said in a report released by USC. “And in doing so, we generate more culturally informed forecasts.”

The research also examines how to deploy AI in an ethical and fair manner. “Most people, but not all, are interested in making the world a better place,” Schlesinger said. “Now we need to go to the next level: what goals do we want to achieve, and what outcomes do we want to see? How will we measure success, and what will it look like?”

Allaying ethical concerns

It is critical to question the assumptions baked into collected data and AI processes, according to Schlesinger. “We’re talking about achieving fairness through awareness. At each step of the process, you’re making value judgments or assumptions that will weight your outcomes in a particular direction,” he said. “That’s the fundamental challenge of building ethical AI: looking at all the places where humans are biased.”

Part of this challenge is performing a critical examination of the data sets that inform AI systems. It is essential to understand the data sources and the composition of the data, and to answer questions such as: How is the data made up? Does it include a diverse array of stakeholders? What is the best way to deploy that data in a model to minimize bias and maximize fairness?
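As a minimal sketch of what such an examination might look like in practice, the hypothetical Python audit below reports each group’s share of a data set and its outcome rate, so skewed composition surfaces before a model is trained. The column names and sample records are assumptions made for illustration.

```python
# Hypothetical data-set audit: before training a model, compare how well each
# demographic group is represented and how a key outcome is distributed
# across groups. The "group"/"outcome" keys and records are illustrative.
from collections import Counter

def audit_representation(records, group_key="group", outcome_key="outcome"):
    """Report each group's share of the data and its positive-outcome rate."""
    total = len(records)
    counts = Counter(r[group_key] for r in records)
    for group, n in counts.items():
        positives = sum(1 for r in records if r[group_key] == group and r[outcome_key])
        print(f"{group}: {n / total:.1%} of records, "
              f"{positives / n:.1%} positive-outcome rate")

if __name__ == "__main__":
    sample = [
        {"group": "A", "outcome": True},
        {"group": "A", "outcome": False},
        {"group": "A", "outcome": True},
        {"group": "B", "outcome": False},
    ]
    audit_representation(sample)
    # A: 75.0% of records, 66.7% positive-outcome rate
    # B: 25.0% of records, 0.0% positive-outcome rate
```

Large gaps between groups on either measure are a cue to revisit how the data was sourced before deploying it in a model.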

As people return to work, employers may now be using sensing technologies with AI built in, including thermal cameras to detect high temperatures; audio sensors to detect coughing or raised voices, which contribute to the spread of respiratory droplets; and video streams to monitor hand-washing procedures, physical-distancing regulations, and mask requirements.

Such monitoring and analysis systems not only present technical accuracy challenges but pose significant risks to human rights, privacy, security, and trust. The push for increased surveillance has been a troubling side effect of the pandemic. Government agencies have used surveillance-camera footage, smartphone location data, credit card purchase records, and even passive temperature scans of crowded public areas such as airports to help trace the movements of people who may have contracted or been exposed to covid-19 and to establish virus transmission chains.

“The first question that needs to be answered is not only can we do this, but should we?” said Schlesinger. “Scanning individuals for their biometric data without their consent raises ethical concerns, even if it is positioned as a benefit for the greater good. We need a robust conversation as a society about whether there is a compelling reason to implement these technologies in the first place.”

What the future looks like

As society returns to something approaching normal, it’s time to fundamentally re-examine the relationship with data and establish new norms for collecting data, as well as for the appropriate use, and potential misuse, of data. When building and deploying AI, technologists will continue to make necessary assumptions about data and processes, but the underpinnings of that data should be questioned. Is the data legitimately sourced? Who assembled it? What assumptions is it based on? Is it accurately presented? How can the privacy of citizens and consumers be preserved?

As AI is deployed more widely, it’s important to consider how to also build trust. Using AI to augment human decision-making, rather than entirely replace human input, is one approach.
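One common way to structure that approach is to let a model act on its own only when its confidence is high and route everything else to a person. The Python sketch below is a hedged illustration of that human-in-the-loop pattern; the function names and the 0.9 threshold are assumptions for the example, not a prescribed design.

```python
# Hypothetical human-in-the-loop pattern: the model assists, and a person
# decides whenever the model is unsure. Names and threshold are illustrative.
from typing import Callable

def decide(score: float,
           auto_action: Callable[[], str],
           human_review: Callable[[], str],
           threshold: float = 0.9) -> str:
    """Act automatically only when model confidence clears the threshold;
    otherwise defer the decision to a human reviewer."""
    if score >= threshold:
        return auto_action()
    return human_review()

# Example: a low-confidence prediction (0.62) is escalated to a person.
result = decide(0.62,
                auto_action=lambda: "approved automatically",
                human_review=lambda: "queued for human review")
print(result)  # -> queued for human review
```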

“There are a lot of questions about the role AI should play in society, its relationship to people, and what are appropriate tasks for humans and what are appropriate tasks for an AI,” said Schlesinger. “There are certain areas where AI’s capabilities, and its ability to augment human capabilities, will accelerate our trust and confidence. In areas where AI doesn’t replace people but augments their efforts, that is the next horizon.”

There will always be situations in which a human needs to be involved in the decision-making. “In regulated industries, for example, like health care, banking, and finance, there has to be a human in the loop to maintain compliance,” Schlesinger said. “You can’t just deploy AI to make care decisions without a clinician’s input. As much as we would like to believe that AI is capable of doing that, AI doesn’t have empathy yet, and probably never will.”

It is critical that data collected and produced by AI not exacerbate but minimize inequity. There must be a balance between finding ways for AI to help accelerate human and social progress, fostering equitable actions and responses, and simply recognizing that some problems require human solutions.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.


