Shocking: Pregnant Woman Arrested Due to AI Mix-Up! How Safe Are We?

Pregnant Black woman wrongly arrested by Detroit police after AI facial recognition misidentification. She is suing the police and calling for an end to law enforcement's use of this technology.
Pregnant woman wrongly arrested by Detroit police after AI facial recognition misidentification. (Representational Image: Unsplash)

By Iqra Batool

Porcha Woodruff, a 32-year-old Black woman from Detroit, Michigan, was eight months pregnant when she was wrongly arrested by police in February 2023. Woodruff was accused of a robbery and carjacking that had taken place in January, but she had nothing to do with the crime.

Police identified Woodruff as a suspect using facial recognition software. The software matched her face to a security camera image of the carjacker, but the match was incorrect. Woodruff and the carjacker did not look alike, and the carjacker was not visibly pregnant.

Although the match was clearly wrong, police arrested Woodruff anyway. She was held in jail for 11 hours before being released on a $100,000 bond. A month later, the charges against her were dropped.

Woodruff is now suing the Detroit Police Department for wrongful arrest. She is also calling for an end to the use of facial recognition technology by law enforcement.

Woodruff's case is just one example of the many problems with facial recognition technology. Facial recognition software is often inaccurate, especially when it comes to identifying people of color. It is also prone to bias, which can lead to wrongful arrests like Woodruff's.
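To see how a one-to-many face search can implicate an innocent person at all, consider a simplified sketch of how such systems typically work. This is not the software Detroit police used; the embedding size, gallery size, threshold, and names below are illustrative assumptions. A probe image is converted to a numeric vector and compared against every face in a database, and whoever scores highest above a cutoff comes back as the "match":

```python
# A minimal sketch of one-to-many facial recognition search. This is NOT the
# system used by any police department; the embeddings, gallery, and
# threshold are all illustrative assumptions.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
EMBEDDING_DIM = 128       # face embeddings are commonly 128-512 dimensional
MATCH_THRESHOLD = 0.30    # hypothetical cutoff; lower = more "matches"

# A probe embedding from a blurry security-camera frame, and a gallery of
# stored faces (e.g., mugshot or license-photo databases), modeled here as
# random vectors since no real system is available.
probe = rng.normal(size=EMBEDDING_DIM)
gallery = {f"person_{i}": rng.normal(size=EMBEDDING_DIM) for i in range(10_000)}

# Rank every gallery face by similarity to the probe. In a gallery this
# large, someone unrelated will exceed a permissive threshold by chance,
# and that innocent person comes back as the "match."
best_name, best_score = max(
    ((name, cosine_similarity(probe, emb)) for name, emb in gallery.items()),
    key=lambda pair: pair[1],
)
if best_score >= MATCH_THRESHOLD:
    print(f"Candidate match: {best_name} (score {best_score:.2f})")
else:
    print("No match above threshold")
```

Because every gallery face in this toy example is unrelated to the probe, any "match" it prints is by construction a false positive, the same failure mode that put Woodruff in jail: in a large enough database, someone innocent will eventually score above a permissive threshold.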

Facial recognition technology can also be biased. This is because the software is trained on data that reflects the biases of the people who created it. As a result, the software is more likely to make mistakes when it is used to identify people who are not white, male, and middle-class.
Cathy O'Neil, Author of Weapons of Math Destruction

In addition to being inaccurate and biased, facial recognition technology also violates privacy. When police use facial recognition software, they collect and store data about people's faces without their consent. That data can be used to track individuals' movements, monitor their activities, and even predict their future behavior.

For all of these reasons, it is clear that facial recognition technology should not be used by law enforcement. Woodruff's case is a wake-up call for policymakers and law enforcement officials. We need to put an end to the use of this dangerous and flawed technology.

The Dangers of Facial Recognition Technology

Facial recognition technology is a powerful tool that can be used for good or for bad. In the wrong hands, it can be used to violate people's privacy, discriminate against certain groups of people, and even facilitate human rights abuses.

Facial recognition technology should not be used by law enforcement. (Unsplash)

Here are some of the dangers of facial recognition technology:

  • Inaccuracy: Facial recognition software is often inaccurate, especially at identifying people of color, because it is typically trained on datasets that are predominantly white.

  • Bias: Facial recognition software also absorbs the biases baked into its training data and its creators' choices, so it is more likely to misidentify people who are not white, male, and middle-class; the simulation sketched after this list shows how that skew can play out.

  • Privacy violations: Facial recognition technology can be used to collect and store data about people's faces without their consent. That data can be used to follow individuals wherever they go, watch what they do, and anticipate what they will do next, a serious threat to personal privacy.

  • Discrimination: Facial recognition technology can be used to discriminate against certain groups of people. For example, it could be used to target people of color for surveillance or arrest. This could lead to a system of mass surveillance and discrimination that disproportionately affects people of color.

  • Human rights abuses: Facial recognition technology could be used to facilitate human rights abuses. For example, it could be used to identify and target political dissidents or to track and control people in authoritarian regimes.
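One way to make the bias point concrete is a toy simulation. The sketch below assumes, purely for illustration, that a model trained mostly on one group learns weaker identity-distinguishing features for an underrepresented group, so unrelated faces from that group score as more similar to each other; a single global match threshold then produces far more false matches for them. The groups, numbers, and mechanism are hypothetical, not measurements of any real system:

```python
# Toy simulation of unequal false-match rates under one global threshold.
# Assumption (hypothetical): a model trained mostly on one group learns
# weaker identity-specific features for an underrepresented group, so
# unrelated faces from that group look more alike to it.
import numpy as np

rng = np.random.default_rng(42)
DIM = 128         # embedding dimensionality (illustrative)
THRESHOLD = 0.6   # one global "same person" cutoff applied to everyone

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def false_match_rate(identity_strength: float, trials: int = 5_000) -> float:
    """Fraction of UNRELATED face pairs scoring above THRESHOLD.

    Each embedding = shared group component + identity_strength * individual
    component. Lower identity_strength mimics a model that never learned to
    tell members of that group apart, which inflates pairwise similarity."""
    group = rng.normal(size=DIM)
    hits = 0
    for _ in range(trials):
        a = group + identity_strength * rng.normal(size=DIM)
        b = group + identity_strength * rng.normal(size=DIM)
        hits += cosine(a, b) >= THRESHOLD
    return hits / trials

# Hypothetical settings: strong identity features for the well-represented
# group (1.4), weak ones for the underrepresented group (1.0).
print(f"well-represented group false-match rate:  {false_match_rate(1.4):.2%}")
print(f"underrepresented group false-match rate:  {false_match_rate(1.0):.2%}")
```

In this toy model the disparity comes entirely from applying a single threshold to groups the model represents unequally well; real-world audits, including NIST's face recognition vendor tests, have reported demographic differentials in false-match rates of a similar flavor.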


The Case for a Ban on Facial Recognition Technology

The dangers of facial recognition technology are clear. It should not be used by law enforcement or any other government agency, and its use should be banned until it can be made more accurate, less biased, and more protective of privacy.

Facial recognition technology is a powerful tool that can be used for good or for bad. In the wrong hands, it can be used to violate people's privacy, discriminate against certain groups of people, and even facilitate human rights abuses.
Joy Buolamwini, AI researcher and founder of the Algorithmic Justice League

In the meantime, we can take steps to protect ourselves from the dangers of facial recognition technology. Here are a few tips:

  • Be aware of the use of facial recognition technology in your community.

  • Ask businesses and organizations if they are using facial recognition technology.

  • If you are concerned about your privacy, you can ask businesses and organizations not to use facial recognition technology on you.

  • You can also use privacy-protective tools like face masks and sunglasses to make it more difficult for facial recognition software to identify you.

We need to take action: ban the use of facial recognition technology until it can be made more accurate, less biased, and more protective of privacy, and in the meantime use the precautions above to protect ourselves.


