EXECUTIVE SUMMARY:

From its inception to its implementation, facial recognition software has engendered controversy.

At the very beginning, artificial intelligence researchers invited volunteers to sign consent forms and sit for portraits, then fed the resulting collection of images into algorithms. This method was time-consuming and produced only a few hundred images, so researchers set about scraping images from other sources.

Creative Commons licensing on Flickr enables users to permit others to use, create likenesses of, and potentially distribute their works. Researchers swept up vast numbers of these images to feed into algorithms. Public accounts on other popular websites were also a source of photos.

Under the noble pretext of research, facial recognition software tools were born. Less noble was the decision to lend or sell these tools to corporations, especially since the technology is not fully mature.

In 2018, MIT researcher Joy Buolamwini discovered that facial recognition software had been inadvertently programmed with algorithmic biases, potentially leading to untoward consequences for minority groups.

Also in 2018, facial recognition software helped save the day in more than 8,000 criminal investigations. In the heartbreaking Capital Gazette shooting, it aided police in identifying the suspect. Police departments, federal agencies, and private groups routinely use facial recognition software to serve the common good.

In Pennsylvania, for example, the Department of Transportation enters driver's license photos into a database to prevent identity theft and fraud, alerting people if they have an impersonator.

In retail stores, companies are experimenting with the age, gender, and mood recognition components of the AI to tailor in-store ad messaging. Getting consumers to make purchases drives businesses forward and powers the economy.

However, advocacy groups like the American Civil Liberties Union (ACLU) contend that government entities could use facial recognition technology for nefarious purposes, and they insist that big tech companies stop selling it to the government. The ACLU has drafted an ethical framework for the responsible use of facial recognition technology.

In May 2019, the city of San Francisco passed a ban on facial recognition technology, but some argue that an outright ban is short-sighted, calling for regulation instead.

Where do you stand?

For additional perspective on the facial recognition technology debate, visit The Wall Street Journal.