As many of you know, US police departments are well armed with the latest technology to keep us safe. Robotics is being added to their arsenal of acquisitions; social work usually is not. Face recognition depends on the population of faces used to train the algorithm, so if a system is trained only on white faces, the victims of police abuse via robot will very likely be people of color (which is no different from what we have now, with white police officers in charge of protecting and killing). Of course, one way to level the playing field would be to have China train the algorithms on its population of a billion Asian faces, so that black and white faces here in America are misidentified, and killed, at the same rate of error. Some might see this as going a long way toward improving race relations, but the 70 million who voted for Donald Trump might object to this solution, especially (and ironically) the black people who voted for him thinking that when the robot sees them it will think 'white face'.
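
The claim about training data can be made concrete with a small, purely illustrative calculation. The sketch below (Python, with synthetic numbers chosen only for illustration; no real face data or real system is implied) shows how a single global match threshold produces very different false-match rates for two demographic groups when the model's scores are less reliable for the under-represented group.

```python
# Minimal sketch with hypothetical data: how a skewed training set can show up
# as unequal error rates when a face-matching system is evaluated per group.
import numpy as np

rng = np.random.default_rng(0)

# Simulated similarity scores for non-matching face pairs ("impostors").
# Assumption (hypothetical): the model was trained mostly on group A, so its
# scores for group B impostor pairs are noisier and sit closer to the threshold.
scores = {
    "group_A": rng.normal(loc=0.30, scale=0.10, size=10_000),
    "group_B": rng.normal(loc=0.45, scale=0.15, size=10_000),
}

THRESHOLD = 0.60  # one global decision threshold, as deployed systems typically use

for group, s in scores.items():
    false_match_rate = np.mean(s > THRESHOLD)  # impostors wrongly declared a match
    print(f"{group}: false match rate = {false_match_rate:.3%}")
```

With these made-up parameters, group A's false-match rate is a small fraction of a percent while group B's is around 15 percent: the same threshold, applied to everyone, concentrates the mistaken "matches" (and whatever follows from them) on the group the training data under-represents.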
