>The issue is that as a society we've (mostly) decided that unfairly classifying someone based on correlated but non-causal characteristics is wrong, even in the extreme case where making that assumption leaves you right more often than wrong.
This is likely due to an acknowledgment of the limits of human models to account for the full context surrounding correlated-but-non-causal classifications, such that conclusions drawn from them can have unforeseen or highly detrimental ramifications.
Speaking to race in America specifically, the schemas through which we judge people are highly susceptible to bias from the white supremacist bent of historical education and general discourse. This is how you end up with cycles like those within the justice system (pushed in part by sentencing software), wherein black defendants are assumed to have a higher likelihood of re-offending, which in turn increases the likelihood that any given black defendant is denied bond or receives a lengthy sentence if convicted. After all, blackness correlates with recidivism. But lying outside this correlative relationship are the likely causal relationships: longer stays in jail and lack of access to employment opportunities, which disproportionately affect black people, cause higher rates of recidivism regardless of race.
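The feedback loop described above can be sketched as a toy simulation (all numbers and group labels here are hypothetical, chosen only to illustrate the mechanism): recidivism is driven causally by employment access, but a "risk model" that sees only group-level recidivism rates keeps eroding one group's employment access, which then confirms its own prediction.

```python
import random

random.seed(0)

def reoffends(has_employment_access):
    # The causal relationship: employment access lowers recidivism
    # for everyone, regardless of group. (Rates are illustrative.)
    return random.random() < (0.2 if has_employment_access else 0.5)

def simulate(rounds=5, n=10_000):
    # Group B starts with less employment access (historical inequity).
    access = {"A": 0.8, "B": 0.4}
    for _ in range(rounds):
        # Observed recidivism rate per group this round.
        rates = {
            g: sum(reoffends(random.random() < access[g])
                   for _ in range(n)) / n
            for g in ("A", "B")
        }
        # The "model" sees only the group-level correlation, so it
        # jails group B members longer, further eroding their
        # employment access -- the loop feeds itself.
        if rates["B"] > rates["A"]:
            access["B"] = max(0.0, access["B"] - 0.1)
    return rates, access

rates, access = simulate()
```

Each round the observed gap justifies harsher treatment, and the harsher treatment widens the gap: the model is "right" about the correlation while causing the very outcome it predicts.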
You can still have enhanced vigilance without enhanced annoyance and mistakes.
There's often a superior choice lurking that nobody is considering, sometimes expensive, sometimes not, and seemingly unrelated to the optimization at hand.
This is why ML is not intelligence: it cannot find new solutions you're not already looking for.
The main problem with the judicial and police system is that it tries to be procedurally fair and still fails at it anyway.
I'd counter that in many cases it doesn't even try to be fair. It privileges the ability to craft an argument over bare facts, which immediately privileges those who can afford professional representation. At the core of the fear of a surveillance state isn't simply the loss of privacy (which in and of itself could be worth the accuracy it would bring to judicial proceedings), but the fact that it would just bolster the ability of skilled narrative-builders to pull the most advantageous facts out of context and twist them to their whims.