Document pages: 27
Abstract: There is increasing regulatory interest in whether machine learning algorithms deployed in consequential domains (e.g. in criminal justice) treat different demographic groups "fairly." However, there are several proposed notions of fairness, typically mutually incompatible. Using criminal justice as an example, we study a model in which society chooses an incarceration rule. Agents of different demographic groups differ in their outside options (e.g. opportunity for legal employment) and decide whether to commit crimes. We show that equalizing type I and type II errors across groups is consistent with the goal of minimizing the overall crime rate; other popular notions of fairness are not.
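To make the fairness notion in the abstract concrete, here is a minimal Python sketch (not from the paper; all data, rates, and the threshold rule are hypothetical assumptions) that computes the group-wise type I error (incarcerating someone who did not commit a crime) and type II error (releasing someone who did) for a simple binary incarceration rule:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: two demographic groups whose differing outside
# options induce different base rates of crime (assumed numbers).
n = 10_000
group = rng.integers(0, 2, size=n)                             # group 0 or 1
committed = rng.random(n) < np.where(group == 0, 0.30, 0.20)   # true status

# A noisy evidence signal and a single decision threshold shared by both
# groups (a stand-in for whatever incarceration rule society chooses).
score = committed * 0.6 + rng.random(n) * 0.4
incarcerate = score > 0.5

for g in (0, 1):
    innocent = (group == g) & ~committed
    guilty = (group == g) & committed
    type1 = incarcerate[innocent].mean()   # P(incarcerate | innocent, group g)
    type2 = (~incarcerate[guilty]).mean()  # P(release | guilty, group g)
    print(f"group {g}: type I = {type1:.3f}, type II = {type2:.3f}")
```

"Equalizing type I and type II errors across groups" is the criterion this sketch measures (known elsewhere in the fairness literature as equalized odds); running it illustrates that a single threshold on a noisy signal need not equalize these rates on its own when base rates differ between groups.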