Automated decision-making systems, whether calculating a credit score to decide someone's eligibility for a loan or determining health risk (and thus course of treatment) based on historical data, have been used for decades. The pervasiveness of big data and machine learning techniques means these systems are now in use everywhere.
Machine learning techniques are used in search, in recommendation systems, and more. Government agencies use them for a wide variety of purposes: NASA for mission planning; the EPA for ranking the risks of exposure to different chemicals; the Treasury Department for assessing the quality of newly minted coins; and the FBI for rating and prioritizing threats.
But few things come for free in this world. Complex software systems can be hard to understand (that complexity underlies the cybersecurity problems we face). Understanding the decisions that programs make—and whether there are errors in those programs—is sufficiently important that some courts have decided to allow defendants (or their proxies) to examine the programs for flaws.
The complexity of machine learning (ML) and the inability to trace what triggered a particular ML decision have upped the ante. Yet because ML systems are so powerful at pulling out seemingly hidden trends in data, their use is inevitable, especially where organizations deal with massive data sets and millions or billions of decisions to make. Thus, for example, the U.S. Social Security Administration has an AI system "used by hearings and appeals-level Disability Program adjudicators to help maximize the quality, speed, and consistency of their decision making."
Here's where the rubber hits the road. By law, government systems serving the public must enable challenges: a person must be able to contest a decision they believe was made incorrectly. In particular, a person has the right to know the basis for the decision, so that they have the information needed to mount a proper challenge. And under certain laws, e.g., the Fair Credit Reporting Act (FCRA), a person has the same type of contestability right for private-sector decisions. The FCRA gives people this right so that they can learn on what basis they were turned down for a loan.
Professor Steven M. Bellovin of Columbia University and Professor Susan Landau of Tufts University, along with Jim Dempsey of Berkeley Law School and Dr. Ece Kamar of Microsoft Research, co-organized a workshop on Advanced Automated Systems, Contestability, and the Law for senior members of federal and state governments in late January. On March 29th, Professors Bellovin and Landau offered a 4.5-hour hands-on workshop at Tufts to give interested students a flavor of the subject.
The workshop began with a brief discussion of machine learning and contestability by Dr. Suraj Srinivas of Harvard University, of the legal issues by Professor Steven Bellovin, and of ways contestability might be achieved in practice by Professor Susan Landau. We first worked through one case together, based on the Fair Credit Reporting Act, and then divided the students into groups, each working through a case of its own. There were three student groups in all; the scenarios (listed below) were taken from actual agency use cases or from President Biden's Executive Order on AI. For example: suppose that someone is denied Social Security disability benefits because of an ML system. How can that decision be contested?
What technical features should the ML system have to permit reasonable appeals? What procedural and policy safeguards should exist? Each of the student groups had computer science and social science participants, and the groups were kept small to enable active participation. After the breakout session, the groups reconvened to share their results and discuss what they had learned. In addition, the summary of the January government workshop will be shared with the student participants once it has been completed.
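One thread running through the scenarios is that contestability depends on what a system records at decision time. The sketch below is purely illustrative: it is not drawn from the workshop materials, the SSA system, or any agency's actual design, and the feature names, weights, and threshold are hypothetical. It shows one minimal way a scoring system could log each decision's inputs and per-feature contributions and surface the principal negative factors, in the spirit of the "principal reasons" an FCRA adverse-action notice must convey.

```python
# Illustrative sketch only: a hypothetical linear scoring model that records
# the basis for each automated decision so it can be reviewed or contested.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class DecisionRecord:
    """What an applicant or adjudicator would need in order to review a decision."""
    applicant_id: str
    inputs: dict            # the exact feature values the system used
    contributions: dict     # per-feature contribution to the score
    score: float
    threshold: float
    outcome: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def principal_reasons(self, n: int = 3) -> list[str]:
        """Top factors that pushed the score down, akin to the 'principal
        reasons' conveyed in an adverse-action notice."""
        by_harm = sorted(self.contributions.items(), key=lambda kv: kv[1])
        return [name for name, value in by_harm[:n] if value < 0]


def score_application(applicant_id: str, features: dict) -> DecisionRecord:
    # Hypothetical weights and threshold, chosen only for illustration.
    weights = {"income": 0.4, "debt_ratio": -0.5, "late_payments": -0.3}
    contributions = {k: weights[k] * features[k] for k in weights}
    score = sum(contributions.values())
    threshold = 1.0
    outcome = "approved" if score >= threshold else "denied"
    return DecisionRecord(applicant_id, features, contributions,
                          score, threshold, outcome)


if __name__ == "__main__":
    record = score_application(
        "A-123", {"income": 3.0, "debt_ratio": 1.2, "late_payments": 2.0}
    )
    print(record.outcome, record.principal_reasons())
```

A design along these lines keeps the record of what the system relied on separate from the model itself, so the basis for a denial can still be reviewed even after the model is later retrained or replaced.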
Resources:
- Slides by Suraj Srinivas
- Slides by Steve Bellovin: Contestability and the Law & Fair Credit Reporting Act Case Study
- Recommendations
Scenarios: