Algorithms Alone Should Not Determine Who Gets COVID Vaccines


By Natasha Matta

With ever-advancing technology, humans increasingly rely on algorithms to make difficult decisions, such as how to allocate ICU hospital beds or which candidate to hire. These systems become dangerous, however, when the algorithms and the data they use ignore ethical implications and representation, and when the decisions the software makes are not reviewed and vetted by real people. An algorithm designed to determine COVID-19 vaccination priority equitably and efficiently ran into exactly this problem at Stanford University.

In December, Stanford developed a complex algorithm in the hope of efficiently organizing COVID-19 vaccine allocation among its faculty and staff. However, the algorithm determined that only 7 of the 1,300 resident physicians at Stanford Medical Center, many of whom worked on the front lines treating patients and combating the pandemic, would be among the first 5,000 vaccine recipients. Meanwhile, those who did make it into the first wave of vaccinations included administrators and doctors seeing patients remotely via telehealth.

Over 100 residents protested a publicity event celebrating the first round of COVID-19 vaccinations at Stanford. Hospital leadership apologized to the protesters, vaguely blaming the prioritization error on “a very complex algorithm.” Many saw this as a thinly veiled excuse, however: the leadership team had been alerted to the issue earlier and had responded by merely adding two more residents to the initial vaccination list, allowing just 7 of the more than 1,000 resident physicians to receive the shot.

“Our algorithm, that the ethicists, infectious disease experts worked on for weeks...clearly didn’t work right,” Tim Morrison, Director of the Ambulatory Care Team, conceded.

A presentation slide illustrating how the algorithm decided vaccine prioritization shows a straightforward rules-based formula, not the complex machine-learning “black box” that hospital leadership invoked. The formula considered three areas: “employee-based variables,” “job-based variables,” and guidelines provided by the California Department of Public Health. Staff members received a certain number of points in each category, with a highest possible total score of 3.48. The higher the score, the higher the person’s position on the vaccine list.

People’s scores increased linearly with age, and additional points were added for those over 65 or under 25. This put many residents and frontline workers at a disadvantage because they typically fell in the middle, between ages 25 and 65. Job-based variables, not employee-based ones, contributed most to an individual’s overall score, although how these were weighted and determined was left ambiguous, and Stanford would not provide further comment. Notably, the algorithm did not take into account exposure to COVID-19 patients, nor did it distinguish between staff infected through patient contact and those infected in other ways. Residents also rotate between departments, so they did not necessarily receive points for past assignments in which they may have been exposed to COVID-19 patients.
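To make the reported structure concrete, here is a minimal sketch of how a points-based prioritization formula of this kind could look in code. Stanford has not published its actual weights, so every coefficient, field name, and threshold below is an illustrative assumption, not the hospital’s real formula.

```python
from dataclasses import dataclass

@dataclass
class StaffMember:
    age: int
    job_score: float    # hypothetical job-based points (role, department)
    cdph_score: float   # hypothetical points from CDPH guideline tiers

def priority_score(person: StaffMember) -> float:
    """Illustrative rules-based score: higher scores mean earlier vaccination.

    All weights here are assumptions for illustration only.
    """
    # Employee-based variables: score rises linearly with age...
    age_points = 0.002 * person.age
    # ...with an extra bump for staff over 65 or under 25,
    # which disadvantages mid-career residents.
    if person.age > 65 or person.age < 25:
        age_points += 0.5

    # Job-based variables reportedly carried the most weight,
    # but note there is no term at all for direct COVID-19 patient exposure.
    return age_points + person.job_score + person.cdph_score

# Example: a 29-year-old frontline resident vs. a 68-year-old remote administrator
resident = StaffMember(age=29, job_score=0.6, cdph_score=0.9)
administrator = StaffMember(age=68, job_score=1.2, cdph_score=0.9)
print(priority_score(resident))       # lower score -> later in line
print(priority_score(administrator))  # higher score despite remote work
```

Even in this toy version, the failure mode is visible: with no exposure term, a remote administrator can outrank a frontline resident on age and job points alone.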

In the California Department of Public Health’s vaccine allocation guidelines, exposure risk was the single highest factor for prioritization. However, that advisory is aimed at county and local governments, not hospital or university departments. It may have given residents a higher score than they would otherwise have received from Stanford’s internal metrics, but it was not enough to overcome the weight of the employee- and job-based variables.

Algorithms are commonly used across healthcare to determine patients’ risk levels, distribute resources equitably, and ensure that everyone receives necessary care.

“One of the core attractions of algorithms is that they allow the powerful to blame a black box for politically unattractive outcomes for which they would otherwise be responsible,” critic Roger McNamee posted on Twitter. “But people decided who would get the vaccine,” responded Veena Dubal, Professor of Law at the University of California, Hastings, “the algorithm just carried out their will.”

Stanford issued a formal apology, pledged to revise the original distribution plan, and promised to be more transparent about the process. Still, many advocated abandoning the algorithm entirely or leaving prioritization decisions to division chiefs, and some residents simply received their COVID-19 vaccinations from facilities not affiliated with Stanford. The algorithm has since undergone major changes to prioritize frontline health care workers, but the damage had already been done. The failure of this vaccine allocation algorithm epitomizes the need for transparent, ethical, and accountable AI and reinforces that we cannot leave critical health decisions to the verdict of a “black box.”