2. Algorithmic Surveillance

Frankie Frericks

Algorithmic surveillance is surveillance performed by technology through the use of algorithms. An algorithm is any program that performs a computation, turning an input into some kind of output (Wachter-Boettcher, 2017). These algorithms are then built into various technologies to make classifications and educated guesses about human beings, which makes algorithmic surveillance a form of automated decision-making. Boyd, Levy, and Marwick (2014) describe it this way: "Along with information about who you know, technical mechanisms that underlie the 'big data' phenomenon—like predictive analytics and recommendation systems—make imputations about who you are like, based on your practices and preferences" (p. 54). These systems are surveillance because they make judgments about people by monitoring their current and past behavior. Murphy (2016) states, "it is also now clear that the algorithmic processing of mass data sets plays an essential role in the modern government surveillance apparatus" (p. 1). The information computed by algorithms thus plays a central role in modern surveillance.
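
To make the idea concrete, here is a minimal, purely hypothetical Python sketch of automated decision-making in this sense: monitored behavior goes in as input, and a judgment about a person comes out as output. The function name, the page names, and the rule are all invented for illustration and do not come from any real system.

# A purely illustrative sketch of automated decision-making: an
# "algorithm" here is just a computation that turns an input
# (observed behavior) into an output (a judgment about a person).
# All names and rules below are hypothetical, not from any real system.

def impute_interest(pages_visited: list[str]) -> str:
    """Guess what a user is 'like' from their browsing behavior."""
    sports_hits = sum(1 for page in pages_visited if "sports" in page)
    news_hits = sum(1 for page in pages_visited if "news" in page)
    # The judgment is automated: no human reviews this inference.
    return "sports fan" if sports_hits > news_hits else "news reader"

print(impute_interest(["/sports/scores", "/sports/teams", "/news/today"]))
# -> "sports fan": a label imputed solely from monitored behavior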

An example of algorithmic surveillance is COMPAS, which stands for Correctional Offender Management Profiling for Alternative Sanctions. This tool uses algorithms to rate how dangerous, and how much of a threat, a criminal is on a scale of 1 to 10. What COMPAS takes into account while computing its ratings is hard to know, because the designers of such tools usually do not share that information (Wachter-Boettcher, 2017).

Algorithmic surveillance gives us many reasons to take a critical approach to surveillance. First, algorithmic systems can be extremely biased. According to Wachter-Boettcher, "COMPAS might be a particularly problematic example—it can directly affect how long a convicted person spends in jail, after all. But it's far from alone. Because, no matter how much tech companies talk about algorithms like they're advanced math, they always reflect the values of their creators: the programmers and product teams working in tech. And as we've seen time and again, the values that tech culture holds aren't neutral" (2017, p. 121). This helps explain why many people involved with the criminal justice system see COMPAS as racially biased: its creators design its algorithms in ways that support their existing thinking about criminality, and if their views include racial bias, then COMPAS will reflect that bias as well.
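
As a purely hypothetical illustration of why the designers' choices matter, the short Python sketch below scores a made-up set of features on a 1-to-10 scale. It is not COMPAS's actual model, whose real inputs and weights are undisclosed, as noted above; the point is only that every feature and weight is a human decision, so the designers' values are baked into the score.

# A hypothetical risk-scoring sketch, NOT COMPAS's actual model:
# its real inputs and weights are proprietary, as the text notes.
# The point is structural: whoever chooses the features and weights
# decides what "risky" means, so their values shape the score.

HYPOTHETICAL_WEIGHTS = {
    "prior_arrests": 1.5,  # designer decides arrests (not convictions) count
    "age_under_25": 2.0,   # designer decides youth signals risk
    "unemployed": 1.0,     # designer decides joblessness signals risk
}

def risk_score(features: dict[str, float]) -> int:
    """Map a person's features to a 1-10 risk score."""
    raw = sum(HYPOTHETICAL_WEIGHTS[k] * v for k, v in features.items()
              if k in HYPOTHETICAL_WEIGHTS)
    return max(1, min(10, round(raw)))  # clamp to the 1-10 scale

print(risk_score({"prior_arrests": 2, "age_under_25": 1, "unemployed": 1}))
# -> 6: the number looks objective, but every weight was a human choice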

References

Boyd, D., Levy, K., & Marwick, A. (2014). The networked nature of algorithmic discrimination. Open Technology Institute. Retrieved from https://www.danah.org/papers/2014/DataDiscrimination.pdf

Murphy, M. H. (2016). Algorithmic surveillance: True negatives. Tech Law for Everyone. Retrieved from https://www.scl.org/articles/3717-algorithmic-surveillance-true-negatives

Wachter-Boettcher, S. (2017). Technically wrong: Sexist apps, biased algorithms, and other threats of toxic tech. New York, NY: W. W. Norton & Company.

License

Key Concepts in Surveillance Studies Copyright © 2019 by Frankie Frericks is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.
