Critical Dataset Studies Reading List

Contributors: Frances Corry, Edward B. Kang, Hamsini Sridharan, Sasha Luccioni, Mike Ananny, Kate Crawford
Last updated: 22 September 2022

How should we study datasets in machine learning? As machine learning (ML) increasingly becomes a site of sociotechnical inquiry, raising numerous social, political, legal, and ethical issues, datasets are a crucial object of study: they are the core material used to train models. Inspired by Tarleton Gillespie and Nick Seaver’s Critical Algorithm Studies reading list, this collection is meant to serve as an entry point to the growing literature on ML datasets across the fields of computer science, human-computer interaction, science and technology studies, media studies, and histories of technology, among others. We compiled this list primarily as a resource for researchers seeking to understand—from a variety of perspectives—how ML datasets work, do work, and are worked upon. We hope it will also be of use to technology practitioners and students seeking to build ML systems.

We limit our scope to works that focus on datasets deployed in the training and testing of ML systems; despite some overlap, this list is not a primer for the field of critical technology studies more generally. Entries are sorted into various sections with the intention of providing readers with a preliminary structure that will help them follow their specific interests. We acknowledge that classificatory practice is always subjective and that many of these titles could fit appropriately under multiple sections or be named in different ways. The current iteration reflects our own ideas and what we find helpful as a way to organize the emerging literature we are working with. There are certainly other ways to structure this reading list, and we are open to suggestions that expand its range and improve usability. Our focus is primarily on academic publications, but for those who are more interested in understanding how datasets have been discussed in the press as of July 2022, we offer a selection of examples at the end of the reading list.

This list is also not meant to be exhaustive. We see the list as a living resource and invite readers to make suggestions and contributions via this form if there are key titles that they think should be included. Please note that while all links are functional as of July 2022, we are unable to continuously monitor for updated versions of papers or fix broken links.

Despite these limitations, we hope this reading list might serve as a useful resource for scholars and practitioners investigating ML datasets as sociotechnical assemblages that shape and are shaped by social worlds.


This section contains a broad set of introductory texts and locales to ground the study of training data. Resources included in this section cover the politics, possibilities, and pitfalls of ML training data and offer early provocations for thinking about particular aspects of training data, such as privacy or bias.

  • Barocas, S., & Selbst, A. D. (2016). Big Data’s Disparate Impact. California Law Review, 104(3), 671–732.
  • Crawford, K. (2021). Atlas of AI: Power, Politics and the Planetary Costs of Artificial Intelligence, see ‘Data’ chapter (pp. 89-122). New Haven, CT: Yale University Press. 
  • Crawford, K., & Paglen, T. (2019). Excavating AI: The Politics of Images in Machine Learning Training Sets.
  • Denton, E., Hanna, A., Amironesei, R., Smart, A., Nicole, H., & Scheuerman, M. K. (2020). Bringing the People Back In: Contesting Benchmark Machine Learning Datasets. ArXiv.
  • Harvey, A. (2021). Face and Biometric Image Datasets.
  • MacKenzie, A., & Munster, A. (2019). Platform Seeing: Image Ensembles and Their Invisualities. Theory, Culture & Society, 36(5), 3–22.
  • Miceli, M., Posada, J., & Yang, T. (2022). Studying Up Machine Learning Data: Why Talk About Bias When We Mean Power? Proceedings of the ACM on Human-Computer Interaction, 6(GROUP), 1–14.
  • Paullada, A., Raji, I. D., Bender, E. M., Denton, E., & Hanna, A. (2020). Data and Its (Dis)Contents: A Survey of Dataset Development and Use in Machine Learning Research. ArXiv.
  • Roberge, J., & Castelle, M. (Eds.). (2020). The Cultural Life of Machine Learning: An Incursion into Critical AI Studies (1st ed. 2021 edition). Palgrave Macmillan.
  • Srinivasan, R., & Chander, A. (2021). Biases in AI Systems: A Survey for Practitioners. Queue, 19(2), 45-64.
  • Suresh, H., & Guttag, J. V. (2021). A Framework for Understanding Sources of Harm throughout the Machine Learning Life Cycle. ArXiv.
  • Thylstrup, N. B. (2022). The Ethics and Politics of Data Sets in the Age of Machine Learning: Deleting Traces and Encountering Remains. Media, Culture & Society.


This section consists of broader foundational readings that don’t all necessarily deal specifically with machine learning datasets, but which the authors of this list have found useful for contextualizing the study of ML datasets. We acknowledge that the titles below do not form an exhaustive index of all foundational readings, but point to them as particularly helpful ones for thinking about the ontological and epistemological complexities of the “dataset” as an object/genre of analysis.

a. Politics of Classification

This subsection focuses on classification as a practice of not only world-ordering, but also world-making, and how its logics underlie the ways in which datasets are conceived and built. 

  • Bechmann, A., & Bowker, G. C. (2019). Unsupervised by Any Other Name: Hidden Layers of Knowledge Production in Artificial Intelligence on Social Media. Big Data & Society, 6(1).
  • Boutyline, A., & Soter, L. K. (2021). Cultural Schemas: What They Are, How to Find Them, and What to Do Once You’ve Caught One. American Sociological Review, 86(4), 728–758.
  • Bowker, G. C., & Star, S. L. (2000). Sorting Things Out: Classification and Its Consequences. Cambridge, MA: MIT Press.
  • Crawford, K. (2021). Atlas of AI: Power, Politics and the Planetary Costs of Artificial Intelligence, see ‘Classification’ chapter (pp. 123-150). New Haven, CT: Yale University Press. 
  • Fourcade, M., & Healy, K. (2013). Classification Situations: Life-Chances in the Neoliberal Era. Accounting, Organizations and Society, 38(8), 559-572.
  • Goodwin, C. (2000). Practices of Color Classification. Mind, Culture, and Activity, 7(1&2), 19-36.
  • Rieder, B. (2017). Scrutinizing an Algorithmic Technique: The Bayes Classifier as Interested Reading of Reality. Information, Communication & Society, 20(1), 100-117.
  • Sadre-Orafai, S. (2020). Typologies, Typifications, and Types. Annual Review of Anthropology, 49(1), 193-208.

b. Critical Data Studies 

Here, we introduce a few titles from the emerging field of Critical Data Studies that we believe are especially useful for acquiring a nuanced and interdisciplinary understanding of datasets.

  • Andrejevic, M. (2019). Automated Media (1st edition). Routledge.
  • Beer, D. (2018). The Data Gaze. London, UK: SAGE.
  • Cheney-Lippold, J. (2017). We Are Data: Algorithms and the Making of our Digital Selves. New York, NY: NYU Press.
  • Chun, W. (2021). Discriminating Data. Cambridge, MA: MIT Press.
  • Cifor, M., Garcia, P., Cowan, T. L., Rault, J., Sutherland, T., Chan, A., . . . Nakamura, L. (2019). Feminist Data Manifest-No.
  • Couldry, N., & Mejias, U. A. (2019). The Costs of Connection: How Data Is Colonizing Human Life and Appropriating It for Capitalism. Stanford, CA: Stanford University Press.
  • D’Ignazio, C., & Klein, L. F. (2020). Data Feminism. MIT Press.
  • Gitelman L. (2013). “Raw Data” Is an Oxymoron. MIT Press.
  • Hansson, K., & Dahlgren, A. (2022). Open Research Data Repositories: Practices, Norms, and Metadata for Sharing Images. Journal of the Association for Information Science and Technology, 73(2), 303-316.
  • Iliadis, A., & Russo, F. (2016). Critical Data Studies: An Introduction. Big Data & Society, 3(2), 1-7.
  • Jaton, F. (2021). The Constitution of Algorithms: Ground-Truthing, Programming, Formulating. Cambridge, MA: MIT Press.
  • Kitchin, R. (2021). Data Lives. Bristol, UK: Bristol University Press.
  • Koopman, C. (2019). How We Became Our Data: A Genealogy of the Informational Person. Chicago, IL: University of Chicago Press.
  • Thorp, J. (2021). Living in Data: A Citizen's Guide to a Better Information Future. New York, NY: MCD.

c. Methodologies for Reading Data

This final subsection includes texts that deal more specifically with the different conceptualizations and methodologies through which datasets can be studied/read/analyzed. 

  • boyd, d., & Crawford, K. (2012). Critical Questions for Big Data: Provocations for a Cultural, Technological, and Scholarly Phenomenon. Information, Communication & Society, 15(5), 662-679.
  • Brock, A. (2015). Deeper Data: A Response to boyd and Crawford. Media, Culture & Society, 37(7), 1084-1088.
  • Driscoll, K., & Walker, S. (2014). Working Within a Black Box: Transparency in the Collection and Production of Big Twitter Data. International Journal of Communication, 8, 1745–1764.
  • Kitchin, R. (2014). The Data Revolution: Big Data, Open Data, Data Infrastructures and Their Consequences. London, UK: SAGE.
  • Leonelli, S., & Tempini, N. (Eds.). (2020). Data Journeys in the Sciences. Springer International Publishing.
  • Malevé, N. (2020). On the Data Set’s Ruins. AI & Society, 36, 1117–1131.
  • Metcalf, J., & Crawford, K. (2016). Where Are Human Subjects in Big Data Research? The Emerging Ethics Divide. Big Data & Society, 3(1), 1-14.
  • Munk, A. K., Olesen, A. G., & Jacomy, M. (2022). The Thick Machine: Anthropological AI Between Explanation and Explication. Big Data & Society, 9(1), 1-14.
  • Pasquale, F. (2021). Licensure as Data Governance. Knight First Amendment Institute.
  • Poirier, L. (2021). Reading Datasets: Strategies for Interpreting the Politics of Data Signification. Big Data & Society, 8(2), 1-19.
  • Suchman, L., & Trigg, R. H. (1993). Artificial Intelligence as Craftwork. In S. Chaiklin & J. Lave (Eds.), Understanding Practice (pp. 144-178). New York, NY: Cambridge University Press.
  • Zook, M., Barocas, S., boyd, d., Crawford, K., Keller, E., Gangadharan, S. P., Goodman, A., Hollander, R., Koenig, B. A., Metcalf, J., Narayanan, A., Nelson, A., & Pasquale, F. (2017). Ten Simple Rules for Responsible Big Data Research. PLOS Computational Biology, 13(3), e1005399.


While some datasets lie behind proprietary company walls, many others are available for public download. This section lists technical papers that accompany major public dataset releases, as well as popular repositories where disparate datasets are organized and made available to the broader public.

a. Source Papers for Noteworthy Datasets 

New training datasets are typically accompanied by technical papers explaining the composition of the dataset and its potential applications. These papers often also include analyses of models using the new dataset and comparisons to similar existing datasets. There are far more dataset source papers than can be included on this list; below is a sampling of the most highly cited and broadly influential releases.

  • Cordts, M., Omran, M., Ramos, S., Rehfeld, T., Enzweiler, M., Benenson, R., Franke, U., Roth, S., & Schiele, B. (2016). The Cityscapes Dataset for Semantic Urban Scene Understanding. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 3213–3223.
  • Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., & Fei-Fei, L. (2009). ImageNet: A Large-Scale Hierarchical Image Database. 2009 IEEE Conference on Computer Vision and Pattern Recognition, 248–255.
  • Geiger, A., Lenz, P., & Urtasun, R. (2012). Are We Ready for Autonomous Driving? The KITTI Vision Benchmark Suite. 2012 IEEE Conference on Computer Vision and Pattern Recognition, 3354–3361.
  • Huang, G. B., Mattar, M., Berg, T., & Learned-Miller, E. (2008). Labeled Faces in the Wild: A Database for Studying Face Recognition in Unconstrained Environments. Workshop on Faces in “Real-Life” Images: Detection, Alignment, and Recognition.
  • Krizhevsky, A. (2009). Learning Multiple Layers of Features from Tiny Images.
  • LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-Based Learning Applied to Document Recognition. Proceedings of the IEEE, 86(11), 2278–2324.
  • Lin, T.-Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., & Zitnick, C. L. (2014). Microsoft COCO: Common Objects in Context. In D. Fleet, T. Pajdla, B. Schiele, & T. Tuytelaars (Eds.), Computer Vision – ECCV 2014 (pp. 740–755). Springer International Publishing.
  • Marcus, M., Santorini, B., & Marcinkiewicz, M. A. (1993). Building a Large Annotated Corpus of English: The Penn Treebank. Technical Reports (CIS).
  • Martin, D., Fowlkes, C., Tal, D., & Malik, J. (2001). A Database of Human Segmented Natural Images and Its Application to Evaluating Segmentation Algorithms and Measuring Ecological Statistics. Proceedings Eighth IEEE International Conference on Computer Vision. ICCV 2001, 2, 416–423.
  • Miller, G. A. (1995). WordNet: A Lexical Database for English. Communications of the ACM, 38(11), 39–41.
  • Socher, R., Perelygin, A., Wu, J., Chuang, J., Manning, C. D., Ng, A. Y., & Potts, C. (2013). Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank. Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, 1631–1642.
  • Thomee, B., Shamma, D. A., Friedland, G., Elizalde, B., Ni, K., Poland, D., Borth, D., & Li, L.-J. (2016). YFCC100M: The New Data in Multimedia Research. Communications of the ACM, 59(2), 64–73.

b. Dataset Repositories

These sites provide infrastructure for organizing, finding, and downloading a wide range of datasets.


Training data requires significant human and computational effort to create. It is through this process of production that many of the effects of training data come to be shaped, from collection and labeling to deployment and deprecation. Texts in this section provide glimpses into the work behind datasets from varying angles, whether examining these production processes through a critical lens or describing the overall workflow of training data production from a technical standpoint.

a. Sociotechnical / Critical Approaches to Labor of Training Data 

These texts draw on approaches and frameworks from science and technology studies, political economy, and labor studies to examine the production of training data through a critical lens, illuminating how power relations are at work in this process.

b. Organizational Workflows in Dataset Production 

Texts included here approach training data production from a practitioner-oriented perspective. They survey either the entire workflow of training data production or specific stages within it to identify challenges and suggest best practices.

  • Amershi, S., Begel, A., Bird, C., DeLine, R., Gall, H., Kamar, E., Nagappan, N., Nushi, B., & Zimmermann, T. (2019). Software Engineering for Machine Learning: A Case Study. 2019 IEEE/ACM 41st International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP), 291–300.
  • Ashmore, R., Calinescu, R., & Paterson, C. (2019). Assuring the Machine Learning Lifecycle: Desiderata, Methods, and Challenges. ArXiv.
  • Barclay, I., Taylor, H., Preece, A., Taylor, I., Verma, D., & de Mel, G. (2020). A Framework for Fostering Transparency in Shared Artificial Intelligence Models by Increasing Visibility of Contributions. Concurrency and Computation: Practice and Experience, 33(19), e6129.
  • Bhardwaj, A., Bhattacherjee, S., Chavan, A., Deshpande, A., Elmore, A. J., Madden, S., & Parameswaran, A. G. (2014). DataHub: Collaborative Data Science & Dataset Version Management at Scale. ArXiv.
  • Chandrabose, A., & Chakravarthi, B. R. (2021). An Overview of Fairness in Data – Illuminating the Bias in Data Pipeline. LTEDI.
  • Dong, W., & Fu, W.-T. (2010). Cultural Difference in Image Tagging. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 981–984.
  • Geiger, R., Cope, D., Ip, J., Lotosh, M., Shah, A., Weng, J., & Tang, R. (2021). “Garbage In, Garbage Out” Revisited: What Do Machine Learning Application Papers Report About Human-Labeled Training Data? ArXiv.
  • Hanley, M., Khandelwal, A., Averbuch-Elor, H., Snavely, N., & Nissenbaum, H. (2020). An Ethical Highlighter for People-Centric Dataset Creation. ArXiv.
  • Holstein, K., Vaughan, J. W., Daumé III, H., Dudík, M., & Wallach, H. (2019). Improving Fairness in Machine Learning Systems: What Do Industry Practitioners Need? Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1–16.
  • Hutchinson, B., Smart, A., Hanna, A., Denton, E., Greer, C., Kjartansson, O., Barnes, P., & Mitchell, M. (2021). Towards Accountability for Machine Learning Datasets: Practices from Software Engineering and Infrastructure. ArXiv.
  • Muller, M. J., Wolf, C. T., Andres, J., Desmond, M., Joshi, N. N., Ashktorab, Z., Sharma, A., Brimijoin, K., Pan, Q., Duesterwald, E., & Dugan, C. (2021). Designing Ground Truth and the Social Life of Labels. Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 1-16.
  • Polyzotis, N., Roy, S., Whang, S. E., & Zinkevich, M. (2018). Data Lifecycle Challenges in Production Machine Learning: A Survey. ACM SIGMOD Record, 47(2), 17–28.
  • Roh, Y., Heo, G., & Whang, S. E. (2021). A Survey on Data Collection for Machine Learning: A Big Data - AI Integration Perspective. IEEE Transactions on Knowledge and Data Engineering, 33(4), 1328–1347.
  • Sachdeva, P. S., Barreto, R., von Vacano, C., & Kennedy, C. J. (2022). Assessing Annotator Identity Sensitivity via Item Response Theory: A Case Study in a Hate Speech Corpus. 2022 ACM Conference on Fairness, Accountability, and Transparency, 1585–1603.
  • Sambasivan, N., & Veeraraghavan, R. (2022). The Deskilling of Domain Expertise in AI Development. Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, 1-14.
  • Shanmugam, D., Diaz, F., Shabanian, S., Funck, M., & Biega, A. (2022). Learning to Limit Data Collection via Scaling Laws: A Computational Interpretation for the Legal Principle of Data Minimization. 2022 ACM Conference on Fairness, Accountability, and Transparency, 839-849.
  • Tatman, R. (2018). Setting Up Your Public Data for Success. 2018 IEEE International Conference on Big Data (Big Data), 3261–3262.
  • Vaughan, J. W. (2018). Making Better Use of the Crowd: How Crowdsourcing Can Advance Machine Learning Research. Journal of Machine Learning Research, 18(193), 1–46.
  • Wang, D., Prabhat, S., & Sambasivan, N. (2022). Whose AI Dream? In search of the aspiration in data annotation. Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, 1-16.


This section highlights works that analyze training datasets from a variety of methodological and theoretical perspectives. While we understand that many of the titles across the major headings in this reading list involve some form of “dataset analysis,” we highlight here studies in which the analysis itself forms the thrust of the article, chapter, or work. The works in this section focus primarily on the details of the analysis, as opposed to conducting an analysis as a preliminary step toward a more central argument or intervention.

a. Sociotechnical & Critical Studies

This subsection focuses on articles and chapters that approach their analyses of training datasets grounded in frameworks primarily taken from critical studies or science and technology studies. 

b. Technical Approaches to Studying Datasets

Here, we introduce works that detail “technical” methods for the study of datasets. While the titles housed under the following subsection 5c, “Technical Audits,” deal with the investigative technical analysis of particular datasets, the works in this subsection are more concerned with introducing technical methods to approach the study of datasets and their particular components. Many of these studies do contain audit-style analyses, but we differentiate them from subsection 5c because their focus is on introducing or using technical methods for dataset analysis in general, as opposed to dissecting various components of particular datasets. 

  • Balayn, A., Kulynych, B., & Guerses, S. (2021). Exploring Data Pipelines through the Process Lens: A Reference Model for Computer Vision. ArXiv.
  • Bender, E. M., Gebru, T., McMillan-Major, A., & Mitchell, M. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623.
  • Blodgett, S. L., Lopez, G., Olteanu, A., Sim, R., & Wallach, H. (2021). Stereotyping Norwegian Salmon: An Inventory of Pitfalls in Fairness Benchmark Datasets. Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics, 1, 1004-1015.
  • Cheng, V., Suriyakumar, V., Dullerud, N., Joshi, S., & Ghassemi, M. (2021). Can You Fake It Until You Make It?: Impacts of Differentially Private Synthetic Data on Downstream Classification Fairness. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 149-160.
  • Fabbrizzi, S., Papadopoulos, S., Ntoutsi, E., & Kompatsiaris, Y. (2021). A Survey on Bias in Visual Datasets. ArXiv.
  • Gardner, M., Merrill, W., Dodge, J., Peters, M. E., Ross, A., Singh, S., & Smith, N. A. (2021). Competency Problems: On Finding and Removing Artifacts in Language Data. ArXiv.
  • Hirota, Y., Nakashima, Y., & Garcia, N. (2022). Gender and Racial Bias in Visual Question Answering Datasets. 2022 ACM Conference on Fairness, Accountability, and Transparency, 1280–1292.
  • Hutchinson, B., Rostamzadeh, N., Greer, C., Heller, K., & Prabhakaran, V. (2022). Evaluation Gaps in Machine Learning Practice. 2022 ACM Conference on Fairness, Accountability, and Transparency, 1859–1876.
  • Jung, T., Kang, D., Mentch, L., & Hovy, E. (2019). Earlier Isn’t Always Better: Sub-aspect Analysis on Corpus and System Biases in Summarization. ArXiv.
  • Kilgarriff, A., & Grefenstette, G. (2003). Introduction to the Special Issue on the Web as Corpus. Computational Linguistics, 29(3), 333–348.
  • Koesten, L., Vougiouklis, P., Simperl, E., & Groth, P. (2020). Dataset Reuse: Toward Translating Principles to Practice. Patterns, 1(8), 100136.
  • Laranjeira da Silva, C., Macedo, J., Avila, S., & dos Santos, J. (2022). Seeing without Looking: Analysis Pipeline for Child Sexual Abuse Datasets. 2022 ACM Conference on Fairness, Accountability, and Transparency, 2189–2205.
  • Madras, D., Creager, E., Pitassi, T., & Zemel, R. (2019). Fairness through Causal Awareness: Learning Causal Latent-Variable Models for Biased Data. Proceedings of the Conference on Fairness, Accountability, and Transparency, 349–358.
  • Moreno-Torres, J. G., Raeder, T., Alaiz-Rodríguez, R., Chawla, N. V., & Herrera, F. (2012). A Unifying View on Dataset Shift in Classification. Pattern Recognition, 45(1), 521–530.
  • Olson, R. S., La Cava, W., Orzechowski, P., Urbanowicz, R. J., & Moore, J. H. (2017). PMLB: A Large Benchmark Suite for Machine Learning Evaluation and Comparison. BioData Mining, 10(36).
  • Rabanser, S., Günnemann, S., & Lipton, Z. C. (2019). Failing Loudly: An Empirical Study of Methods for Detecting Dataset Shift. ArXiv.
  • Rieke, A., Sutherland, V., Svirsky, D., & Hsu, M. (2022). Imperfect Inferences: A Practical Assessment. 2022 ACM Conference on Fairness, Accountability, and Transparency, 767-777.
  • Straw, I., & Callison-Burch, C. (2020). Artificial Intelligence in Mental Health and the Biases of Language Based Models. PLOS ONE, 15(12), e0240376.
  • Welty, C., Paritosh, P., & Aroyo, L. (2019). Metrology for AI: From Benchmarks to Instruments. ArXiv.
  • Wesley, A. M., & Matisziw, T. C. (2021). Methods for Measuring Geodiversity in Large Overhead Imagery Datasets. IEEE Access, 9, 100279–100293.
  • Zanella-Béguelin, S., Wutschitz, L., Tople, S., Rühle, V., Paverd, A., Ohrimenko, O., Köpf, B., & Brockschmidt, M. (2020). Analyzing Information Leakage of Updates to Natural Language Models. Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security, 363–375.
  • Zhong, R., Chen, Y., Patton, D., Selous, C., & McKeown, K. (2019). Detecting and Reducing Bias in a High Stakes Domain. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 4765–4775.

c. Technical Audits

This subsection includes works that employ technical audit-style investigations (e.g., Buolamwini & Gebru, 2018; Raji et al., 2020) of particular datasets.

  • Babaeianjelodar, M., Lorenz, S., Gordon, J., Matthews, J., & Freitag, E. (2020). Quantifying Gender Bias in Different Corpora. Companion Proceedings of the Web Conference 2020, 752–759.
  • Bountouridis, D., Makhortykh, M., Sullivan, E., Harambam, J., Tintarev, N., & Hauff, C. (2019). Annotating Credibility: Identifying and Mitigating Bias in Credibility Datasets. ROME 2019 - Workshop on Reducing Online Misinformation Exposure.
  • Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Conference on Fairness, Accountability and Transparency, 77–91.
  • Costanza-Chock, S., Raji, I. D., & Buolamwini, J. (2022). Who Audits the Auditors? Recommendations from a field scan of the algorithmic auditing ecosystem. 2022 ACM Conference on Fairness, Accountability, and Transparency, 1571–1583.
  • Davidson, T., Bhattacharya, D., & Weber, I. (2019). Racial Bias in Hate Speech and Abusive Language Detection Datasets. ArXiv.
  • Dulhanty, C., & Wong, A. (2019). Auditing ImageNet: Towards a Model-driven Framework for Annotating Demographic Attributes of Large-Scale Image Datasets. ArXiv.
  • Dulhanty, C., & Wong, A. (2020). Investigating the Impact of Inclusion in Face Recognition Training Data on Individual Face Identification. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 244–250.
  • Dulhanty, C. (2020). Issues in Computer Vision Data Collection: Bias, Consent, and Label Taxonomy [University of Waterloo].
  • Heinzerling, B. (2019, July 21). NLP’s Clever Hans Moment has Arrived. Benjamin Heinzerling.
  • Hutchinson, B., Prabhakaran, V., Denton, E., Webster, K., Zhong, Y., & Denuyl, S. (2020). Social Biases in NLP Models as Barriers for Persons with Disabilities. ArXiv.
  • Klockmann, V., von Schenk, A., & Villeval, M. C. (2021). Artificial Intelligence, Ethics, and Diffused Pivotality. Working Paper Series, GATE.
  • Luccioni, A., & Viviano, J. (2021). What’s in the Box? A Preliminary Analysis of Undesirable Content in the Common Crawl Corpus. Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, 182-189.
  • Mecati, M., Cannavò, F. E., Vetrò, A., & Torchiano, M. (2020). Identifying Risks in Datasets for Automated Decision–Making. In G. Viale Pereira, M. Janssen, H. Lee, I. Lindgren, M. P. Rodríguez Bolívar, H. J. Scholl, & A. Zuiderwijk (Eds.), Electronic Government (pp. 332–344). Springer International Publishing.
  • Raji, I. D., & Fried, G. (2021). About Face: A Survey of Facial Recognition Evaluation. ArXiv.
  • Raji, I. D., Gebru, T., Mitchell, M., Buolamwini, J., Lee, J., & Denton, E. (2020). Saving Face: Investigating the Ethical Concerns of Facial Recognition Auditing. ArXiv.
  • Rambachan, A., & Roth, J. (2020). Bias In, Bias Out? Evaluating the Folk Wisdom. ArXiv.
  • Shankar, S., Halpern, Y., Breck, E., Atwood, J., Wilson, J., & Sculley, D. (2017). No Classification without Representation: Assessing Geodiversity Issues in Open Data Sets for the Developing World. ArXiv.
  • Vidgen, B., & Derczynski, L. (2020). Directions in Abusive Language Training Data: Garbage In, Garbage Out. ArXiv.
  • Wang, T., Zhao, J., Yatskar, M., Chang, K.-W., & Ordonez, V. (2019). Balanced Datasets Are Not Enough: Estimating and Mitigating Gender Bias in Deep Image Representations. 2019 IEEE/CVF International Conference on Computer Vision (ICCV).

d. Visual & Artistic Approaches to Datasets

This final subsection assembles artistic and visual approaches to, and formats for, the analysis of datasets.


Here we assemble literature that proposes responses to commonly identified sociotechnical problems with ML datasets. Most of the articles in this vein focus on technical responses to bias (broadly construed), while a few address other concerns such as privacy and security. We do not necessarily endorse these approaches; rather, this is a loose mapping of emerging areas of focus in response to identified problems. Note that there is some overlap with the readings suggested in Section 5, as many of these papers investigate particular datasets; the papers listed here, however, emphasize approaches to addressing specific problems.

a. General Recommendations for Dataset Design

This subsection covers miscellaneous broad recommendations for the creation of fairer and more accountable datasets. 

  • Andrus, M., & Villeneuve, S. (2022). Demographic-Reliant Algorithmic Fairness: Characterizing the Risks of Demographic Data Collection in the Pursuit of Fairness. 2022 ACM Conference on Fairness, Accountability, and Transparency, 1709–1721.
  • Bilstrup, K.-E. K., Kaspersen, M. H., Assent, I., Enni, S., & Petersen, M. G. (2022). From Demo to Design in Teaching Machine Learning. 2022 ACM Conference on Fairness, Accountability, and Transparency, 2168–2178.
  • Bowman, S. R., & Dahl, G. E. (2021). What Will it Take to Fix Benchmarking in Natural Language Understanding? NAACL.
  • Boyd, K. (2022). Designing Up with Value-Sensitive Design: Building a Field Guide for Ethical ML Development. 2022 ACM Conference on Fairness, Accountability, and Transparency, 2069–2082.
  • Kiela, D., Bartolo, M., Nie, Y., Kaushik, D., Geiger, A., Wu, Z., Vidgen, B., Prasad, G., Singh, A., Ringshia, P., Ma, Z., Thrush, T., Riedel, S., Waseem, Z., Stenetorp, P., Jia, R., Bansal, M., Potts, C., & Williams, A. (2021). Dynabench: Rethinking Benchmarking in NLP. NAACL.
  • Panch, T., Pollard, T. J., Mattie, H., Lindemer, E., Keane, P. A., & Celi, L. A. (2020). “Yes, But Will It Work for My Patients?” Driving Clinically Relevant Research with Benchmark Datasets. Npj Digital Medicine, 3(1), 1–4.
  • Peng, K., Mathur, A., & Narayanan, A. (2021). Mitigating Dataset Harms Requires Stewardship: Lessons from 1000 Papers. ArXiv.
  • Rogers, A. (2020). Changing the World by Changing the Data. ArXiv.
  • Rolf, E., Worledge, T., Recht, B., & Jordan, M. I. (2021). Representation Matters: Assessing the Importance of Subgroup Allocations in Training Data. ArXiv.
  • Selbst, A. D., Boyd, D., Friedler, S. A., Venkatasubramanian, S., & Vertesi, J. (2019). Fairness and Abstraction in Sociotechnical Systems. Proceedings of the Conference on Fairness, Accountability, and Transparency, 59–68.
  • Stasaski, K., Yang, G. H., & Hearst, M. A. (2020). More Diverse Dialogue Datasets via Diversity-Informed Data Collection. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 4958–4968.
  • Suresh, H., Movva, R., Lee Dogan, A., Bhargava, R., Cruxên, I., Martinez Cuba, A., Taurino, G., So, W., & D’Ignazio, C. (2022). Towards Intersectional Feminist and Participatory ML: A Case Study in Supporting Femicide Counterdata Collection. 2022 ACM Conference on Fairness, Accountability, and Transparency, 667-678.

b. Creating New Datasets and/or Remediation of Existing Datasets

This subsection includes articles that either remediate specific existing datasets or detail the creation of alternative datasets to address identified privacy and bias issues.

  • Asano, Y., Rupprecht, C., Zisserman, A., & Vedaldi, A. (2021). PASS: An ImageNet Replacement for Self-Supervised Pretraining Without Humans. ArXiv.
  • Brown, H., Lee, K., Mireshghallah, F., Shokri, R., & Tramèr, F. (2022). What Does it Mean for a Language Model to Preserve Privacy? 2022 ACM Conference on Fairness, Accountability, and Transparency, 2280–2292.
  • Cai, W., Encarnacion, R., Chern, B., Corbett-Davies, S., Bogen, M., Bergman, S., & Goel, S. (2022). Adaptive Sampling Strategies to Construct Equitable Training Datasets. 2022 ACM Conference on Fairness, Accountability, and Transparency, 1467–1478.
  • Jernite, Y., Nguyen, H., Biderman, S., Rogers, A., Masoud, M., Danchev, V., Tan, S., Luccioni, A. S., Subramani, N., Johnson, I., Dupont, G., Dodge, J., Lo, K., Talat, Z., Radev, D., Gokaslan, A., Nikpoor, S., Henderson, P., Bommasani, R., & Mitchell, M. (2022). Data Governance in the Age of Large-Scale Data-Driven Language Technology. 2022 ACM Conference on Fairness, Accountability, and Transparency, 2206–2222.
  • Khashabi, D., Chaturvedi, S., Roth, M., Upadhyay, S., & Roth, D. (2018). Looking Beyond the Surface: A Challenge Set for Reading Comprehension over Multiple Sentences. Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 1, 252–262.
  • Yang, K., Qinami, K., Fei-Fei, L., Deng, J., & Russakovsky, O. (2020). Towards Fairer Datasets: Filtering and Balancing the Distribution of the People Subtree in the ImageNet Hierarchy. FAT* '20: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 547–558.
  • Yang, K., Yau, J., Fei-Fei, L., Deng, J., & Russakovsky, O. (2021). A Study of Face Obfuscation in ImageNet. ArXiv.
  • Zellers, R., Bisk, Y., Schwartz, R., & Choi, Y. (2018). SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference. ArXiv.
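
Several of the entries above remediate an existing dataset by filtering out problematic categories (for instance, Yang et al.'s pruning of the ImageNet person subtree). A minimal sketch of that filtering pattern is below; the label names and blocklist are purely hypothetical, not drawn from any listed dataset:

```python
# Sketch of dataset remediation by filtering: drop records whose labels
# appear on a curated blocklist (label names here are hypothetical).
UNSAFE_LABELS = {"offensive_slur", "non_imageable_trait"}

def filter_dataset(records):
    """Keep only records whose label has not been flagged for removal."""
    return [r for r in records if r["label"] not in UNSAFE_LABELS]

records = [
    {"id": 1, "label": "ballplayer"},
    {"id": 2, "label": "offensive_slur"},
    {"id": 3, "label": "scientist"},
]
cleaned = filter_dataset(records)
```

In practice, as the papers above discuss, the hard part is curating the blocklist itself, not applying it.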

c. Data Annotation Workflows

Articles in this subsection address biased machine learning datasets by proposing changes to data annotation processes.

  • Barbosa, N. M., & Chen, M. (2019). Rehumanized Crowdsourcing: A Labeling Framework Addressing Bias and Ethics in Machine Learning. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1–12.
  • Beretta, E., Vetrò, A., Lepri, B., & Martin, J. C. D. (2021). Detecting Discriminatory Risk Through Data Annotation Based on Bayesian Inferences. FAccT.
  • Beretta, E., Vetrò, A., Lepri, B., & De Martin, J. C. (2019). Ethical and Socially-Aware Data Labels. In J. A. Lossio-Ventura, D. Muñante, & H. Alatrista-Salas (Eds.), Information Management and Big Data, 320–327. Springer International Publishing.
  • Rateike, M., Majumdar, A., Mineeva, O., Gummadi, K. P., & Valera, I. (2022). Don’t Throw it Away! The Utility of Unlabeled Data in Fair Decision Making. 2022 ACM Conference on Fairness, Accountability, and Transparency, 1421–1433.

d. Data Augmentation

Articles in this subsection offer approaches to reducing bias in datasets by changing their composition via techniques such as oversampling or the use of synthetic/pseudo-data.

  • Iosifidis, V., & Ntoutsi, E. (2018). Dealing with Bias via Data Augmentation in Supervised Learning Scenarios.
  • Pastaltzidis, I., Dimitriou, N., Quezada-Tavarez, K., Aidinlis, S., Marquenie, T., Gurzawska, A., & Tzovaras, D. (2022). Data augmentation for fairness-aware machine learning: Preventing algorithmic bias in law enforcement systems. 2022 ACM Conference on Fairness, Accountability, and Transparency, 2302–2314.
  • Sharma, S., Zhang, Y., Ríos Aliaga, J. M., Bouneffouf, D., Muthusamy, V., & Varshney, K. R. (2020). Data Augmentation for Discrimination Prevention and Bias Disambiguation. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 358–364.
  • Tomalin, M., Byrne, B., Concannon, S., Saunders, D., & Ullmann, S. (2021). The Practical Ethics of Bias Reduction in Machine Translation: Why Domain Adaptation is Better than Data Debiasing. Ethics and Information Technology, 23, 419-433.
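
As a rough illustration of the oversampling technique named above, the sketch below randomly duplicates records from under-represented groups until all group sizes match the largest. The field name is hypothetical, and real augmentation pipelines (e.g., ones generating synthetic or pseudo-data) are considerably more involved:

```python
import random

def oversample(records, group_key="group", seed=0):
    """Simple random oversampling: duplicate records from smaller groups
    until every group matches the size of the largest group."""
    rng = random.Random(seed)
    by_group = {}
    for r in records:
        by_group.setdefault(r[group_key], []).append(r)
    target = max(len(members) for members in by_group.values())
    balanced = []
    for members in by_group.values():
        balanced.extend(members)
        # Sample with replacement to fill the gap up to the target size.
        balanced.extend(rng.choices(members, k=target - len(members)))
    return balanced

data = [{"group": "a"}] * 6 + [{"group": "b"}] * 2
balanced = oversample(data)
```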

e. Bias Detection

This subsection gathers tools and approaches for detecting bias in datasets.

  • Chapman, A., Grylls, P., Ugwudike, P., Gammack, D., & Ayling, J. (2022). A Data-Driven Analysis of the Interplay Between Criminology Theory and Predictive Policing Algorithms. 2022 ACM Conference on Fairness, Accountability, and Transparency, 36–45.
  • Goyal, P., Romero Soriano, A., Hazirbas, C., Sahin, L., & Usunier, N. (2022). Fairness Indicators for Systematic Assessments of Visual Feature Extractors. 2022 ACM Conference on Fairness, Accountability, and Transparency, 70–88.
  • Harris, C., Halevy, M., Howard, A., Bruckman, A., & Yang, D. (2022). Exploring the Role of Grammar and Word Choice in Bias Toward African American English (AAE) in Hate Speech Classification. 2022 ACM Conference on Fairness, Accountability, and Transparency, 789–798.
  • Hu, X., Wang, H., Vegesana, A., Dube, S., Yu, K., Kao, G., Chen, S.-H., Lu, Y.-H., Thiruvathukal, G. K., & Yin, M. (2020). Crowdsourcing Detection of Sampling Biases in Image Datasets. Proceedings of The Web Conference 2020, 2955–2961.
  • Leavy, S., Meaney, G., Wade, K., & Greene, D. (2020). Mitigating Gender Bias in Machine Learning Data Sets. In L. Boratto, S. Faralli, M. Marras, & G. Stilo (Eds.), Bias and Social Aspects in Search and Recommendation, 12–26. Springer International Publishing.
  • Pahl, J., Rieger, I., Möller, A., Wittenberg, T., & Schmid, U. (2022). Female, White, 27? Bias Evaluation on Data and Algorithms for Affect Recognition in Faces. 2022 ACM Conference on Fairness, Accountability, and Transparency, 973–987.
  • Srinivasan, R., & Chander, A. (n.d.). Understanding Bias in Datasets using Topological Data Analysis.
  • Verma, S., Ernst, M., & Just, R. (2021). Removing Biased Data to Improve Fairness and Accuracy. ArXiv.
  • Wang, A., Barocas, S., Laird, K., & Wallach, H. (2022). Measuring Representational Harms in Image Captioning. 2022 ACM Conference on Fairness, Accountability, and Transparency, 324–335.
  • Wang, A., Narayanan, A., & Russakovsky, O. (2020). REVISE: A Tool for Measuring and Mitigating Bias in Visual Datasets. ECCV, 733–751.
  • Wang, A., Ramaswamy, V. V., & Russakovsky, O. (2022). Towards Intersectionality in Machine Learning: Including More Identities, Handling Underrepresentation, and Performing Evaluation. 2022 ACM Conference on Fairness, Accountability, and Transparency, 336–349.
  • Zamfirescu-Pereira, J. D., Chen, J., Wen, E., Koenecke, A., Garg, N., & Pierson, E. (2022). Trucks Don’t Mean Trump: Diagnosing Human Error in Image Analysis. 2022 ACM Conference on Fairness, Accountability, and Transparency, 799–813.
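
Many of the tools above begin from a simple question: how well is each subgroup represented in the dataset? The sketch below computes subgroup shares and flags groups below a chosen minimum; the attribute name and threshold are illustrative, not drawn from any of the papers listed:

```python
from collections import Counter

def representation_audit(records, attr, min_share=0.2):
    """Report each subgroup's share of the dataset and flag subgroups
    whose share falls below min_share (the threshold is a design choice)."""
    counts = Counter(r[attr] for r in records)
    total = sum(counts.values())
    report = {group: n / total for group, n in counts.items()}
    flagged = sorted(g for g, share in report.items() if share < min_share)
    return report, flagged

data = [{"gender": "f"}] + [{"gender": "m"}] * 9
report, flagged = representation_audit(data, "gender")
```

Raw headcounts like these are only a starting point; several entries above argue that representational harms go well beyond what such tallies can capture.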

f. Algorithms to Debias Datasets or Mitigate Bias

Research in this subsection deploys algorithmic techniques to either debias datasets before training ML models on them or intervene to mitigate bias after training.

  • Abbasi-Sureshjani, S., Raumanns, R., Michels, B. E. J., Schouten, G., & Cheplygina, V. (2020). Risk of Training Diagnostic Algorithms on Data with Demographic Bias. In J. Cardoso et al. (Eds.), Interpretable and Annotation-Efficient Learning for Medical Image Computing, 183–192. Springer.
  • Almuzaini, A. A., Bhatt, C. A., Pennock, D. M., & Singh, V. K. (2022). ABCinML: Anticipatory Bias Correction in Machine Learning Applications. 2022 ACM Conference on Fairness, Accountability, and Transparency, 1552–1560.
  • Anahideh, H., Asudeh, A., & Thirumuruganathan, S. (2021). Fair Active Learning. ArXiv.
  • Bolukbasi, T., Chang, K.-W., Zou, J., Saligrama, V., & Kalai, A. (2016). Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings. ArXiv.
  • Hendricks, L. A., Burns, K., Saenko, K., Darrell, T., & Rohrbach, A. (2018). Women Also Snowboard: Overcoming Bias in Captioning Models. ECCV, 771–787.
  • Lum, K., Zhang, Y., & Bower, A. (2022). De-Biasing “Bias” Measurement. 2022 ACM Conference on Fairness, Accountability, and Transparency, 379–389.
  • Reimers, C., Bodesheim, P., Runge, J., & Denzler, J. (2021). Towards Learning an Unbiased Classifier from Biased Data via Conditional Adversarial Debiasing. ArXiv.
  • Ryu, H. J., Mitchell, M., & Adam, H. (2017). InclusiveFaceNet: Improving Face Attribute Detection with Race and Gender Diversity. ArXiv.
  • Schick, T., Udupa, S., & Schütze, H. (2021). Self-Diagnosis and Self-Debiasing: A Proposal for Reducing Corpus-Based Bias in NLP. ArXiv.
  • Sikdar, S., Lemmerich, F., & Strohmaier, M. (2022). GetFair: Generalized Fairness Tuning of Classification Models. 2022 ACM Conference on Fairness, Accountability, and Transparency, 289–299.
  • Zhao, J., Wang, T., Yatskar, M., Ordonez, V., & Chang, K.-W. (2017). Men Also Like Shopping: Reducing Gender Bias Amplification Using Corpus-level Constraints. ArXiv.
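
One classic pre-processing technique in this family is reweighing (Kamiran & Calders, not itself listed above): each record is weighted by P(group) · P(label) / P(group, label), so that group membership and label are statistically independent under the weighted distribution. A minimal sketch with hypothetical field names:

```python
from collections import Counter

def reweigh(records, group_key="group", label_key="label"):
    """Assign each record the weight P(group) * P(label) / P(group, label),
    making group and label independent under the weighted distribution."""
    n = len(records)
    group_counts = Counter(r[group_key] for r in records)
    label_counts = Counter(r[label_key] for r in records)
    joint_counts = Counter((r[group_key], r[label_key]) for r in records)
    return [
        group_counts[r[group_key]] * label_counts[r[label_key]]
        / (n * joint_counts[(r[group_key], r[label_key])])
        for r in records
    ]

# Group "a" is mostly labeled 1 and group "b" mostly 0; reweighing
# upweights the rare (group, label) combinations.
records = (
    [{"group": "a", "label": 1}] * 3 + [{"group": "a", "label": 0}]
    + [{"group": "b", "label": 1}] + [{"group": "b", "label": 0}] * 3
)
weights = reweigh(records)
```

The debiasing papers above go well beyond this, e.g. operating on learned representations rather than raw records, but the sketch shows the basic shape of a pre-training intervention.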


Documenting Datasets

In recent years, there have been calls to increase transparency and standardization for ML datasets so that researchers can better study their composition and effects, as well as identify problems. This section collects these various approaches to dataset documentation.

  • Bandy, J., & Vincent, N. (2021). Addressing “Documentation Debt” in Machine Learning Research: A Retrospective Datasheet for BookCorpus. ArXiv.
  • Barclay, I., Preece, A., Taylor, I., Radha, S. K., & Nabrzyski, J. (2021). Providing Assurance and Scrutability on Shared Data and Machine Learning Models with Verifiable Credentials. ArXiv.
  • Barclay, I., Preece, A., Taylor, I., & Verma, D. (2019). Towards Traceability in Data Ecosystems Using a Bill of Materials Model. ArXiv.
  • Bender, E. M., & Friedman, B. (2018). Data Statements for Natural Language Processing: Toward Mitigating System Bias and Enabling Better Science. Transactions of the Association for Computational Linguistics, 6, 587–604.
  • Benjamin, M., Gagnon, P., Rostamzadeh, N., Pal, C., Bengio, Y., & Shee, A. (2019). Towards Standardization of Data Licenses: The Montreal Data License. ArXiv.
  • Boyd, K. (2020). Understanding and Intervening in Machine Learning Ethics: Supporting Ethical Sensitivity in Training Data Curation. ProQuest [University of Maryland, College Park].
  • Crisan, A., Drouhard, M., Vig, J., & Rajani, N. (2022). Interactive Model Cards: A Human-Centered Approach to Model Documentation. 2022 ACM Conference on Fairness, Accountability, and Transparency, 427–439.
  • Díaz, M., Kivlichan, I., Rosen, R., Baker, D., Amironesei, R., Prabhakaran, V., & Denton, E. (2022). CrowdWorkSheets: Accounting for Individual and Collective Identities Underlying Crowdsourced Dataset Annotation. 2022 ACM Conference on Fairness, Accountability, and Transparency, 2342–2351.
  • Fabris, A., Messina, S., Silvello, G., & Susto, G. A. (2022). Algorithmic Fairness Datasets: The Story so Far. ArXiv.
  • Gansky, B., & McDonald, S. (2022). CounterFAccTual: How FAccT Undermines Its Organizing Principles. 2022 ACM Conference on Fairness, Accountability, and Transparency, 1982–1992.
  • Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J. W., Wallach, H., Daumé III, H., & Crawford, K. (2020). Datasheets for Datasets. ArXiv.
  • Holland, S., Hosny, A., Newman, S., Joseph, J., & Chmielinski, K. (2018). The Dataset Nutrition Label: A Framework To Drive Higher Data Quality Standards. ArXiv.
  • Luccioni, A. S., Corry, F., Sridharan, H., Ananny, M., Schultz, J., & Crawford, K. (2022). A Framework for Deprecating Datasets: Standardizing Documentation, Identification, and Communication. 2022 ACM Conference on Fairness, Accountability, and Transparency, 199–212.
  • McMillan-Major, A., Osei, S., Rodriguez, J. D., Ammanamanchi, P. S., Gehrmann, S., & Jernite, Y. (2021). Reusable Templates and Guides For Documenting Datasets and Models for Natural Language Processing and Generation: A Case Study of the HuggingFace and GEM Data and Model Cards. ArXiv.
  • Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., Spitzer, E., Raji, I. D., & Gebru, T. (2019). Model Cards for Model Reporting. Proceedings of the Conference on Fairness, Accountability, and Transparency, 220–229.
  • Pushkarna, M., Zaldivar, A., & Kjartansson, O. (2022). Data Cards: Purposeful and Transparent Dataset Documentation for Responsible AI. 2022 ACM Conference on Fairness, Accountability, and Transparency, 1776–1826.
  • Rostamzadeh, N., Mincu, D., Roy, S., Smart, A., Wilcox, L., Pushkarna, M., Schrouff, J., Amironesei, R., Moorosi, N., & Heller, K. (2022). Healthsheet: Development of a Transparency Artifact for Health Datasets. 2022 ACM Conference on Fairness, Accountability, and Transparency, 1943–1961.
  • Seck, I., Dahmane, K., Duthon, P., & Loosli, G. (2018). Baselines and a Datasheet for the Cerema AWP dataset. ArXiv.
  • Schramowski, P., Tauchmann, C., & Kersting, K. (2022). Can Machines Help Us Answering Question 16 in Datasheets, and In Turn Reflecting on Inappropriate Content? 2022 ACM Conference on Fairness, Accountability, and Transparency, 1350–1361.
  • Srinivasan, R., Denton, E., Famularo, J., Rostamzadeh, N., Diaz, F., & Coleman, B. (2021). Artsheets for Art Datasets. Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2).
  • Zhang, W., Ohrimenko, O., & Cummings, R. (2022). Attribute Privacy: Framework and Mechanisms. 2022 ACM Conference on Fairness, Accountability, and Transparency, 757–766.
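
The documentation proposals above, whether datasheets, data cards, or data statements, share a common shape: structured answers to a fixed set of questions about a dataset's motivation, composition, collection process, and intended uses. The abbreviated, machine-readable sketch below is illustrative only; the field names loosely echo Gebru et al.'s categories and are not an official template:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class Datasheet:
    """Illustrative, abbreviated datasheet; real templates such as
    Gebru et al.'s ask dozens of questions across these categories."""
    name: str
    motivation: str          # Why was the dataset created?
    composition: str         # What do the instances represent?
    collection_process: str  # How was the data acquired?
    recommended_uses: list = field(default_factory=list)
    known_limitations: list = field(default_factory=list)

sheet = Datasheet(
    name="example-corpus",
    motivation="Benchmark for toy classification demos.",
    composition="Synthetic records; no personal data.",
    collection_process="Generated programmatically.",
    recommended_uses=["teaching"],
    known_limitations=["not representative of any real population"],
)
record = asdict(sheet)  # serializable form, e.g. for publishing as JSON
```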


Relevant Conference Venues

The scholarship gathered in this list spans academic fields, from science and technology studies (STS) and computer science to human-computer interaction (HCI) and library science. In constructing the list, it became clear that certain conference venues and their proceedings are closely associated with emerging work on training data, while others have dedicated workshops or particular tracks to the study of datasets. Although this list captures training data scholarship at a particular moment in time, these venues are the sites where such work has been concentrated and is most likely to be found.


Datasets in the Press

Popular press coverage of training data has provided a foundation for broader public conversations about these artifacts. The pieces gathered here represent just a small sample of both the important investigative work on training data and cogent introductions to the subject. Articles on these issues appear frequently, so this selection is best read as a set of starting points.