A CRITICAL FIELD GUIDE FOR WORKING WITH MACHINE LEARNING DATASETS
Written by Sarah Ciston {1}
Editors: Mike Ananny {2} and Kate Crawford {3}







Part of the Knowing Machines research project.
TABLE OF CONTENTS

1. Introduction to Machine Learning Datasets
2. Benefits: Why Approach Datasets Critically?
3. Parts of a Dataset
4. Types of Datasets
5. Transforming Datasets
6. The Dataset Lifecycle
7. Cautions & Reflections from the Field
8. Conclusion


1

INTRODUCTION TO MACHINE LEARNING DATASETS

Maybe you’re an engineer creating a new machine vision system to track birds. You might be a journalist using social media data to research Costa Rican households. You could be a researcher who stumbled upon your university’s archive of handwritten census cards from 1939. Or a designer creating a chatbot that relies on large language models like GPT-3. Perhaps you’re an artist experimenting with visual style combinations using DALL-E 2. Or maybe you’re an activist with an urgent story that needs telling, and you’re searching for the right dataset to tell it.
WELCOME.
No matter what kind of datasets you’re using or want to use, whether you’re curious but intimidated by machine learning or already comfortable, this work is complicated. Because machine learning relies on datasets, and because datasets are always tangled up in the ways they’re created and used, things can get messy. You may have questions like:

Does this dataset tell the story of my research in the way I want?
How do the dataset pre-processing methods I choose affect my outcomes?
How might this dataset contribute to creating errors or causing harm?

More than likely you will encounter at least some of these conundrums — as many of us who work with machine learning datasets do. Anyone using datasets will weigh choices and make tradeoffs. There are no universal answers and no perfect actions — just a tangle of dataset forms, formats, relationships, behaviors, histories, intentions, and contexts.
When choosing and using machine learning datasets, how do you deal with the issues they bring? How can you navigate the mess thoughtfully and intentionally? Let’s jump in.
1.1 WHAT IS THIS GUIDE?

Machine learning datasets are powerful but unwieldy. They are often far too large to check manually for inaccurate labels, dehumanizing images, or other widespread issues. Although datasets commonly contain material that is problematic from a technical, legal, or ethical perspective, they are also valuable resources when handled carefully and critically. This guide offers questions, suggestions, strategies, and resources to help people work with existing machine learning datasets at every phase of their lifecycle. Equipped with this understanding, researchers and developers will be better able to avoid the problems unique to datasets. They will also be able to construct more reliable, robust solutions, or even explore promising new ways of thinking with machine learning datasets that are more critical and conscientious. {4}, {5}

If you aren’t sure whether this guide is for you, consider the many places you might find yourself working with machine learning datasets. This guide can be helpful if you are…

- making a model
- working with a pre-trained model
- researching an existing machine learning tool
- teaching with datasets
- creating an index or inventory
- concerned about how datasets describe you or your community
- learning about datasets by exploring one
- stewarding or archiving datasets
- investigating as an artist, activist, or developer

This list is non-exhaustive, of course. Datasets are being used widely across countless domains and industries. How else can you imagine working with machine learning datasets?

The appetite for massive datasets is huge and still accelerating, fueled by the perceived promise of machine learning to convert data into meaningful, monetizable information.{6} Too often, this work is done without regard for how datasets can be partial, imperfect, and historically skewed. Take the widely publicized example of police departments and courts using software that relied on historical crime records to select “future criminals,” predictions that ProPublica journalists found were grossly inaccurate and disproportionately targeted Black people [6]. More troubling still, researchers (and public and private organizations) continue to make use of such datasets despite learning of their harms — perhaps because they seem more efficient or effective, because they are already part of common practices in their communities, or simply because they are the most readily available options. This is exactly why critical care is so needed — datasets’ potential harms are subtle, localized, and complex. You will need to make conscientious decisions and compromises when working with any dataset. There is no perfect representation, no correct procedure, and no ideal dataset.
This guide aims to help you navigate the complexity of working with datasets, giving you ways to approach conundrums carefully and thoughtfully. Section 1 describes how DATA and DATASETS are dynamic research materials, and Section 2 outlines the BENEFITS of working critically with datasets. Then you’ll find more on the common PARTS of datasets (Section 3), examples of the TYPES of datasets you may encounter (Section 4), and how to TRANSFORM datasets (Section 5) — all to help make critical choices easier. Then Section 6 provides a DATASET LIFECYCLE framework with important questions to ask as you engage critically at each stage of your work. Finally, Section 7 offers some CAUTIONS & REFLECTIONS for careful dataset stewardship.

FIELD GUIDES AND DATASETS AS FORMS

The field guide format frames this text because, like datasets, field guides teach their readers particular ways of looking at the world — for better and for worse. Carried in a knapsack, a birder might use their field guide to confirm a species sighting in a visual index. A hiker might read trail warnings to prepare for their trek. With these practical uses, the field guide speaks to a desire to connect deeply with dataset tools and practices, and a sense of careful responsibility that data stewardship shares with environmental stewardship. However, naturalist field guides also draw on the same problematic histories of classifying and organizing information that are foundational to many machine learning tasks today. This critical field guide aims to help bring understanding to the complexities of datasets, so that the decisions you make while using them are conscientious. It invites you to mess with these messy forms and to approach any logic of classification with a critical eye.
When we say CLASSIFICATION in this guide, generally we refer to the choices, logics, and paradigms that inform sociotechnical communities of practice — how people sort and are sorted into categories, how those categories come to be seen as dominant and naturalized, and how people are differently affected by those categories. We acknowledge that the term CLASSIFICATION also refers to specific machine learning tasks that label and sort items in a dataset by discrete categories. For example, asking whether an image is a dog or a cat is handled by a classification task. These are distinguished from REGRESSION tasks, which show the relationship between features in a dataset, for example sorting dogs by their age and number of spots. In this guide, we will specify ‘tasks’ when referring to these techniques, but simply say ‘classification’ when referring to the sociotechnical phenomenon more broadly.{7}
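The difference between these two kinds of tasks can be sketched in a few lines of Python. This is a toy illustration only; the functions, threshold, and formula below are invented for the guide’s dog-and-cat example, not drawn from any real model. The point is the shape of the output: a classification task returns a discrete category, while a regression task returns a continuous value.

```python
# Hypothetical classification task: returns a discrete category.
def classify_animal(weight_kg):
    return "dog" if weight_kg > 8.0 else "cat"

# Hypothetical regression task: returns a continuous value
# (an invented formula relating a dog's age to its number of spots).
def predict_spots(age_years):
    return 2.5 * age_years + 1.0

label = classify_animal(12.0)   # a category, such as "dog" or "cat"
spots = predict_spots(4.0)      # a number on a continuous scale
```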
1.2 WHAT ARE DATA?

DATA ARE CONTINGENT ON HOW WE USE THEM

DATA are values assigned to any ‘thing’, and the term can be applied to almost anything. Numbers, of course, can be data; but so can emails, a collection of scanned manuscripts, the steps you walked to the train, the pose of a dancer, or the breach of a whale. How you think about the information is what makes it data. Philosopher of science Sabina Leonelli sees data as a “relational category,” meaning that “What counts as data depends on who uses them, how, and for which purposes.” She argues data are “any product of research activities [...] that is collected, stored, and disseminated in order to be used as evidence for knowledge claims” [8]. This definition reframes data as contingent on the people who create, use, and interact with them in context.

DATA MUST BE MADE, AND MAKING DATA SHAPES DATA

As a form of information,{8} data do not just exist but have to be generated, through collection by sensors or human effort. Sensing, observing, and collecting are all acts of interpretation that have contexts, which shape the data. For example, when collecting images of faces using infrared cameras, that data can provide heat signatures but not the eye color of its subjects. Studies are designed with specific equipment to achieve their goals and not others. Whether they are quantitative data captured with a sensor or qualitative data described in an interview, the context in which that data is collected has already created a limit for what it can represent and how it can be used. It is easy to think that calling information “data” makes it discrete, separate, fixed, organized, computable — static [1]. But dataset users impose these qualities on information temporarily — to organize it into familiar forms that suit machine learning tasks and other algorithmic systems.

MACHINE LEARNING, DEEP LEARNING, NEURAL NET, ALGORITHM, MODEL — WHAT’S THE DIFFERENCE?

An ALGORITHM is a set of instructions for a procedure, whether in the context of machine learning or another task. Algorithms are often written in code for machines to process, but they are also widely used in any system of step-by-step instructions (e.g. cooking recipes). Algorithms are not a modern Western invention, but predate computation by thousands of years, as technology culture researcher Ted Striphas has shown [16]. That said, algorithms stayed associated mainly with mathematical calculation until quite recently, according to historian of science Lorraine Daston, who traces their expansion into a computational catch-all in the mid-20th century [17].
A MODEL is the result of a machine learning algorithm, once it includes revisions that take into account the data it was exposed to during its training. It is the saved output of the training process, ready to make predictions about new data. One way to think of a model is as a very complex mathematical formula containing millions or billions of variables (values that can change). These variables, also called model parameters, are designed to transform a numerical input into the desired outputs. The process of model training entails adjusting the variables that make up the formula until its output matches the desired output.

Much focus is put on machine learning models, but models depend directly on datasets for their predictions. While a model is not part of a dataset, it is deeply shaped by the datasets it is based upon. Traces of those datasets remain embedded within the model no matter how it is used next. (This guide won’t cover the detailed aspects of working critically with machine learning models and understanding how they learn — that’s a whole other discussion. Terms like ‘activation functions’, ‘loss functions’, ‘learning rates’, and ‘fine-tuning’ give a taste of the many human-guided processes behind model making, an active conversation and set of practices that are beyond the scope of this guide.)
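That adjustment process can be sketched in miniature. The following is a toy illustration, not a production training procedure: a “model” with a single parameter w is nudged by gradient descent until its output matches toy data generated by y = 2x. The data, learning rate, and loop counts are all invented for the example.

```python
# Toy training data: (input, desired output) pairs following y = 2x.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.0  # the model's single parameter, before any training

for _ in range(200):              # the training loop
    for x, y_true in data:
        y_pred = w * x            # the model's current output
        error = y_pred - y_true   # how far off the desired output it is
        w -= 0.01 * error * x     # adjust the parameter to reduce error
```

Real models do this with millions or billions of parameters at once, but the principle is the same: compare outputs to desired outputs, then adjust the variables.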
Artificial NEURAL NETWORKS describe some of the ways to structure machine learning models (see TYPES in Section 4), including making large language models. Named for the loose inspiration they take from brain neurons, they move information through a series of nodes (steps) organized in layers or sets. Each node receives the outputs of the previous layer’s nodes, combines them using a mathematical formula, then passes the result to the next layer of nodes.
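As a sketch, one such layer might look like this in Python. The weight and bias values are arbitrary, chosen only to show the flow of information from one layer to the next:

```python
def relu(value):
    """A common node formula: pass positive values, zero out negatives."""
    return max(0.0, value)

# Outputs arriving from the previous layer's nodes.
inputs = [1.0, -2.0, 0.5]

# A layer of two nodes, each with its own (arbitrary) weights and bias.
layer = [
    {"weights": [0.5, -0.25, 1.0], "bias": 0.1},
    {"weights": [-1.0, 0.5, 0.5], "bias": 0.0},
]

outputs = []
for node in layer:
    # Combine the previous layer's outputs using the node's formula...
    combined = node["bias"] + sum(
        w * x for w, x in zip(node["weights"], inputs)
    )
    # ...then pass the result on toward the next layer of nodes.
    outputs.append(relu(combined))
```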
MACHINE LEARNING is a set of tools used by computer programmers to find a formula that best describes (or models) a dataset. Whereas in other kinds of software the programmer will write explicit instructions for every part of a task, in machine learning, programmers will instruct the software to adjust its code based on the data it processes, thus “learning” from new information [18]. Its learning is unlike human understanding and the term is used metaphorically.

Some formulas are “deeper” than others, so called because they contain many more variables, and DEEP LEARNING refers to the use of many complex layers in a machine learning model. Due to their increasing complexity, the outputs of machine learning models are not reliable for making decisions about people, especially in highly consequential cases. When working with datasets, include machine learning as one suite of options in a broader toolkit — rather than a generalizable multi-tool for every task.
1.3 WHAT ARE DATASETS?

A DATASET can be any kind of collected, curated, interrelated data. Often, datasets refer to large collections of data used in computation, and especially in machine learning. Information collections are transformed into datasets through a LIFECYCLE of processes (collection/selection, cleaning and analyzing, sharing and deprecating), which shape how that information is understood. (For critical questions you can ask at each phase of a dataset’s lifecycle, see Section 6.)

DATASETS ARE TIED TO THEIR MAKERS

The many choices that go into dataset creation and use make them extremely dynamic. They always reflect the circumstances of their making — the constraints of tools, who wields them and how, even who can afford the equipment to train, store, and transmit data. For example, analysis of national datasets in the Processing Citizenship project revealed how some European nations collected information differently, with a range of specificity in categories like ‘education level’ or ‘marital status’. Software engineer Wouter Van Rossem and science and technology studies professor Annalisa Pelizza examined not only the data in that dataset, but how they were labeled, organized, and utilized to show that these reflected how nations perceived the migrants they cataloged [19]. When gathered by a different group, using different tools, a dataset will be quite different — even if it attempts to collect similar information.

DATASETS ARE TIED TO THEIR SETTINGS

Datasets can be frustratingly limited, but this does not mean they are static; instead, the information in datasets is always wrapped up in the contexts that make and use them. Media scholar Yanni Alexander Loukissas, author of All Data Are Local, calls datasets “data settings,” arguing that “data are indexes to local knowledge.” They remain tied to the communities, individuals, organisms, and environments where they were created. Instead of treating data as independent authorities, he says we should ask, “Where do data direct us, and who might help us understand their origins as well as their sites of potential impact?” [1]. These questions extend the possibilities for exploring datasets as dynamic materials. Therefore, datasets must be used carefully, with consideration for their material connection to their origins.
For your consideration: How does framing information as “data” change your relationship to it? What other forms of information do you work with? What kinds of information should not be included in datasets?

2

BENEFITS: WHY APPROACH DATASETS CRITICALLY?

HERE ARE SOME EXAMPLES OF HOW DATASET STEWARDSHIP CAN BENEFIT YOUR PRACTICE, AS WELL AS BENEFIT OTHERS:
MORE ROBUST DATASETS ARISE BY CONSIDERING MULTIPLE PERSPECTIVES AND WORKING TO REDUCE BIAS.
TO-DO: Include interdisciplinary, intersectional communities in designing, developing, implementing, and evaluating your work. (See ALTERNATIVE APPROACHES TO DATASET PRACTICES)
MORE RELIABLE RESULTS COME FROM ANTICIPATING AND ADDRESSING CONTINGENCIES LIKE DEPRECATED DATASETS AND UNINFORMED CONSENT.
TO-DO: Apply checkpoints at each stage, asking critical questions about data provenance and reflecting on your own methodologies.
GAIN INCREASED PROTECTION FROM LIABILITY FOR DATASETS WITH LEGAL OR ETHICAL ISSUES BY PROACTIVELY ADDRESSING POTENTIAL CONCERNS BEFORE USE.
TO-DO: This does not constitute legal advice. However, always perform due diligence before working with existing datasets, including checking any licenses or terms of use. Simply downloading some datasets can create legal liability [20], [21]. So try to be aware of potential consent issues, misuse, or ethical concerns beyond those outlined by the dataset creators, especially as they may have changed since creation or arise from your new usage. You can check data repositories and data journalism to see how datasets have already been used.
CRITICAL PRACTICES ARE BECOMING FIELD-STANDARD AND REQUIRED FOR ACCESS TO TOP CONFERENCES AND JOURNALS.
TO-DO: Help shape the future of the field by modeling and advocating for best practices. Suggest new frameworks and methods for making, using, and deprecating datasets.
MORE CAREFUL AND CONSCIENTIOUS OUTCOMES FOR THOSE IMPACTED BY RESULTS.
TO-DO: Engage the people and groups affected by datasets and your use of them, to learn what careful and conscientious practices mean to them.
OPEN-SOURCE, OPEN-ACCESS, AND OPEN RESEARCH COMMUNITIES BUILD POSITIVE FEEDBACK LOOPS THROUGH DATASET STEWARDSHIP OF RELIABLE MATERIALS.
TO-DO: Share datasets responsibly, through centralized repositories and with thorough documentation.
NO NEUTRAL CHOICE (OR NON-CHOICE) EXISTS. “WHEN THE FIELD OF AI BELIEVES IT IS NEUTRAL,” SAYS AI RESEARCHER PRATYUSHA KALLURI, IT “BUILDS SYSTEMS THAT SANCTIFY THE STATUS QUO AND ADVANCE THE INTERESTS OF THE POWERFUL” [23].
TO-DO: Working with datasets brings challenges that need conversations and multiple perspectives. Discuss issues with your team using the Dataset Lifecycle questions in Section 6, plus the wide range of critical positions shared in the “Critical Dataset Studies Reading List” compiled by the Knowing Machines research project [22].
Before publishing or launching your work, ask hard questions and share your project with informal readers within your networks who can provide constructive feedback. Go slow. Pause or even stop a project if needed.

Remember that taking “no position” on a dataset’s ethical questions is still taking a position. Consider the tradeoffs for choosing one dataset or technique over another.


3

PARTS OF A DATASET

What actually makes up a machine learning dataset, practically speaking? Here are some of the key terms that are helpful for understanding their parts and dynamics:

INSTANCE

One data point being processed or sorted, often viewed as a row in a table. For example, in a training dataset for a classification task that will sort images of dogs from cats, one instance might include the image of a dog and the label “dog,” while another instance would be an image of a cat and the label “cat” as well as other pertinent metadata (see also LABEL, METADATA, and TRAINING DATA below in this section and SUPERVISED machine learning in Section 4).

FEATURE

One attribute being analyzed, considered, or explored across the dataset, often viewed as a column in a table. Features can be any machine-readable (i.e. numeric) form of an instance: images converted into a sequence of pixels, for example. Note: Researchers often select and “extract” the features most relevant for their purpose. Features are not given by default. They are the results of decisions made by datasets’ creators and users. (For more discussion of ENGINEERING FEATURES see Section 5.)
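As a minimal sketch of converting an instance into machine-readable features, here a tiny invented 2×2 grid stands in for an image (real photographs are vastly larger, and real pipelines involve many more decisions):

```python
# A tiny 2x2 grayscale "image": each number is one pixel's brightness.
image = [
    [0, 255],
    [128, 64],
]

# Flatten the grid into a single sequence and scale each pixel to 0..1,
# producing one feature vector representing this instance.
features = [pixel / 255 for row in image for pixel in row]
```

Even this toy version involves choices (the ordering of pixels, the scaling) made by the person preparing the data.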

LABEL

The results or output assigned by a machine learning model, or a descriptor included in a training dataset meant for the model to practice on as it is built, or in a testing or benchmark dataset used for evaluation or verification. (See Section 7.2.2. for more on labels’ creation and their potentially harmful impacts.)

METADATA

Data about data, metadata is supplementary information that describes a file or accompanies other content, e.g. an image from your camera comes with the date and location it was shot, lens aperture, and shutter speed. Metadata can describe attributes of content and can also include who created it, with what tools, when, and how. Metadata may appear as TABULAR DATA (a table) and can include captions, file names and sizes, catalog index numbers, or almost anything else. Metadata are often subject- or domain-specific, reflecting how a group organizes, standardizes and represents information [23].

DATASHEET

A document describing a dataset’s characteristics and composition, motivation and collection processes, recommended usage and ethical considerations, and any other information to help people choose the best dataset for their task. Datasheets were proposed by diversity advocate and computer scientist Timnit Gebru, et al., as a field-wide practice to “encourage reflection on the process of creating, distributing, and maintaining a dataset, including any underlying assumptions, potential risks or harms, and implications for use” [24]. Datasheets are also resources to help people select and adapt datasets for new contexts.

SAMPLE

A selection of the total dataset, whether chosen at random or using a particular feature or property; samples can be used to analyze a dataset, perform testing, or train a model. For more on practices like sampling that transform datasets, see Section 5.

TRAINING DATA

A portion of the full dataset used to create a machine learning model, which will be kept out of later testing phases. Imagining a model like a student studying for exams, you could liken the training data to their study guide which they use to practice the material. For example, in supervised machine learning (see Section 4), training data includes results like those the model will be asked to generate, e.g. labeled images. Training datasets can never be neutral, and they commonly “inherit learned logic from earlier examples and then give rise to subsequent ones,” says critical AI scholar Kate Crawford [25].

VALIDATION DATA

A portion of the full dataset that is separated from training data and testing data, validation data is held back and used to compare the performance of different design details. Validation data is separate from testing data, because validation data is used during the training process to optimize the model while adjustments are being made; therefore, the resulting model will be familiar with its data. That means separate testing data is still needed to confirm how the final model performs. Imagine validation data as practice tests that programmers can administer to check on the model’s progress so far.

TESTING DATA

A portion of the full dataset that is separated from the training data and validation data, and that is not involved in creation of a machine learning model. Testing data is then run through the completed model in order to assess how well it functions. Testing data for the model would be similar to the student’s final exam.
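A common way to carve out these three portions is a shuffled split. This sketch uses an invented 70/15/15 ratio on stand-in data; real projects choose their ratios to suit the dataset’s size and task:

```python
import random

dataset = list(range(100))   # stand-in for 100 instances
random.seed(0)               # fixed seed so the example is repeatable
random.shuffle(dataset)      # shuffle before splitting

train = dataset[:70]         # the study guide: used to build the model
validation = dataset[70:85]  # practice tests: used to tune the model
testing = dataset[85:]       # the final exam: untouched until evaluation
```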

TENSORS: SCALARS, VECTORS, MATRICES (oh my!)

Software for working with machine learning datasets organizes information in numerical relationships, in grids called TENSORS. Understanding tensors can help you understand how data are viewed, compared, and manipulated in computational models. Their grids can have many dimensions, not only two-dimensional X-and-Y graphs [26]. A SCALAR describes a single number. A VECTOR is a list (aka an array), like a line of numbers. A MATRIX is a 2D tensor, like a rectangle of numbers. And a grid of three (or more) dimensions is a TENSOR, like a cube of numbers, or a many-dimensional cube of numbers.
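In code, these shapes can be sketched with plain nested lists. Libraries such as NumPy or PyTorch provide the same structures with far more machinery; the `ndim` helper below is written only for this illustration:

```python
scalar = 7                          # a single number
vector = [1, 2, 3]                  # a line of numbers (1D)
matrix = [[1, 2], [3, 4]]           # a rectangle of numbers (2D)
tensor = [[[1, 2], [3, 4]],
          [[5, 6], [7, 8]]]         # a cube of numbers (3D)

def ndim(value):
    """Count how many dimensions a nested-list grid has."""
    return 1 + ndim(value[0]) if isinstance(value, list) else 0
```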

DATA SUBJECTS

The people and other beings whose data are gathered into a dataset. Even if identifying information has been removed, datasets are still connected to the subjects they claim to represent.

DATA SUBJECTEES

This new and somewhat unwieldy term is used here to describe people impacted directly or indirectly by datasets, distinct from data subjects. Data subjectees include anyone affected by predictions made with machine learning models, for example someone forced to use a facial detection system to board a flight or eye-tracking software to take a test at school. Similarly, Batya Friedman and David G. Hendry of the Value Sensitive Design Lab distinguish between “direct” and “indirect stakeholders” to describe the different types of entanglement with technologies [27].
For your consideration: What other parts of a dataset are not included here but could be? How do you see dataset parts differently when you consider them within “data settings,” or contexts, tied to data subjects and data subjectees? [1] What kinds of contexts are impossible to include in datasets?

4

TYPES OF DATASETS

4.1 WHAT DISTINGUISHES TYPES OF DATASETS?

You may choose a dataset based on what it contains, how it is formatted, or other needs. For example, computer vision datasets include thousands of IMAGE or VIDEO files, while natural language processing datasets contain millions of bytes of TEXT. You may work with waveforms as SOUND files or time series data, or network GRAPH data stored in structured text formats like JSON. In tables you might find PLACE data as geographic coordinates or X-Y-Z coordinates, and TIME data as historical date sequences or milliseconds. Likely, you’ll work with other types, too, or with combinations of MULTIMODAL data. Each dataset may include corresponding METADATA, documentation, and (hopefully) a complete DATASHEET.
You can also consider datasets based on whether the information is STRUCTURED, such as tabular data formatted in a table with labeled columns, or UNSTRUCTURED, such as plain text files or unannotated images. Annotating or coding a dataset prepares it for analysis, including supervised machine learning; and annotation raises important questions about labor, classification, and power. (See Section 6.1 for more on annotation and labeling.)
Datasets for SUPERVISED machine learning need to include labels for at least a portion of the data that the system is designed to “learn.” This means, for example, that a dataset for object recognition would contain images as well as a table to describe the manually located object(s) they contain. It might have columns for the object name or label, as well as coordinates for the object position or outline, and the corresponding image’s file name or index number.
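Such a table might be sketched like this; the file names, labels, and bounding-box coordinates are invented for illustration:

```python
# Each row pairs an image file with a manually assigned label and the
# coordinates of a box around the object: (x, y, width, height).
annotations = [
    {"file": "img_001.jpg", "label": "dog", "box": (34, 20, 120, 90)},
    {"file": "img_002.jpg", "label": "cat", "box": (5, 60, 80, 75)},
]

# The labels column a supervised model would "learn" from.
labels = [row["label"] for row in annotations]
```

Every value in such a table reflects a human judgment: someone decided what counts as a “dog,” where the box belongs, and which images to include.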
In contrast, UNSUPERVISED machine learning looks for patterns that are not yet labeled in the dataset. It uses different kinds of machine learning algorithms, such as clustering groups of data together using features they share. However, it would be a misnomer to think that conclusions drawn from unsupervised machine learning are somehow more pure or rational. Much human judgment goes into developing an unsupervised machine learning model — from adjusting weights and parameters to comparing models’ performance. Often supervised and unsupervised approaches are used in combination to ask different kinds of questions about the dataset. Other kinds of machine learning approaches (like reinforcement learning) don’t fall neatly into these high-level categories.
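As a sketch of clustering, here unlabeled one-dimensional points are grouped by repeatedly assigning each point to the nearer of two centroids and moving each centroid to its group’s mean. This is a bare-bones, k-means-style loop with invented data; even here, human choices (how many clusters, where the centroids start, how many passes) shape the result:

```python
points = [1.0, 1.2, 0.8, 9.0, 9.5, 8.5]  # unlabeled data
centroids = [0.0, 10.0]                  # arbitrary starting guesses

for _ in range(5):  # a few refinement passes
    groups = {0: [], 1: []}
    for p in points:
        # Assign each point to the nearer centroid.
        nearest = min((0, 1), key=lambda k: abs(p - centroids[k]))
        groups[nearest].append(p)
    # Move each centroid to the mean of the points assigned to it.
    centroids = [sum(g) / len(g) for g in groups.values()]
```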

(For a discussion of deprecated datasets, see Section 7.2.3, and for critical questions at every stage of working with datasets, see Section 6.)
4.2 EXAMPLES: HOW HAVE RESEARCHERS, ENGINEERS, JOURNALISTS, AND ARTISTS PUT DATASETS TO USE?

When starting a project, you may not know what kind of dataset you need. You might work with a particular kind of media or file type most often, so you start there — or maybe you want to try a new form. You may start with a curiosity, and you’re open to datasets in any format, from any source. To spark your imagination, here are four projects that used pre-existing datasets in novel and creative ways:

JOURNALISTS UNCOVER RAINFOREST EXPLOITATION WITH GEOSPATIAL DATA

Investigative journalists at Armando.info, collaborating with El País and Earthrise Media, used field reports and satellite images to find deforestation, hidden runways, and illegal mining in the Venezuelan and Brazilian Amazon. They developed computer vision analysis from analog maps, applied it to imagery from a European Space Agency satellite, and compared the results with existing information, including complaints from Indigenous communities. “It’s not that this was a technology-only job,” says Joseph Poliszuk, Armando.info’s co-founder. “The technology allowed us to go into the field without being blindfolded” [28]. Using similar methods, Pulitzer fellow Hyury Potter detected approximately 1,300 illegal runways, more than the number of legally registered ones in the Brazilian Amazon. Combining data work with fieldwork in international collaborations helped these journalists connect local stories to larger scale climate crises and to support communities’ efforts to create change.

HISTORIANS ASSEMBLE FRAGMENTS OF ANCIENT TEXTS

Researchers from Google’s DeepMind used a neural net on an existing scholarly dataset to complete, date, and find attributions for fragments of ancient texts. They drew on 178,551 ancient inscriptions written on stone, pottery, metal, and more, that had been transcribed in the text archive Packard Humanities Institute’s Searchable Greek Inscriptions [29]. They said that the “process required rendering the text machine-actionable [in plain text formats], normalizing epigraphic notations, reducing noise and efficiently handling all irregularities” [30]. They collaborated with historians and students to corroborate the machine learning outputs, calling it a “cooperative research aid” showing how machine learning research can include humans in the training process. They also created an open-source interface: ithaca.deepmind.com
SOUND ARTIST EXPERIMENTS WITH REFUGEE ACCENT DETECTION TOOLS
Pedro Oliveira’s work [31] explores the accent recognition software used since 2017 by the German Federal Office for Migration and Refugees (BAMF). Though BAMF does not disclose the software’s datasets, Oliveira traced the probable source to two annotated sound databases from the University of Pennsylvania: unscripted Arabic telephone conversations named “CALL FRIEND” [32] and “CALL HOME” [33]. In 2019 the software had an error rate of 20 percent, despite being deployed 9,883 times in asylum seekers’ cases [34]. Oliveira works with sounds removed from the datasets and reverse-engineers the algorithm (as musical transformations rather than classification tasks) to show how politically charged it is to define and detect accents. “How can you say it’s an accurate depiction of an accent?” he says. “Arabic is such a mutating language. That’s the beauty of it actually” [35]. He presents this work through live performance and the online sound essay “On the Apparently Meaningless Texture of Noise” [36].
HUMAN RIGHTS ACTIVISTS ACCOUNT FOR WAR CRIMES WITH SYNTHETIC DATA
Sometimes important training data is missing from a dataset because not enough of it exists, and these absences can amplify narrow assumptions about a diverse community. In other cases that don’t involve human subjects, synthetic data can fill gaps in creative ways. When human rights activists from Mnemonic, who were investigating Syrian war crimes using machine learning, struggled to find enough images of cluster munitions to train their model, the computer vision group VFRAME created synthetic data — 10,000 computer-generated 3D images of the specialized weapon and its blast sites — which researchers then used to sift through the Syrian Archive’s 350,000 hours of video, searching for evidence of war crimes [37], [38]. Such systems can reduce the number of videos people need to comb through manually, while still keeping humans involved with pattern review and confirmation.
THERE ARE MANY MORE EXAMPLES LIKE THESE OF HOW TO SOURCE, USE, AND COMBINE DATASETS THAT ALREADY EXIST. THE CRITICAL AND CREATIVE POSSIBILITIES ARE NEARLY ENDLESS.
For your consideration: What kind of dataset(s) will you use, and how can you approach it more critically? How will you apply what you’ve learned here to your next machine learning project?

5

TRANSFORMING DATASETS
Just as there is no such thing as neutral data, no dataset is ready to use off the shelf. From preprocessing (sometimes confusingly called ‘cleaning’) to model creation, transformations reflect the perspectives of the dataset creators and users. This overview covers some of the technical details of getting a dataset ready for your tasks [18], [23], [26], [40], [41], {9}, while asking critical questions along the way.

As artist and researcher Kit Kuksenok argues, “Data cleaning is unavoidable! Each round of repeated care of data and code is an opportunity to invite new perspectives to code/data technical objects” [42]. Preprocessing is a key part of building any system with a dataset, so it is crucial to document and reflect upon preprocessing transformations.
CAUTION: Be on the lookout for dataset transformations that result in lost meanings, new misconceptions, or skewed information.
STORING DATA
A dataset must live somewhere, and once it grows beyond a single manageable file, it usually lives in a DATABASE. While ‘dataset’ describes what the data are, ‘database’ describes how data are stored — whether as a set of tables with columns and rows (e.g. a relational database queried with `SQL`) or as a collection of documents with keys and values (e.g. `MongoDB`). Database structures should suit what they hold, but they will also shape what they hold and reflect how database designers see data. “They also contain the legacies of the world in which they were designed,” says media studies scholar Tara McPherson [43].
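To make this distinction concrete, here is a minimal sketch (with invented bird-sighting records, not from any real dataset) that stores the same entries both ways: as relational rows with a fixed schema, using Python’s built-in `sqlite3`, and as free-form documents represented here as plain dictionaries. Notice how the schema itself decides what the database can hold:

```python
# Minimal sketch: the same invented records stored relationally vs. as documents.
import sqlite3

records = [
    {"species": "heron", "count": 3},
    {"species": "ibis", "count": 7, "note": "near wetland"},  # irregular extra field
]

# Relational storage: every row must fit the declared columns.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sightings (species TEXT, count INTEGER)")
conn.executemany(
    "INSERT INTO sightings VALUES (?, ?)",
    [(r["species"], r["count"]) for r in records],  # the 'note' is silently dropped
)
rows = conn.execute("SELECT species, count FROM sightings").fetchall()
print(rows)  # [('heron', 3), ('ibis', 7)] -- the note never made it in

# Document storage (sketched as plain dicts): irregular fields survive,
# but nothing enforces consistency across entries.
print(records[1].get("note"))  # 'near wetland'
```

The relational table silently drops the irregular ‘note’ field, while the document version keeps it but enforces no consistency: a small illustration of McPherson’s point that structures shape their contents.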
ACCOUNTING FOR MISSING DATA
You may have entries in your dataset that read `NaN` (not a number) or `NULL`, which may or may not cause errors, depending on what kinds of calculations you do. You may also have manual entries like ‘?’, ‘huh’, or blanks that lack context. Should you remove the missing information? If you replace it, how will you know what goes in its place? Is data missing in uniform ways, such that whole categories can be eliminated, or is it missing only for subgroups, in ways that could skew results? How will you know what impacts your edits may have? Consider what missing data might mean. “Unavailable [is] semantically different from data that was simply never collected in the first place,” says data scientist David Mertz [41]. Filtering out data and filling in data have very different implications. Could you consult data subjects to get more context on missing data, or on the implications of removal or substitution? How have others handled similar challenges? Can you run tests that treat missing data differently and compare the results?

Mimi Ọnụọha’s “The Library of Missing Datasets” reflects on how missing data imply what will not or cannot be collected, or what has been considered not worthy of collection. The project creates a physical archive of empty files, covering topics that are excluded despite our data-hungry culture. She says, “That which we ignore reveals more than what we give our attention to” [44].
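These trade-offs can be tested directly. Below is a minimal sketch, using made-up survey ages rather than any real dataset, that compares two common strategies: dropping missing entries versus imputing (filling in) the mean of the observed values. Imputation keeps every row and preserves the mean, while quietly shrinking the variance:

```python
# Minimal sketch comparing two ways of handling missing data (invented values).
from statistics import mean

# Hypothetical survey responses; None marks data that was never collected,
# and '?' marks a manual entry whose meaning is unclear.
raw_ages = [34, 29, None, 41, '?', 38, None, 52]

# Normalize ambiguous manual entries to None so both kinds of gaps are explicit.
ages = [a if isinstance(a, int) else None for a in raw_ages]

# Strategy 1: drop missing values entirely.
dropped = [a for a in ages if a is not None]

# Strategy 2: impute the mean of the observed values.
fill = mean(dropped)
imputed = [a if a is not None else fill for a in ages]

print(len(dropped), round(mean(dropped), 1))  # 5 38.8 -- fewer rows, same mean
print(len(imputed), round(mean(imputed), 1))  # 8 38.8 -- all rows kept, mean
                                              # unchanged, variance understated
```

Running both strategies and comparing the results, as suggested above, makes the consequences of each choice visible before they are baked into a model.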
HANDLING EXTRA DATA
You’ll probably encounter dataset anomalies, outliers, and duplicates, and then need ways to identify, adjust, or remove them. In text datasets for unsupervised learning, you’ll likely remove punctuation and “stop words” (commonly used conjunctions or articles like ‘an’ or ‘the’, for example). But, as software engineer François Chollet says, “even perfectly clean and neatly labeled data can be noisy when the problem involves uncertainty and ambiguity” [18]. Outliers can also be accurate and contain meaningful information. As Crawford emphasizes, such acts of data cleaning and categorization create their own concepts of outside and otherness that can restrict “how people are understood and can represent themselves” [25]. Defining outliers, anomalies, or extra data means deciding what is ‘normal’, unexpected, or distracting — what is signal and what is noise.
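As a small illustration of how these decisions get encoded, the sketch below (with an invented stop-word list and made-up readings, not from any real dataset) removes stop words from a text and flags outliers using one simple rule: anything more than two standard deviations from the mean. Both the word list and the threshold are choices that define what counts as noise:

```python
# Minimal sketch of two 'extra data' decisions: stop-word removal and
# outlier flagging. The word list and threshold are illustrative choices.
from statistics import mean, stdev

STOP_WORDS = {'a', 'an', 'the', 'and', 'or', 'of'}  # assumed, deliberately tiny

def strip_stop_words(text):
    """Drop lowercased stop words; everything kept is declared 'signal'."""
    return [w for w in text.lower().split() if w not in STOP_WORDS]

def flag_outliers(values, threshold=2.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    m, s = mean(values), stdev(values)
    return [v for v in values if abs(v - m) > threshold * s]

tokens = strip_stop_words("The archive of the census and a survey")
print(tokens)  # ['archive', 'census', 'survey']

readings = [10, 12, 11, 13, 10, 94]  # 94 may be an error -- or a real event
print(flag_outliers(readings))  # [94]
```

Whether the flagged value 94 is a glitch to discard or the most meaningful reading in the set is exactly the judgment the surrounding paragraph describes; the code can only flag it, not decide.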
DISCRETIZING DATA