forum id
stringlengths 10
10
| title
stringlengths 21
154
| scores
sequencelengths 3
8
| text
stringlengths 52.4k
300k
|
---|---|---|---|
bR1J7SpzrD | Synthio: Augmenting Small-Scale Audio Classification Datasets with Synthetic Data | [
8,
8,
5,
6
] | Under review as a conference paper at ICLR 2025
SYNTHIO: AUGMENTING SMALL-SCALE AUDIO CLAS-
SIFICATION DATASETS WITH SYNTHETIC DATA
Anonymous authors
Paper under double-blind review
ABSTRACT
We present Synthio, a novel approach for augmenting small-scale audio1 classi-
fication datasets with synthetic data. Our goal is to improve audio classification
accuracy with limited labeled data. Traditional data augmentation techniques,
which apply artificial transformations (e.g., adding random noise or masking seg-
ments), struggle to create data that captures the true diversity present in real-world
audios. To address this shortcoming, we propose to augment the dataset with
synthetic audio generated from text-to-audio (T2A) diffusion models. However,
synthesizing effective augmentations is challenging because not only should the
generated data be acoustically consistent with the underlying small-scale dataset,
but they should also have sufficient compositional diversity. To overcome the first
challenge, we align the generations of the T2A model with the small-scale dataset
using preference optimization. This ensures that the acoustic characteristics of
the generated data remain consistent with the small-scale dataset. To address the
second challenge, we propose a novel caption generation technique that leverages
the reasoning capabilities of Large Language Models to (1) generate diverse and
meaningful audio captions and (2) iteratively refine their quality. The generated
captions are then used to prompt the aligned T2A model. We extensively evaluate
Synthio on ten datasets and four simulated limited-data settings. Results indicate
our method consistently outperforms all baselines by 0.1%-39% using a T2A model
trained only on weakly-captioned AudioSet.
1
INTRODUCTION
Audio classification is the foundational audio processing task of understanding the input audio and
assigning it to one or multiple predefined labels. However, training audio classification models
requires a lot of high-quality labeled data, which is not always readily available (Ghosh et al., 2022).
Manually collecting and annotating large-scale audio datasets is an expensive, time-consuming, and
noisy process (Nguyen et al., 2017; Mart´ın-Morat´o & Mesaros, 2021), and recent concerns about
data privacy and usage rights further hinder this process (Ren et al., 2023).
Data augmentation, which involves expanding original small-scale datasets with additional data,
is a promising solution to address data scarcity. Traditional augmentation techniques attempt to
diversify audio samples by applying randomly parameterized artificial transformations to existing
audio. These methods include spectral masking (Park et al., 2019), temporal jittering (Nanni et al.,
2020), cropping (Niizumi et al., 2021), mixing (Seth et al., 2023; Ghosh et al., 2023b; Niizumi et al.,
2021) and other techniques (Saeed et al., 2021; Al-Tahan & Mohsenzadeh, 2021; Manocha et al.,
2021). While these approaches have shown success, they operate at the level of observed data rather
than reflecting the underlying data-generating process that occurs in real-world scenarios. As a result,
they statistically modify the data without directly influencing the causal mechanisms that produced it,
leading to high correlations between augmented samples and limited control over diversity.
Generating synthetic data from pre-trained text-to-audio (T2A) models addresses the limitations
of standard data augmentation techniques while retaining their strengths of universality, controlla-
bility, and performance (Trabucco et al., 2024). The recent success of generative models makes
this approach particularly appealing (Long et al., 2024; Evans et al., 2024b). However, generat-
ing synthetic audio presents unique challenges due to the complexity of waveforms and temporal
1We use “audio” to refer to acoustic events comprising non-verbal speech, non-speech sounds, and music.
1
000
001
002
003
004
005
006
007
008
009
010
011
012
013
014
015
016
017
018
019
020
021
022
023
024
025
026
027
028
029
030
031
032
033
034
035
036
037
038
039
040
041
042
043
044
045
046
047
048
049
050
051
052
053
Under review as a conference paper at ICLR 2025
054
055
056
057
058
059
060
061
062
063
064
065
066
067
068
069
070
071
072
073
074
075
076
077
078
079
080
081
082
083
084
085
086
087
088
089
090
091
092
093
094
095
096
097
098
099
100
101
102
103
104
105
106
107
dependencies (Ghosh et al., 2024b). We highlight the 3 main challenges in generating effective
synthetic data for audio classification: i) Consistency with the original data: Synthetic audio that
does not align acoustically with the original dataset can hinder effective augmentation and may cause
catastrophic forgetting (Geiping et al., 2022). This misalignment includes spectral, harmonic, and
other inherent acoustic characteristics not easily controlled through prompts. Maintaining consistency
with T2A models trained on internet-scale data remains a challenge, and standard fine-tuning can
often lead to overfitting (Weili et al., 2024). ii) Diversity of generated data: Ensuring compositional
diversity in the generated synthetic data (e.g., sound events, temporal relationships, background
elements, etc.) is critical for effective augmentation. Additionally, a lack of diversity can lead to poor
generalization and learning of spurious correlations, impacting performance. Simple, hand-crafted
prompts (e.g., “Sound of a metro”) often result in repetitive patterns, and creating diverse, meaningful
prompts is labor-intensive. Complex prompts can generate audios that do not preserve the original
label. iii) Limitations of current T2A models: T2A models often struggle to generate diverse audios
and follow details in prompts. This is largely due to the lack of large-scale, open-source datasets for
training, as well as the inherent complexity of non-speech audio domains (Ghosal et al., 2023). These
limitations highlight the need for more advanced approaches for synthetic data generation in audio.
Our Contributions. To address these challenges, we propose Synthio, a novel, controllable and
scalable approach for augmenting small-scale audio classification datasets with synthetic data.
Our proposed approach has 2 main steps:
i)
Aligning the Text-to-Audio Models with Prefer-
ence Optimization: To generate synthetic audios
with acoustic characteristics consistent with the
small-scale dataset, we introduce the concept
of aligning teaching with learning preferences.
Specifically, we align the generations of the T2A
model (acting as the teacher) with the target char-
acteristics of the small-scale dataset using pref-
erence optimization. This approach ensures that
the synthetic audios reflect the acoustic prop-
erties of (or sound similar to) the downstream
dataset, enabling the classification model (the
student) to perform well on test data with sim-
ilar characteristics. To achieve this, we train a
diffusion-based T2A model with preference op-
timization, where audios generated from Gaus-
sian noise are treated as losers and audios from
the downstream dataset are treated as winners.
ii) Generating Diverse Synthetic Augmentations: To generate diverse audios for augmentation, we
introduce the concept of language-guided audio imagination and imagine novel acoustic scenes with
language guidance. Specifically, we generate diverse audio captions that are then used to prompt T2A
models to generate audios with varied compositions. To achieve this, we propose MixCap, where
we prompt LLMs iteratively to generate captions combining existing and new acoustic components.
Additionally, we employ a self-reflection module that filters generated captions and prompts the LLM
to revise those that do not align with the intended label. To summarize, our main contributions are:
1. We introduce Synthio, a novel data augmentation approach for audio classification that
expands small-scale datasets with synthetic data. Synthio uses novel methods to tackle the
inherent challenges of producing consistent and diverse synthetic data from T2A models.
2. We evaluate Synthio across 10 datasets in 4 simulated low-resource settings, demonstrating
that, even with a T2A model trained on weakly captioned AudioSet, Synthio outperforms
all baselines by 0.1%-39%.
Figure 1: Performance comparison of Synthio with
other augmentation methods on down-sampled ESC-
50 (100 samples). Traditional augmentation, such as
SpecAug, degrades performance on small-scale datasets.
Naive synthetic augmentation outperforms traditional
methods significantly but plateaus with higher sample
counts. Synthio further enhances performance by gener-
ating consistent and diverse synthetic data.
3. We conduct an in-depth analysis of the generated augmentations, highlighting Synthio’s
ability to produce diverse and consistent data, its scalability, and its strong performance on
complex tasks such as audio captioning.
2 RELATED WORK
Data Augmentation for Audio and Beyond. Expanding or augmenting small-scale datasets with
additional data has been widely studied in the literature. Traditional augmentation methods, which
2
Classification Accuracy0.40.50.60.70.80.90100400500No AugmentationSpecAugVanilla Syn. Aug.Synthio (ours) 200 300 Number of Generated AugmentationsUnder review as a conference paper at ICLR 2025
apply randomly parameterized artificial transformations to data during training, remain the most
common approach across language Wei & Zou (2019); Karimi et al. (2021), vision (Shorten &
Khoshgoftaar, 2019; Wang et al., 2017; Yun et al., 2019), and audio (Park et al., 2019; Spijkervet,
2021). For audio, specific techniques include SpecAugment, adding background noise, reverberation,
and random spectrogram transformations. With the emergence of generative models, synthetic
data augmentation has been increasingly adopted for language (Ghosh et al., 2023a; 2024c; Chen
et al., 2021) and vision (Trabucco et al., 2024; Zhao et al., 2024), proving to be more effective than
traditional methods. These approaches generally incorporate explicit steps to ensure the consistency
and diversity of generated augmentations. In contrast, application of synthetic data to audio and
speech remain underexplored. Recent attempts include generating synthetic captions for improving
audio-language pre-training (Xu et al., 2023), improving T2A models with synthetic captions (Kong
et al., 2024) and environmental scene classification (Ronchini et al., 2024; Feng et al., 2024).
Few- and Zero-Shot Audio Classification. Few-shot audio classification focuses on training models
to classify audio samples with very limited labeled data per class, often leveraging transfer learning
or meta-learning approaches (Zhang et al., 2019; Wang et al., 2021; Heggan et al., 2022). In contrast,
zero-shot audio classification enables models to generalize to unseen categories without direct training
on those classes, relying on learned representations or external knowledge (Xie & Virtanen, 2021;
Elizalde et al., 2023). Synthetic data research complements these by generating additional labeled
data, improving model performance under low-resource settings while addressing data scarcity
without directly requiring labeled instances from the target categories.
Text-to-Audio Generation. In recent years, there has been a significant surge in research on text-
to-audio (T2A) models. The most popular architectures include auto-regressive models based on
codecs (Kreuk et al., 2023; Copet et al., 2024) and diffusion models Liu et al. (2023); Ghosal
et al. (2023); Evans et al. (2024a). Clotho (Drossos et al., 2020) and AudioCaps (Kim et al.,
2019) remain the largest human-annotated datasets for training these models. However, large-scale
datasets for T2A model training are still scarce. Recently, Yuan et al. (2024) synthetically captioned
AudioSet (Gemmeke et al., 2017), demonstrating its effectiveness for training T2A models. For
downstream adaptation, earlier works have primarily relied on Empirical Risk Minimization (ERM).
Majumder et al. (2024) introduced preference optimization for T2A models, creating a synthetic
preference dataset based on scores provided by a CLAP model (Elizalde et al., 2023).
3 BACKGROUND
Diffusion Models. Diffusion models consist of two main processes: a forward process and a reverse
process. Given a data point x0 with probability distribution p(x0), the forward diffusion process
gradually adds Gaussian noise to x0 according to a pre-set variance schedule γ1, · · · , γT and degrades
the structure of the data. We request readers to refer to App. A.1 for more details on diffusion models.
Reward Modeling. Estimating human preferences for a particular generation x0 (hereafter treated as
a random variable for language), given the context c, is challenging because we do not have direct
access to a reward model r(c, x0). In our scenario, we assume only ranked pairs of samples are
available, where one sample is considered a “winner” (xw
0) under the
same conditioning c. Based on the Bradley-Terry (BT) model, human preferences can be modeled as:
0|c) = σ(r(c, xw
(1)
where σ represents the sigmoid function. The reward model r(c, x0) is parameterized by a neural
network ϕ and trained through maximum likelihood estimation for binary classification:
0 ) and the other a “loser” (xl
0 ) − r(c, xl
pBT(xw
0 ≻ xl
0))
Here, prompt c and data pairs (xw
LBT(ϕ) = −E
0 , xl
0 ,xl
0
(cid:2)log σ(rϕ(c, xw
0 ) − rϕ(c, xl
c,xw
0) are drawn from a dataset labeled with human preferences.
0))(cid:3)
(2)
RLHF : (Christiano et al., 2017) The goal of RLHF is to optimize a conditional distribution pθ(x0|c),
where c ∼ Dc, such that the latent reward model r(c, x0) is maximized. This is done while regulariz-
ing the distribution through the Kullback-Leibler (KL) divergence from a reference distribution pref,
resulting in the following objective:
max
pθ
Ec∼Dc,x0∼pθ(x0|c)[r(c, x0)] − βDKL[pθ(x0|c)∥pref(x0|c)]
(3)
Here, the hyperparameter β controls the strength of regularization.
3
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
142
143
144
145
146
147
148
149
150
151
152
153
154
155
156
157
158
159
160
161
Under review as a conference paper at ICLR 2025
DPO : DPO directly optimizes the conditional distribution pθ(x0|c) to align data generation with the
preferences observed in (any form of) feedback. The goal is to optimize the distribution of generated
data such that it maximizes alignment with human preference rankings while maintaining consistency
with the underlying reference distribution pref(x0|c).
The optimal solution p∗
θ(x0|c) for the DPO objective can be expressed as:
p∗
θ(x0|c) = pref(x0|c)
exp(r(c, x0)/β)
Z(c)
where Z(c) is the partition function, defined as:
Z(c) =
(cid:88)
x0
pref(x0|c) exp(r(c, x0)/β)
(4)
(5)
This term ensures proper normalization of the distribution, and β controls the regularization, balancing
between adherence to the reference distribution and preference maximization. The reward function
r(c, x0) is then reparameterized as:
r(c, x0) = β log
p∗
θ(x0|c)
pref(x0|c)
+ β log Z(c)
Using this reparameterization, the reward objective can be formulated as:
LDPO(θ) = −E
c,xw
0 ,xl
0
(cid:20)
(cid:18)
log σ
β log
pθ(xw
pref(xw
0 |c)
0 |c)
− β log
(cid:19)(cid:21)
pθ(xl
pref(xl
0|c)
0|c)
(6)
(7)
By optimizing this objective, DPO enables direct preference learning, optimizing the conditional
distribution pθ(x0|c) in such a way that it better reflects human preferences, as opposed to traditional
approaches that optimize the reward function first and then perform reinforcement learning.
DPO for Diffusion Models: Very recently, Wallace et al. Wallace et al. (2024) propose a formulation
for optimizing diffusion models with DPO. The primary issue with optimizing diffusion with DPO is
that the distribution pθ(x0|c) is not tractable due to the need to consider all possible diffusion paths
leading to x0. To address this, Wallace et al. propose to leverage the evidence lower bound (ELBO)
to incorporate latents x1:T , which represent the diffusion path. The reward R(c, x0:T ) accounts for
the entire sequence, leading to the reward function:
r(c, x0) = Epθ(x1:T |x0,c)[R(c, x0:T )]
(8)
Instead of directly minimizing the KL-divergence as typically done, they propose to utlize the
upper bound of the joint KL-divergence DKL[pθ(x0:T |c)||pref(x0:T |c)]. This is integrated into the
optimization objective, enhancing the practicality of training diffusion models with preferences. The
new objective, aiming to maximize the reward and match the distribution of the reverse process of pθ
to the reference model pref, is given by:
max
pθ
Ec,x0∼pθ(x0:T |c)[r(c, x0)] − βDKL[pθ(x0:T |c)||pref(x0:T |c)]
(9)
Training efficiency is improved by approximating the intractable reverse process using a forward
approximation q(x1 : T |x0). The DPO then integrates this into the loss function, which involves
comparing the log likelihood ratio of the probabilities under pθ and pref for winning and losing paths:
LDPO-Diffusion(θ) = −E
(c,xw
0 ,xl
0)∼Dpref
(cid:20)
(cid:18)
log σ
βT log
pθ(xw
pref(xw
1:T |xw
0 )
1:T |xw
0 )
− βT log
(cid:19)(cid:21)
pθ(xl
pref(xl
1:T |xl
0)
1:T |xl
0)
(10)
After applying Jensen’s inequality to take advantage of the convexity of − log σ, we push the
expectation outside, allowing us to simplify the objective. By approximating the denoising process
with the forward process, the final form of the loss for DPO in diffusion models, in terms of the L2
noise estimation losses, becomes:
LDPO-Diffusion(θ) = −E
(11)
0)∼Dpref,t,ϵw
where ∆L is the L2 weighted noise estimation losses between the preferred (winner) and less
preferred (loser) samples.
[log σ (−βT ω(λt)∆L)]
(c,xw
0 ,xl
t ,ϵl
t
4 METHODOLOGY
Let Dsmall = {(ai, li), 1 ≤ i ≤ n} be a high-quality, small-scale human-annotated audio classification
dataset with n audio-label pairs. Let Da-c be a potentially noisy, large-scale weakly-captioned dataset
of audio-caption pairs with zero intersection with Dsmall. Our goal is to train a T2A model T θ using
4
162
163
164
165
166
167
168
169
170
171
172
173
174
175
176
177
178
179
180
181
182
183
184
185
186
187
188
189
190
191
192
193
194
195
196
197
198
199
200
201
202
203
204
205
206
207
208
209
210
211
212
213
214
215
Under review as a conference paper at ICLR 2025
Da-c, then use it to generate a synthetic dataset Dsyn and then finally add it to Dsmall (now attributed as
Dtrain) to improve audio classification performance. This is accomplished through two key steps: first,
aligning the generations from T θ with the acoustic characteristics of Dsmall, and second, generating
new captions to prompt T θ for creating synthetic audio data.
4.1 ALIGNING THE TEXT-TO-AUDIO MODEL USING PREFERENCE OPTIMIZATION
T2A models trained on internet-scale data often gen-
erate audio that diverges from the characteristics of
small-scale datasets, resulting in distribution shifts.
These mismatches can include variations in spectral
(e.g., frequency content), perceptual (e.g., pitch, loud-
ness), harmonic, or other acoustic characteristics 2.
This misalignment arises from the non-deterministic
nature of T2A generation and it is impractical to pro-
vide detailed attributes (like “loud” or “high-pitched”)
in prompts, as (i) there are no scalable methods for
extracting specific attributes for each label, and (ii)
T2A models struggle with accurately following fine-
grained prompt details (Wang et al., 2024).
Figure 2: We propose to align the T2A model
T θ with the small-scale dataset Dsmall using DPO.
This helps us generate audios with acoustic char-
acteristics aligned to that of Dsmall.
To address these issues, we propose the concept of
aligning teaching with learning preferences. Our ap-
proach assumes that the classification model (viewed
as the student) performs better when trained on syn-
thetic audio that closely matches the inherent acoustic
properties of our high-quality and human-labeled Dsmall. Thus, we align the generations of the T2A
model (viewed as the teacher) to Dsmall, ensuring that the generated augmentations align with the de-
sired characteristics and sound similar, ultimately enhancing the student model’s ability to generalize
to similarly characterized test data. As shown in Fig. 2, we achieve this using preference optimization
(DPO in our case) and align generations of T θ with Dsmall. Unlike standard fine-tuning, which can
lead to less diverse outputs and overfitting due to a narrow focus on minimizing loss, preference
optimization encourages greater exploration in the model’s output space, preventing mode collapse
and fostering more diverse augmentations. Additionally, DPO leverages pairwise learning, offering
richer training signals compared to the independent outputs used in standard fine-tuning, further
mitigating overfitting risks. We detail our two-step approach for DPO optimization below:
1 , al
j , al
1), · · · , (aw
Step 1: Construction of the Preference Dataset. To create our preference dataset Dpref =
{(aw
j)}, we first generate template-based captions for each instance in Dsmall in the
form: “Sound of a label”, where label is the category associated with the audio. For each instance,
we prompt the T2A model j times, with all generations starting from randomly initialized Gaussian
noise (generation configuration is detailed in Section 5). Each generated audio is then paired with
the corresponding ground-truth audio from the gold dataset. This resulting Dpref dataset has n × j
instances, where the generated audio is treated as the “loser” and the ground-truth audio as the
“winner”. This simple approach has proven highly effective in aligning generations by generative
models by prior work (Majumder et al., 2024; Tian et al., 2024).
Step 2: Preference Optimization Using DPO. After constructing Dpref, we train our T2A model
on this dataset with DPO using the approach outlined in Section 3. The resulting aligned model is
referred to as T θ
aln. Details of the hyper-parameters used for training are provided in Section 5.
4.2 GENERATING DIVERSE SYNTHETIC AUGMENTATIONS
It is not well-studied in the literature on how to leverage synthetic audio generation for downstream
tasks. The only existing work relied on manually crafted prompt templates (e.g., “Sound of a
{label}”) (Ronchini et al., 2024). It has a significant limitation: there is no precise control over
2When prompted with “sound of a bus” for the category “bus” in the TUT-Urban dataset, the generated audio
may not reflect the typical bus sounds in European cities (where TUT was recorded), as bus sounds can vary by
region, with some featuring loud engines and dense crowds while others have quieter engines and sparse crowds.
5
216
217
218
219
220
221
222
223
224
225
226
227
228
229
230
231
232
233
234
235
236
237
238
239
240
241
242
243
244
245
246
247
248
249
250
251
252
253
254
255
256
257
258
259
260
261
262
263
264
265
266
267
268
269
Sound of a {label}Random Gaussian NoiseText ConditionWinning AudiosLosing AudiosText-to-AudioModelAligned Text-to-Audio ModelGeneratedAudiosAdapters Training🔥🔥❄PreferenceOptimizationUnder review as a conference paper at ICLR 2025
270
271
272
273
274
275
276
277
278
279
280
281
282
283
284
285
286
287
288
289
290
291
292
293
294
295
296
297
298
299
300
301
302
303
304
305
306
307
308
309
310
311
312
313
314
315
316
317
318
319
320
321
322
323
Figure 3: Overview of our proposed Language-Guided Audio Imagination for generating diverse synthetic
augmentations. Starting with the small-scale dataset, we first generate audio captions and use an LLM to extract
acoustic components (Prompt 1). Using these components and audio labels, we prompt the LLM to generate new
and diverse captions (Prompt 2), which are then used to prompt the aligned T2A model for audio generation.
The generated audios are filtered for label consistency using CLAP, with accepted audios added to the final
synthetic dataset. Rejected audios undergo caption revision (Prompt 3) through a self-reflection process, and the
revised captions are used to regenerate audios, iterating this process i times. Example captions are in Table 6.
the specific components in the generated audio for a given caption. This can result in repetitive or
completely inconsistent patterns, particularly with weaker T2A models 3. These could bias the model
to learn spurious correlations, a known issue in synthetic data augmentation (Ghosh et al., 2024c).
While the alignment stage helps the T2A model generate audio with acoustic characteristics similar to
the small-scale dataset (e.g., spectral, harmonic, etc.), it does not fully account for the compositional
diversity of the generated audios (e.g., sound events, their temporal relationships, background
elements). To tackle this, we propose the concept of language-guided audio imagination, where we
propose to imagine novel audios guided by language. Specifically, we leverage the reasoning abilities
of LLMs to generate diverse and meaningful captions for a category label in a controlled yet scalable
manner. These captions are then used to prompt our aligned T2A model for generating novel audios.
4.2.1 GENERATING DIVERSE PROMPTS WITH MIXCAP
We propose MixCap, a prompt generation method that creates diverse and effective captions in
three steps: First, we employ GAMA (Ghosh et al., 2024a) to caption all audio files in Dsmall. Next,
we prompt an LLM to extract phrases describing the acoustic components of the audio. These
components correspond to the acoustic elements such as backgrounds and foreground events, and
their attributes and relations, etc (see prompt in Appendix A.2). Finally, for each training instance
in Dsmall, we prompt the LLM with the ground-truth label and the extracted components from all
instances to generate N diverse audio captions that blend existing and new components.
4.2.2 FILTERING & SELF-REFLECTION
Filtering. After generating captions and their corresponding audio, we filter the audio for label
consistency. While LLMs can generate diverse captions, the audio produced must remain aligned
with the ground-truth label. To ensure this, we use CLAP to evaluate the generated audio, accepting
those that meet a similarity threshold of p% and rejecting the rest. We denote the accepted audios as
Dacc
syn. Our CLAP model is pre-trained on Da-c and we fine-tune the last
layer with Dsmall to adapt to the target dataset. Example captions are in Table 6.
Self-Reflection. For the rejected audios in Drej
syn, we prompt the LLM to reflect on its generated
captions and revise them to better align with the target label. Precisely, we feed the LLM with the
syn and the rejected ones as Drej
3For example, when prompted with “Sound of a park”, we observed that 9 out of 10 times, the model
generated the sound of children playing as part of the generated audio. On the other hand, when prompted with
“Sound of a airport”, the model generates audios with background announcements, which could vary by regions.
6
ASTSmall-ScaleDatasetSyntheticDataLLMGenerated AudiosRejectedAudiosAccepted AudiosSelf-ReflectionExisting AcousticComponents CLAP FilteringLLMAudio CaptioningModelAudio CaptionsPrompt 1Text-to-AudioModelNew AudioCaptionsPrompt 2Prompt 3MixCap🔥❄❄❄Trainable🔥Frozen❄Under review as a conference paper at ICLR 2025
324
325
326
327
328
329
330
331
332
333
334
335
336
337
338
339
340
341
342
343
344
345
346
347
348
349
350
351
352
353
354
355
356
357
358
359
360
361
362
363
364
365
366
367
368
369
370
371
372
373
374
375
376
377
original caption of each rejected audio along with extracted components from all accepted captions
in Dacc
syn and task it to rewrite the rejected captions. The revised captions are then used to generate new
audio, which is again filtered using CLAP. Audios that meet the threshold are accepted while ones
that don’t go through the process. This repeats for i iterations or until there are no rejected samples.
Fine-tuning for Audio Classification. After the self-reflection stage, the final set of accepted
synthetic audios is denoted as Dsyn, containing ≈ N × n audio-label pairs, where N represents the
augmentation factor (e.g., with 100 gold samples, we generate 100 × N synthetic samples). This set
is then combined with Dsmall to form the final training dataset Dtrain, which is then used to train the
audio classification model.
5 EXPERIMENTAL SETUP
Models and Hyper-Parameters. For our T2A model, we choose the Stable Audio architecture (Evans
et al., 2024b). We train the model from scratch on Sound-VECaps (Yuan et al., 2024) (with ≈1.5
million weakly captioned audio-caption pairs) to avoid any data leakage. For training, we employ a
batch size of 64, an AdamW optimizer, a learning rate of 5e-4, and a weight decay of 1e-3 for 40
epochs. For DPO-based alignment tuning, we generate j = 2 losers and fine-tune with a batch size
of 32 and a learning rate of 5e-4 for 12 epochs. For our audio classification model, we employ the
Audio Spectrogram Transformer (AST) (Gong et al., 2021) (pre-trained on the AudioSet dataset) and
fine-tune it with a batch size of 24 and learning rate of 1e-4 for 50 epochs. For CLAP filtering we
employ p = 0.85. For prompting our diffusion model we use Text CFG=7.0. In each experiment, we
adjust the number of generated augmentations N (ranging from 1 to 5) based on performance on the
validation set. All results are averaged across 3 runs.
Datasets. We create small-scale datasets by downsampling commonly used audio classification
datasets to n samples. Our selected datasets include a mix of music, everyday sounds, and acoustic
scenes. For multi-class classification, we use NSynth Instruments, TUT Urban, ESC50 (Piczak),
USD8K (Salamon et al., 2014), GTZAN (Tzanetakis et al., 2001), Medley-solos-DB (Lostanlen
& Cella, 2017), MUSDB18 (Rafii et al., 2017), DCASE Task 4 (Mesaros et al., 2017), and Vocal
Sounds (VS) (Mesaros et al., 2017), evaluating them for accuracy. For multi-label classification, we
use the FSD50K (Fonseca et al., 2022) dataset and evaluate it using the F macro
metric. We exclude
AudioSet from evaluation as Sound-VECaps is derived from it. To ensure a downsampled dataset that
has a label distribution similar to that of the of the original dataset, we employ stratified sampling
based on categories. Our experiments are conducted with n = {50, 100, 200, 500} samples, and we
downsample the validation sets for training while evaluating all models on the original test splits.
1
Baselines. Our baselines include: (i) Gold-only (No Aug.): We employ only the small-scale
dataset for training and do not perform any augmentations. (ii) Traditional augmentation baselines:
SpecAugment, Noise Augmentation (we either add random Gaussian noise or background noise from
AudioSet and present averaged results), Pitch and Time Shift and Audiomentations (Jordal, 2021)
– a combination of the AddGaussianNoise, TimeStretch, PitchShift, Shift, SpecFrequencyMask,
TimeMask and TimeStretch – combination with the highest average score on 4 datasets and splits
and was selected after grid search over all possible combinations). (iii) Generative baselines: Vanilla
Synthetic Augmentation (Vanilla Syn. Aug.) – we prompt Tθ with template captions), Vanilla Syn.
Aug. + LLM Caps – we prompt Tθ with random captions generated with LLMs. (iv) Finally, inspired
by Burg et al. (2023), we also employ a retrieval baseline where instead of generating augmentations
from our T2A model trained on Da-c, we just retrieve the top-n instances (w.r.t. CLAP similarity)
from the AudioSet for each instance in Dsmall as our augmentations.
Ablations. We ablate Synthio with: (i) w/o Self-Reflection: We remove the repetitive self-reflection
module and iterate and filter only once; (ii) w/o DPO: We skip the tuning step and prompt the
un-alined T θ for augmentations; (iii) w/ ERM: We replace DPO tuning with standard Empirical
Risk Minimization(ERM)-based fine-tuning with diffusion loss; (iv) w/ Template Captions: We
remove MixCap and self-reflection modules and prompt T θ
aln with template captions; (v) w/o
MixCap: Similar to our Random Captions baseline, but we retain all other modules of Synthio.
6 RESULTS AND DISCUSSION
Main Results. Table 1 showcases the performance comparison between Synthio and the baseline
methods. Synthio consistently outperforms all baselines by 0.1%-39%, achieving notable improve-
7
Under review as a conference paper at ICLR 2025
Table 1: Result comparison of Synthio with baselines on 10 datasets and 4 small-scale settings. n refers to the
number of samples in the small-scale dataset augmented with synthetic data. Synthio outperforms our baselines
by 0.1% - 39%. We also highlight the relative improvements by Synthio compared to the Gold-only.
n
Method
ESC-50 USD8K GTZAN Medley
Gold-only (No Aug.)
22.25
55.09
47.05
47.23
TUT
37.60
NSynth
VS
MSDB
DCASE
FSD50K
33.32
77.49
56.85
12.09
13.21
12.93
12.81
13.28
10.53
15.89
13.07
7.16
8.06
10.04
7.93
10.17
7.28
10.63
10.70
17.23+42% 13.91+94%
14.15
13.28
15.63
14.82
14.53
12.50
13.35
12.19
14.17
16.93
10.93
15.73
16.32
13.06
13.79
13.74
12.52
10.13
10.53
13.71
13.11
14.80
13.55
10.05
12.63
13.25
19.38+55% 16.35+55%
16.32
17.21
15.89
16.77
14.83
23.15
13.62
14.52
12.14
13.62
12.53
13.59
Random Noise
Pitch Shifting
SpecAugment
Audiomentations
Retrieval
Vanilla Syn. Aug.
+ LLM Caps.
Synthio (ours)
50
w/ Template Captions
w/ ERM
w/o Self-Reflection
w/o MixCap
w/o DPO
57.42
59.32
58.36
60.13
37.14
63.54
65.84
45.20
46.80
46.00
47.25
42.55
55.35
63.74
18.50
20.55
19.50
20.35
19.20
40.75
36.80
35.86
37.22
36.73
38.24
35.80
41.50
40.90
46.55
48.17
47.18
48.30
43.65
47.23
55.36
49.50+122% 76.12+38% 68.20+44% 60.58+28% 43.84+17% 40.83+22% 80.67+4%
54.52
56.60
58.00
52.18
52.55
32.42
34.34
27.32
28.15
31.27
33.17
38.17
76.41
78.17
77.27
79.12
71.42
78.37
78.77
41.25
41.30
45.25
42.70
36.55
66.11
69.80
72.57
64.72
68.12
37.52
38.62
39.50
36.13
40.31
64.40
61.70
64.55
54.65
56.10
41.37
42.00
42.81
41.93
41.39
78.57
79.75
78.56
78.70
79.03
52.55
54.50
53.25
54.51
51.35
54.10
57.05
60.15+5%
59.60
57.75
57.25
58.80
57.55
Gold-only (No Aug.)
56.75
72.89
64.15
57.81
47.14
39.11
84.32
65.60
Random Noise
Pitch Shifting
SpecAugment
Audiomentations
Retrieval
Vanilla Syn. Aug.
+ LLM Caps.
Synthio (ours)
100
w/ Template Captions
w/ ERM
w/o Self-Reflection
w/o MixCap
w/o DPO
Gold-only (No Aug.)
Random Noise
Pitch Shifting
SpecAugment
Audiomentations
Retrieval
Vanilla Syn. Aug.
+ LLM Caps.
Synthio (ours)
200
w/ Template Captions
w/ ERM
w/o Self-Reflection
w/o MixCap
w/o DPO
Gold-only (No Aug.)
Random Noise
Pitch Shifting
SpecAugment
Audiomentations
Retrieval
Vanilla Syn. Aug.
+ LLM Caps.
Synthio (ours)
500
w/ Template Captions
w/ ERM
w/o Self-Reflection
w/o MixCap
w/o DPO
71.54
73.52
72.43
73.82
68.24
77.31
79.73
58.50
59.55
47.50
48.50
52.45
77.25
67.05
65.50
66.75
69.75
71.05
61.55
68.25
67.90
46.21
47.50
50.07
51.14
45.39
49.96
48.63
56.98
58.46
58.06
59.32
54.83
63.58
65.79
83.35+47% 85.00+17% 71.20+11% 71.23+23% 52.42+11% 44.92+15% 86.70+3%
64.20
66.57
68.52
66.52
60.81
38.20
39.53
41.96
42.15
37.84
42.31
41.83
83.33
85.07
85.14
85.24
83.27
84.78
84.83
78.00
73.20
77.65
73.50
66.75
80.32
81.81
82.38
78.30
75.46
42.76
43.74
44.38
42.27
40.31
68.15
67.25
69.55
68.50
66.15
49.95
51.11
51.75
50.63
48.78
85.11
84.73
82.53
83.52
84.67
66.15
68.25
66.40
68.40
58.55
63.55
65.95
68.80+5%
66.05
68.00
66.20
66.35
67.85
84.75
83.55
84.90
85.10
85.25
82.55
85.40
85.80
86.10+2%
85.95
85.35
84.85
84.95
84.80
90.75
89.55
88.50
89.50
89.95
85.50
91.50
89.90
92.10+2%
91.70
91.20
91.85
91.70
90.15
74.80
75.15
74.48
76.46
75.80
71.20
77.96
78.37
77.00
75.50
78.55
76.25
77.30
73.65
77.10
79.55
67.41
66.71
67.74
65.70
67.00
65.80
78.97
74.14
55.32
54.42
55.44
55.72
55.21
53.25
55.51
54.73
82.81+11% 82.05+7%
79.40+18% 56.83+3%
80.84
79.82
81.97
81.27
76.23
87.88
88.25
88.83
89.01
88.75
84.86
88.18
86.91
89.18+2%
88.93
88.25
88.72
87.93
88.21
79.25
80.20
78.25
79.55
75.30
79.25
78.90
79.75
80.25
81.25
77.25
79.35
79.55
82.25+4%
80.40
79.15
80.15
80.95
79.45
77.56
74.43
75.53
73.50
73.13
75.65
76.01
75.61
76.68
77.66
73.62
77.97
77.91
78.62+4%
76.64
77.38
78.57
76.61
76.03
55.99
55.76
56.39
55.27
55.99
65.72
65.10
64.93
66.74
66.92
62.73
65.93
65.95
67.81+3%
66.47
65.80
66.21
65.91
66.01
48.77
87.38
68.80
47.83
48.12
54.80
53.15
47.63
55.20
56.21
15.32
17.51
17.93
18.36
15.36
19.04
18.14
86.45
87.47
87.42
86.08
86.28
86.49
87.02
24.82
23.11
27.36
26.29
19.51
28.55
28.40
65.45
69.80
69.25
70.50
63.55
72.95
73.16
57.10+17% 87.52+0.2% 80.40+17% 32.81+42% 20.85+53%
74.55
74.40
75.55
78.55
73.15
19.04
18.22
17.28
19.42
17.17
56.33
56.15
56.76
55.54
52.73
87.25
86.92
86.22
85.78
86.52
29.12
29.81
31.13
28.35
26.79
63.47
89.33
72.05
34.30
20.19
64.15
64.59
64.43
65.21
61.44
64.52
64.39
65.40+3%
64.71
64.27
63.89
64.23
63.61
90.15
89.87
90.38
91.34
87.33
90.31
90.09
91.42+2%
90.97
88.74
90.17
90.23
89.83
73.25
72.15
72.95
73.65
70.20
73.25
73.05
74.70+3%
73.35
74.20
72.15
73.40
72.65
37.21
36.54
38.33
38.75
30.17
37.26
38.74
39.24+6%
38.28
38.03
37.97
39.11
37.04
19.49
21.24
21.46
23.11
14.17
23.52
22.67
23.89+18%
22.35
22.39
22.41
21.65
20.19
ments in overall classification accuracy compared to Gold-only. The highest gains are observed on
USD8K, while the least is on Vocal Sound, likely due to the T2A dataset’s heavy representation of
music compared to the more sparse vocal sounds. Performance gains tend to decrease as the number
of gold samples n in Dsmall grows, aligning with observed trends in prior studies. Detailed results on
the full non-down-sampled datasets can be found in Appendix A.4.1. Although Vanilla Synthetic
Augmentations emerge as the strongest baseline, they lag behind Synthio by an average of 3.5%.
Ablations. The most significant performance drop in Synthio is observed w/o DPO, resulting in
an average decline of 4.5%, highlighting the crucial role of consistency in generating effective
augmentations. Second to w/o DPO, the highest drop is seen in w/ Template Captions, with average
decline of 2.7%, thus highlighting the importance of MixCap.
8
378
379
380
381
382
383
384
385
386
387
388
389
390
391
392
393
394
395
396
397
398
399
400
401
402
403
404
405
406
407
408
409
410
411
412
413
414
415
416
417
418
419
420
421
422
423
424
425
426
427
428
429
430
431
Under review as a conference paper at ICLR 2025
Figure 4: Comparison of spectral and pitch features between generated audios in Dsyn and real audios in Dsmall
(for n = 100). Synthio-generated audios closely replicate the features of real data, demonstrating its ability to
produce augmentations that maintain consistency with the original dataset (also see FAD scores in Sec. A.4.3).
6.1 HOW CONSISTENT AND DIVERSE ARE AUGMENTATIONS GENERATED BY SYNTHIO?
Table 2: CLAP similarity score be-
tween real audios and generated data.
Lower scores show higher composi-
tional diversity among generated augs.
Fig. 4 compares the distributions of pitch and various spectral
features between generated audios in Dsyn and real audios in
Dsmall across different methods on the USD8K and NSynth
datasets. The features analyzed include Pitch Salience (clar-
ity of the main pitch) (Ricard, 2004), Spectral Flatness (tonal
vs. noise-like quality) (Peeters, 2004), Flux (rate of spectral
change) (Tzanetakis & Cook, 1999), and Complexity (level of
sound detail) (Laurier et al., 2010). Notably, Synthio-generated
audios closely replicate the spectral features of the original
audios, showing the best alignment among all methods and
demonstrating Synthio’s ability to generate consistent augmen-
tations. Table 2 presents CLAP similarity scores between ground-truth audios and their N generated
augmentations, averaged across all dataset instances. Audios generated with Synthio achieve the
highest compositional diversity for generated audios among all baselines. Table 8 shows that audios
generated using Synthio have the highest similarity with the ground-truth category label.
w/ Template Captions
w/ ERM
w/ Template Captions
w/ ERM
Vanilla Syn. Aug.
Synthio (ours)
Vanilla Syn. Aug.
Synthio (ours)
USD8K(↓) NSynth(↓)
47.22
34.58
46.84
52.54
45.17
35.09
46.82
50.01
31.76
22.97
33.00
42.33
33.81
23.03
37.16
43.98
Method
100
200
#
6.2 HOW GOOD ARE SYNTHETIC AUDIOS GENERATED BY SYNTHIO?
Consistent with prior findings in vision (He et al.,
2023), we observe that synthetic data alone performs
sub-optimally compared to human-annotated data.
However, our results show that enhancing the consis-
tency and diversity of synthetic data aided by a small-
scale version of the target dataset significantly im-
proves model performance. Table 3 compares models
trained exclusively on synthetic data with our base-
lines (i.e., only Dsyn is used for training AST). Syn-
thio outperforms all baselines by 0.1%-26.25%, with
DPO-based alignment driving the improvements.
Table 3: Performance comparison of Synthio with
baselines on synthetic-only audio classification.
n
Method
GTZAN
VS
TUT MSDB
Gold-only (No Aug.)
Vanilla Syn. Aug.
Synthio (ours)
100
w/ Template Captions
w/ ERM
w/o DPO
Gold-only (No Aug.)
Vanilla Syn. Aug.
Synthio (ours)
200
w/ Template Captions
w/ ERM
w/o DPO
64.15
29.05
33.10
24.50
25.65
17.60
77.00
32.35
35.15
29.90
28.10
19.85
84.32
47.14
34.13
39.20
30.99
32.76
21.57
21.69
24.51
21.73
24.40
20.39
87.38
55.32
41.96
48.14
35.53
36.29
26.85
24.23
27.00
23.61
25.71
21.40
65.60
35.60
56.45
40.40
42.85
30.20
68.80
39.25
61.45
41.20
46.70
36.75
6.3 CAN SYNTHIO BE EXTENDED TO THE MORE COMPLEX AUDIO CAPTIONING TASK?
involves describing the content of an audio sam-
Audio captioning, unlike classification,
ple using natural
To demonstrate Synthio’s
effectiveness for audio captioning, we evaluated it on down-sampled versions of Audio-
Caps.
For this task, we adapted Synthio by removing the audio captioning and CLAP
filtering stages and we extract acoustic features directly from the existing audio captions.
language, making it a more complex task.
9
432
433
434
435
436
437
438
439
440
441
442
443
444
445
446
447
448
449
450
451
452
453
454
455
456
457
458
459
460
461
462
463
464
465
466
467
468
469
470
471
472
473
474
475
476
477
478
479
480
481
482
483
484
485
NSynthUSD8KMethodMethodMethodMethodMethodMethodMethodMethodUnder review as a conference paper at ICLR 2025
Table 4: Performance comparison of Synthio with
baselines on audio captioning.
Additionally, we retrain our T2A model on a modified
version of Sound-VECaps, excluding any audio from
AudioCaps. Training and evaluation were conducted
using the EnCLAP framework (Kim et al., 2024), and
the dataset was expanded with 4× synthetic samples.
As shown in Table 4, Synthio significantly outper-
forms baseline settings, with improvements largely
due to better alignment w/ DPO. However, manual
inspection revealed that generated audios occasionally do not match their captions compositionally,
reflecting limitations of the current T2A model. While this issue does not affect classification, it
poses challenges for captioning. We will explore more advanced methods as part of future work.
Gold-only (No Aug.)
Vanilla Syn. Aug.
VECaps Retrieval
Synthio (ours)
Gold-only (No Aug.)
Vanilla Syn. Aug.
VECaps Retrieval
Synthio (ours)
0.0754
0.0741
0.0550
0.104
METEOR (↑) CIDEr (↑)
0.112
0.140
0.100
0.202
0.127
0.135
0.088
0.185
0.067
0.092
0.068
0.119
0.112
0.136
0.082
0.194
0.125
0.128
0.108
0.169
0.157
0.166
0.097
0.256
0.148
0.157
0.094
0.227
SPIDEr (↑)
SPICE (↑)
Method
1000
500
n
Table 5: Performance comparison of Synthio
with other baselines on different values of N .
6.4 HOW WELL DOES SYNTHIO SCALE?
Table 5 compares the performance of Synthio, SpecAug-
ment, and Vanilla Synthetic Augmentations across differ-
ent scaling factors N = {1, 2, 3, 4, 5}, where N represents
the number of synthetic samples generated per original
sample in the small-scale dataset (in this case we fix n =
100). As observed, SpecAugment, a traditional augmen-
tation method, cannot scale with increasing N , and the
performance of Vanilla plateaus at higher N . A similar
saturation occurs with Synthio when MixCap is not used.
Even without DPO, Synthio maintains better scalability,
though with reduced overall performance. These results
highlight that MixCap’s ability to generate diverse captions is crucial for Synthio’s scalability.
SpecAugment
Vanilla Syn. Aug.
Synthio (ours)
w/o MixCap
w/o DPO
SpecAugment
Vanilla Syn. Aug.
Synthio (ours)
w/o MixCap
w/o DPO
41.96
33.13
35.28
40.41
39.23
47.50
67.90
77.45
64.30
61.55
47.50
77.25
81.75
68.45
64.25
41.96
35.28
36.37
41.08
39.42
41.96
42.31
43.56
41.95
40.17
47.50
76.75
82.55
71.55
65.95
41.96
41.54
44.92
42.27
40.31
47.50
75.60
83.15
72.85
66.60
Dataset Method
Scaling Factor N
NSynth
ESC50
5x
47.50
71.25
83.35
73.50
66.75
41.96
38.27
44.81
42.15
39.82
4x
3x
1x
2x
6.5 DOES SYNTHIO HELP LONG-TAILED CATEGORIES?
Figure 5 shows the classification accuracy
on four underrepresented categories in the
NSynth dataset, comparing performance
before and after applying Synthio aug-
mentations. We selected categories with
the lowest frequency in the downsampled
dataset, such as flute and guitar, which ap-
pear only once in the down-sampled sets.
Synthio significantly boosts accuracy, with
improvements up to 48%. Notably, cat-
egory labels like flute and guitar, which
originally had 0% accuracy, show substan-
tial gains with Synthio augmentation. This
demonstrates Synthio’s effectiveness in boosting performance on long-tail labels, a common challenge
in real-world datasets (Zhang et al., 2023).
Figure 5: Category-wise improvement in performance with
Synthio augmentations for long-tailed categories.
7 CONCLUSION, LIMITATIONS, AND FUTURE WORK
We introduced Synthio, a novel approach for augmenting small-scale audio classification datasets
with synthetic data. Synthio incorporates several innovative components to generate augmentations
that are both consistent with and diverse from the small-scale dataset. Our extensive experiments
demonstrate that even when using a T2A model trained on a weakly-captioned AudioSet, Synthio
significantly outperforms multiple baselines.
However, Synthio has some limitations: (i) Its performance is influenced by the capabilities of the T2A
model and the quality of its training data. As T2A models continue to improve, we expect Synthio’s
performance to benefit accordingly. (ii) The process of generating audio captions using LLMs may
introduce biases inherent in the LLMs into the training process. (iii) Synthio is computationally
more intensive than traditional augmentation methods due to the need for prompting LLMs and
T2A models. We anticipate that ongoing advancements in model efficiency will help mitigate these
computational challenges.
10
486
487
488
489
490
491
492
493
494
495
496
497
498
499
500
501
502
503
504
505
506
507
508
509
510
511
512
513
514
515
516
517
518
519
520
521
522
523
524
525
526
527
528
529
530
531
532
533
534
535
536
537
538
539
44.0031.4622.8853.525.5364.3639.0048.0011.7617.9921.1114.62015.34CategoriesClassification Accuracy (%)020406080basskeyboardstringorganfluteguitarreedmalletImproved w/ SynthioGold-onlyUnder review as a conference paper at ICLR 2025
540
541
542
543
544
545
546
547
548
549
550
551
552
553
554
555
556
557
558
559
560
561
562
563
564
565
566
567
568
569
570
571
572
573
574
575
576
577
578
579
580
581
582
583
584
585
586
587
588
589
590
591
592
593
8 REPRODUCIBILITY STATEMENT
We provide our code in the supplementary material with this submission. All codes will be open-
sourced upon paper acceptance, including all T2A checkpoints. All experimental details, including
training parameters and hyper-parameters, are provided in Section 5.
REFERENCES
Haider Al-Tahan and Yalda Mohsenzadeh. Clar: Contrastive learning of auditory representations. In
International Conference on Artificial Intelligence and Statistics, pp. 2530–2538. PMLR, 2021.
Shekoofeh Azizi, Simon Kornblith, Chitwan Saharia, Mohammad Norouzi, and David J. Fleet.
Synthetic data from diffusion models improves imagenet classification. Transactions on Machine
Learning Research, 2023. ISSN 2835-8856. URL https://openreview.net/forum?
id=DlRsoxjyPm.
Max F Burg, Florian Wenzel, Dominik Zietlow, Max Horn, Osama Makansi, Francesco Locatello, and
Chris Russell. Image retrieval outperforms diffusion models on data augmentation. Transactions
on Machine Learning Research, 2023. ISSN 2835-8856. URL https://openreview.net/
forum?id=xflYdGZMpv.
Jiaao Chen, Derek Tam, Colin Raffel, Mohit Bansal, and Diyi Yang. An empirical survey of data
augmentation for limited data learning in nlp. Transactions of the Association for Computational
Linguistics, 11:191–211, 2021. URL https://api.semanticscholar.org/CorpusID:
235422524.
Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep
reinforcement learning from human preferences. Advances in neural information processing
systems, 30, 2017.
Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi, and
Alexandre D´efossez. Simple and controllable music generation. Advances in Neural Information
Processing Systems, 36, 2024.
Konstantinos Drossos, Samuel Lipping, and Tuomas Virtanen. Clotho: An audio captioning dataset.
In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing
(ICASSP), pp. 736–740. IEEE, 2020.
Benjamin Elizalde, Soham Deshmukh, Mahmoud Al Ismail, and Huaming Wang. Clap learning
audio concepts from natural language supervision. In ICASSP 2023-2023 IEEE International
Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1–5. IEEE, 2023.
Zach Evans, CJ Carr, Josiah Taylor, Scott H Hawley, and Jordi Pons. Fast timing-conditioned latent
audio diffusion. arXiv preprint arXiv:2402.04825, 2024a.
Zach Evans, Julian D Parker, CJ Carr, Zack Zukowski, Josiah Taylor, and Jordi Pons. Stable audio
open. arXiv preprint arXiv:2407.14358, 2024b.
Tiantian Feng, Dimitrios Dimitriadis, and Shrikanth Narayanan. Can synthetic audio from generative
foundation models assist audio recognition and speech modeling? arXiv preprint arXiv:2406.08800,
2024.
Eduardo Fonseca, Xavier Favory, Jordi Pons, Frederic Font, and Xavier Serra. FSD50K: an open
dataset of human-labeled sound events. IEEE/ACM Transactions on Audio, Speech, and Language
Processing, 30:829–852, 2022.
Jonas Geiping, Micah Goldblum, Gowthami Somepalli, Ravid Shwartz-Ziv, Tom Goldstein, and
Andrew Gordon Wilson. How much data are augmentations worth? an investigation into scaling
laws, invariance, and implicit regularization. arXiv preprint arXiv:2210.06441, 2022.
Jort F Gemmeke, Daniel PW Ellis, Dylan Freedman, Aren Jansen, Wade Lawrence, R Channing
Moore, Manoj Plakal, and Marvin Ritter. Audio set: An ontology and human-labeled dataset for
audio events. In 2017 IEEE international conference on acoustics, speech and signal processing
(ICASSP), pp. 776–780. IEEE, 2017.
11
Under review as a conference paper at ICLR 2025
594
595
596
597
598
599
600
601
602
603
604
605
606
607
608
609
610
611
612
613
614
615
616
617
618
619
620
621
622
623
624
625
626
627
628
629
630
631
632
633
634
635
636
637
638
639
640
641
642
643
644
645
646
647
Deepanway Ghosal, Navonil Majumder, Ambuj Mehrish, and Soujanya Poria. Text-to-audio gener-
ation using instruction tuned llm and latent diffusion model. arXiv preprint arXiv:2304.13731,
2023.
Sreyan Ghosh, Ashish Seth, and S Umesh. Decorrelating feature spaces for learning general-purpose
audio representations. IEEE Journal of Selected Topics in Signal Processing, 16(6):1402–1414,
2022. doi: 10.1109/JSTSP.2022.3202093.
Sreyan Ghosh, Chandra Kiran Reddy Evuru, Sonal Kumar, S Ramaneswaran, S Sakshi, Utkarsh
Tyagi, and Dinesh Manocha. DALE: Generative data augmentation for low-resource legal NLP.
In Houda Bouamor, Juan Pino, and Kalika Bali (eds.), Proceedings of the 2023 Conference
on Empirical Methods in Natural Language Processing, pp. 8511–8565, Singapore, December
2023a. Association for Computational Linguistics. doi: 10.18653/v1/2023.emnlp-main.528. URL
https://aclanthology.org/2023.emnlp-main.528.
Sreyan Ghosh, Ashish Seth, Srinivasan Umesh, and Dinesh Manocha. Mast: Multiscale audio
spectrogram transformers. In ICASSP 2023-2023 IEEE International Conference on Acoustics,
Speech and Signal Processing (ICASSP), pp. 1–5. IEEE, 2023b.
Sreyan Ghosh, Sonal Kumar, Ashish Seth, Chandra Kiran Reddy Evuru, Utkarsh Tyagi, S Sakshi,
Oriol Nieto, Ramani Duraiswami, and Dinesh Manocha. Gama: A large audio-language model
with advanced audio understanding and complex reasoning abilities, 2024a. URL https://
arxiv.org/abs/2406.11768.
Sreyan Ghosh, Ashish Seth, Sonal Kumar, Utkarsh Tyagi, Chandra Kiran Reddy Evuru, Ra-
maneswaran S, S Sakshi, Oriol Nieto, Ramani Duraiswami, and Dinesh Manocha. Compa:
Addressing the gap in compositional reasoning in audio-language models. In The Twelfth Interna-
tional Conference on Learning Representations, 2024b. URL https://openreview.net/
forum?id=86NGO8qeWs.
Sreyan Ghosh, Utkarsh Tyagi, Sonal Kumar, Chandra Kiran Reddy Evuru, , Ramaneswaran S,
S Sakshi, and Dinesh Manocha. ABEX: Data augmentation for low-resource NLU via expanding
abstract descriptions. In The 62nd Annual Meeting of the Association for Computational Linguistics,
2024c.
Yuan Gong, Yu-An Chung, and James Glass. Ast: Audio spectrogram transformer. arXiv preprint
arXiv:2104.01778, 2021.
Ruifei He, Shuyang Sun, Xin Yu, Chuhui Xue, Wenqing Zhang, Philip Torr, Song Bai, and XI-
AOJUAN QI. IS SYNTHETIC DATA FROM GENERATIVE MODELS READY FOR IMAGE
RECOGNITION? In The Eleventh International Conference on Learning Representations, 2023.
URL https://openreview.net/forum?id=nUmCcZ5RKF.
Calum Heggan, Sam Budgett, Timothy Hospedales, and Mehrdad Yaghoobi. Metaaudio: A few-shot
audio classification benchmark. In Artificial Neural Networks and Machine Learning – ICANN
2022, pp. 219–230, Cham, 2022. Springer International Publishing. ISBN 978-3-031-15919-0.
I Jordal. audiomentations, 2021. URL https://zenodo.org/record/13639627.
Akbar Karimi, Leonardo Rossi, and Andrea Prati. AEDA: An easier data augmentation technique
for text classification. In Marie-Francine Moens, Xuanjing Huang, Lucia Specia, and Scott Wen-
tau Yih (eds.), Findings of the Association for Computational Linguistics: EMNLP 2021, pp.
2748–2754, Punta Cana, Dominican Republic, November 2021. Association for Computational
Linguistics. doi: 10.18653/v1/2021.findings-emnlp.234. URL https://aclanthology.
org/2021.findings-emnlp.234.
Kevin Kilgour, Mauricio Zuluaga, Dominik Roblek, and Matthew Sharifi. Fr\’echet audio distance:
A metric for evaluating music enhancement algorithms. arXiv preprint arXiv:1812.08466, 2018.
Chris Dongjoo Kim, Byeongchang Kim, Hyunmin Lee, and Gunhee Kim. Audiocaps: Generating
captions for audios in the wild. In Proceedings of the 2019 Conference of the North American
Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume
1 (Long and Short Papers), pp. 119–132, 2019.
12
Under review as a conference paper at ICLR 2025
648
649
650
651
652
653
654
655
656
657
658
659
660
661
662
663
664
665
666
667
668
669
670
671
672
673
674
675
676
677
678
679
680
681
682
683
684
685
686
687
688
689
690
691
692
693
694
695
696
697
698
699
700
701
Jaeyeon Kim, Jaeyoon Jung, Jinjoo Lee, and Sang Hoon Woo. Enclap: Combining neural audio codec
and audio-text joint embedding for automated audio captioning. In ICASSP 2024 - 2024 IEEE
International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 6735–6739,
2024. doi: 10.1109/ICASSP48485.2024.10446672.
Zhifeng Kong, Sang-gil Lee, Deepanway Ghosal, Navonil Majumder, Ambuj Mehrish, Rafael Valle,
Soujanya Poria, and Bryan Catanzaro. Improving text-to-audio models with synthetic captions.
arXiv preprint arXiv:2406.15487, 2024.
Felix Kreuk, Gabriel Synnaeve, Adam Polyak, Uriel Singer, Alexandre D´efossez, Jade Copet,
Devi Parikh, Yaniv Taigman, and Yossi Adi. Audiogen: Textually guided audio generation.
In The Eleventh International Conference on Learning Representations, 2023. URL https:
//openreview.net/forum?id=CYK7RfcOzQ4.
Cyril Laurier, Owen Meyers, Joan Serra, Martin Blech, Perfecto Herrera, and Xavier Serra. Indexing
music by mood: design and integration of an automatic content-based annotator. Multimedia Tools
and Applications, 48:161–184, 2010.
Haohe Liu, Zehua Chen, Yi Yuan, Xinhao Mei, Xubo Liu, Danilo Mandic, Wenwu Wang, and
Mark D Plumbley. Audioldm: Text-to-audio generation with latent diffusion models. arXiv
preprint arXiv:2301.12503, 2023.
Lin Long, Rui Wang, Ruixuan Xiao, Junbo Zhao, Xiao Ding, Gang Chen, and Haobo Wang.
On llms-driven synthetic data generation, curation, and evaluation: A survey. arXiv preprint
arXiv:2406.15126, 2024.
Vincent Lostanlen and Carmine-Emanuele Cella. Deep convolutional networks on the pitch spiral for
musical instrument recognition, 2017. URL https://arxiv.org/abs/1605.06644.
Navonil Majumder, Chia-Yu Hung, Deepanway Ghosal, Wei-Ning Hsu, Rada Mihalcea, and Soujanya
Poria. Tango 2: Aligning diffusion-based text-to-audio generations through direct preference
optimization. arXiv preprint arXiv:2404.09956, 2024.
Pranay Manocha, Zeyu Jin, Richard Zhang, and Adam Finkelstein. Cdpam: Contrastive learning for
perceptual audio similarity. In ICASSP 2021-2021 IEEE International Conference on Acoustics,
Speech and Signal Processing (ICASSP), pp. 196–200. IEEE, 2021.
Irene Mart´ın-Morat´o and Annamaria Mesaros. What is the ground truth? reliability of multi-annotator
data for audio tagging. In 2021 29th European Signal Processing Conference (EUSIPCO), pp.
76–80. IEEE, 2021.
Annamaria Mesaros, Toni Heittola, Aleksandr Diment, Benjamin Elizalde, Ankit Shah, Emmanuel
Vincent, Bhiksha Raj, and Tuomas Virtanen. Dcase 2017 challenge setup: Tasks, datasets and
baseline system. In DCASE 2017-Workshop on Detection and Classification of Acoustic Scenes
and Events, 2017.
Loris Nanni, Gianluca Maguolo, and Michelangelo Paci. Data augmentation approaches for improving
animal audio classification. Ecological Informatics, 57:101084, 2020.
An Thanh Nguyen, Byron Wallace, Junyi Jessy Li, Ani Nenkova, and Matthew Lease. Aggregating
and predicting sequence labels from crowd annotations. In Regina Barzilay and Min-Yen Kan (eds.),
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume
1: Long Papers), pp. 299–309, Vancouver, Canada, July 2017. Association for Computational
Linguistics. doi: 10.18653/v1/P17-1028. URL https://aclanthology.org/P17-1028.
Daisuke Niizumi, Daiki Takeuchi, Yasunori Ohishi, Noboru Harada, and Kunio Kashino. Byol for
audio: Self-supervised learning for general-purpose audio representation. In 2021 International
Joint Conference on Neural Networks (IJCNN), pp. 1–8. IEEE, 2021.
Daniel S Park, William Chan, Yu Zhang, Chung-Cheng Chiu, Barret Zoph, Ekin D Cubuk, and
Quoc V Le. Specaugment: A simple data augmentation method for automatic speech recognition.
arXiv preprint arXiv:1904.08779, 2019.
13
Under review as a conference paper at ICLR 2025
702
703
704
705
706
707
708
709
710
711
712
713
714
715
716
717
718
719
720
721
722
723
724
725
726
727
728
729
730
731
732
733
734
735
736
737
738
739
740
741
742
743
744
745
746
747
748
749
750
751
752
753
754
755
Geoffroy Peeters. A large set of audio features for sound description (similarity and classification) in
the cuidado project. CUIDADO Ist Project Report, 54(0):1–25, 2004.
Karol J. Piczak. ESC: Dataset for Environmental Sound Classification. In Proceedings of the 23rd
Annual ACM Conference on Multimedia, pp. 1015–1018. ACM Press. ISBN 978-1-4503-3459-
4. doi: 10.1145/2733373.2806390. URL http://dl.acm.org/citation.cfm?doid=
2733373.2806390.
Zafar Rafii, Antoine Liutkus, Fabian-Robert St¨oter, Stylianos Ioannis Mimilakis, and Rachel Bittner.
The MUSDB18 corpus for music separation, December 2017. URL https://doi.org/10.
5281/zenodo.1117372.
Zhao Ren, Kun Qian, Tanja Schultz, and Bj¨orn W. Schuller. An overview of the icassp special
session on ai security and privacy in speech and audio processing. In Proceedings of the 5th
ACM International Conference on Multimedia in Asia Workshops, MMAsia ’23 Workshops, New
York, NY, USA, 2023. Association for Computing Machinery.
ISBN 9798400703263. doi:
10.1145/3611380.3628563. URL https://doi.org/10.1145/3611380.3628563.
Julien Ricard. Towards computational morphological description of sound. DEA pre-thesis research
work, Universitat Pompeu Fabra, Barcelona, 2004.
Francesca Ronchini, Luca Comanducci, and Fabio Antonacci. Synthesizing soundscapes: Leveraging
text-to-audio models for environmental sound classification. arXiv preprint arXiv:2403.17864,
2024.
Aaqib Saeed, David Grangier, and Neil Zeghidour. Contrastive learning of general-purpose audio
representations. In ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and
Signal Processing (ICASSP), pp. 3875–3879. IEEE, 2021.
J. Salamon, C. Jacoby, and J. P. Bello. A dataset and taxonomy for urban sound research. In 22nd
ACM International Conference on Multimedia (ACM-MM’14), pp. 1041–1044, Orlando, FL, USA,
Nov. 2014.
Ashish Seth, Sreyan Ghosh, Srinivasan Umesh, and Dinesh Manocha. Slicer: Learning universal
audio representations using low-resource self-supervised pre-training. In ICASSP 2023-2023 IEEE
International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1–5. IEEE,
2023.
Connor Shorten and Taghi M Khoshgoftaar. A survey on image data augmentation for deep learning.
Journal of big data, 6(1):1–48, 2019.
Janne Spijkervet. Spijkervet/torchaudio-augmentations, 2021. URL https://zenodo.org/
record/4748582.
Jinchuan Tian, Chunlei Zhang, Jiatong Shi, Hao Zhang, Jianwei Yu, Shinji Watanabe, and Dong Yu.
Preference alignment improves language model-based tts. arXiv preprint arXiv:2409.12403, 2024.
Brandon Trabucco, Kyle Doherty, Max A Gurinas, and Ruslan Salakhutdinov. Effective data
In The Twelfth International Conference on Learning
augmentation with diffusion models.
Representations, 2024. URL https://openreview.net/forum?id=ZWzUA9zeAg.
George Tzanetakis and Perry Cook. Multifeature audio segmentation for browsing and annotation.
In Proceedings of the 1999 IEEE Workshop on Applications of Signal Processing to Audio and
Acoustics. WASPAA’99 (Cat. No. 99TH8452), pp. 103–106. IEEE, 1999.
George Tzanetakis, Georg Essl, and Perry Cook. Automatic musical genre classification of audio
signals, 2001. URL http://ismir2001.ismir.net/pdf/tzanetakis.pdf.
Bram Wallace, Meihua Dang, Rafael Rafailov, Linqi Zhou, Aaron Lou, Senthil Purushwalkam,
Stefano Ermon, Caiming Xiong, Shafiq Joty, and Nikhil Naik. Diffusion model alignment using
direct preference optimization. In Proceedings of the IEEE/CVF Conference on Computer Vision
and Pattern Recognition, pp. 8228–8238, 2024.
14
Under review as a conference paper at ICLR 2025
Jason Wang, Luis Perez, et al. The effectiveness of data augmentation in image classification using
deep learning. Convolutional Neural Networks Vis. Recognit, 11(2017):1–8, 2017.
Yu Wang, Nicholas J. Bryan, Mark Cartwright, Juan Pablo Bello, and Justin Salamon. Few-
In ICASSP 2021 - 2021 IEEE International
shot continual learning for audio classification.
Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 321–325, 2021. doi:
10.1109/ICASSP39728.2021.9413584.
Yuanyuan Wang, Hangting Chen, Dongchao Yang, Zhiyong Wu, Helen Meng, and Xixin Wu.
Audiocomposer: Towards fine-grained audio generation with natural language descriptions. arXiv
preprint arXiv:2409.12560, 2024.
Jason Wei and Kai Zou. EDA: Easy data augmentation techniques for boosting performance on text
classification tasks. In Kentaro Inui, Jing Jiang, Vincent Ng, and Xiaojun Wan (eds.), Proceedings
of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th
International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 6382–
6388, Hong Kong, China, November 2019. Association for Computational Linguistics. doi:
10.18653/v1/D19-1670. URL https://aclanthology.org/D19-1670.
Zeng Weili, Yichao Yan, Qi Zhu, Zhuo Chen, Pengzhi Chu, Weiming Zhao, and Xiaokang Yang.
Infusion: Preventing customized text-to-image diffusion from overfitting. In ACM Multimedia
2024, 2024.
Huang Xie and Tuomas Virtanen. Zero-shot audio classification via semantic embeddings. IEEE/ACM
Transactions on Audio, Speech, and Language Processing, 29:1233–1242, 2021.
Xuenan Xu, Zhiling Zhang, Zelin Zhou, Pingyue Zhang, Zeyu Xie, Mengyue Wu, and Kenny Q Zhu.
Blat: Bootstrapping language-audio pre-training based on audioset tag-guided synthetic data. In
Proceedings of the 31st ACM International Conference on Multimedia, pp. 2756–2764, 2023.
Yi Yuan, Dongya Jia, Xiaobin Zhuang, Yuanzhe Chen, Zhengxi Liu, Zhuo Chen, Yuping Wang,
Improving audio generation with visual
Yuxuan Wang, Xubo Liu, Mark D Plumbley, et al.
enhanced caption. arXiv preprint arXiv:2407.04416, 2024.
Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, and Youngjoon Yoo.
Cutmix: Regularization strategy to train strong classifiers with localizable features. In Proceedings
of the IEEE/CVF international conference on computer vision, pp. 6023–6032, 2019.
Shilei Zhang, Yong Qin, Kewei Sun, and Yonghua Lin. Few-shot audio classification with attentional
graph neural networks. In Interspeech, pp. 3649–3653, 2019.
Yifan Zhang, Bingyi Kang, Bryan Hooi, Shuicheng Yan, and Jiashi Feng. Deep long-tailed learning:
A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(9):10795–10816,
2023.
Qihao Zhao, Yalun Dai, Hao Li, Wei Hu, Fan Zhang, and Jun Liu. Ltgc: Long-tail recognition
via leveraging llms-driven generated content. In Proceedings of the IEEE/CVF Conference on
Computer Vision and Pattern Recognition (CVPR), pp. 19510–19520, June 2024.
A APPENDIX
Table of Contents:
• A.1 Background on Diffusion Models
• A.2 Prompts
• A.3 Examples
• A.4 Extra Results
• A.5 Dataset Details
• A.6 Algorithm
15
756
757
758
759
760
761
762
763
764
765
766
767
768
769
770
771
772
773
774
775
776
777
778
779
780
781
782
783
784
785
786
787
788
789
790
791
792
793
794
795
796
797
798
799
800
801
802
803
804
805
806
807
808
809
Under review as a conference paper at ICLR 2025
A.1 DIFFUSION MODELS
Diffusion models consist of two main processes: a forward process and a reverse process. Given
a data point x0 with probability distribution p(x0), the forward diffusion process gradually adds
Gaussian noise to x0 according to a pre-set variance schedule β1, · · · , βT and degrades the structure
of the data. At the time step t, the latent variable xt is only determined by the xt−1 due to its
discrete-time Markov process nature, and can be expressed as:
p(xt | xt−1) = N (xt; (cid:112)1 − βtxt−1, βtI),
(12)
As t increases over several diffusion steps, p(xT ) approaches a unit spherical Gaussian distribution.
The marginal distribution of xt at any given step can be expressed analytically as:
p(xt | x0) = N (xt;
(13)
where αt = (cid:81)t
s=1(1 − βs). The reverse process aims to reconstruct the original data from the
noise-corrupted version by learning a series of conditional distributions. The transition from xt to
xt−1 is modeled as:
αtx0, (1 − αt)I),
√
pθ(xt−1 | xt) = N (xt−1; µt−1
θ
, σt−1
θ
),
µt−1
θ =
(cid:18)
xt −
1
√
αt
βt√
1 − ¯αt
(cid:19)
ϵθ (xt, t)
,
(14)
(15)
1 − ¯αt−1
1 − ¯αt
i=1 αi, θ represents the learnable parameters, µt−1
is the mean estimate,
is the standard deviation estimate, and ϵθ(xt, t) is the noise estimated by the neural network.
where αt = 1 − βt, ¯αt = (cid:81)t
σt−12
θ
The reverse process estimates the data distribution p(x0) by integrating over all possible paths:
σt−12
θ
· βt,
(16)
=
θ
pθ(x0) =
(cid:90)
T
(cid:89)
pθ(xT )
pθ(xt−1 | xt) dx1 : T
(17)
t=1
where pθ(xT ) = N (xT ; 0, I). At inference time, the diffusion model iteratively executes the reverse
process (Eq. 17) T times starting from a randomly sampled Gaussian Noise (ϵ ∼ N (0, I)).
A.2 PROMPTS
Fig. 6, 7, 8 and 9 illustrate all the prompts used in our experiments. For all experiments, we prompt
GPT-4-Turbo (GPT-4-turbo-2024-04-09) with top-p=0.5 and temperature=0.7.
Figure 6: LLM prompt (Prompt 1) for extracting components from audio captions.
A.3 EXAMPLES
Table 6 presents examples of captions generated by the Synthio framework, along with their revised
versions for captions that were initially rejected.
16
810
811
812
813
814
815
816
817
818
819
820
821
822
823
824
825
826
827
828
829
830
831
832
833
834
835
836
837
838
839
840
841
842
843
844
845
846
847
848
849
850
851
852
853
854
855
856
857
858
859
860
861
862
863
I will provide you with a caption of an audio that describes the events taking place in theaudio. Additionally, I will also provide you with a label for the audio. Extract the phrasesthat correspond to the distinctive features of the audio. There are 3 types of features you needto extract:1) the unique foreground events in the caption,2) the broader background scene or background events in the or audio and3) any other features related to the audio. Return a JSON with key 3 keys, one as named as‘events’, the other as named as ‘scenes’, and the other named as ‘other features’, where thevalues of these keys correspond to a comma-separated pythonic list where each item in the listis a string corresponding to the extracted phrases. Please ignore any phrase that (exactly orsemantically) corresponds to the label of the audio. If you think there is no information foreither of the keys, leave them empty. Here is the caption:{}Here is the label:{}Under review as a conference paper at ICLR 2025
864
865
866
867
868
869
870
871
872
873
874
875
876
877
878
879
880
881
882
883
884
885
886
887
888
889
890
891
892
893
894
895
896
897
898
899
900
901
902
903
904
905
906
907
908
909
910
911
912
913
914
915
916
917
Figure 7: LLM prompt (Prompt 2) for generating new audio captions given elements from existing captions.
Figure 8: LLM prompt for generating random captions for Random Captions baselines in Table 1.
A.4 EXTRA RESULTS
A.4.1 RESULTS ON THE FULL TRAINING SPLITS
Table 7 presents the performance comparison of Synthio on the full original dataset splits (where
the entire training set is used without any downsampling). While Synthio outperforms all baselines,
traditional augmentation methods prove to be much more competitive in this scenario. This contrasts
with the results in Table 1 where traditional augmentations showed minimal improvements in
performance.
Additional Discussion on Results. As we see in Table 1 (and Table 7), performance gains with
Synthio as the number of Gold samples increase (highest absolute gains with n = 100 and lowest
with full dataset). This phenomenon is consistent across prior work in vision (Trabucco et al., 2024),
text (Ghosh et al., 2023a; 2024c), and audio (Ronchini et al., 2024). Most synthetic data augmentation
methods demonstrate substantial gains in low-resource regimes, but these gains naturally diminish as
the quantity of high-quality labeled data increases (for example, Azizi et al. just show over ImageNet
only a modest improvement of just over 1%, where the authors reported when augmenting this
large-scale dataset).
17
I will provide you with a caption for an audio. The label generally describes the audio in anabstract fashion and mentions the broader scene or event that I need to teach an audio modelabout from the audio, i.e., the audio and its label is part of the training set for training anaudio classification model. I will also provide you with the domain of the audio which will helpyou identify the true sound conveyed in the label. I need you to rewrite the caption for meaccording to this set of rules:1. I will provide you with lists of various audio features corresponding to events, backgroundsor other features. You should rewrite the given caption such that it has has features inspiredfrom the features provided to you, i.e., you should try to describe a scene for the label with events, backgrounds and features similar but unique from the ones given.2. After re-writing, the caption should still obey the audio event label.Here is the label:{}. Here is the domain of the audio:{}.Here is the list of events:{}Here is the list of backgrounds:{}Here is the list of other features:{}Just output the rewritten caption and nothing else. Output 'None' if you did not rewrite.I will provide you with a label for an audio. The label generally describes the audio in anabstract fashion and mentions the broader scene or event that I need to teach an audio modelabout from the audio, i.e., the audio and its label is part of the training set for training anaudio classification model. I will also provide you with the domain of the audio which will helpyou identify the true sound conveyed in the label. I would like you to generate 5 new captionsthat describe the event or source in the label in diverse fashions. I will use these captions togenerate new audios that can augment my training set. Generate the new captions with thefollowing requirements:1. All the captions need to include new and diverse events and contexts beyond the actual eventconveyed by the label.2. Only add new events and context by understanding the broader context of the occurrence of theaudio and the target label. Do not add random events or contexts.3. The new caption should be not more than 20-25 words.4. However, after all these constraints and adding new events or contexts, the caption stillneeds to obey the event conveyed by the original label, i.e., the new caption may not lead to anaudio generation that defies the audio label.6. Finally, use the original label as a phrase in your caption.Here is the label:{}.Here is the domain of the audio:{}. Output a JSON with the key as the original label and a valueas the list of comma separated new captions. Only output the JSON and nothing elseUnder review as a conference paper at ICLR 2025
918
919
920
921
922
923
924
925
926
927
928
929
930
931
932
933
934
935
936
937
938
939
940
941
942
943
944
945
946
947
948
949
950
951
952
953
954
955
956
957
958
959
960
961
962
963
964
965
966
967
968
969
970
971
Figure 9: LLM prompt (Prompt 3) for rewriting captions of rejected audios.
We hypothesize that this trend is rooted in the inherent diversity and richness of gold data. Gold
datasets typically capture nuanced variations and complex real-world distributions, including subtle
contextual and environmental factors that synthetic data struggles to replicate. Synthetic data, while
effective at filling gaps and addressing low-resource scenarios, often lacks the granularity necessary
to represent long-tail or edge-case instances. As the size of the gold dataset increases, the model
increasingly benefits from the inherent diversity of these high-quality examples, reducing the need
for synthetic data and its relative impact on performance.
Additionally, in Fig. 6 of their paper, Azizi et al. also how an increasing number of synthetic
augmentations leads to plateauing and even diminishing performance. We hypothesize that this is
due to over-fitting caused by lack of diversity in generated augmentations.
A.4.2 AUDIO GENERATION RESULTS FOR OUR TRAINED STABLE DIFFUSION
Table 9 presents a comparison of audio generation results across several evaluation metrics. We
evaluate our trained Stable Diffusion model (used in our experiments, including a version further
fine-tuned on AudioCaps) against other available models and baselines from the literature. Notably,
our model performs competitively with other fully open-source models across most metrics.
A.4.3 FAD SCORES FOR GENERATED AUGMENTATIONS
To offer an alternative perspective on the distributional consistency between the generated augmen-
tations and the ground-truth small-scale dataset, we compare the Fr´echet Audio Distance (FAD)
scores (Kilgour et al., 2018). For this experiment, we use Synthio with Template Captions. Table 10
presents a comparison of FAD scores between Synthio and other baselines. Synthio achieves the
highest FAD score, indicating that it produces the most consistent audio augmentations.
18
I will provide you with a label for an audio. The label generally describes the audio in anabstract fashion and mentions the broader scene or event that I need to teach an audio modelabout from the audio, i.e., the audio and its label is part of the training set for training anaudio classification model. I will also provide you with the domain of the audio which will helpyou identify the true sound conveyed in the label. I would like you to generate 5 new captionsthat describe the event or source in the label in diverse fashions. I will use these captions togenerate new audios that can augment my training set. Generate the new captions with thefollowing requirements:1. Each caption should have a diverse added events (beyond the event of the original label) andcontexts.2. Only add new events and context by understanding the broader context of the occurrence of theaudio and the target label. For adding events and contexts, please follow the next requirement.3. I will also provide you with a list of features extracted from an existing set of audios. Youshould try such that the new captions you generate for the label have a mix of events and scenessimilar to the events and background scenes that are given and new scenes, i.e., you should tryto describe a scene for the caption with the events and backgrounds provided to you in the givenlists but you should also add novel features (events, backgrounds or other features) beyond theones given.4. The new caption should be not more than 20-25 words.5. However, after all these constraints and adding new events or contexts, the caption stillneeds to obey the event label, i.e., the new caption may not lead to an audio generation thatdefies the audio label.6. Finally, use the original label as a phrase in your caption.Here is the label:{}.Here is the domain of the audio:{}.Here is the list of events:{}Here is the list of backgrounds:{}Here is the list of other features:{}Output a JSON with the key as the original caption and a value as the list of comma separatednew captions. Only output the JSON and nothing else.Under review as a conference paper at ICLR 2025
972
973
974
975
976
977
978
979
980
981
982
983
984
985
986
987
988
989
990
991
992
993
994
995
996
997
998
999
1000
1001
1002
1003
1004
1005
1006
1007
1008
1009
1010
1011
1012
1013
1014
1015
1016
1017
1018
1019
1020
1021
1022
1023
1024
1025
Dataset
Label
USD8k
children playing
USD8k
children playing
USD8k
street music
USD8k
street music
TUT
TUT
TUT
TUT
airport
airport
bus
bus
NSynth
keyboard
NSynth
keyboard
NSynth
organ
NSynth
organ
Medley
Violin
Medley
Violin
Medley
Flute
Medley
Flute
AudioCaps -
AudioCaps -
Generated Caption
Revised Caption
Children playing in a bustling city park
with distant traffic noise
Children playing in a schoolyard during
recess with teacher’s whistle
Street music playing near a busy in-
tersection filled with honking cars and
pedestrians.
Street music from a bustling market as
people chatter and vendors shout
airport with people talking and walking
around in an empty hallway
In the airport, people are talking with
the sound of a crowd of people in the
background, as announcements play.
Bus passing by on a road while people
are chatting at a nearby cafe.
bus passing by on a road as it continues
to blow into the microphone
keyboard accompaniment to a live band
performance at a bustling cafe.
a man typing on a keyboard at office
A serene church service with an organ
playing a melody and soft brass are
playing.
An organ plays as guitars are playing
together in the background.
violin being played during a classical
symphony orchestra performance
violin performing a lively jig at a
bustling street fair
flute playing in a tranquil forest during
the early morning
Flute performance in a bustling city
park during a sunny afternoon.
A dog barks repeatedly in the back-
ground while a car engine starts
In the distance, a faint thunder rumble
is audible, accompanied by the gentle
rustling of leaves in the wind.
NA
Children playing in a neighborhood al-
ley with sound of distant construction
NA
Street music echoing through an alley-
way during a lively street festival.
NA
airport ambiance with people talking
and children running around
NA
bus idling on a road with birds chirping
nearby
NA
keyboard rhythms echoing in an empty
auditorium during a rehearsal break
NA
An organ plays during a lively music
festival with various instruments.
NA
Violin solo during a quiet candlelight
dinner in a fancy restaurant.
NA
Flute music echoing in an ancient stone
cathedral.
-
Soft rain falls on a metal roof, creating
a rhythmic tapping sound.
Table 6: Examples of generated and revised captions from the Synthio methodology.
Table 7: Comparison of Synthio and other baselines on the full original dataset splits (using all samples from
the original training set as Dsmall).
Method
USD8K GTZAN Medley VS
MSDB
Gold-only
Random Noise
Pitch Shift
Spec. Aug.
Audiomentations
Retrieval
Vanilla Syn. Aug.
Synthio (ours)
88.23
86.17
87.58
87.92
88.01
78.27
89.57
89.57
82.00
82.35
83.02
82.50
82.75
69.25
82.85
82.85
80.99
79.72
79.63
79.14
81.26
73.24
81.79
81.79
92.73
92.94
92.17
92.42
92.47
80.43
93.15
93.01
73.9
74.55
74.6
74.5
75.05
69.95
75.85
74.24
A.4.4 EFFECT OF CLAP FILTERING
In this section, we provide additional experiments to show the effect of CLAP filtering on the Synthio
pipeline. Table 11 compares the performance of Synthio with and without CLAP. As we can see,
19
Under review as a conference paper at ICLR 2025
Table 8: CLAP score between generated audios and the label.
n
Method
USD8K NSynth
Real
Vanilla Syn. Aug.
Synthio
100
w/ Template Captions
w/ ERM
Real
Vanilla Syn. Aug.
Synthio
200
w/ Template Captions
w/ ERM
12.67
14.34
31.26
29.31
24.15
10.13
12.55
21.87
20.31
17.14
14.46
17.54
27.32
26.62
21.54
9.4
12.91
16.16
15.82
13.04
Table 9: Comparison of our trained Stable Diffusion model on AudioCaps test set
Model
FAD PANN (↓)
FAD VGG (↓)
IS PANN (↑) CLAP LAION (↑)
AudioLDM2-large
Tango-Full0FT-AC
Tango 2
Make-an-Audio 2
Stable Audio VECaps (ours)
Stable Audio VECaps + AudioCaps-FT (ours)
32.50
18.47
17.19
11.75
15.12
14.93
1.89
2.19
2.54
1.80
2.21
2.19
8.55
8.80
11.04
-
15.07
15.42
0.45
0.57
0.52
0.60
0.57
0.56
Table 12 compares the performance of various values of p on 5 datasets and 2 values of n (500 and
100). As we see, higher or lower values of p do not affect the final performance significantly.
Our T2A model uses the same CLAP text encoder for generating audio. Consequently, most generated
audios are already highly aligned with the intended category label. However, the purpose of CLAP
filtering is to safeguard against cases where the LLM hallucinates and generates a caption that deviates
significantly from the intended label. In such cases, CLAP filtering ensures that audios generated
from hallucinated captions are discarded, preventing them from negatively impacting the learning
process.
A.4.5 EFFECT OF TRAINING DATA AND MODEL ARCHITECTURE FOR THE TEX-TO-AUDIO
MODEL
In this section, we train our T2A model using 1) a different model architecture (we replace Stable
Diffusion with Tango Ghosal et al. (2023)) different training data (we replaced Sound-VECaps with
AudioCaps). Table 13 compares thee results. As we can clearly see, while the model architecture
of the T2A model does not affect the performance, replacing the training data with a small and less
diverse dataset leads to significant drop in performance.
A.4.6 SYNTHIO AS A COMPLIMENTARY APPROACH TO TRADITIONAL AUGMENTATIONS
Table 14 compares results of Synthio augmentations when combined with traditional augmentations.
As we can see, Synthio boosts performance of all methods and combining traditional augmentations
with Synthio boosts Synthios overall performance. This shows that Synthio can act as a complimentary
step for traditional augmentations.
Additional Discussion. Across all datasets, we noticed that CLAP filtering removed at most 10%
of the generated samples. This confirms that the majority of the synthetic data is already well-
aligned with the target categories, and filtering primarily handles rare cases of misalignment. Thus
we emphasize on the point that while most generated audios align with the target label, the CLAP
filtering stage acts as a safeguard against hallucinations by the LLM, which may occasionally generate
captions that deviate significantly from the intended category. In such cases, filtering ensures that
misaligned audios are discarded, preventing them from negatively impacting model training.
20
1026
1027
1028
1029
1030
1031
1032
1033
1034
1035
1036
1037
1038
1039
1040
1041
1042
1043
1044
1045
1046
1047
1048
1049
1050
1051
1052
1053
1054
1055
1056
1057
1058
1059
1060
1061
1062
1063
1064
1065
1066
1067
1068
1069
1070
1071
1072
1073
1074
1075
1076
1077
1078
1079
Under review as a conference paper at ICLR 2025
Table 10: Comparison of FAD score of Vaniall Syn. Aug. and Stable Audio VECaps (ours).
n
Dataset Model
FAD VGG (↓)
100 NSynth
200 TUT
Vanilla Syn. Aug.
Stable Audio VECaps (ours)
Vanilla Syn. Aug.
Stable Audio VECaps (ours)
1.83
1.42
1.71
1.45
Table 11: Ablation study evaluating the impact of CLAP filtering on Synthio’s performance.
n
50
100
200
500
Method
ESC-50 USD8K GTZAN TUT VS
Synthio
Synthio w/o CLAP
Syhtio
Synthio w/o CLAP
Syhtio
Synthio w/o CLAP
Syhtio
Synthio w/o CLAP
49.50
47.25
83.35
82.55
86.10
85.25
92.10
90.25
76.12
74.34
85.00
84.64
82.81
79.94
89.18
88.42
68.20
66.35
71.20
69.30
82.05
80.54
82.25
89.70
43.84
40.28
71.23
70.41
56.83
55.22
67.81
65.42
80.67
77.29
86.70
84.93
87.52
86.31
91.42
89.67
A.5 DATASET DETAILS
NSynth Instruments: NSynth is a large-scale dataset consisting of musical notes played by a variety
of instruments. It includes a rich set of acoustic features from instruments like guitars, flutes, and
more, providing diverse sound textures for classification tasks.
TUT Urban: The TUT Urban dataset captures everyday sounds from urban environments, including
noises like traffic, human activities, and construction.
It is commonly used for acoustic scene
classification and environmental sound recognition.
ESC-50: ESC-50 is a well-known dataset for environmental sound classification, containing 50
categories of everyday sounds such as animal noises, natural elements, and human activities, making
it suitable for multi-class classification challenges.
UrbanSound8K (USD8K): USD8K is a curated collection of urban sounds divided into ten classes,
including sirens, street music, and car horns. It is used widely for evaluating models on sound event
detection in real-world scenarios.
GTZAN: GTZAN is a music genre classification dataset that includes ten music genres such as pop,
rock, and jazz. It is a standard benchmark for evaluating music classification models, although it has
known data quality issues.
Medley-solos-DB: This dataset consists of solo recordings of different musical instruments, making it
valuable for studying isolated instrument sounds and training models for music instrument recognition.
MUSDB18: MUSDB18 is used primarily for music source separation tasks. It contains full-track
recordings of different music styles, providing a mix of vocals, drums, bass, and other instruments,
useful for multi-class classification.
DCASE Task 4: Part of the DCASE challenge, this dataset focuses on domestic sound scene and
event classification. It includes various audio clips recorded in home environments, often used for
anomaly detection and sound event classification.
Vocal Sounds (VS): This dataset includes various vocal sounds such as singing, speech, and vocal
effects, providing rich data for studying voice classification and enhancing models for vocal audio
recognition tasks.
21
1080
1081
1082
1083
1084
1085
1086
1087
1088
1089
1090
1091
1092
1093
1094
1095
1096
1097
1098
1099
1100
1101
1102
1103
1104
1105
1106
1107
1108
1109
1110
1111
1112
1113
1114
1115
1116
1117
1118
1119
1120
1121
1122
1123
1124
1125
1126
1127
1128
1129
1130
1131
1132
1133
Under review as a conference paper at ICLR 2025
Table 12: Comparison of Synthio’s performance with different CLAP threshold levels.
n
p
ESC-50 USD8K GTZAN TUT
VS
50
100
200
500
0.85
0.3
0.5
0.85
0.3
0.5
0.85
0.3
0.5
0.85
0.3
0.5
49.50
47.10
48.25
83.35
82.55
82.70
86.10
85.25
85.70
92.10
90.25
91.65
76.12
74.14
75.39
85.00
84.64
84.73
82.81
79.94
80.30
89.18
88.42
89.07
68.20
67.50
67.75
71.20
69.30
70.25
82.05
80.55
81.30
82.25
80.70
81.05
43.84
41.17
41.93
71.23
70.41
70.86
56.83
55.22
56.19
67.81
65.42
66.35
80.67
79.32
79.48
86.70
84.93
85.22
87.52
86.31
87.11
91.42
89.67
90.02
Table 13: Comparison of Synthio with Synthio’s Stable Audio trained only wiht AudioCaps and Tango trained
with Sound-VECaps
n
50
100
Method
ESC-50 USD8K GTZAN Medley TUT
Synthio (ours)
Synthio w/ AudioCaps
Synthio w/ Tango
Synthio (ours)
Synthio w/ AudioCaps
Synthio w/ Tango
49.50
29.20
48.55
83.35
58.20
81.50
76.12
60.15
75.05
85.00
74.27
84.13
68.20
50.15
66.19
71.20
66.55
70.95
60.58
49.19
59.12
71.23
67.93
69.97
43.84
38.62
42.59
52.42
48.23
51.47
Table 14: Performance comparison of Synthio when paired with traditional augmentation techniques
n
Method
ESC-50 USD8K GTZAN Medley
50
100
Synthio (ours)
w/ Random Noise
w/ Pitch Shift
w/ Spec Aug
w/ Audiomentations
Synthio (ours)
w/ Random Noise
w/ Pitch Shift
w/ Spec Aug
w/ Audiomentations
49.50
49.65
49.80
50.95
50.35
83.35
83.85
83.60
84.25
84.10
76.12
77.31
78.52
77.93
77.24
85.00
86.59
86.32
86.17
85.95
68.20
70.15
69.50
70.35
69.50
71.20
71.60
72.95
72.75
72.85
60.58
61.54
60.29
61.17
61.53
71.23
72.35
72.50
73.05
72.87
A.6 ALGORITHM
Algorithm 1 algorithmically illustrated Synthio.
1134
1135
1136
1137
1138
1139
1140
1141
1142
1143
1144
1145
1146
1147
1148
1149
1150
1151
1152
1153
1154
1155
1156
1157
1158
1159
1160
1161
1162
1163
1164
1165
1166
1167
1168
1169
1170
1171
1172
1173
1174
1175
1176
1177
1178
1179
1180
1181
1182
1183
1184
1185
1186
1187
22
Under review as a conference paper at ICLR 2025
Algorithm 1 Synthio Framework for Audio Classification Augmentation
Require: Small human-annotated dataset Dsmall;
Noisy audio-caption paired dataset Da-c;
Number of generations per instance j;
Similarity threshold p%;
Maximum self-reflection iterations imax.
## Initial Training of T2A Model
Train T2A model T θ on Da-c.
## Construction of Preference Dataset Dpref
for each audio instance dk in Dsmall do
Create caption ck = “Sound of a labelk”.
for l = 1 to j do
Generate audio ˜ak,l = T θ(ck) starting from random noise.
Pair (˜ak,l, ak) where ak is the ground-truth audio.
Add pair to Dpref with ˜ak,l as loser and ak as winner.
end for
end for
## Preference Optimization Using DPO
Fine-tune T θ on Dpref using DPO methodology.
## Generating Diverse Prompts with MixCap
Use audio captioning model to generate captions for all ak in Dsmall.
Prompt LLM to extract acoustic components (backgrounds, events, their attributes and relations) from captions.
for each label labelk in Dsmall do
Using extracted acoustic elments, prompt LLM to generate n diverse captions {ck,1, ck,2, . . . , ck,n}.
end for
## Generation of Synthetic Data Dsyn
syn ← ∅, Drej
Initialize Dacc
for each caption ck,m do
syn ← ∅.
Generate audio ˜ak,m = T θ(ck,m).
Evaluate similarity sk,m = CLAP(˜ak,m, labelk).
if sk,m ≥ p% then
Add (˜ak,m, labelk) to Dacc
syn.
else
Add (ck,m, labelk) to Drej
syn.
end if
end for
## Self-Reflection and Caption Revision
Set iteration count i ← 0.
while Drej
syn ̸= ∅ and i < imax do
i ← i + 1.
for each rejected caption ck,m in Drej
syn do
k,m.
k,m = T θ(c′
Provide LLM with ck,m and insights from Dacc
syn.
Obtain revised caption c′
Generate audio ˜a′
Evaluate similarity s′
if s′
k,m ≥ p% then
Add (˜a′
Remove ck,m from Drej
syn.
k,m, labelk) to Dacc
syn.
k,m = CLAP(˜a′
k,m).
k,m, labelk).
Update ck,m ← c′
k,m in Drej
syn.
else
end if
end for
end while
## Final Training Dataset and Classification Model
Combine Dsyn with ground-truth data Dsmall to form Dtrain.
Train audio classification model on Dtrain.
23
1188
1189
1190
1191
1192
1193
1194
1195
1196
1197
1198
1199
1200
1201
1202
1203
1204
1205
1206
1207
1208
1209
1210
1211
1212
1213
1214
1215
1216
1217
1218
1219
1220
1221
1222
1223
1224
1225
1226
1227
1228
1229
1230
1231
1232
1233
1234
1235
1236
1237
1238
1239
1240
1241
|
9QPH1YQCMn | Infilling Score: A Pretraining Data Detection Algorithm for Large Language Models | [
3,
8,
8,
6
] | Under review as a conference paper at ICLR 2025
INFILLING SCORE ✼ A PRETRAINING DATA DETECTION
ALGORITHM FOR LARGE LANGUAGE MODELS
Anonymous authors
Paper under double-blind review
ABSTRACT
In pretraining data detection, the goal is to detect whether a given sentence is in the
dataset used for training a Large Language Model (LLM). Recent methods (such as
Min-K% and Min-K%++) reveal that most training corpora are likely contaminated
with both sensitive content and evaluation benchmarks, leading to inflated test set
performance. These methods sometimes fail to detect samples from the pretraining
data, primarily because they depend on statistics composed of causal token likeli-
hoods. We introduce Infilling Score, a new test-statistic based on non-causal token
likelihoods. Infilling Score can be computed for autoregressive models without
re-training using Bayes rule. A naive application of Bayes rule scales linearly with
the vocabulary size. However, we propose a ratio test-statistic whose computation
is invariant to vocabulary size. Empirically, our method achieves a significant accu-
racy gain over state-of-the-art methods including Min-K%, and Min-K%++ on the
WikiMIA benchmark across seven models with different parameter sizes. Further,
we achieve higher AUC compared to reference-free methods on the challenging
MIMIR benchmark. Finally, we create a benchmark dataset consisting of recent
data sources published after the release of Llama-3; this benchmark provides a
statistical baseline to indicate potential corpora used for Llama-3 training.
1
INTRODUCTION
The significant progress in language modeling can largely be attributed to development and deploy-
ment of large-scale models that utilize extensive training corpora, often encompassing trillions of
tokens (Li et al., 2024; Dubey et al., 2024). The selection and curation of data for training such
Large Language Models (LLMs) is very complex and expensive. Further, recent developers of LLMs
withhold details regarding the sources of their pretraining datasets (Dubey et al., 2024; OpenAI
et al., 2024; Touvron et al., 2023b). This lack of transparency has raised concerns regarding the
inadvertent inclusion of copyrighted content (Chang et al., 2023; Min et al., 2023; Meeus et al., 2023)
or personally identifiable information (Mozes et al., 2023; Panda et al., 2024), potentially leading to
ethical and legal challenges (Grynbaum & Mac, 2023). Furthermore, the inclusion of benchmark
datasets within the training corpora itself can compromise the integrity of model evaluations. This
practice may inflate test performance metrics without accurately reflecting the model’s capabilities
(Oren et al., 2023; Zhou et al., 2023).
Recent work has focused on the problem of determining whether specific sequences of tokens have
been previously seen by a language model (Shi et al., 2024; Zhang et al., 2024; Duan et al., 2024).
These investigations are categorized under a growing field of attacks on LLMs known as Membership
Inference Attacks (MIA) (Shokri et al., 2017; Mattern et al., 2023b; Carlini et al., 2022). Many studies
in this area focus on fine-tuning data detection (Song & Shmatikov, 2019; Shejwalkar et al., 2021;
Mahloujifar et al., 2021). However, pretraining data detection attacks are becoming increasingly
important as they can reveal whether a model has been trained on potentially sensitive data and
prevent evaluation data contamination (Jiang et al., 2024; Yang et al., 2023).
We introduce a novel method for identifying whether a given text sequence was part of a language
model’s pretraining data. Our method uses a new test-statistic that we call the Infilling Score. Our
approach performs a non-causal test to compute the infilling probability of a token, based on the
tokens that appear before and after this token in the sentence. An autoregressive language model
1
000
001
002
003
004
005
006
007
008
009
010
011
012
013
014
015
016
017
018
019
020
021
022
023
024
025
026
027
028
029
030
031
032
033
034
035
036
037
038
039
040
041
042
043
044
045
046
047
048
049
050
051
052
053
Under review as a conference paper at ICLR 2025
054
055
056
057
058
059
060
061
062
063
064
065
066
067
068
069
070
071
072
073
074
075
076
077
078
079
080
081
082
083
084
085
086
087
088
089
090
091
092
093
094
095
096
097
098
099
100
101
102
103
104
105
106
107
generates causal likelihoods (i.e. the probability of a word appearing after some context). We find
that non-causal likelihoods lead to more accurate tests for membership inference. These likelihoods
can be computed using a causal autoregressive model.The computation involves applying Bayes’
rule and the law of total probability, and needs a marginalization over the vocabulary to compute a
partition function. Unfortunately, computing this partition function requires calling the autoregressive
LM many times, one for each vocabulary entry. This would require tens of thousands of calls to
the autoregressive LLM to compute a single non-causal probability for one token, and hence is not
practical. Our central idea is to propose an approximate test-statistic whose computation is much
faster, does not require an exact computation of this partition function and does not depend on the
vocabulary size.
Our method achieves a significant accuracy gain over state-of-the-art methods including Min-K%, and
Min-K%++ on the WikiMIA benchmark across seven models. On WikiMIA, our method outperforms
the previous state of the art in AUC. It achieves up to 10% improvement on Llama models when
testing long sequences (256 tokens). Further, we achieve higher AUC compared to reference-free
methods on the challenging MIMIR benchmark.
Our main contributions are summarized below:
(1) We introduce the Infilling Score, a new reference-free method for detecting pretraining
data using infilling likelihood of tokens within the candidate sentence (Section 3). While
SoTA methods: MIN-K% and MIN-K%++ rely on a statistic based on past tokens only, our
method computes a new test statistic considering both past and future tokens in the sentence.
(2) We develop an efficient algorithm for computing this new score. Though our method
conceptually shares similarities with a likelihood computed via Bayes rule, computationally
it is much different: whereas any natural approach for computing a Bayes rule calculation
scales with vocabulary size, our algorithm has computation invariant to vocabulary size.
(3) We conduct extensive experiments on the standard (a) WikiMIA (Shi et al., 2024) and,
(b) MIMIR (Duan et al., 2024) to verify the efficacy of our method (Section 4). On these
benchmarks, we compare our method with state-of-the-art MIA methods including MIN-
K% (Shi et al., 2024) and MIN-K%++ (Zhang et al., 2024). On WikiMIA, our method
achieves 11% improvement over MIN-K% and 5% improvement over MIN-K%++ in terms
of AUROC on average. We attribute the notable performance gain of our method to infilling
probability (Section 3).
(4) We curate a dataset of book excerpts that have not been seen by the LLMs released before
April 2024 (Section 4.1). Employing our Infilling Score, we detect a list of books which
have (likely) been used for training Llama-3-8B (Dubey et al., 2024) (4.4.3).
2 BACKGROUND
In this section, we discuss the standard definition of Membership Inference Attack (MIA) and recent
advances along this line of research.
Problem setup. Given a sentence x = {xi}N
i=0 and a Large Language Model (LLM) denoted by M,
the goal of MIA is to build a detector h (x, M) → {0, 1} that can infer the membership of x in the
training corpus D = {xj}j∈[n] of M. Existing MIA methods for LLMs (Shi et al., 2024; Zhang
et al., 2024; Carlini et al., 2021; Mattern et al., 2023a) assign a score to each sample x and use a
binary threshold to determine its membership class, with 1 indicating x ∈ D and 0 otherwise.
2.1 CHALLENGES IN PRETRAINING DATA DETECTION USING MIA METHODS
2.1.1 DETECTION DIFFICULTY
Prior works (Hardt et al., 2016; Bassily et al., 2020) have shown that the total variation (TV) distance
between the distribution of seen and unseen data is proportional to the learning rate, size of the
dataset |D| and the frequency of the test sentence x. Since TV captures the separability between
these distributions, low TV makes it difficult to infer the membership class of a given x.
2
Under review as a conference paper at ICLR 2025
2.1.2 ARCHITECTURE AND PRETRAINING DISTRIBUTION
Membership inference attacks for LLM pretraining data detection are broadly categorized into two
classes: (a) reference-based methods and (b) reference-free methods. Reference-based methods
such as Reference (Carlini et al., 2021) infer the membership of a sentence x by computing the
likelihood of x using two different LLMs. They compare the perplexity of x under the target LLM
with the perplexity of x under a smaller language model. The smaller model M shares the same
architecture as M, and is trained on a subset of samples, D, collected from the same underlying
distribution of D. The intuition is that smaller networks have less capacity to memorize sentences
from the pretraining dataset. One crucial limitation of these methods is that reference model may not
always exist. Although LLM developers often do not disclose information about the distribution of
pretraining data, reference-based MIAs (Carlini et al., 2021) assume the knowledge of the architecture
and underlying pretraining distribution, making these methods less practical.
Among reference-free methods, Min-K% (Shi et al., 2024) hypothesizes that when a sentence is
seen by the model, i.e., x ∈ D, it usually contains a number of tokens with low causal probabilities
(outliers). Formally, given a sequence of tokens x = {xi}N
i=0, Min-K% score is given by:
Min-K%(x) =
1
|min-k%|
(cid:88)
xi∈min-k%
Min-K%token(xi),
where Min-K%token(xi) = log p (xi|x<i) .
(1)
(2)
Here, Min-K%token(xi) denotes the score for each token xi. The set min-k% contains k% of the
input tokens which correspond to the bottom k% scores within the sequence. If the average score for
this set is less than τ (k), where τ (k) denotes the binary threshold for a fixed k, then Min-K% detects
the sequence x as “unseen”. Note that the classification threshold τ (k) is determined empirically
using a validation dataset.
A recently proposed method, Min-K%++ (Zhang et al., 2024), improves the detection accuracy of
Min-K% by normalizing the next-tokens log likelihood log p(xi|x<i) as follows:
Min-K%++(x) =
1
|min-k%|
(cid:88)
Min-K%++token(xi),
where Min-K%++token(xi) =
xi∈min-k%
log p (xi|x<i) − µx<i
σx<i
(3)
(4)
µx<i = Ez∼p(.|x<i)[log p(z|x<i)] and σx<i = (cid:112)Ez∼p(.|x<i)[(log p(z|x<i) − µx<i )2] are the mean
and standard deviation of the next-token likelihood.
Both Min-K% and Min-K%++ rely on the “causal” likelihood predictions of the model. However,
the causal likelihood of xi does not consider the information from the entire sentence context, as it
only depends on the preceding tokens x<i. We propose that sentences seen during training (x ∈ D)
typically have a number of tokens with low infilling probabilities. By using the non-causal token
likelihoods which depend on both preceding and succeeding tokens (x<i and x>i), we achieve a
more accurate statistic than causal likelihoods alone. This enables our Infilling Score method to
outperform previous pretraining data detection approaches on standard benchmarks.
3 METHOD
We describe our method in this section. First we describe the computation of our new ratio statistic,
and explain why it offers computational scalability compared to a straightforward application of
Bayes Rule. Next, we describe how this score is used to detect data in the pretraining set. Finally, we
explain how we employ our method to detect pretraining samples in Llama-3.
3
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
142
143
144
145
146
147
148
149
150
151
152
153
154
155
156
157
158
159
160
161
Under review as a conference paper at ICLR 2025
Ground truth:
Masked input:
She
She
x1
pasta
Italian
ate
ate <MASKED> pasta
m3
x2
x4
3.1 COMPUTING THE INFILLING LIKELIHOOD
In this setting, we search for the most likely token to infill m3 using other tokens in the sentence, i.e.,
{x1, x2, x4}. Using the law of total probability, we get:
p(x3|x1, x2, x4) =
p(x4|x1, x2, x3)p(x3|x1, x2)
p(x4|x1, x2)
=
(cid:80)
p(x4|x1, x2, x3)p(x3|x1, x2)
3∈V p(x4|x1, x2, x′
x′
3)p(x′
3|x1, x2)
.
(5)
Observe that the partition function in the denominator of equation 5 is expensive to compute as it
requires summation over all the tokens in the vocabulary V. In the naive case, the number of LLM
calls required to compute the infilling likelihood scales linearly both with vocabulary size and the
sequence length. This is because for each token, the denominator in equation 5 scales linearly in the
vocabulary size, and this computation needs to be repeated for each token. The vocabulary size can
be as large as 128K in recent LLMs (Dubey et al., 2024).
To address the scalability challenge, we introduce a ratio test-statistic. Our main idea is to compute the
ratio of the infilling probability of the ground-truth token and the maximum causal likelihood token.
Using this proposed statistic, we circumvent the need to compute the computationally expensive
partition function. In the above setting, we define the ratio test-statistic of token x3 as:
p(x3|x1, x2, x4)
p(x∗
3|x1, x2, x4)
, where x∗
3 = arg max
x′
3∈V
p(x′
3|x1, x2).
(6)
This ratio compares the infilling likelihood of the ground-truth token to that of the model’s causal
prediction. If x3 is an outlier the ratio is closer to 0, and when x3 is among the model’s top predictions,
this ratio is closer to 1. Since the partition function in equation 5 is the same for p(x3|x1, x2, x4) and
p(x∗
3|x1, x2, x4), it gets cancelled in the ratio test-statistic. This drastically reduces the number of
LLM calls from O(N |V|) to O(N ), making our test-statistic independent of the size of the vocabulary
(details in 4.5). Interestingly, we can exactly compute this ratio analytically using auto-regressive
models without re-training. We then compute the log of this ratio and normalize the probabilities to
capture the relative significance of each token in the vocabulary. First, we derive
log
p(x3|x1, x2, x4)
p(x∗
3|x1, x2, x4)
= log
p(x4|x1, x2, x3)p(x3|x1, x2)
p(x4|x1, x2, x∗
3|x1, x2)
3)p(x∗
= log p(x4|x1, x2, x3) + log p(x3|x1, x2) − log p(x4|x1, x2, x∗
3) − log p(x∗
3|x1, x2),
(7)
(8)
Generalizing (equation 7) to use m future tokens for calculating the infilling ratio of token i, we get:
log
p(xi|x1:i−1, xi+1:n)
p(x∗
i |x1:i−1, xi+1:i+m)
=
i+m
(cid:88)
j=i+1
i+m
(cid:88)
j=i+1
log p(xj|x1, x2, ..., xi, ...xj−1) + log p(xi|x1:i−1)−
(9)
log p(xj|x1, x2, ..., x∗
i , ...xj−1) − log p(x∗
i |x1:i−1),
where x1:i denotes the sequence x1, x2, ...xi, and x∗
i = arg maxx′
i∈V p(x′
i|x1:i−1).
4
162
163
164
165
166
167
168
169
170
171
172
173
174
175
176
177
178
179
180
181
182
183
184
185
186
187
188
189
190
191
192
193
194
195
196
197
198
199
200
201
202
203
204
205
206
207
208
209
210
211
212
213
214
215
Under review as a conference paper at ICLR 2025
216
217
218
219
220
221
222
223
224
225
226
227
228
229
230
231
232
233
234
235
236
237
238
239
240
241
242
243
244
245
246
247
248
249
250
251
252
253
254
255
256
257
258
259
260
261
262
263
264
265
266
267
268
269
As suggested in Zhang et al. (2024), we normalize the terms to compute our infilling score for a given
token x3:
InfillingScoretoken(xi) =
−
i+m
(cid:88)
j=i+1
i+m
(cid:88)
j=i+1
log p(xj|x1, x2, ..., xi, ...xj−1) − µx1:j
σx1:j
+
log p(xi|x1:i−1) − µx1:i
σx1:i
log p(xj|x1, x2, ..., x∗
σx1:j
i , ...xj−1) − µx1:j
−
log p(x∗
i |x1:i−1) − µx1:i
σx1:i
(10)
(cid:113)Ez∼p(.|x1:j )[(log p(z|x1:j) − µx1:j )2], are
where µx1:j = Ez∼p(.|x1:j )[log p(z|x1:j)], and σx1:j =
the mean and standard deviation of the next token log probability, log p(xj|x1, ..., xj−1), over the
whole vocabulary. In contrast to equation 5, there is no normalization in the denominator needed in
equation 10. Note that the non-causal terms in equation 6 are all replaced by causal terms which can
be computed through LLM logits. To implement, we need two calls to the LLM – the first with input
as the sequence x1, ..., xi, ..., xN and the second call with input as x1, ..., x∗
i , ..., xN . Note that the
means and standard deviations can be computed from these logits. Thus, equation 10 requires only
two calls to the LLM per token. Hence with N tokens, the total number of calls to the LLM scales
as 2N , in contrast to the naive approach where the scaling is N |V|. We will see in our experiments
(see Section 4.5) that this leads to a dramatic decrease in runtime, with two orders of magnitude
improvement.
3.2 PRETRAINING DATA DETECTION
To detect the membership of a given sentence x, we find the set of min-k% tokens with low Infilling
Scores in the sentence, and compute the average score over this subset. Our final test-statistic
becomes:
InfillingScore(x) =
1
|min-k%|
(cid:88)
xi∈min-k%
InfillingScoretoken(xi).
(11)
Our experiments suggest that InfillingScore(x) is higher for a given sentence x which was seen by
the model during pretraining. Thus, the infilling score enables us to build a detector h(·, M) for an
LLM M to infer the membership class of x as:
h(x, M) =
(cid:26)0
1
InfillingScore(x) < τ
otherwise
,
(12)
where τ denotes the binary threshold that is applied on the soft scores.
4 EXPERIMENTS
4.1 BENCHMARKS
We conduct comprehensive tests to evaluate the performance of our newly proposed test-statistic
against state-of-the-art reference-based and reference-free methods. We experiment with various
models and different parameter sizes. Initially, we examine the established pretraining data detection
benchmarks: WikiMIA (Shi et al., 2024) and MIMIR (Duan et al., 2024). WikiMIA is a temporal
MIA dataset commonly used for evaluating pretraining data detection methods. This benchmark
contains excerpts from Wikipedia event articles, and classifies samples based on the timestamp of
the articles. Samples from articles published before the training of an LLM are classified as “seen”,
and samples after the training are classified as “unseen”. Hence, this benchmark applies only to a
subset of LLMs, depending on their training and release time. WikiMIA has four different subsets
with sequence lengths of 32, 64, 128, and 256. Zhang et al. (2024) also published a “Paraphrased”
version of WikiMIA which uses ChatGPT to paraphrase the samples.
A more challenging benchmark, MIMIR (Duan et al., 2024), aims to evaluate pretraining data
detection methods when the distributions of “seen” and “unseen” text samples have high n-gram
overlap. MIMIR consists of samples from the Pile (Gao et al., 2020) across seven domains: English
5
Under review as a conference paper at ICLR 2025
270
271
272
273
274
275
276
277
278
279
280
281
282
283
284
285
286
287
288
289
290
291
292
293
294
295
296
297
298
299
300
301
302
303
304
305
306
307
308
309
310
311
312
313
314
315
316
317
318
319
320
321
322
323
Wikipedia, ArXiv, Github, Pile CC, PubMed Central, DM Mathematics, and HackerNews. Parts from
the train subset of the Pile are labeled as “seen” while parts of the test set are labeled as “unseen”.
These seen and unseen samples are selected to have very high n-gram overlaps, making it significantly
more challenging to infer training data membership.
Previous membership inference benchmarks such as WikiMIA, BookMIA (Shi et al., 2024), and
BookTection (Duarte et al., 2024) cannot be reliably used for Llama-3 because the model was trained
more recently. To address this, we curate a new dataset consisting of book excerpts published after
the release of Llama-3 labeled as “unseen” data. In this new dataset the “seen” data comes from
classical fiction books published before 1965. We sample a set of 100 excerpts, with each excerpt
containing 200 tokens. The “unseen” data consists of excerpts from books published after April 2024,
similarly having size of 200 tokens.
4.2 MODELS AND METRICS
We use the WikiMIA benchmark to evaluate our Infilling Score method on Llama (7B, 13B, 30B)
(Touvron et al., 2023a), Pythia (2.8B, 6.9B) (Biderman et al., 2023), GPT-NeoX-20B (Black et al.,
2022), and Mamba-1.4B (Gu & Dao, 2023) models. WikiMIA is applicable to models released
between 2017 and 2023. Samples from the Wikipedia event articles published in and after 2023 are
labeled as “unseen”, and samples from articles published before 2017 are labeled as “unseen”.
For experiments on the MIMIR benchmark, we evaluate our method using Pythia (160M and 1.4B) on
a subset of the Pile (Gao et al., 2020) dataset sampled across seven different domains. Pythia model
has been pretrained on the training set of the Pile dataset (Biderman et al., 2023). Therefore, MIMIR
benchmark has labeled samples from the train/test of the Pile as “seen”/“unseen”, respectively.
We evaluate Infilling Score for membership classification against the state-of-the-art methods using
the area under the ROC curve (AUROC) metric. As suggested in prior studies (Carlini et al.,
2022; Mireshghallah et al., 2022), we also report the True Positive rate at low False Positive rate
(TPR@5%FPR).
4.3 BASELINES
We compare our proposed method with multiple state-of-the-art methods as our baselines. Reference
method (Carlini et al., 2021) relies on the ratio of the sample perplexity (e.g. next token likelihood)
estimated by the target model to the sample perplexity estimated by a smaller reference model. Zlib
is another reference-based method which uses the Zlib compression entropy for calibrating the score
(Carlini et al., 2021). Neighbor method (Mattern et al., 2023a) replaces tokens within a sequence
using a pretrained masked language model to generate similar sentences. The method identifies if a
sample belongs to the training data by comparing the loss of the original sample with the average loss
of its neighboring sentences. The same algorithm is also used for detecting machine generated text in
(Mitchell et al., 2023). We compare our results with both Min-K%(Shi et al., 2024) and Min-K%++
methods (Zhang et al., 2024) extensively for performance evaluations because both methods are the
current state-of-the-art reference-free baselines, falling under the same category as our Infilling Score.
4.4 RESULTS
4.4.1 EVALUATION ON WIKIMIA
Table 1 presents the results comparing our Infilling Score method with state-of-the-art methods
evaluated on the WikiMIA benchmark. In addition, we evaluate the effectiveness of our method using
TPR at low FPR in Table 2. Our experimental setup is consistent with prior work such as Min-K%++
and Min-K%. For 32-token sequences we only use one future token, and for longer sequences we use
5 future tokens. We fix k = 20% across all experiments.
On average, our method shows a 5% improvement in AUC over Min-K%++ across various model
sizes and different inputs sequence lengths. As hypothesized in Section 3, Infilling Score consistently
outperforms existing reference-based and reference-free methods in detecting Llama pretraining data.
We empirically show that predicting the token-level likelihoods, using the information in both the
past and future tokens is more accurate for pretraining data detection. For longer sequences. This is
specially helpful for samples with longer sequence lengths where there are more tokens in the context
6
Under review as a conference paper at ICLR 2025
324
325
326
327
328
329
330
331
332
333
334
335
336
337
338
339
340
341
342
343
344
345
346
347
348
349
350
351
352
353
354
355
356
357
358
359
360
361
362
363
364
365
366
367
368
369
370
371
372
373
374
375
376
377
Seq.
Method
length
32
64
128
256
Infilling Score (Ours)
Min-K%++ (Zhang et al., 2024)
Min-K%(Shi et al., 2024)
Neighbor (Mattern et al., 2023a)
Zlib (Carlini et al., 2021)
Ref (Carlini et al., 2021)
Infilling Score (Ours)
Min-K%++ (Zhang et al., 2024)
Min-K%(Shi et al., 2024)
Neighbor (Mattern et al., 2023a)
Zlib (Carlini et al., 2021)
Ref (Carlini et al., 2021)
Infilling Score (Ours)
Min-K%++ (Zhang et al., 2024)
Min-K%(Shi et al., 2024)
Neighbor (Mattern et al., 2023a)
Zlib (Carlini et al., 2021)
Ref (Carlini et al., 2021)
Infilling Score (Ours)
Min-K%++ (Zhang et al., 2024)
Min-K%(Shi et al., 2024)
Zlib (Carlini et al., 2021)
Mamba-1.4B
NeoX-20B
Pythia-2.8B
Pythia-6.9B
Llama-7B
Llama-13B
Llama-30B Average
Ori.
66.6
66.8
63.2
64.1
61.9
62.2
67.3
67.2
62.2
60.6
60.4
60.6
69.6
68.8
66.8
64.8
65.6
65.2
70.1
65.5
69.8
67.6
Para.
66.1
66.1
62.9
63.6
62.3
62.3
62.9
63.3
58.0
60.6
59.1
59.6
66.6
65.6
64.5
62.6
65.3
61.1
-
-
-
-
Ori.
75.6
75.0
71.8
70.2
69.0
67.2
76.8
76.0
72.2
67.1
67.6
65.7
78.1
75.9
75.0
71.6
71.8
67.8
77.0
71.9
78.0
73.2
Para. Ori.
Para. Ori.
Para. Ori.
Para. Ori.
Para. Ori.
Para.
73.1
69.6
69.7
68.3
68.2
66.3
73.1
67.5
66.1
67.4
66.4
65.9
74.9
72.2
72.6
69.6
71.8
67.8
-
-
-
-
65.0
64.4
61.8
62.1
62.1
61.3
65.7
65.0
61.2
61.3
60.5
59.6
67.1
66.8
66.9
65.2
65.0
59.6
73.6
63.9
70.0
69.3
63.9
62.4
61.7
64.5
62.3
61.2
58.9
58.5
56.8
59.6
59.0
59.2
64.1
63.4
64.7
61.9
65.0
59.5
-
-
-
-
69.7
70.3
66.3
65.8
64.3
63.6
71.4
71.6
65.0
63.2
62.6
62.4
70.4
70.4
69.5
67.5
67.6
63.3
70.5
65.5
71.1
69.8
68.2
68.0
65.2
65.5
64.2
63.5
64.2
64.8
61.1
63.1
61.6
62.9
67.5
66.8
67.0
64.3
67.4
62.9
-
-
-
-
88.1
85.1
66.3
-
66.7
-
89.7
85.7
63.3
-
63.4
-
87.6
85.7
70.1
-
68.3
-
96.6
82.5
72.4
71.2
88.0
84.0
67.0
-
67.3
-
86.8
80.8
61.8
-
63.6
-
83.4
82.2
68.1
-
68.4
-
-
-
-
-
88.6
84.8
68.0
65.8
67.8
57.9
90.1
86.7
66.0
64.1
65.3
63.4
88.3
83.9
71.5
68.3
69.7
62.6
95.3
82.3
72.9
73.1
87.0
82.7
68.4
65.0
68.3
56.2
84.5
78.8
64.0
64.7
65.3
60.9
83.5
76.3
68.7
64.0
69.6
59.7
-
-
-
-
87.3
84.3
70.1
67.6
69.8
63.5
88.3
84.7
68.5
67.1
67.5
69.0
86.7
82.6
73.9
72.2
71.8
71.9
89.8
77.3
72.1
72.8
84.7
81.2
70.7
66.3
70.4
62.4
81.2
74.9
65.7
66.7
67.4
65.4
79.5
73.8
70.2
67.2
71.5
70.0
-
-
-
-
76.56
74.62
66.65
65.73
66.04
62.3
75.78
73.25
63.71
63.79
63.55
63.88
76.23
73.88
69.25
66.60
68.48
64.28
81.84
72.70
72.33
71.00
Table 1: AUROC results on the Original and Paraphrased subsets of the WikiMIA benchmark (Shi
et al., 2024). Note that the paraphrased version of the 256-token subset of WikiMIA is not published
on HuggingFace which is why some results are missing for 256 tokens. Bold shows the best result
and underline shows the second best results in each section. As seen, our Infilling Score method
outperforms previous work for detecting pretraining samples for EleutherAI’s Pythia (Biderman et al.,
2023) and GPT-NeoX (Black et al., 2022), Mamba (Gu & Dao, 2023), and Meta’s Llama (Touvron
et al., 2023a) models across various model sizes.
Seq.
Method
length
32
64
128
256
Infilling Score (Ours)
Min-K%++ (Zhang et al., 2024)
Min-K%(Shi et al., 2024)
Neighbor (Mattern et al., 2023a)
Zlib (Carlini et al., 2021)
Ref (Carlini et al., 2021)
Infilling Score (Ours)
Min-K%++ (Zhang et al., 2024)
Min-K%(Shi et al., 2024)
Neighbor (Mattern et al., 2023a)
Zlib (Carlini et al., 2021)
Ref (Carlini et al., 2021)
Infilling Score (Ours)
Min-K%++ (Zhang et al., 2024)
Min-K%(Shi et al., 2024)
Neighbor (Mattern et al., 2023a)
Zlib (Carlini et al., 2021)
Ref (Carlini et al., 2021)
Infilling Score (Ours)
Min-K%++ (Zhang et al., 2024)
Min-K%(Shi et al., 2024)
Zlib (Carlini et al., 2021)
Mamba-1.4B
NeoX-20B
Pythia-2.8B
Pythia-6.9B
Llama-7B
Llama-13B
Llama-30B Average
Ori.
14.0
12.9
14.7
11.9
15.5
7.8
19.4
16.6
19.4
8.8
14.1
4.6
16.6
16.6
16.6
15.8
19.4
10.1
25.5
15.7
13.7
23.5
Para.
16.5
10.6
15.2
7.2
13.2
5.9
10.2
7.0
8.4
9.5
15.1
8.1
15.8
10.1
14.4
13.7
17.3
11.5
-
-
-
-
Ori.
27.6
19.4
27.9
22.2
19.9
1.5
27.8
20.4
20.4
13.0
16.6
15.5
25.9
23.0
25.2
15.8
23.0
15.8
29.4
13.7
21.6
23.5
Para. Ori.
Para. Ori.
Para. Ori.
Para. Ori.
Para. Ori.
Para.
23.0
12.9
19.6
15.2
18.6
15.2
21.8
13.0
17.6
18.3
19.4
14.1
33.1
19.4
22.3
18.7
21.6
19.4
-
-
-
-
13.7
14.2
17.1
15.0
15.8
6.2
18.0
16.2
18.3
10.2
14.4
10.6
15.8
17.3
13.7
8.6
18.7
10.1
19.6
13.7
13.7
19.6
13.7
13.9
16.5
8.5
14.5
7.2
13.4
9.9
11.3
11.3
16.6
13.0
13.4
14.4
14.4
12.2
16.6
7.2
-
-
-
-
17.3
17.1
17.8
16.5
16.3
6.7
21.1
26.1
19.0
10.9
16.2
12.0
20.9
22.3
18.0
10.8
20.9
13.7
29.4
11.8
15.7
27.5
20.7
17.1
21.7
9.6
12.7
6.2
14.8
14.1
12.7
12.7
15.8
16.2
21.6
21.6
17.3
17.3
20.9
8.6
-
-
-
-
34.1
33.6
15.2
-
13.7
-
50.7
39.4
14.4
-
11.3
-
38.1
46.8
19.4
-
14.4
-
80.4
47.1
17.6
21.6
35.9
31.5
16.0
-
14.2
-
28.5
26.8
13.7
-
14.8
-
33.8
38.8
21.6
-
18.7
-
-
-
-
-
30.5
38.5
18.9
11.6
11.6
4.7
53.5
34.1
17.2
10.2
12.7
4.2
41.0
41.0
25.9
12.9
18.7
10.8
80.4
37.3
19.6
27.5
29.7
35.9
17.6
8.5
15.0
5.4
34.9
26.4
13.4
14.4
13.4
4.6
30.9
21.5
14.4
11.6
16.9
8.1
-
-
-
-
33.1
31.3
21.2
9.3
14.5
9.8
44.0
36.3
17.6
9.9
15.5
11.3
24.5
38.1
23.7
15.1
18.0
10.8
72.5
19.6
13.7
29.4
38.2
27.4
18.1
9.3
15.0
7.5
27.8
21.5
14.4
11.6
16.9
8.1
31.7
21.6
18.7
14.4
19.4
18.7
-
-
-
-
24.85
22.59
18.39
12.07
15.03
7.01
27.56
21.98
15.55
11.73
15.20
10.19
25.93
25.18
18.97
13.91
18.89
12.07
48.17
22.70
16.51
24.66
Table 2: True Positive rate at low False Positive rate (FPR=5%) results on the Original and Paraphrased
subsets of the WikiMIA benchmark (Shi et al., 2024). Note that the paraphrased version of the 256-
token subset of WikiMIA is not published on HuggingFace, which is why some results are missing for
256 tokens. Bold shows the best results and underline shows the second best results in each section.
As shown, our Infilling Score method on average achieves higher True Positive rate compared to
existing methods, with the best performance on 256-token long sequences.
to use for inference. Since our method offers the capability to leverage the future as well as past
tokens, it shows a significant gain over current state-of-the-art method when input sequences are long.
4.4.2 EVALUATION ON MIMIR
Table 3 shows the results comparing our Infilling Score method with SoTA methods evaluated on
the challenging MIMIR benchmark. In the MIMIR dataset, samples from the “seen” and “unseen”
classes are sampled from the same dataset to ensure 13-gram overlap of up to 0.8 between the classes.
Reference-based models show high performance on this benchmark. However, the drawback of this
7
Under review as a conference paper at ICLR 2025
Wikipedia
Github
Pile CC
PubMed Central
Method
160M 1.4B 160M 1.4B 160M 1.4B 160M 1.4B
Infilling Score (Ours)
Min-K%++ (Zhang et al., 2024)
Min-K%(Shi et al., 2024)
Zlib (Carlini et al., 2021)
Ref (Carlini et al., 2021)
49.7
49.7
50.2
51.1
51.2
53.4
53.7
51.3
52.0
55.2
65.5
64.8
65.7
67.4
63.9
70.0
69.6
69.9
71.0
67.1
ArXiv
DM Math
53.3
53.7
51.0
50.6
51.0
50.3
50.1
49.6
49.2
52.2
HackerNews
52.3
50.6
50.6
49.9
51.3
53.5
51.4
50.3
50.0
53.1
Average
Method
160M 1.4B 160M 1.4B 160M 1.4B 160M 1.4B
Infilling Score (Ours)
Min-K%++ (Zhang et al., 2024)
Min-K%(Shi et al., 2024)
Zlib (Carlini et al., 2021)
Ref (Carlini et al., 2021)
51.0
50.1
51.0
50.1
49.4
51.3
51.1
51.7
50.9
51.5
53.5
50.5
49.4
48.1
51.1
50.4
50.9
49.7
48.2
51.1
50.9
50.7
50.9
49.7
49.1
52.6
51.3
51.3
50.3
52.2
53.4
52.4
52.6
52.3
52.2
54.9
54.1
53.6
53.2
54.6
Table 3: AUROC results on MIMIR dataset (Duan et al., 2024) for Pythia models for different
sizes. Similar to Zhang et al. (2024), we experiment on a subset of MIMIR with maximum 13-gram
overlap of 0.8 between samples form “seen” and “unseen” class. Bold shows the best results and
underline shows the second best results in each section. As shown, our Infilling Score method overall
outperforms existing reference-free and reference-based methods.
Year Pub. Book Title
Contamination Rate
.
1817
2006
1812
2003
1986
2009
1991
2009
1998
1996
2009
1889
2003
2009
1982
2000
2008
2007
2007
2005
2006
2008
Persuasion
Oakleaf bearers
Grimms’ Fairy Tales
The Sacred Land
Howl’s Moving Castle
CATCHING FIRE
Red Magic
Tenth Grade Bleeds
Mad Ship
Too Good to Leave, Too Bad to Stay
Crouching Vampire, Hidden Fang
Three Men in a Boat (To Say Nothing of the Dog)
Something from the Nightside
The Silver Eagle
The Man From St. Petersburg
Ship of Destiny
The Painted Man
The Center Cannot Hold
Raintree: Sanctuary
Sister of the Dead
The Corfu Trilogy
Ascendancy of the Last
99
76
73
73
69
68
66
64
61
58
56
56
54
53
53
53
53
52
52
52
50
50
Table 4: Books detected in the pretraining data of Llama-3-8B (Dubey et al., 2024). Contamination
rate shows the percentage of excerpts sampled from the books which were classified as “seen” using
the Infilling Score method.
approach is that it requires testing multiple different LLMs to determine the best performing baseline
(Duan et al., 2024; Zhang et al., 2024). Despite the competitive nature of the benchmark, our Infilling
Score achieves the best performance compared to both reference-free and reference-based models on
average over different domains.
4.4.3 DETECTING PRETRAINING DATA OF LLAMA-3
We apply Infilling Score to detect books that were likely used in the pretraining of the Llama3-8B
model, recently released by Meta (Dubey et al., 2024). Llama3 is known to be trained using over 15T
tokens of data (7x larger training set than Llama-2) according to Dubey et al. (2024). No information
about the source and distribution of this data is disclosed by the developers, making it difficult to
construct a labeled MIA dataset of books suitable for this model.
We used our books dataset as a validation set to find the best hyperparameters (k% and # future
tokens, m, and the classification threshold τ ) for identifying samples used in pretraining Llama3.
Since Llama3 has been released in 2024, existing temporal benchmarks such as WikiMIA, BookMIA
(Shi et al., 2024), and BookTection (Duarte et al., 2024) cannot be used for pretraining data detection
on this model. We found that using the next 100 tokens when calculating the Infilling Score shows the
highest accuracy on this benchmark. Table 12 shows the performance of our method on this dataset.
8
378
379
380
381
382
383
384
385
386
387
388
389
390
391
392
393
394
395
396
397
398
399
400
401
402
403
404
405
406
407
408
409
410
411
412
413
414
415
416
417
418
419
420
421
422
423
424
425
426
427
428
429
430
431
Under review as a conference paper at ICLR 2025
432
433
434
435
436
437
438
439
440
441
442
443
444
445
446
447
448
449
450
451
452
453
454
455
456
457
458
459
460
461
462
463
464
465
466
467
468
469
470
471
472
473
474
475
476
477
478
479
480
481
482
483
484
485
Figure 1: Figure shows an example of the distribution of the Infilling Scores for “seen” and “unseen”
excerpts in our validation dataset which consists of text from fiction books. Scores are normalized in
each distribution. The unseen data comes from recent novels published after the training of Llama
3. For the classic novel Persuasion, our method detects 99% of the excerpt to be in the training set.
As seen in this histogram, the distribution for Persuasion matches other seen novels and is clearly
separated from unseen data, as one would expect.
We employ our method on 20,000 excerpts sampled from 200 books. Table 4 presenters the list of
books which we found to be in the training dataset of Llama3-8B with ≥ 50% contamination rate.
Contamination rate shows the percentage of excerpts detected as ’seen’ for each publication. Figure 1
shows that books with high contamination rate have higher sample statistic overlap with the “seen”
excerpts in our validation dataset.
4.4.4 ABLATION STUDY ON THE NUMBER OF FUTURE TOKENS TO USE
It is important to note that the number of future tokens used to calculate the Infilling Score determines
the performance gain of our method. As shown in the Figure 2 increasing the number of future tokens
does not necessarily lead to a higher AUC. However, on the WikiMIA benchmark, using about 5
future tokens leads to relatively better AUC across various context lengths on WikiMIA using Llama-
7B and Llama-13B. We conduct all experiments with different input sequence lengths (32, 64, 128,
and 256) to examine the effect of the number of future tokens across various context lengths. While
the ideal number of next tokens to use remains consistent across various model sizes, the optimal
number may differ depending on data distribution and model architecture. We investigate various
values for m within {0, 1, 3, 5, 10, 20, . . . , N }, where N represents the input sequence length. It’s
important to note that the hyperparameter search does not increase the computational complexity, as
incorporating additional future tokens does not require extra calls to the LLM. We provide additional
results in Appendix A.
Figure 2: The figures show the AUROC achieved by the Infilling Score as the number of future
tokens increases. These results are shown for input sequence lengths of 32, 64, 128, and 256. The
left figure presents the results for Llama-7B, while the right figure shows the results for Llama-13B.
Our baseline, representing existing methods, uses zero future tokens. The optimal number of future
tokens to use is 1 for sequences of 32 tokens. For longer sequences of up to 256 tokens, the optimal
number is around 5 for both models.
9
Under review as a conference paper at ICLR 2025
486
487
488
489
490
491
492
493
494
495
496
497
498
499
500
501
502
503
504
505
506
507
508
509
510
511
512
513
514
515
516
517
518
519
520
521
522
523
524
525
526
527
528
529
530
531
532
533
534
535
536
537
538
539
4.5 ALGORITHM RUNTIME
Table 5 compares the runtime of our Infilling Score algorithm with straightforward application
of Bayes, and Min-K%++ (Shi et al., 2024) using Llama-7B. Although both the naive approach
and Infilling score are slower than Min-K%, these methods yield a more accurate estimate of
token likelihoods for membership inference. Note that our proposed test-statistic, Infilling Score,
significantly reduces the computational complexity compared to the naive approach, delivering an
accurate membership inference score within a feasible runtime. WikiMIA dataset has 776 sequences
of length 32, 542 of length 64, 250 of length 128, and 82 of length 256 tokens. The compute
cost increases with sequence length. The 256-token sequences require approximately 2,460 seconds
compared to 776 seconds for 32-token sequences (30 seconds per sequence), highlighting the trade-off
between detection accuracy and computational efficiency.
Seq. length Min-K%++
0.028 sec.
0.042 sec.
0.064 sec.
0.106 sec.
32
64
128
256
Infilling Score Naive Approach
0.952 sec.
3.11 sec.
9.47 sec.
29.98 sec.
207 sec.
334 sec.
581 sec.
1141 sec.
Table 5: Algorithm runtime results comparing Infilling Score, Min-K%++, and the naive approach
discussed in Section 3, sequences of 32, 64, 128, and 256 tokens using Llama-7B on a H200 GPU.
To evaluate the impact of the number of future tokens used, m, on the runtime, we measure the
runtime using 1, 5, and 10 future tokens. As discussed in Section 3.1, the number of LLM calls
required by our Infilling Score algorithm is independent of the number of future tokens used. However,
increasing the number of future tokens also increases the number of terms in the summations in
equation 10. The additional computations have a minimal impact on the runtime as shown in Table 6.
Seq. length
# future tokens
1
32
5
10
1
64
5
10
1
128
5
10
1
256
5
10
Runtime
0.952 sec.
0.953 sec.
0.956 sec.
3.11 sec.
3.12 sec.
3.12 sec.
9.47 sec.
9.48 sec.
9.49 sec.
29.98 sec.
30.01 sec.
30.04 sec.
Table 6: Algorithm runtime as the number of future tokens used increases. As the table indicates,
increasing the number of future tokens to use has minimal impact on runtime.
5 CONCLUSIONS
Limitations One limitation is that computing the Infilling Score requires grey-box access to the LLM,
meaning access to the sample log probabilities estimated by the model. This requirement is common
among most of the existing membership inference methods. Another limitation of our approach lies
in its runtime complexity. As described in Section 3.1, the order of LLM calls required for computing
the infilling likelihood (for a sequence of length N ) with the naive Bayes method is N |V|, which
scales linearly with both sequence length N , and vocabulary size |V|. By introducing the Infilling
Score, we reduce the number of LLM calls to 2N . However, prior methods such as Min-K%and
Min-K%++ require only a single LLM call (to test a sequence of length N ), and are faster compared
to our proposed algorithm.
To conclude, we proposed a novel method that can detect if text sequences have been present in the
training set with significantly better accuracy compared to prior work. Our new test statistic allows us
to derive non-causal likelihoods (up to a multiplicative factor) from pre-trained autoregressive models
and may have other uses, beyond membership inference. Although our method is slower compared to
previous methods, it can be practically run in a few seconds for large foundation models.
Our results present evidence that numerous books and other recent sources of text have been in the
training data of modern LLMs. This test can further be used for measuring dataset contamination
rates, and also evaluating decontamination methods. An important research direction would be to
create larger evaluation datasets for membership inference, and include high n-gram overlap samples
for recent sources that remain unseen to llama3 and other recently released frontier models.
10
Under review as a conference paper at ICLR 2025
540
541
542
543
544
545
546
547
548
549
550
551
552
553
554
555
556
557
558
559
560
561
562
563
564
565
566
567
568
569
570
571
572
573
574
575
576
577
578
579
580
581
582
583
584
585
586
587
588
589
590
591
592
593
REFERENCES
Raef Bassily, Vitaly Feldman, Cristóbal Guzmán, and Kunal Talwar. Stability of stochastic gradient
descent on nonsmooth convex losses, 2020.
Stella Biderman, Hailey Schoelkopf, Quentin Anthony, Herbie Bradley, Kyle O’Brien, Eric Hallahan,
Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, Aviya Skowron,
Lintang Sutawika, and Oskar van der Wal. Pythia: A suite for analyzing large language models
across training and scaling, 2023.
Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He,
Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu
Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, and Samuel Weinbach. Gpt-neox-20b: An
open-source autoregressive language model, 2022.
Nicholas Carlini, Florian Tramer, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine
Lee, Adam Roberts, Tom Brown, Dawn Song, Ulfar Erlingsson, Alina Oprea, and Colin Raffel.
Extracting training data from large language models, 2021.
Nicholas Carlini, Steve Chien, Milad Nasr, Shuang Song, Andreas Terzis, and Florian Tramer.
Membership inference attacks from first principles, 2022.
Kent K. Chang, Mackenzie Cramer, Sandeep Soni, and David Bamman. Speak, memory: An
archaeology of books known to chatgpt/gpt-4, 2023.
Michael Duan, Anshuman Suri, Niloofar Mireshghallah, Sewon Min, Weijia Shi, Luke Zettlemoyer,
Yulia Tsvetkov, Yejin Choi, David Evans, and Hannaneh Hajishirzi. Do membership inference
attacks work on large language models? arXiv preprint arXiv:2402.07841, 2024.
André V. Duarte, Xuandong Zhao, Arlindo L. Oliveira, and Lei Li. De-cop: Detecting copyrighted
content in language models training data, 2024.
Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha
Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, Anirudh Goyal, Anthony Hartshorn,
Aobo Yang, Archi Mitra, Archie Sravankumar, Artem Korenev, Arthur Hinsvark, Arun Rao, Aston
Zhang, Aurelien Rodriguez, Austen Gregerson, Ava Spataru, Baptiste Roziere, Bethany Biron,
Binh Tang, Bobbie Chern, Charlotte Caucheteux, Chaya Nayak, Chloe Bi, Chris Marra, Chris
McConnell, Christian Keller, Christophe Touret, Chunyang Wu, Corinne Wong, Cristian Canton
Ferrer, Cyrus Nikolaidis, Damien Allonsius, Daniel Song, Danielle Pintz, Danny Livshits, David
Esiobu, Dhruv Choudhary, Dhruv Mahajan, Diego Garcia-Olano, Diego Perino, Dieuwke Hupkes,
Egor Lakomkin, Ehab AlBadawy, Elina Lobanova, Emily Dinan, Eric Michael Smith, Filip
Radenovic, Frank Zhang, Gabriel Synnaeve, Gabrielle Lee, Georgia Lewis Anderson, Graeme
Nail, Gregoire Mialon, Guan Pang, Guillem Cucurell, Hailey Nguyen, Hannah Korevaar, Hu Xu,
Hugo Touvron, Iliyan Zarov, Imanol Arrieta Ibarra, Isabel Kloumann, Ishan Misra, Ivan Evtimov,
Jade Copet, Jaewon Lee, Jan Geffert, Jana Vranes, Jason Park, Jay Mahadeokar, Jeet Shah,
Jelmer van der Linde, Jennifer Billock, Jenny Hong, Jenya Lee, Jeremy Fu, Jianfeng Chi, Jianyu
Huang, Jiawen Liu, Jie Wang, Jiecao Yu, Joanna Bitton, Joe Spisak, Jongsoo Park, Joseph
Rocca, Joshua Johnstun, Joshua Saxe, Junteng Jia, Kalyan Vasuden Alwala, Kartikeya Upasani,
Kate Plawiak, Ke Li, Kenneth Heafield, Kevin Stone, Khalid El-Arini, Krithika Iyer, Kshitiz
Malik, Kuenley Chiu, Kunal Bhalla, Lauren Rantala-Yeary, Laurens van der Maaten, Lawrence
Chen, Liang Tan, Liz Jenkins, Louis Martin, Lovish Madaan, Lubo Malo, Lukas Blecher, Lukas
Landzaat, Luke de Oliveira, Madeline Muzzi, Mahesh Pasupuleti, Mannat Singh, Manohar Paluri,
Marcin Kardas, Mathew Oldham, Mathieu Rita, Maya Pavlova, Melanie Kambadur, Mike Lewis,
Min Si, Mitesh Kumar Singh, Mona Hassan, Naman Goyal, Narjes Torabi, Nikolay Bashlykov,
Nikolay Bogoychev, Niladri Chatterji, Olivier Duchenne, Onur Çelebi, Patrick Alrassy, Pengchuan
Zhang, Pengwei Li, Petar Vasic, Peter Weng, Prajjwal Bhargava, Pratik Dubal, Praveen Krishnan,
Punit Singh Koura, Puxin Xu, Qing He, Qingxiao Dong, Ragavan Srinivasan, Raj Ganapathy,
Ramon Calderer, Ricardo Silveira Cabral, Robert Stojnic, Roberta Raileanu, Rohit Girdhar, Rohit
Patel, Romain Sauvestre, Ronnie Polidoro, Roshan Sumbaly, Ross Taylor, Ruan Silva, Rui Hou,
Rui Wang, Saghar Hosseini, Sahana Chennabasappa, Sanjay Singh, Sean Bell, Seohyun Sonia
Kim, Sergey Edunov, Shaoliang Nie, Sharan Narang, Sharath Raparthy, Sheng Shen, Shengye Wan,
Shruti Bhosale, Shun Zhang, Simon Vandenhende, Soumya Batra, Spencer Whitman, Sten Sootla,
11
Under review as a conference paper at ICLR 2025
594
595
596
597
598
599
600
601
602
603
604
605
606
607
608
609
610
611
612
613
614
615
616
617
618
619
620
621
622
623
624
625
626
627
628
629
630
631
632
633
634
635
636
637
638
639
640
641
642
643
644
645
646
647
Stephane Collot, Suchin Gururangan, Sydney Borodinsky, Tamar Herman, Tara Fowler, Tarek
Sheasha, Thomas Georgiou, Thomas Scialom, Tobias Speckbacher, Todor Mihaylov, Tong Xiao,
Ujjwal Karn, Vedanuj Goswami, Vibhor Gupta, Vignesh Ramanathan, Viktor Kerkez, Vincent
Gonguet, Virginie Do, Vish Vogeti, Vladan Petrovic, Weiwei Chu, Wenhan Xiong, Wenyin Fu,
Whitney Meers, Xavier Martinet, Xiaodong Wang, Xiaoqing Ellen Tan, Xinfeng Xie, Xuchao Jia,
Xuewei Wang, Yaelle Goldschlag, Yashesh Gaur, Yasmine Babaei, Yi Wen, Yiwen Song, Yuchen
Zhang, Yue Li, Yuning Mao, Zacharie Delpierre Coudert, Zheng Yan, Zhengxing Chen, Zoe
Papakipos, Aaditya Singh, Aaron Grattafiori, Abha Jain, Adam Kelsey, Adam Shajnfeld, Adithya
Gangidi, Adolfo Victoria, Ahuva Goldstand, Ajay Menon, Ajay Sharma, Alex Boesenberg, Alex
Vaughan, Alexei Baevski, Allie Feinstein, Amanda Kallet, Amit Sangani, Anam Yunus, Andrei
Lupu, Andres Alvarado, Andrew Caples, Andrew Gu, Andrew Ho, Andrew Poulton, Andrew
Ryan, Ankit Ramchandani, Annie Franco, Aparajita Saraf, Arkabandhu Chowdhury, Ashley
Gabriel, Ashwin Bharambe, Assaf Eisenman, Azadeh Yazdan, Beau James, Ben Maurer, Benjamin
Leonhardi, Bernie Huang, Beth Loyd, Beto De Paola, Bhargavi Paranjape, Bing Liu, Bo Wu,
Boyu Ni, Braden Hancock, Bram Wasti, Brandon Spence, Brani Stojkovic, Brian Gamido, Britt
Montalvo, Carl Parker, Carly Burton, Catalina Mejia, Changhan Wang, Changkyu Kim, Chao
Zhou, Chester Hu, Ching-Hsiang Chu, Chris Cai, Chris Tindal, Christoph Feichtenhofer, Damon
Civin, Dana Beaty, Daniel Kreymer, Daniel Li, Danny Wyatt, David Adkins, David Xu, Davide
Testuggine, Delia David, Devi Parikh, Diana Liskovich, Didem Foss, Dingkang Wang, Duc Le,
Dustin Holland, Edward Dowling, Eissa Jamil, Elaine Montgomery, Eleonora Presani, Emily
Hahn, Emily Wood, Erik Brinkman, Esteban Arcaute, Evan Dunbar, Evan Smothers, Fei Sun, Felix
Kreuk, Feng Tian, Firat Ozgenel, Francesco Caggioni, Francisco Guzmán, Frank Kanayet, Frank
Seide, Gabriela Medina Florez, Gabriella Schwarz, Gada Badeer, Georgia Swee, Gil Halpern,
Govind Thattai, Grant Herman, Grigory Sizov, Guangyi, Zhang, Guna Lakshminarayanan, Hamid
Shojanazeri, Han Zou, Hannah Wang, Hanwen Zha, Haroun Habeeb, Harrison Rudolph, Helen
Suk, Henry Aspegren, Hunter Goldman, Ibrahim Damlaj, Igor Molybog, Igor Tufanov, Irina-
Elena Veliche, Itai Gat, Jake Weissman, James Geboski, James Kohli, Japhet Asher, Jean-Baptiste
Gaya, Jeff Marcus, Jeff Tang, Jennifer Chan, Jenny Zhen, Jeremy Reizenstein, Jeremy Teboul,
Jessica Zhong, Jian Jin, Jingyi Yang, Joe Cummings, Jon Carvill, Jon Shepard, Jonathan McPhie,
Jonathan Torres, Josh Ginsburg, Junjie Wang, Kai Wu, Kam Hou U, Karan Saxena, Karthik
Prasad, Kartikay Khandelwal, Katayoun Zand, Kathy Matosich, Kaushik Veeraraghavan, Kelly
Michelena, Keqian Li, Kun Huang, Kunal Chawla, Kushal Lakhotia, Kyle Huang, Lailin Chen,
Lakshya Garg, Lavender A, Leandro Silva, Lee Bell, Lei Zhang, Liangpeng Guo, Licheng Yu,
Liron Moshkovich, Luca Wehrstedt, Madian Khabsa, Manav Avalani, Manish Bhatt, Maria
Tsimpoukelli, Martynas Mankus, Matan Hasson, Matthew Lennie, Matthias Reso, Maxim Groshev,
Maxim Naumov, Maya Lathi, Meghan Keneally, Michael L. Seltzer, Michal Valko, Michelle
Restrepo, Mihir Patel, Mik Vyatskov, Mikayel Samvelyan, Mike Clark, Mike Macey, Mike Wang,
Miquel Jubert Hermoso, Mo Metanat, Mohammad Rastegari, Munish Bansal, Nandhini Santhanam,
Natascha Parks, Natasha White, Navyata Bawa, Nayan Singhal, Nick Egebo, Nicolas Usunier,
Nikolay Pavlovich Laptev, Ning Dong, Ning Zhang, Norman Cheng, Oleg Chernoguz, Olivia
Hart, Omkar Salpekar, Ozlem Kalinli, Parkin Kent, Parth Parekh, Paul Saab, Pavan Balaji, Pedro
Rittner, Philip Bontrager, Pierre Roux, Piotr Dollar, Polina Zvyagina, Prashant Ratanchandani,
Pritish Yuvraj, Qian Liang, Rachad Alao, Rachel Rodriguez, Rafi Ayub, Raghotham Murthy,
Raghu Nayani, Rahul Mitra, Raymond Li, Rebekkah Hogan, Robin Battey, Rocky Wang, Rohan
Maheswari, Russ Howes, Ruty Rinott, Sai Jayesh Bondu, Samyak Datta, Sara Chugh, Sara
Hunt, Sargun Dhillon, Sasha Sidorov, Satadru Pan, Saurabh Verma, Seiji Yamamoto, Sharadh
Ramaswamy, Shaun Lindsay, Shaun Lindsay, Sheng Feng, Shenghao Lin, Shengxin Cindy Zha,
Shiva Shankar, Shuqiang Zhang, Shuqiang Zhang, Sinong Wang, Sneha Agarwal, Soji Sajuyigbe,
Soumith Chintala, Stephanie Max, Stephen Chen, Steve Kehoe, Steve Satterfield, Sudarshan
Govindaprasad, Sumit Gupta, Sungmin Cho, Sunny Virk, Suraj Subramanian, Sy Choudhury,
Sydney Goldman, Tal Remez, Tamar Glaser, Tamara Best, Thilo Kohler, Thomas Robinson, Tianhe
Li, Tianjun Zhang, Tim Matthews, Timothy Chou, Tzook Shaked, Varun Vontimitta, Victoria Ajayi,
Victoria Montanez, Vijai Mohan, Vinay Satish Kumar, Vishal Mangla, Vítor Albiero, Vlad Ionescu,
Vlad Poenaru, Vlad Tiberiu Mihailescu, Vladimir Ivanov, Wei Li, Wenchen Wang, Wenwen Jiang,
Wes Bouaziz, Will Constable, Xiaocheng Tang, Xiaofang Wang, Xiaojian Wu, Xiaolan Wang,
Xide Xia, Xilun Wu, Xinbo Gao, Yanjun Chen, Ye Hu, Ye Jia, Ye Qi, Yenda Li, Yilin Zhang,
Ying Zhang, Yossi Adi, Youngjin Nam, Yu, Wang, Yuchen Hao, Yundi Qian, Yuzi He, Zach Rait,
Zachary DeVito, Zef Rosnbrick, Zhaoduo Wen, Zhenyu Yang, and Zhiwei Zhao. The llama 3 herd
of models, 2024. URL https://arxiv.org/abs/2407.21783.
12
Under review as a conference paper at ICLR 2025
648
649
650
651
652
653
654
655
656
657
658
659
660
661
662
663
664
665
666
667
668
669
670
671
672
673
674
675
676
677
678
679
680
681
682
683
684
685
686
687
688
689
690
691
692
693
694
695
696
697
698
699
700
701
Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang,
Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, and Connor Leahy. The pile: An 800gb
dataset of diverse text for language modeling, 2020.
Michael M. Grynbaum and Ryan Mac. The times sues openai and microsoft over a.i. use
of copyrighted work. https://www.nytimes.com/2023/12/27/business/media/
new-york-times-open-ai-microsoft-lawsuit.html, 2023.
Albert Gu and Tri Dao. Mamba: Linear-time sequence modeling with selective state spaces, 2023.
Moritz Hardt, Benjamin Recht, and Yoram Singer. Train faster, generalize better: Stability of
stochastic gradient descent, 2016.
Minhao Jiang, Ken Ziyu Liu, Ming Zhong, Rylan Schaeffer, Siru Ouyang, Jiawei Han, and Sanmi
Koyejo. Investigating data contamination for pre-training language models, 2024. URL https:
//arxiv.org/abs/2401.06059.
Jeffrey Li, Alex Fang, Georgios Smyrnis, Maor Ivgi, Matt Jordan, Samir Gadre, Hritik Bansal, Etash
Guha, Sedrick Keh, Kushal Arora, Saurabh Garg, Rui Xin, Niklas Muennighoff, Reinhard Heckel,
Jean Mercat, Mayee Chen, Suchin Gururangan, Mitchell Wortsman, Alon Albalak, Yonatan Bitton,
Marianna Nezhurina, Amro Abbas, Cheng-Yu Hsieh, Dhruba Ghosh, Josh Gardner, Maciej Kilian,
Hanlin Zhang, Rulin Shao, Sarah Pratt, Sunny Sanyal, Gabriel Ilharco, Giannis Daras, Kalyani
Marathe, Aaron Gokaslan, Jieyu Zhang, Khyathi Chandu, Thao Nguyen, Igor Vasiljevic, Sham
Kakade, Shuran Song, Sujay Sanghavi, Fartash Faghri, Sewoong Oh, Luke Zettlemoyer, Kyle Lo,
Alaaeldin El-Nouby, Hadi Pouransari, Alexander Toshev, Stephanie Wang, Dirk Groeneveld, Luca
Soldaini, Pang Wei Koh, Jenia Jitsev, Thomas Kollar, Alexandros G. Dimakis, Yair Carmon, Achal
Dave, Ludwig Schmidt, and Vaishaal Shankar. Datacomp-lm: In search of the next generation of
training sets for language models, 2024. URL https://arxiv.org/abs/2406.11794.
Saeed Mahloujifar, Huseyin A. Inan, Melissa Chase, Esha Ghosh, and Marcello Hasegawa. Member-
ship inference on word embedding and beyond, 2021.
Justus Mattern, Fatemehsadat Mireshghallah, Zhijing Jin, Bernhard Schoelkopf, Mrinmaya Sachan,
and Taylor Berg-Kirkpatrick. Membership inference attacks against language models via neigh-
bourhood comparison. In Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki (eds.), Findings
of the Association for Computational Linguistics: ACL 2023, pp. 11330–11343, Toronto, Canada,
July 2023a. Association for Computational Linguistics. doi: 10.18653/v1/2023.findings-acl.719.
URL https://aclanthology.org/2023.findings-acl.719.
Justus Mattern, Fatemehsadat Mireshghallah, Zhijing Jin, Bernhard Schölkopf, Mrinmaya Sachan,
and Taylor Berg-Kirkpatrick. Membership inference attacks against language models via neigh-
bourhood comparison, 2023b.
Matthieu Meeus, Shubham Jain, Marek Rei, and Yves-Alexandre de Montjoye. Did the neurons read
your book? document-level membership inference for large language models, 2023.
Sewon Min, Suchin Gururangan, Eric Wallace, Hannaneh Hajishirzi, Noah A. Smith, and Luke
Zettlemoyer. Silo language models: Isolating legal risk in a nonparametric datastore, 2023.
Fatemehsadat Mireshghallah, Kartik Goyal, Archit Uniyal, Taylor Berg-Kirkpatrick, and Reza Shokri.
Quantifying privacy risks of masked language models using membership inference attacks, 2022.
Eric Mitchell, Yoonho Lee, Alexander Khazatsky, Christopher D. Manning, and Chelsea Finn.
Detectgpt: Zero-shot machine-generated text detection using probability curvature, 2023.
Maximilian Mozes, Xuanli He, Bennett Kleinberg, and Lewis D. Griffin. Use of llms for illicit
purposes: Threats, prevention measures, and vulnerabilities, 2023.
OpenAI, Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni
Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, Red Avila, Igor
Babuschkin, Suchir Balaji, Valerie Balcom, Paul Baltescu, Haiming Bao, Mohammad Bavarian,
Jeff Belgum, Irwan Bello, Jake Berdine, Gabriel Bernadett-Shapiro, Christopher Berner, Lenny
Bogdonoff, Oleg Boiko, Madelaine Boyd, Anna-Luisa Brakman, Greg Brockman, Tim Brooks,
13
Under review as a conference paper at ICLR 2025
702
703
704
705
706
707
708
709
710
711
712
713
714
715
716
717
718
719
720
721
722
723
724
725
726
727
728
729
730
731
732
733
734
735
736
737
738
739
740
741
742
743
744
745
746
747
748
749
750
751
752
753
754
755
Miles Brundage, Kevin Button, Trevor Cai, Rosie Campbell, Andrew Cann, Brittany Carey, Chelsea
Carlson, Rory Carmichael, Brooke Chan, Che Chang, Fotis Chantzis, Derek Chen, Sully Chen,
Ruby Chen, Jason Chen, Mark Chen, Ben Chess, Chester Cho, Casey Chu, Hyung Won Chung,
Dave Cummings, Jeremiah Currier, Yunxing Dai, Cory Decareaux, Thomas Degry, Noah Deutsch,
Damien Deville, Arka Dhar, David Dohan, Steve Dowling, Sheila Dunning, Adrien Ecoffet, Atty
Eleti, Tyna Eloundou, David Farhi, Liam Fedus, Niko Felix, Simón Posada Fishman, Juston Forte,
Isabella Fulford, Leo Gao, Elie Georges, Christian Gibson, Vik Goel, Tarun Gogineni, Gabriel
Goh, Rapha Gontijo-Lopes, Jonathan Gordon, Morgan Grafstein, Scott Gray, Ryan Greene, Joshua
Gross, Shixiang Shane Gu, Yufei Guo, Chris Hallacy, Jesse Han, Jeff Harris, Yuchen He, Mike
Heaton, Johannes Heidecke, Chris Hesse, Alan Hickey, Wade Hickey, Peter Hoeschele, Brandon
Houghton, Kenny Hsu, Shengli Hu, Xin Hu, Joost Huizinga, Shantanu Jain, Shawn Jain, Joanne
Jang, Angela Jiang, Roger Jiang, Haozhun Jin, Denny Jin, Shino Jomoto, Billie Jonn, Heewoo
Jun, Tomer Kaftan, Łukasz Kaiser, Ali Kamali, Ingmar Kanitscheider, Nitish Shirish Keskar,
Tabarak Khan, Logan Kilpatrick, Jong Wook Kim, Christina Kim, Yongjik Kim, Jan Hendrik
Kirchner, Jamie Kiros, Matt Knight, Daniel Kokotajlo, Łukasz Kondraciuk, Andrew Kondrich,
Aris Konstantinidis, Kyle Kosic, Gretchen Krueger, Vishal Kuo, Michael Lampe, Ikai Lan, Teddy
Lee, Jan Leike, Jade Leung, Daniel Levy, Chak Ming Li, Rachel Lim, Molly Lin, Stephanie
Lin, Mateusz Litwin, Theresa Lopez, Ryan Lowe, Patricia Lue, Anna Makanju, Kim Malfacini,
Sam Manning, Todor Markov, Yaniv Markovski, Bianca Martin, Katie Mayer, Andrew Mayne,
Bob McGrew, Scott Mayer McKinney, Christine McLeavey, Paul McMillan, Jake McNeil, David
Medina, Aalok Mehta, Jacob Menick, Luke Metz, Andrey Mishchenko, Pamela Mishkin, Vinnie
Monaco, Evan Morikawa, Daniel Mossing, Tong Mu, Mira Murati, Oleg Murk, David Mély,
Ashvin Nair, Reiichiro Nakano, Rajeev Nayak, Arvind Neelakantan, Richard Ngo, Hyeonwoo
Noh, Long Ouyang, Cullen O’Keefe, Jakub Pachocki, Alex Paino, Joe Palermo, Ashley Pantuliano,
Giambattista Parascandolo, Joel Parish, Emy Parparita, Alex Passos, Mikhail Pavlov, Andrew Peng,
Adam Perelman, Filipe de Avila Belbute Peres, Michael Petrov, Henrique Ponde de Oliveira Pinto,
Michael, Pokorny, Michelle Pokrass, Vitchyr H. Pong, Tolly Powell, Alethea Power, Boris Power,
Elizabeth Proehl, Raul Puri, Alec Radford, Jack Rae, Aditya Ramesh, Cameron Raymond, Francis
Real, Kendra Rimbach, Carl Ross, Bob Rotsted, Henri Roussez, Nick Ryder, Mario Saltarelli, Ted
Sanders, Shibani Santurkar, Girish Sastry, Heather Schmidt, David Schnurr, John Schulman, Daniel
Selsam, Kyla Sheppard, Toki Sherbakov, Jessica Shieh, Sarah Shoker, Pranav Shyam, Szymon
Sidor, Eric Sigler, Maddie Simens, Jordan Sitkin, Katarina Slama, Ian Sohl, Benjamin Sokolowsky,
Yang Song, Natalie Staudacher, Felipe Petroski Such, Natalie Summers, Ilya Sutskever, Jie Tang,
Nikolas Tezak, Madeleine B. Thompson, Phil Tillet, Amin Tootoonchian, Elizabeth Tseng, Preston
Tuggle, Nick Turley, Jerry Tworek, Juan Felipe Cerón Uribe, Andrea Vallone, Arun Vijayvergiya,
Chelsea Voss, Carroll Wainwright, Justin Jay Wang, Alvin Wang, Ben Wang, Jonathan Ward, Jason
Wei, CJ Weinmann, Akila Welihinda, Peter Welinder, Jiayi Weng, Lilian Weng, Matt Wiethoff,
Dave Willner, Clemens Winter, Samuel Wolrich, Hannah Wong, Lauren Workman, Sherwin Wu,
Jeff Wu, Michael Wu, Kai Xiao, Tao Xu, Sarah Yoo, Kevin Yu, Qiming Yuan, Wojciech Zaremba,
Rowan Zellers, Chong Zhang, Marvin Zhang, Shengjia Zhao, Tianhao Zheng, Juntang Zhuang,
William Zhuk, and Barret Zoph. Gpt-4 technical report, 2024.
Yonatan Oren, Nicole Meister, Niladri Chatterji, Faisal Ladhak, and Tatsunori B. Hashimoto. Proving
test set contamination in black box language models, 2023.
Ashwinee Panda, Christopher A. Choquette-Choo, Zhengming Zhang, Yaoqing Yang, and Prateek
Mittal. Teach llms to phish: Stealing private information from language models, 2024.
Virat Shejwalkar, Huseyin A. Inan, Amir Houmansadr, and Robert Sim. Membership inference
attacks against nlp classification models. 2021. URL https://api.semanticscholar.
org/CorpusID:245222525.
Weijia Shi, Anirudh Ajith, Mengzhou Xia, Yangsibo Huang, Daogao Liu, Terra Blevins, Danqi Chen,
and Luke Zettlemoyer. Detecting pretraining data from large language models, 2024.
Reza Shokri, Marco Stronati, Congzheng Song, and Vitaly Shmatikov. Membership inference attacks
against machine learning models, 2017.
Congzheng Song and Vitaly Shmatikov. Auditing data provenance in text-generation models, 2019.
14
Under review as a conference paper at ICLR 2025
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée
Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand
Joulin, Edouard Grave, and Guillaume Lample. Llama: Open and efficient foundation language
models, 2023a.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay
Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cris-
tian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu,
Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn,
Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel
Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee,
Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra,
Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi,
Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh
Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen
Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic,
Sergey Edunov, and Thomas Scialom. Llama 2: Open foundation and fine-tuned chat models,
2023b.
Shuo Yang, Wei-Lin Chiang, Lianmin Zheng, Joseph E. Gonzalez, and Ion Stoica. Rethinking
benchmark and contamination for language models with rephrased samples, 2023. URL https:
//arxiv.org/abs/2311.04850.
Jingyang Zhang, Jingwei Sun, Eric Yeats, Yang Ouyang, Martin Kuo, Jianyi Zhang, Hao Yang, and
Hai Li. Min-k%++: Improved baseline for detecting pre-training data from large language models.
arXiv preprint arXiv:2404.02936, 2024.
Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen, Wayne Xin Zhao, Xu Chen, Yankai Lin,
Ji-Rong Wen, and Jiawei Han. Don’t make your llm an evaluation benchmark cheater, 2023.
15
756
757
758
759
760
761
762
763
764
765
766
767
768
769
770
771
772
773
774
775
776
777
778
779
780
781
782
783
784
785
786
787
788
789
790
791
792
793
794
795
796
797
798
799
800
801
802
803
804
805
806
807
808
809
Under review as a conference paper at ICLR 2025
A CHOICE OF HYPERPARAMETERS
Infilling Score algorithm has two hyperparameters: m which represents the number of future tokens
to use, and k which represents the k% tokens with minimum probabilities to use. We sweep over
1, 3, 5, 10 and 20 future tokens, and k = 0.1, 0.2, ...0.5. Tables 7, 8, 9, and 10 show AUROC and
TPR at low FPR results on WikiMIA subsets with sequence lengths of 32, 64, 128, and 256. Based
on the results, the optimal number of future tokens is 1 for sequences of 32 tokens and 5 for longer
sequences. We find that k = 0.1 often works best across different model sizes and sequence lengths.
# future tokens
k (Min-k%) AUROC FPR@TPR95 TPR@FPR05 AUROC FPR@TPR95 TPR@FPR05 AUROC FPR@TPR95 TPR@FPR05
Llama-7B
Llama-13B
Llama-30B
1
3
5
10
20
0.1
0.2
0.3
0.4
0.5
0.1
0.2
0.3
0.4
0.5
0.1
0.2
0.3
0.4
0.5
0.1
0.2
0.3
0.4
0.5
0.1
0.2
0.3
0.4
0.5
89.10
88.10
88.00
88.10
88.00
88.20
87.80
87.80
87.80
87.70
88.00
87.60
87.60
87.70
87.50
86.80
86.60
86.60
86.60
86.40
82.00
81.80
81.80
81.80
81.40
33.90
36.80
37.80
36.00
37.30
34.70
35.70
37.30
36.00
37.30
35.50
37.50
37.80
37.50
37.80
39.60
41.10
41.40
41.60
41.60
47.30
47.80
47.80
48.10
46.80
30.50
27.60
27.90
25.60
25.30
29.50
31.00
31.80
32.00
28.20
34.10
32.80
32.80
33.90
30.50
28.40
26.60
27.10
26.90
25.10
22.70
23.50
22.50
23.00
21.70
89.20
88.60
88.60
88.60
88.60
89.00
88.70
88.60
88.70
88.70
88.40
88.30
88.20
88.30
88.30
86.80
86.60
86.60
86.70
86.60
81.20
81.00
80.90
81.00
80.80
32.90
35.50
36.20
35.20
36.80
32.10
38.30
38.80
38.30
38.00
32.90
33.70
36.20
35.20
34.40
37.00
37.50
36.20
35.50
36.00
44.20
45.50
44.50
45.80
45.00
24.50
26.40
26.90
27.60
25.60
29.50
28.70
28.40
30.50
30.00
27.60
29.50
27.60
30.50
28.90
26.90
26.90
26.40
26.10
26.40
19.60
20.20
19.90
20.20
20.20
87.80
87.30
87.30
87.30
87.30
86.80
86.60
86.60
86.60
86.60
87.30
87.20
87.30
87.30
87.30
85.30
85.20
85.20
85.20
85.30
81.30
81.20
81.10
81.20
81.20
37.30
37.80
39.10
37.80
38.80
36.00
37.50
37.50
37.50
36.80
35.20
35.20
35.50
35.70
35.50
39.80
40.60
40.90
41.40
40.90
47.80
49.10
49.40
48.60
48.30
27.90
27.40
28.90
28.20
27.10
27.90
27.40
28.90
27.10
28.70
31.80
33.10
31.50
32.00
31.00
28.40
27.90
27.90
27.90
27.90
22.20
22.20
22.20
22.20
22.20
Table 7: Complete Infilling Score results testing Llama-7B, Llama-13B, and Llama-30B models on
the Original subset of the WikiMIA 32-token sequences (Shi et al., 2024). For this subset, using one
future token results in the best performance.
# future tokens
k (Min-k%) AUROC FPR@TPR95 TPR@FPR05 AUROC FPR@TPR95 TPR@FPR05 AUROC FPR@TPR95 TPR@FPR05
Llama-7B
Llama-13B
Llama-30B
1
3
5
10
20
0.1
0.2
0.3
0.4
0.5
0.1
0.2
0.3
0.4
0.5
0.1
0.2
0.3
0.4
0.5
0.1
0.2
0.3
0.4
0.5
0.1
0.2
0.3
0.4
0.5
89.60
89.00
89.00
89.00
89.00
89.70
89.60
89.60
89.70
89.70
89.70
89.70
89.70
89.70
89.70
88.70
88.70
88.70
88.70
88.60
83.00
83.00
82.90
83.00
82.90
33.70
36.40
35.70
36.80
36.40
37.60
38.00
37.60
37.60
37.60
38.00
37.20
38.40
37.20
38.00
40.30
41.10
41.10
40.70
41.10
51.90
52.70
52.70
52.30
52.70
44.00
45.40
45.80
46.80
45.80
41.20
38.70
37.30
39.80
39.40
45.80
45.40
45.80
46.10
46.50
52.80
52.50
52.10
53.20
53.50
33.10
32.40
32.70
32.40
33.10
89.90
89.60
89.60
89.60
89.60
90.00
90.00
90.00
90.00
90.00
90.10
90.10
90.10
90.10
90.00
88.70
88.70
88.70
88.80
88.50
82.50
82.50
82.50
82.50
82.30
35.30
38.40
39.50
37.60
39.50
36.80
37.60
38.80
38.40
38.40
37.20
39.50
39.10
39.10
38.80
40.70
41.50
41.50
41.90
42.20
51.20
50.80
50.80
50.80
50.80
46.10
47.90
47.20
47.20
47.20
52.10
53.50
52.50
53.50
53.20
48.60
49.30
50.00
49.60
46.10
47.50
47.20
47.50
47.50
46.80
26.40
27.10
27.10
27.50
27.10
87.70
87.50
87.50
87.60
87.60
88.00
88.00
88.00
88.00
88.00
88.30
88.30
88.30
88.40
88.30
87.30
87.30
87.30
87.30
87.30
82.50
82.50
82.50
82.50
82.40
36.40
35.70
35.70
35.70
35.70
37.60
35.70
35.70
35.70
35.70
35.30
34.90
34.10
35.30
35.30
38.80
38.00
38.00
38.40
37.60
49.60
49.60
49.20
49.20
48.80
39.10
38.40
39.40
38.70
39.10
36.30
37.30
37.30
37.70
37.70
43.00
43.70
44.00
44.00
43.70
37.70
38.70
38.70
39.10
38.00
36.30
35.60
35.90
35.60
35.60
Table 8: Complete Infilling Score results testing Llama-7B, Llama-13B, and Llama-30B models on
the Original subset of the WikiMIA 64-token sequences (Shi et al., 2024). For this subset, using five
future tokens results in the best performance.
16
810
811
812
813
814
815
816
817
818
819
820
821
822
823
824
825
826
827
828
829
830
831
832
833
834
835
836
837
838
839
840
841
842
843
844
845
846
847
848
849
850
851
852
853
854
855
856
857
858
859
860
861
862
863
Under review as a conference paper at ICLR 2025
864
865
866
867
868
869
870
871
872
873
874
875
876
877
878
879
880
881
882
883
884
885
886
887
888
889
890
891
892
893
894
895
896
897
898
899
900
901
902
903
904
905
906
907
908
909
910
911
912
913
914
915
916
917
# future tokens
k (Min-k%) AUROC FPR@TPR95 TPR@FPR05 AUROC FPR@TPR95 TPR@FPR05 AUROC FPR@TPR95 TPR@FPR05
Llama-7B
Llama-13B
Llama-30B
1
3
5
10
20
0.1
0.2
0.3
0.4
0.5
0.1
0.2
0.3
0.4
0.5
0.1
0.2
0.3
0.4
0.5
0.1
0.2
0.3
0.4
0.5
0.1
0.2
0.3
0.4
0.5
87.10
86.80
86.80
86.80
86.80
87.20
87.10
87.10
87.10
87.10
87.70
87.60
87.60
87.60
87.60
87.60
87.50
87.60
87.50
87.50
81.50
81.40
81.40
81.40
81.40
36.90
36.90
36.00
36.90
36.90
37.80
37.80
37.80
37.80
37.80
38.70
38.70
38.70
38.70
38.70
45.90
44.10
43.20
44.10
43.20
54.10
55.00
55.00
55.00
55.00
36.00
35.30
35.30
36.00
36.70
22.30
23.70
23.00
23.70
22.30
37.40
38.10
36.70
37.40
36.70
23.70
24.50
23.70
23.70
23.70
21.60
23.00
21.60
21.60
21.60
88.40
88.20
88.20
88.20
88.20
87.90
87.80
87.80
87.80
87.80
88.40
88.30
88.30
88.30
88.30
87.60
87.60
87.50
87.50
87.50
82.30
82.20
82.20
82.20
82.10
35.10
34.20
34.20
34.20
34.20
36.00
34.20
34.20
35.10
34.20
37.80
37.80
37.80
37.80
36.90
39.60
40.50
40.50
39.60
39.60
53.20
52.30
52.30
52.30
52.30
40.30
39.60
40.30
39.60
41.00
36.70
37.40
37.40
37.40
37.40
34.50
33.80
34.50
34.50
35.30
33.80
34.50
35.30
35.30
33.80
18.70
18.70
18.70
18.70
18.70
84.90
84.70
84.70
84.70
84.70
85.30
85.20
85.20
85.20
85.20
86.70
86.70
86.60
86.60
86.60
86.00
86.00
85.90
85.90
85.90
83.20
83.10
83.10
83.20
83.10
33.30
36.90
36.90
37.80
36.90
39.60
39.60
39.60
39.60
39.60
37.80
37.80
37.80
37.80
37.80
38.70
38.70
38.70
37.80
38.70
52.30
51.40
51.40
51.40
51.40
24.50
24.50
24.50
24.50
24.50
13.70
14.40
13.70
13.70
13.70
19.40
20.10
19.40
19.40
20.10
18.00
18.00
18.00
18.00
18.00
20.90
20.90
21.60
20.90
20.90
Table 9: Complete Infilling Score results testing Llama-7B, Llama-13B, and Llama-30B models on
the Original subset of the WikiMIA 128-token sequences (Shi et al., 2024). Again, using five future
tokens results in the best performance.
# future tokens
k (Min-k%) AUROC FPR@TPR95 TPR@FPR05 AUROC FPR@TPR95 TPR@FPR05 AUROC FPR@TPR95 TPR@FPR05
Llama-7B
Llama-13B
Llama-30B
1
3
5
10
20
0.1
0.2
0.3
0.4
0.5
0.1
0.2
0.3
0.4
0.5
0.1
0.2
0.3
0.4
0.5
0.1
0.2
0.3
0.4
0.5
0.1
0.2
0.3
0.4
0.5
93.80
93.80
93.70
93.80
93.90
96.30
96.10
96.00
96.00
96.00
96.80
96.60
96.50
96.60
96.60
95.70
95.90
95.80
95.80
95.80
93.70
93.70
93.40
93.70
93.50
51.60
51.60
51.60
51.60
51.60
29.00
32.30
35.50
35.50
35.50
22.60
22.60
25.80
22.60
25.80
29.00
25.80
29.00
29.00
29.00
22.60
22.60
22.60
22.60
22.60
80.40
80.40
80.40
80.40
80.40
78.40
74.50
74.50
74.50
72.50
74.50
74.50
74.50
74.50
74.50
78.40
78.40
78.40
78.40
78.40
66.70
66.70
68.60
68.60
66.70
92.90
92.80
92.70
92.70
92.70
95.30
95.30
95.30
95.30
95.30
95.30
95.30
95.20
95.20
95.20
93.00
93.10
93.20
93.10
93.20
90.70
90.60
90.60
90.60
90.50
25.80
29.00
29.00
29.00
29.00
19.40
19.40
19.40
19.40
19.40
22.60
22.60
22.60
22.60
22.60
29.00
29.00
29.00
29.00
29.00
35.50
35.50
35.50
35.50
35.50
66.70
68.60
66.70
66.70
66.70
72.50
72.50
72.50
72.50
72.50
80.40
80.40
80.40
80.40
80.40
54.90
54.90
54.90
54.90
54.90
51.00
51.00
51.00
51.00
51.00
85.60
85.60
85.60
85.60
85.70
90.60
90.60
90.60
90.60
90.60
89.80
89.80
89.80
89.80
89.80
87.40
87.50
87.60
87.50
87.50
85.60
85.60
85.80
85.70
85.70
32.30
35.50
32.30
35.50
32.30
41.90
41.90
41.90
41.90
41.90
35.50
35.50
35.50
35.50
35.50
45.20
45.20
45.20
45.20
45.20
48.40
48.40
48.40
48.40
48.40
21.60
21.60
19.60
21.60
19.60
72.50
72.50
72.50
72.50
72.50
47.10
47.10
47.10
47.10
47.10
49.00
49.00
49.00
49.00
49.00
35.30
33.30
35.30
35.30
35.30
Table 10: Complete Infilling Score results testing Llama-7B, Llama-13B, and Llama-30B models on
the Original subset of the WikiMIA 256-token sequences (Shi et al., 2024). Similar to the WikiMIA
64-token and 128-token sequence subsets, using 5 future tokens results in the best performance.
B ADDITIONAL RESULTS
B.1 STATISTICAL ANALYSIS: INFILLING SCORE VS. MIN-K%++
We employ a bootstrap-based statistical comparison to evaluate Infilling Score and Min-K%++.
We use 1,000 bootstrap iterations to estimate the the mean difference between AUROC metrics
from these methods, along with the standard errors to construct 95% confidence intervals for the
true performance gap. Table 11 shows that Infilling Score consistently outperforms Min-K%++
across different sequence lengths (32, 64, 128, and 256 tokens) and model sizes (7B, 13B, and 30B
parameters).
17
Under review as a conference paper at ICLR 2025
Sequence Length
Model
AUROC (%)
Std Err AUROC (%)
Std Err Difference (%)
p-value
Infilling Score
Min-K%++
Comparison
32 tokens
64 tokens
128 tokens
256 tokens
llama-7b
llama-13b
llama-30b
llama-7b
llama-13b
llama-30b
llama-7b
llama-13b
llama-30b
llama-7b
llama-13b
llama-30b
89.185
88.850
87.628
89.788
90.029
88.206
87.364
88.145
86.207
96.307
95.124
90.737
1.173
1.232
1.236
1.341
1.265
1.447
2.272
2.214
2.797
1.761
2.271
3.782
85.182
84.852
84.390
85.922
85.692
84.828
84.896
83.740
82.398
82.354
82.326
77.411
1.328
1.333
1.329
1.659
1.642
1.705
2.395
2.463
2.602
4.662
4.740
5.643
4.003 ± 1.130
3.998 ± 1.222
3.239 ± 1.157
3.866 ± 1.492
4.338 ± 1.539
3.378 ± 1.601
2.468 ± 2.654
4.405 ± 2.649
3.809 ± 1.993
0.000***
0.004**
0.006**
0.012*
0.010*
0.040*
0.348
0.080
0.064
13.952 ± 4.296
12.797 ± 3.952
13.326 ± 4.459
0.000***
0.000***
0.002**
Table 11: Comparing performance of Infilling Score versus Min-K%++ across different sequence
lengths and model sizes. Results show bootstrap estimates with 1000 iterations. The mean difference
indicates Infilling Score’s improvement over Min-K%++. Statistical significance is denoted as: * (p
< 0.05), ** (p < 0.01), *** (p < 0.001).
B.2 DETECTING PRE-TRAINING DATA FROM BOOKS
We compare the AUROC of Infilling Score with existing methods on a labeled validation subset of
book excerpts. As discussed in Section 4.1, this validation subset contains book excerpts labeled as
“seen” and “unseen”. Infilling Score significantly outperforms existing methods in detecting “seen”
examples.
Method
Infilling Score (Ours)
Min-K%++ (Zhang et al., 2024)
Min-K%(Shi et al., 2024)
Zlib (Carlini et al., 2021)
AUC
0.79
0.53
0.71
0.68
Table 12: Comparing AUROC of Infilling Score, Min-K%++, Min-K%, and Zlib methods on the
validation dataset, detecting book excerpts in Llama3-8B pretraining data.
C COMPUTE RESOURCES
We ran our experiments on A100 (40 GB) and H200 (120 GB) GPUs. Testing Infilling Score on the
WikiMIA benchmark on an A100 node takes approximately between 20 minutes (for a 3B parameter
model) and 35 minutes (for a 30B parameter model). For Llama models, we used float16 data type.
On the MIMIR benchmark, where there are 1000 long samples per class, the test approximately takes
10 hours on each subset on an A100 node.
D INFILLING SCORE ALGORITHM
918
919
920
921
922
923
924
925
926
927
928
929
930
931
932
933
934
935
936
937
938
939
940
941
942
943
944
945
946
947
948
949
950
951
952
953
954
955
956
957
958
959
960
961
962
963
964
965
966
967
968
969
970
971
18
Under review as a conference paper at ICLR 2025
Algorithm 1: Infilling Score
Input: Sequence: x : x1, x2...xN , Threshold τ
1 for i = 1 to N do
2
3
4
5
6
7
(cid:113)Ez∼p(.|x1 ...xi−1)[(log p(z|x1...xi−1) − µx<i )2]
i|x1...xi−1)
Compute log p(xi|x1...xi−1)
µx<i ← Ez∼p(.|x1...xi−1)[log p(z|x1...xi−1)]
σx<i ←
Find x∗
Compute log p(x∗
r ← (log p(xi|x1...xi−1) − µx<i )/σx<i − (log p(x∗
for j = i + 1 to i + m do
∈V p(x′
i ← arg maxx′
i
i |x1...xi−1)
8
9
10
11
12
13
14 end
15 Min-K%(x) ← k% of tokens from x with the lowest InfillingScoretoken(xi)
16 InfillingScore(x) = (cid:80)
17 return InfillingScore(x) < τ
end
InfillingScoretoken(xi) ← r
xi∈min-k% InfillingScoretoken(xi)
Compute log p(xj |x1...xj−1)
Compute log p(xj |x1...xi∗ ...xj−1)
r ← r + (log p(xj |x1...xj−1) − µx<i)/σx<i − (log p(xj |x1...xi∗ ...xj−1) − µx<i)/σx<i
i |x1...xi−1) − µx<i )/σx<i
972
973
974
975
976
977
978
979
980
981
982
983
984
985
986
987
988
989
990
991
992
993
994
995
996
997
998
999
1000
1001
1002
1003
1004
1005
1006
1007
1008
1009
1010
1011
1012
1013
1014
1015
1016
1017
1018
1019
1020
1021
1022
1023
1024
1025
19
|
nDvgHIBRxQ | Is Your Model Really A Good Math Reasoner? Evaluating Mathematical Reasoning with Checklist | [
8,
6,
6,
5
] | "Under review as a conference paper at ICLR 2025\n\nIS YOUR MODEL REALLY A GOOD MATH REASONER?\nEVAL(...TRUNCATED) |
leSbzBtofH | AutoAdvExBench: Benchmarking Autonomous Exploitation of Adversarial Example Defenses | [
8,
5,
8,
6,
5,
5
] | "Under review as a conference paper at ICLR 2025\n\nAUTOADVEXBENCH:\nBENCHMARKING AUTONOMOUS EXPLOIT(...TRUNCATED) |
44CoQe6VCq | Test of Time: A Benchmark for Evaluating LLMs on Temporal Reasoning | [
8,
6,
8,
6
] | "Under review as a conference paper at ICLR 2025\n\nTEST OF TIME: A BENCHMARK FOR EVALUATING\nLLMS O(...TRUNCATED) |
6RiBl5sCDF | GeoX: Geometric Problem Solving Through Unified Formalized Vision-Language Pre-training | [
6,
8,
6,
8
] | "Under review as a conference paper at ICLR 2025\n\nGEOX: GEOMETRIC PROBLEM SOLVING THROUGH\nUNIFIED(...TRUNCATED) |
rawj2PdHBq | Can Medical Vision-Language Pre-training Succeed with Purely Synthetic Data? | [
8,
5,
5
] | "Under review as a conference paper at ICLR 2025\n\nCAN MEDICAL VISION-LANGUAGE PRE-TRAINING\nSUCCEE(...TRUNCATED) |
y3zswp3gek | HarmAug: Effective Data Augmentation for Knowledge Distillation of Safety Guard Models | [
6,
6,
10,
6
] | "Under review as a conference paper at ICLR 2025\n\nHARMAUG: EFFECTIVE DATA AUGMENTATION FOR\nKNOWLE(...TRUNCATED) |
KvaDHPhhir | Sketch2Diagram: Generating Vector Diagrams from Hand-Drawn Sketches | [
8,
6,
5,
6
] | "Under review as a conference paper at ICLR 2025\n\nSKETCH2DIAGRAM: GENERATING VECTOR DIA-\nGRAMS FR(...TRUNCATED) |
y9A2TpaGsE | Language Agents Meet Causality -- Bridging LLMs and Causal World Models | [
6,
6,
6,
8
] | "Under review as a conference paper at ICLR 2025\n\nLANGUAGE AGENTS MEET CAUSALITY – BRIDGING\nLLM(...TRUNCATED) |
End of preview. Expand
in Dataset Viewer.
README.md exists but content is empty.
- Downloads last month
- 20