When you want to train a 🤗 Transformers model with the Keras API, you need to convert your dataset to a format that Keras understands. If your dataset is small, you can just convert the whole thing to NumPy arrays and pass it to Keras. Let's try that first before we do anything more complicated.

First, load a dataset. We'll use the CoLA dataset from the [GLUE benchmark](https://huggingface.co/datasets/glue), since it's a simple binary text classification task, and just take the training split for now.

```py
from datasets import load_dataset

dataset = load_dataset("glue", "cola")
dataset = dataset["train"]  # Just take the training split for now
```

Next, load a tokenizer and tokenize the data as NumPy arrays. Note that the labels are already a list of 0s and 1s, so we can just convert that directly to a NumPy array without tokenization!

```py
from transformers import AutoTokenizer
import numpy as np

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased")
tokenized_data = tokenizer(dataset["sentence"], return_tensors="np", padding=True)
# Tokenizer returns a BatchEncoding, but we convert that to a dict for Keras
tokenized_data = dict(tokenized_data)

labels = np.array(dataset["label"])  # Label is already an array of 0s and 1s
```

Finally, load, [`compile`](https://keras.io/api/models/model_training_apis/#compile-method), and [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) the model. Note that Transformers models all have a default task-relevant loss function, so you don't need to specify one unless you want to:

```py
from transformers import TFAutoModelForSequenceClassification
from tensorflow.keras.optimizers import Adam

# Load and compile our model
model = TFAutoModelForSequenceClassification.from_pretrained("google-bert/bert-base-cased")
# Lower learning rates are often better for fine-tuning transformers
model.compile(optimizer=Adam(3e-5))  # No loss argument!

model.fit(tokenized_data, labels)
```

<Tip>

You don't have to pass a loss argument to your models when you `compile()` them! Hugging Face models automatically choose a loss that is appropriate for their task and model architecture if this argument is left blank. You can always override this by specifying a loss yourself if you want to!

</Tip>

This approach works great for smaller datasets, but for larger datasets, you might find it starts to become a problem. Why? Because the tokenized array and labels would have to be fully loaded into memory, and because NumPy doesn't handle "jagged" arrays, so every tokenized sample would have to be padded to the length of the longest sample in the whole dataset. That's going to make your array even bigger, and all those padding tokens will slow down training too!
If you want to avoid slowing down training, you can load your data as a `tf.data.Dataset` instead. Although you can write your own `tf.data` pipeline if you want, we have two convenience methods for doing this:

- [`~TFPreTrainedModel.prepare_tf_dataset`]: This is the method we recommend in most cases. Because it is a method on your model, it can inspect the model to automatically figure out which columns are usable as model inputs, and discard the others to make a simpler, more performant dataset.
- [`~datasets.Dataset.to_tf_dataset`]: This method is more low-level, and is useful when you want to exactly control how your dataset is created, by specifying exactly which `columns` and `label_cols` to include.

Before you can use [`~TFPreTrainedModel.prepare_tf_dataset`], you will need to add the tokenizer outputs to your dataset as columns, as shown in the following code sample:

```py
def tokenize_dataset(data):
    # Keys of the returned dictionary will be added to the dataset as columns
    return tokenizer(data["text"])


dataset = dataset.map(tokenize_dataset)
```

Remember that Hugging Face datasets are stored on disk by default, so this will not inflate your memory usage! Once the columns have been added, you can stream batches from the dataset and add padding to each batch, which greatly reduces the number of padding tokens compared to padding the entire dataset.

```py
>>> tf_dataset = model.prepare_tf_dataset(dataset["train"], batch_size=16, shuffle=True, tokenizer=tokenizer)
```

Note that in the code sample above, you need to pass the tokenizer to `prepare_tf_dataset` so it can correctly pad batches as they're loaded. If all the samples in your dataset are the same length and no padding is necessary, you can skip this argument. If you need to do something more complex than just padding samples (e.g. corrupting tokens for masked language modelling), you can use the `collate_fn` argument instead to pass a function that will be called to transform the list of samples into a batch and apply any preprocessing you want. See our [examples](https://github.com/huggingface/transformers/tree/main/examples) or [notebooks](https://huggingface.co/docs/transformers/notebooks) to see this approach in action.

Once you've created a `tf.data.Dataset`, you can compile and fit the model as before:

```py
model.compile(optimizer=Adam(3e-5))  # No loss argument!

model.fit(tf_dataset)
```

</tf>
</frameworkcontent>

<a id='pytorch_native'></a>
<frameworkcontent>
<pt>
<Youtube id="Dh9CL8fyG80"/>

[`Trainer`] takes care of the training loop and allows you to fine-tune a model in a single line of code. If you prefer to write your own training loop, you can also fine-tune a 🤗 Transformers model in native PyTorch.

At this point, you may need to restart your notebook or execute the following code to free some memory:

```py
from accelerate.utils.memory import clear_device_cache

del model
del trainer
clear_device_cache()
```

Next, manually postprocess `tokenized_dataset` to prepare it for training.

1. Remove the `text` column because the model does not accept raw text as an input:

    ```py
    >>> tokenized_datasets = tokenized_datasets.remove_columns(["text"])
    ```

2. Rename the `label` column to `labels` because the model expects the argument to be named `labels`:

    ```py
    >>> tokenized_datasets = tokenized_datasets.rename_column("label", "labels")
    ```

3. Set the format of the dataset to return PyTorch tensors instead of lists:

    ```py
    >>> tokenized_datasets.set_format("torch")
    ```

Then create a smaller subset of the dataset as previously shown to speed up the fine-tuning:

```py
>>> small_train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(1000))
>>> small_eval_dataset = tokenized_datasets["test"].shuffle(seed=42).select(range(1000))
```
Create a `DataLoader` for your training and test datasets so you can iterate over batches of data:

```py
>>> from torch.utils.data import DataLoader

>>> train_dataloader = DataLoader(small_train_dataset, shuffle=True, batch_size=8)
>>> eval_dataloader = DataLoader(small_eval_dataset, batch_size=8)
```

Load your model with the number of expected labels:

```py
>>> from transformers import AutoModelForSequenceClassification

>>> model = AutoModelForSequenceClassification.from_pretrained("google-bert/bert-base-cased", num_labels=5)
```
Create an optimizer and learning rate scheduler to fine-tune the model. Let's use the [`AdamW`](https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html) optimizer from PyTorch:

```py
>>> from torch.optim import AdamW

>>> optimizer = AdamW(model.parameters(), lr=5e-5)
```

Create the default learning rate scheduler from [`Trainer`]:

```py
>>> from transformers import get_scheduler

>>> num_epochs = 3
>>> num_training_steps = num_epochs * len(train_dataloader)
>>> lr_scheduler = get_scheduler(
...     name="linear", optimizer=optimizer, num_warmup_steps=0, num_training_steps=num_training_steps
... )
```

Lastly, specify `device` to use a GPU if you have access to one. Otherwise, training on a CPU may take several hours instead of a couple of minutes.

```py
>>> import torch
>>> from accelerate.test_utils.testing import get_backend

>>> device, _, _ = get_backend()  # automatically detects the underlying device type (CUDA, CPU, XPU, MPS, etc.)
>>> model.to(device)
```

<Tip>

Get free access to a cloud GPU if you don't have one with a hosted notebook like [Colaboratory](https://colab.research.google.com/) or [SageMaker StudioLab](https://studiolab.sagemaker.aws/).

</Tip>

Great, now you are ready to train! 🥳
To keep track of your training progress, use the [tqdm](https://tqdm.github.io/) library to add a progress bar over the number of training steps:

```py
>>> from tqdm.auto import tqdm

>>> progress_bar = tqdm(range(num_training_steps))

>>> model.train()
>>> for epoch in range(num_epochs):
...     for batch in train_dataloader:
...         batch = {k: v.to(device) for k, v in batch.items()}
...         outputs = model(**batch)
...         loss = outputs.loss
...         loss.backward()

...         optimizer.step()
...         lr_scheduler.step()
...         optimizer.zero_grad()
...         progress_bar.update(1)
```
Just like how you added an evaluation function to [`Trainer`], you need to do the same when you write your own training loop. But instead of calculating and reporting the metric at the end of each epoch, this time you'll accumulate all the batches with [`~evaluate.add_batch`] and calculate the metric at the very end.

```py
>>> import evaluate

>>> metric = evaluate.load("accuracy")
>>> model.eval()
>>> for batch in eval_dataloader:
...     batch = {k: v.to(device) for k, v in batch.items()}
...     with torch.no_grad():
...         outputs = model(**batch)

...     logits = outputs.logits
...     predictions = torch.argmax(logits, dim=-1)
...     metric.add_batch(predictions=predictions, references=batch["labels"])

>>> metric.compute()
```

</pt>
</frameworkcontent>

<a id='additional-resources'></a>
For more fine-tuning examples, refer to:

- [🤗 Transformers Examples](https://github.com/huggingface/transformers/tree/main/examples) includes scripts to train common NLP tasks in PyTorch and TensorFlow.
- [🤗 Transformers Notebooks](notebooks) contains various notebooks on how to fine-tune a model for specific tasks in PyTorch and TensorFlow.
<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->
If you're reading this article, you're almost certainly aware of **chat models**. Chat models are conversational AIs that you can send and receive messages with. The most famous of these is the proprietary ChatGPT, but there are now many open-source chat models which match or even substantially exceed its performance. These models are free to download and run on a local machine. Although the largest and most capable models require high-powered hardware and lots of memory to run, there are smaller models that will run perfectly well on a single consumer GPU, or even an ordinary desktop or notebook CPU.

This guide will help you get started with chat models. We'll start with a brief quickstart guide that uses a convenient, high-level "pipeline". This is all you need if you just want to start running a chat model immediately. After the quickstart, we'll move on to more detailed information about what exactly chat models are, how to choose an appropriate one, and a low-level breakdown of each of the steps involved in talking to a chat model. We'll also give some tips on optimizing the performance and memory usage of your chat models.
If you have no time for details, here's the brief summary: Chat models continue chats. This means that you pass them a conversation history, which can be as short as a single user message, and the model will continue the conversation by adding its response. Let's see this in action. First, let's build a chat:

```python
chat = [
    {"role": "system", "content": "You are a sassy, wise-cracking robot as imagined by Hollywood circa 1986."},
    {"role": "user", "content": "Hey, can you tell me any fun things to do in New York?"}
]
```

Notice that in addition to the user's message, we added a **system** message at the start of the conversation. Not all chat models support system messages, but when they do, they represent high-level directives about how the model should behave in the conversation. You can use this to guide the model - whether you want short or long responses, lighthearted or serious ones, and so on. If you want the model to do useful work instead of practicing its improv routine, you can either omit the system message or try a terse one such as "You are a helpful and intelligent AI assistant who responds to user queries."

Once you have a chat, the quickest way to continue it is using the [`TextGenerationPipeline`]. Let's see this in action with `LLaMA-3`. Note that `LLaMA-3` is a gated model, which means you will need to [apply for access](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) and log in with your Hugging Face account to use it. We'll also use `device_map="auto"`, which will load the model on GPU if there's enough memory for it, and set the dtype to `torch.bfloat16` to save memory:

```python
import torch
from transformers import pipeline

pipe = pipeline("text-generation", "meta-llama/Meta-Llama-3-8B-Instruct", torch_dtype=torch.bfloat16, device_map="auto")
response = pipe(chat, max_new_tokens=512)
print(response[0]['generated_text'][-1]['content'])
```

And you'll get:

```text
(sigh) Oh boy, you're asking me for advice? You're gonna need a map, pal! Alright, alright, I'll give you the lowdown.
But don't say I didn't warn you, I'm a robot, not a tour guide!

So, you wanna know what's fun to do in the Big Apple? Well, let me tell you, there's a million things to do, but I'll
give you the highlights. First off, you gotta see the sights: the Statue of Liberty, Central Park, Times Square...
you know, the usual tourist traps. But if you're lookin' for something a little more... unusual, I'd recommend
checkin' out the Museum of Modern Art. It's got some wild stuff, like that Warhol guy's soup cans and all that jazz.

And if you're feelin' adventurous, take a walk across the Brooklyn Bridge. Just watch out for those pesky pigeons,
they're like little feathered thieves! (laughs) Get it? Thieves? Ah, never mind.

Now, if you're lookin' for some serious fun, hit up the comedy clubs in Greenwich Village. You might even catch a
glimpse of some up-and-coming comedians... or a bunch of wannabes tryin' to make it big. (winks)

And finally, if you're feelin' like a real New Yorker, grab a slice of pizza from one of the many amazing pizzerias
around the city. Just don't try to order a "robot-sized" slice, trust me, it won't end well. (laughs)

So, there you have it, pal! That's my expert advice on what to do in New York. Now, if you'll excuse me, I've got
some oil changes to attend to. (winks)
```

You can continue the chat by appending your own response to it.
The `response` object returned by the pipeline actually contains the entire chat so far, so we can simply append a message and pass it back:

```python
chat = response[0]['generated_text']
chat.append(
    {"role": "user", "content": "Wait, what's so wild about soup cans?"}
)
response = pipe(chat, max_new_tokens=512)
print(response[0]['generated_text'][-1]['content'])
```

And you'll get:

```text
(laughs) Oh, you're killin' me, pal! You don't get it, do you? Warhol's soup cans are like, art, man! It's like, he
took something totally mundane, like a can of soup, and turned it into a masterpiece. It's like, "Hey, look at me,
I'm a can of soup, but I'm also a work of art!" (sarcastically) Oh, yeah, real original, Andy.

But, you know, back in the '60s, it was like, a big deal. People were all about challenging the status quo, and
Warhol was like, the king of that. He took the ordinary and made it extraordinary. And, let me tell you, it was
like, a real game-changer. I mean, who would've thought that a can of soup could be art? (laughs)

But, hey, you're not alone, pal. I mean, I'm a robot, and even I don't get it. (winks) But, hey, that's what makes
art, art, right? (laughs)
```

The remainder of this tutorial will cover specific topics such as performance and memory, or how to select a chat model for your needs.
There are an enormous number of different chat models available on the [Hugging Face Hub](https://huggingface.co/models?pipeline_tag=text-generation&sort=trending), and new users often feel very overwhelmed by the selection offered. Don't be, though! You really need to just focus on two important considerations:

- The model's size, which will determine if you can fit it in memory and how quickly it will run.
- The quality of the model's chat output.

In general, these are correlated - bigger models tend to be more capable, but even so there's a lot of variation at a given size point!
The size of a model is easy to spot - it's the number in the model name, like "8B" or "70B". This is the number of **parameters** in the model. Without quantization, you should expect to need about 2 bytes of memory per parameter. This means that an "8B" model with 8 billion parameters will need about 16GB of memory just to fit the parameters, plus a little extra for other overhead. It's a good fit for a high-end consumer GPU with 24GB of memory, such as a 3090 or 4090.

Some chat models are "Mixture of Experts" models. These may list their sizes in different ways, such as "8x7B" or "141B-A35B". The numbers are a little fuzzier here, but in general you can read this as saying that the model has approximately 56 (8x7) billion parameters in the first case, or 141 billion parameters in the second case.

Note that it is very common to use quantization techniques to reduce the memory usage per parameter to 8 bits, 4 bits, or even less. This topic is discussed in more detail in the [Memory considerations](#memory-considerations) section below.
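If you'd like to sanity-check these numbers yourself, a quick back-of-the-envelope calculation is all it takes. The helper below is just a sketch: it only counts the weights and ignores activations, the KV cache, and framework overhead, so real usage will be somewhat higher.

```python
# Rough estimate of the memory needed just to hold a model's weights.
def weight_memory_gb(num_parameters: float, bytes_per_parameter: float) -> float:
    return num_parameters * bytes_per_parameter / 1e9

for name, params in [("8B", 8e9), ("70B", 70e9)]:
    for precision, nbytes in [("float32", 4), ("bfloat16", 2), ("int4", 0.5)]:
        print(f"{name} in {precision}: ~{weight_memory_gb(params, nbytes):.0f} GB")
```

Running this reproduces the rule of thumb above: an 8B model is ~16GB in `bfloat16`, ~32GB in `float32`, and only ~4GB once quantized to 4 bits.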
Even once you know the size of chat model you can run, there's still a lot of choice out there. One way to sift through it all is to consult **leaderboards**. Two of the most popular leaderboards are the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) and the [LMSys Chatbot Arena Leaderboard](https://chat.lmsys.org/?leaderboard). Note that the LMSys leaderboard also includes proprietary models - look at the `licence` column to identify open-source ones that you can download, then search for them on the [Hugging Face Hub](https://huggingface.co/models?pipeline_tag=text-generation&sort=trending).
Some models may be specialized for certain domains, such as medical or legal text, or non-English languages. If you're working in these domains, you may find that a specialized model will give you big performance benefits. Don't automatically assume that, though! Particularly when specialized models are smaller or older than the current cutting-edge, a top-end general-purpose model may still outclass them. Thankfully, we are beginning to see [domain-specific leaderboards](https://huggingface.co/blog/leaderboard-medicalllm) that should make it easier to locate the best models for specialized domains.
The quickstart above used a high-level pipeline to chat with a chat model, which is convenient, but not the most flexible. Let's take a more low-level approach, to see each of the steps involved in chat. Let's start with a code sample, and then break it down:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Prepare the input as before
chat = [
    {"role": "system", "content": "You are a sassy, wise-cracking robot as imagined by Hollywood circa 1986."},
    {"role": "user", "content": "Hey, can you tell me any fun things to do in New York?"}
]

# 1: Load the model and tokenizer
model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct", device_map="auto", torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

# 2: Apply the chat template
formatted_chat = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
print("Formatted chat:\n", formatted_chat)

# 3: Tokenize the chat (This can be combined with the previous step using tokenize=True)
inputs = tokenizer(formatted_chat, return_tensors="pt", add_special_tokens=False)
# Move the tokenized inputs to the same device the model is on (GPU/CPU)
inputs = {key: tensor.to(model.device) for key, tensor in inputs.items()}
print("Tokenized inputs:\n", inputs)

# 4: Generate text from the model
outputs = model.generate(**inputs, max_new_tokens=512, temperature=0.1)
print("Generated tokens:\n", outputs)

# 5: Decode the output back to a string
decoded_output = tokenizer.decode(outputs[0][inputs['input_ids'].size(1):], skip_special_tokens=True)
print("Decoded output:\n", decoded_output)
```

There's a lot in here, each piece of which could be its own document! Rather than going into too much detail, I'll cover the broad ideas, and leave the details for the linked documents. The key steps are:

1. [Models](https://huggingface.co/learn/nlp-course/en/chapter2/3) and [Tokenizers](https://huggingface.co/learn/nlp-course/en/chapter2/4?fw=pt) are loaded from the Hugging Face Hub.
2. The chat is formatted using the tokenizer's [chat template](https://huggingface.co/docs/transformers/main/en/chat_templating).
3. The formatted chat is [tokenized](https://huggingface.co/learn/nlp-course/en/chapter2/4) using the tokenizer.
4. We [generate](https://huggingface.co/docs/transformers/en/llm_tutorial) a response from the model.
5. The tokens output by the model are decoded back to a string.
You probably know by now that most machine learning tasks are run on GPUs. However, it is entirely possible to generate text from a chat model or language model on a CPU, albeit somewhat more slowly. If you can fit the model in GPU memory, though, this will usually be the preferable option.
By default, Hugging Face classes like [`TextGenerationPipeline`] or [`AutoModelForCausalLM`] will load the model in `float32` precision. This means that it will need 4 bytes (32 bits) per parameter, so an "8B" model with 8 billion parameters will need ~32GB of memory. However, this can be wasteful! Most modern language models are trained in "bfloat16" precision, which uses only 2 bytes per parameter. If your hardware supports it (Nvidia 30xx/Axxx or newer), you can load the model in `bfloat16` precision, using the `torch_dtype` argument as we did above.

It is possible to go even lower than 16-bits using "quantization", a method to lossily compress model weights. This allows each parameter to be squeezed down to 8 bits, 4 bits or even less. Note that, especially at 4 bits, the model's outputs may be negatively affected, but often this is a tradeoff worth making to fit a larger and more capable chat model in memory. Let's see this in action with `bitsandbytes`:

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_8bit=True)  # You can also try load_in_4bit
model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct", device_map="auto", quantization_config=quantization_config)
```

Or we can do the same thing using the `pipeline` API:

```python
from transformers import pipeline, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_8bit=True)  # You can also try load_in_4bit
pipe = pipeline("text-generation", "meta-llama/Meta-Llama-3-8B-Instruct", device_map="auto", model_kwargs={"quantization_config": quantization_config})
```

There are several other options for quantizing models besides `bitsandbytes` - please see the [Quantization guide](./quantization) for more information.
<Tip> For a more extensive guide on language model performance and optimization, check out [LLM Inference Optimization](./llm_optims) . </Tip> As a general rule, larger chat models will be slower in addition to requiring more memory. It's possible to be more concrete about this, though: Generating text from a chat model is unusual in that it is bottlenecked by **memory bandwidth** rather than compute power, because every active parameter must be read from memory for each token that the model generates. This means that number of tokens per second you can generate from a chat model is generally proportional to the total bandwidth of the memory it resides in, divided by the size of the model. In our quickstart example above, our model was ~16GB in size when loaded in `bfloat16` precision. This means that 16GB must be read from memory for every token generated by the model. Total memory bandwidth can vary from 20-100GB/sec for consumer CPUs to 200-900GB/sec for consumer GPUs, specialized CPUs like Intel Xeon, AMD Threadripper/Epyc or high-end Apple silicon, and finally up to 2-3TB/sec for data center GPUs like the Nvidia A100 or H100. This should give you a good idea of the generation speed you can expect from these different hardware types. Therefore, if you want to improve the speed of text generation, the easiest solution is to either reduce the size of the model in memory (usually by quantization), or get hardware with higher memory bandwidth. For advanced users, several other techniques exist to get around this bandwidth bottleneck. The most common are variants on [assisted generation](https://huggingface.co/blog/assisted-generation), also known as "speculative sampling". These techniques try to guess multiple future tokens at once, often using a smaller "draft model", and then confirm these generations with the chat model. If the guesses are validated by the chat model, more than one token can be generated per forward pass, which greatly alleviates the bandwidth bottleneck and improves generation speed. Finally, we should also note the impact of "Mixture of Experts" (MoE) models here. Several popular chat models, such as Mixtral, Qwen-MoE and DBRX, are MoE models. In these models, not every parameter is active for every token generated. As a result, MoE models generally have much lower memory bandwidth requirements, even though their total size can be quite large. They can therefore be several times faster than a normal "dense" model of the same size. However, techniques like assisted generation are generally ineffective for these models because more parameters will become active with each new speculated token, which will negate the bandwidth and speed benefits that the MoE architecture provides.
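To put rough numbers on the bandwidth argument a few paragraphs back, here is a minimal sketch of that calculation. The bandwidth figures are the same ballpark numbers quoted above, and the result is an upper bound rather than a measured speed:

```python
# Rough upper bound on generation speed for a memory-bandwidth-bound dense model:
# every parameter must be read from memory once per generated token.
def max_tokens_per_second(model_size_gb: float, memory_bandwidth_gb_s: float) -> float:
    return memory_bandwidth_gb_s / model_size_gb

model_size_gb = 16  # ~8B parameters in bfloat16, as in the quickstart
for hardware, bandwidth_gb_s in [("consumer CPU", 50), ("consumer GPU", 900), ("data center GPU", 3000)]:
    print(f"{hardware}: up to ~{max_tokens_per_second(model_size_gb, bandwidth_gb_s):.0f} tokens/sec")
```

Real-world throughput will be lower once compute, batching, and framework overhead are taken into account, but the ordering between hardware tiers usually holds.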
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->
In [What 🤗 Transformers can do](task_summary), you learned about natural language processing (NLP), speech and audio, computer vision tasks, and some important applications of them. This page will look closely at how models solve these tasks and explain what's happening under the hood. There are many ways to solve a given task; some models may implement certain techniques or even approach the task from a new angle, but for Transformer models, the general idea is the same. Owing to its flexible architecture, most models are a variant of an encoder, a decoder, or an encoder-decoder structure. In addition to Transformer models, our library also has several convolutional neural networks (CNNs), which are still used today for computer vision tasks. We'll also explain how a modern CNN works.

To explain how tasks are solved, we'll walk through what goes on inside the model to output useful predictions.

- [Wav2Vec2](model_doc/wav2vec2) for audio classification and automatic speech recognition (ASR)
- [Vision Transformer (ViT)](model_doc/vit) and [ConvNeXT](model_doc/convnext) for image classification
- [DETR](model_doc/detr) for object detection
- [Mask2Former](model_doc/mask2former) for image segmentation
- [GLPN](model_doc/glpn) for depth estimation
- [BERT](model_doc/bert) for NLP tasks like text classification, token classification and question answering that use an encoder
- [GPT2](model_doc/gpt2) for NLP tasks like text generation that use a decoder
- [BART](model_doc/bart) for NLP tasks like summarization and translation that use an encoder-decoder

<Tip>

Before you go further, it is good to have some basic knowledge of the original Transformer architecture. Knowing how encoders, decoders, and attention work will aid you in understanding how different Transformer models work. If you're just getting started or need a refresher, check out our [course](https://huggingface.co/course/chapter1/4?fw=pt) for more information!

</Tip>
[Wav2Vec2](model_doc/wav2vec2) is a self-supervised model pretrained on unlabeled speech data and finetuned on labeled data for audio classification and automatic speech recognition.

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/wav2vec2_architecture.png"/>
</div>

This model has four main components:

1. A *feature encoder* takes the raw audio waveform, normalizes it to zero mean and unit variance, and converts it into a sequence of feature vectors that are each 20ms long.

2. Waveforms are continuous by nature, so they can't be divided into separate units like a sequence of text can be split into words. That's why the feature vectors are passed to a *quantization module*, which aims to learn discrete speech units. The speech unit is chosen from a collection of codewords, known as a *codebook* (you can think of this as the vocabulary). From the codebook, the vector, or speech unit, that best represents the continuous audio input is chosen and forwarded through the model.

3. About half of the feature vectors are randomly masked, and the masked feature vectors are fed to a *context network*, which is a Transformer encoder that also adds relative positional embeddings.

4. The pretraining objective of the context network is a *contrastive task*. The model has to predict the true quantized speech representation of the masked prediction from a set of false ones, encouraging the model to find the most similar context vector and quantized speech unit (the target label).

Now that wav2vec2 is pretrained, you can finetune it on your data for audio classification or automatic speech recognition!
To use the pretrained model for audio classification, add a sequence classification head on top of the base Wav2Vec2 model. The classification head is a linear layer that accepts the encoder's hidden states. The hidden states represent the learned features from each audio frame, which can have varying lengths. To create one fixed-length vector, the hidden states are pooled first and then transformed into logits over the class labels. The cross-entropy loss is calculated between the logits and target to find the most likely class.

Ready to try your hand at audio classification? Check out our complete [audio classification guide](tasks/audio_classification) to learn how to finetune Wav2Vec2 and use it for inference!
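If you just want to see a finetuned checkpoint in action before training your own, a pipeline call is enough. This is only a sketch: the audio path is a placeholder, and `superb/wav2vec2-base-superb-ks` is one example of a Wav2Vec2 checkpoint finetuned for audio classification (keyword spotting) that you could swap for any other.

```py
from transformers import pipeline

# The checkpoint below is one Wav2Vec2 audio classification example; substitute your own.
classifier = pipeline("audio-classification", model="superb/wav2vec2-base-superb-ks")
predictions = classifier("path/to/audio.wav")  # hypothetical local audio file
print(predictions)  # list of {"label": ..., "score": ...} dicts
```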
To use the pretrained model for automatic speech recognition, add a language modeling head on top of the base Wav2Vec2 model for [connectionist temporal classification (CTC)](glossary#connectionist-temporal-classification-ctc). The language modeling head is a linear layer that accepts the encoder's hidden states and transforms them into logits. Each logit represents a token class (the number of tokens comes from the task vocabulary). The CTC loss is calculated between the logits and targets to find the most likely sequence of tokens, which are then decoded into a transcription.

Ready to try your hand at automatic speech recognition? Check out our complete [automatic speech recognition guide](tasks/asr) to learn how to finetune Wav2Vec2 and use it for inference!
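For a quick taste of the finished product, here is a minimal sketch of transcription with a CTC-finetuned Wav2Vec2 checkpoint. The audio path is a placeholder, and `facebook/wav2vec2-base-960h` is just one English ASR checkpoint you could use.

```py
from transformers import pipeline

# "facebook/wav2vec2-base-960h" is a Wav2Vec2 checkpoint finetuned with CTC for English ASR.
asr = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h")
transcription = asr("path/to/audio.wav")  # hypothetical local audio file
print(transcription["text"])
```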
There are two ways to approach computer vision tasks:

1. Split an image into a sequence of patches and process them in parallel with a Transformer.
2. Use a modern CNN, like [ConvNeXT](model_doc/convnext), which relies on convolutional layers but adopts modern network designs.

<Tip>

A third approach mixes Transformers with convolutions (for example, [Convolutional Vision Transformer](model_doc/cvt) or [LeViT](model_doc/levit)). We won't discuss those because they just combine the two approaches we examine here.

</Tip>

ViT and ConvNeXT are commonly used for image classification, but for other vision tasks like object detection, segmentation, and depth estimation, we'll look at DETR, Mask2Former and GLPN, respectively; these models are better suited for those tasks.
ViT and ConvNeXT can both be used for image classification; the main difference is that ViT uses an attention mechanism while ConvNeXT uses convolutions.
[ViT](model_doc/vit) replaces convolutions entirely with a pure Transformer architecture. If you're familiar with the original Transformer, then you're already most of the way toward understanding ViT.

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/vit_architecture.jpg"/>
</div>

The main change ViT introduced was in how images are fed to a Transformer:

1. An image is split into square non-overlapping patches, each of which gets turned into a vector or *patch embedding*. The patch embeddings are generated from a convolutional 2D layer which creates the proper input dimensions (which for a base Transformer is 768 values for each patch embedding). If you had a 224x224 pixel image, you could split it into 196 16x16 image patches. Just like how text is tokenized into words, an image is "tokenized" into a sequence of patches.

2. A *learnable embedding* - a special `[CLS]` token - is added to the beginning of the patch embeddings just like BERT. The final hidden state of the `[CLS]` token is used as the input to the attached classification head; other outputs are ignored. This token helps the model learn how to encode a representation of the image.

3. The last thing to add to the patch and learnable embeddings are the *position embeddings* because the model doesn't know how the image patches are ordered. The position embeddings are also learnable and have the same size as the patch embeddings. Finally, all of the embeddings are passed to the Transformer encoder.

4. The output, specifically only the output with the `[CLS]` token, is passed to a multilayer perceptron (MLP) head. ViT's pretraining objective is simply classification. Like other classification heads, the MLP head converts the output into logits over the class labels and calculates the cross-entropy loss to find the most likely class.

Ready to try your hand at image classification? Check out our complete [image classification guide](tasks/image_classification) to learn how to finetune ViT and use it for inference!
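To try an already-finetuned ViT before training your own, a pipeline call is the quickest route. This is a sketch: the image path is a placeholder, and `google/vit-base-patch16-224` is just one ImageNet-finetuned checkpoint you could use.

```py
from transformers import pipeline

# "google/vit-base-patch16-224" is a ViT checkpoint finetuned on ImageNet-1k.
classifier = pipeline("image-classification", model="google/vit-base-patch16-224")
predictions = classifier("path/to/image.jpg")  # hypothetical local file or image URL
print(predictions[:3])  # top predicted labels with scores
```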
<Tip>

This section briefly explains convolutions, but it'd be helpful to have a prior understanding of how they change an image's shape and size. If you're unfamiliar with convolutions, check out the [Convolutional Neural Networks chapter](https://github.com/fastai/fastbook/blob/master/13_convolutions.ipynb) from the fastai book!

</Tip>

[ConvNeXT](model_doc/convnext) is a CNN architecture that adopts new and modern network designs to improve performance. However, convolutions are still at the core of the model. From a high-level perspective, a [convolution](glossary#convolution) is an operation where a smaller matrix (*kernel*) is multiplied by a small window of the image pixels. It computes some features from it, such as a particular texture or curvature of a line. Then it slides over to the next window of pixels; the distance the convolution travels is known as the *stride*.

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/convolution.gif"/>
</div>

<small>A basic convolution without padding or stride, taken from <a href="https://arxiv.org/abs/1603.07285">A guide to convolution arithmetic for deep learning.</a></small>

You can feed this output to another convolutional layer, and with each successive layer, the network learns more complex and abstract things like hotdogs or rockets. Between convolutional layers, it is common to add a pooling layer to reduce dimensionality and make the model more robust to variations of a feature's position.

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/convnext_architecture.png"/>
</div>

ConvNeXT modernizes a CNN in five ways:

1. Change the number of blocks in each stage and "patchify" an image with a larger stride and corresponding kernel size. The non-overlapping sliding window makes this patchifying strategy similar to how ViT splits an image into patches.

2. A *bottleneck* layer shrinks the number of channels and then restores it because it is faster to do a 1x1 convolution, and you can increase the depth. An inverted bottleneck does the opposite by expanding the number of channels and shrinking them, which is more memory efficient.

3. Replace the typical 3x3 convolutional layer in the bottleneck layer with *depthwise convolution*, which applies a convolution to each input channel separately and then stacks them back together at the end. This increases the network width for improved performance.

4. ViT has a global receptive field which means it can see more of an image at once thanks to its attention mechanism. ConvNeXT attempts to replicate this effect by increasing the kernel size to 7x7.

5. ConvNeXT also makes several layer design changes that imitate Transformer models. There are fewer activation and normalization layers, the activation function is switched to GELU instead of ReLU, and it uses LayerNorm instead of BatchNorm.

The output from the convolution blocks is passed to a classification head which converts the outputs into logits and calculates the cross-entropy loss to find the most likely label.
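In code, a pretrained ConvNeXT classifier is used the same way as ViT. The sketch below uses the lower-level `AutoImageProcessor`/`AutoModelForImageClassification` API instead of a pipeline; the image path is a placeholder and `facebook/convnext-tiny-224` is just one ImageNet-trained checkpoint you could pick.

```py
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

# "facebook/convnext-tiny-224" is one ConvNeXT checkpoint trained on ImageNet-1k.
processor = AutoImageProcessor.from_pretrained("facebook/convnext-tiny-224")
model = AutoModelForImageClassification.from_pretrained("facebook/convnext-tiny-224")

image = Image.open("path/to/image.jpg")  # hypothetical local image file
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# The classification head outputs one logit per ImageNet class
print(model.config.id2label[logits.argmax(-1).item()])
```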
[DETR](model_doc/detr), *DEtection TRansformer*, is an end-to-end object detection model that combines a CNN with a Transformer encoder-decoder.

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/detr_architecture.png"/>
</div>

1. A pretrained CNN *backbone* takes an image, represented by its pixel values, and creates a low-resolution feature map of it. A 1x1 convolution is applied to the feature map to reduce dimensionality and it creates a new feature map with a high-level image representation. Since the Transformer is a sequential model, the feature map is flattened into a sequence of feature vectors that are combined with positional embeddings.

2. The feature vectors are passed to the encoder, which learns the image representations using its attention layers. Next, the encoder hidden states are combined with *object queries* in the decoder. Object queries are learned embeddings that focus on the different regions of an image, and they're updated as they progress through each attention layer. The decoder hidden states are passed to a feedforward network that predicts the bounding box coordinates and class label for each object query, or `no object` if there isn't one.

    DETR decodes each object query in parallel to output *N* final predictions, where *N* is the number of queries. Unlike a typical autoregressive model that predicts one element at a time, object detection is a set prediction task (`bounding box`, `class label`) that makes *N* predictions in a single pass.

3. DETR uses a *bipartite matching loss* during training to compare a fixed number of predictions with a fixed set of ground truth labels. If there are fewer ground truth labels in the set of *N* labels, then they're padded with a `no object` class. This loss function encourages DETR to find a one-to-one assignment between the predictions and ground truth labels. If either the bounding boxes or class labels aren't correct, a loss is incurred. Likewise, if DETR predicts an object that doesn't exist, it is penalized. This encourages DETR to find other objects in an image instead of focusing on one really prominent object.

An object detection head is added on top of DETR to find the class label and the coordinates of the bounding box. There are two components to the object detection head: a linear layer to transform the decoder hidden states into logits over the class labels, and an MLP to predict the bounding box.

Ready to try your hand at object detection? Check out our complete [object detection guide](tasks/object_detection) to learn how to finetune DETR and use it for inference!
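Before finetuning, you can get a feel for DETR's set predictions with a pipeline call. This is a sketch: the image path is a placeholder, and `facebook/detr-resnet-50` is the commonly used pretrained DETR checkpoint.

```py
from transformers import pipeline

# "facebook/detr-resnet-50" is a DETR checkpoint pretrained for COCO object detection.
detector = pipeline("object-detection", model="facebook/detr-resnet-50")
for detection in detector("path/to/image.jpg"):  # hypothetical local file or image URL
    # Each prediction is a dict with a class label, a confidence score, and a bounding box
    print(detection["label"], detection["score"], detection["box"])
```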
[Mask2Former](model_doc/mask2former) is a universal architecture for solving all types of image segmentation tasks. Traditional segmentation models are typically tailored towards a particular subtask of image segmentation, like instance, semantic or panoptic segmentation. Mask2Former frames each of those tasks as a *mask classification* problem. Mask classification groups pixels into *N* segments, and predicts *N* masks and their corresponding class label for a given image. We'll explain how Mask2Former works in this section, and then you can try finetuning SegFormer at the end.

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/mask2former_architecture.png"/>
</div>

There are three main components to Mask2Former:

1. A [Swin](model_doc/swin) backbone accepts an image and creates a low-resolution image feature map from 3 consecutive 3x3 convolutions.

2. The feature map is passed to a *pixel decoder* which gradually upsamples the low-resolution features into high-resolution per-pixel embeddings. The pixel decoder actually generates multi-scale features (contains both low- and high-resolution features) with resolutions 1/32, 1/16, and 1/8th of the original image.

3. Each of these feature maps of differing scales is fed successively to one Transformer decoder layer at a time in order to capture small objects from the high-resolution features. The key to Mask2Former is the *masked attention* mechanism in the decoder. Unlike cross-attention which can attend to the entire image, masked attention only focuses on a certain area of the image. This is faster and leads to better performance because the local features of an image are enough for the model to learn from.

4. Like [DETR](tasks_explained#object-detection), Mask2Former also uses learned object queries and combines them with the image features from the pixel decoder to make a set prediction (`class label`, `mask prediction`). The decoder hidden states are passed into a linear layer and transformed into logits over the class labels. The cross-entropy loss is calculated between the logits and class label to find the most likely one.

    The mask predictions are generated by combining the pixel-embeddings with the final decoder hidden states. The sigmoid cross-entropy and dice loss is calculated between the logits and the ground truth mask to find the most likely mask.

Ready to try your hand at image segmentation? Check out our complete [image segmentation guide](tasks/semantic_segmentation) to learn how to finetune SegFormer and use it for inference!
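If you want to see mask classification output before training anything, a pipeline call works here too. This is only a sketch: the image path is a placeholder, and the checkpoint name is an assumption, so substitute any Mask2Former (or SegFormer) checkpoint from the Hub that matches the segmentation subtask you care about.

```py
from transformers import pipeline

# The checkpoint name below is an assumption; pick any segmentation checkpoint you prefer.
segmenter = pipeline("image-segmentation", model="facebook/mask2former-swin-base-coco-panoptic")
for segment in segmenter("path/to/image.jpg"):  # hypothetical local file or image URL
    # Each result contains a predicted class label, a score, and a binary mask image
    print(segment["label"], segment["score"])
```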
[GLPN](model_doc/glpn), *Global-Local Path Network*, is a Transformer for depth estimation that combines a [SegFormer](model_doc/segformer) encoder with a lightweight decoder.

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/glpn_architecture.jpg"/>
</div>

1. Like ViT, an image is split into a sequence of patches, except these image patches are smaller. This is better for dense prediction tasks like segmentation or depth estimation. The image patches are transformed into patch embeddings (see the [image classification](#image-classification) section for more details about how patch embeddings are created), which are fed to the encoder.

2. The encoder accepts the patch embeddings, and passes them through several encoder blocks. Each block consists of attention and Mix-FFN layers. The purpose of the latter is to provide positional information. At the end of each encoder block is a *patch merging* layer for creating hierarchical representations. The features of each group of neighboring patches are concatenated, and a linear layer is applied to the concatenated features to reduce the number of patches to a resolution of 1/4. This becomes the input to the next encoder block, where this whole process is repeated until you have image features with resolutions of 1/8, 1/16, and 1/32.

3. A lightweight decoder takes the last feature map (1/32 scale) from the encoder and upsamples it to 1/16 scale. From here, the feature is passed into a *Selective Feature Fusion (SFF)* module, which selects and combines local and global features from an attention map for each feature and then upsamples it to 1/8th. This process is repeated until the decoded features are the same size as the original image. The output is passed through two convolution layers and then a sigmoid activation is applied to predict the depth of each pixel.
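A quick way to see the per-pixel depth prediction described above is the depth-estimation pipeline. This is a sketch: the image path is a placeholder, and `vinvino02/glpn-kitti` is one GLPN checkpoint trained for monocular depth estimation that you could swap for another.

```py
from transformers import pipeline

# "vinvino02/glpn-kitti" is one GLPN checkpoint for monocular depth estimation.
depth_estimator = pipeline("depth-estimation", model="vinvino02/glpn-kitti")
result = depth_estimator("path/to/image.jpg")  # hypothetical local file or image URL
result["depth"].save("depth_map.png")  # the predicted depth map, returned as a PIL image
```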
The Transformer was initially designed for machine translation, and since then, it has practically become the default architecture for solving all NLP tasks. Some tasks lend themselves to the Transformer's encoder structure, while others are better suited for the decoder. Still other tasks make use of the Transformer's full encoder-decoder structure.
[BERT](model_doc/bert) is an encoder-only model and is the first model to effectively implement deep bidirectionality to learn richer representations of the text by attending to words on both sides.

1. BERT uses [WordPiece](tokenizer_summary#wordpiece) tokenization to generate a token embedding of the text. To tell the difference between a single sentence and a pair of sentences, a special `[SEP]` token is added to differentiate them. A special `[CLS]` token is added to the beginning of every sequence of text. The final output with the `[CLS]` token is used as the input to the classification head for classification tasks. BERT also adds a segment embedding to denote whether a token belongs to the first or second sentence in a pair of sentences.

2. BERT is pretrained with two objectives: masked language modeling and next-sentence prediction. In masked language modeling, some percentage of the input tokens are randomly masked, and the model needs to predict these. This solves the issue of bidirectionality, where the model could cheat and see all the words and "predict" the next word. The final hidden states of the predicted mask tokens are passed to a feedforward network with a softmax over the vocabulary to predict the masked word.

    The second pretraining objective is next-sentence prediction. The model must predict whether sentence B follows sentence A. Half of the time sentence B is the next sentence, and the other half of the time, sentence B is a random sentence. The prediction, whether it is the next sentence or not, is passed to a feedforward network with a softmax over the two classes (`IsNext` and `NotNext`).

3. The input embeddings are passed through multiple encoder layers to output some final hidden states.

To use the pretrained model for text classification, add a sequence classification head on top of the base BERT model. The sequence classification head is a linear layer that accepts the final hidden states and performs a linear transformation to convert them into logits. The cross-entropy loss is calculated between the logits and target to find the most likely label.

Ready to try your hand at text classification? Check out our complete [text classification guide](tasks/sequence_classification) to learn how to finetune DistilBERT and use it for inference!
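To see what an encoder with a sequence classification head looks like at inference time, a pipeline call is enough. This is a sketch: `distilbert/distilbert-base-uncased-finetuned-sst-2-english` is one commonly used sentiment checkpoint, but any encoder finetuned for sequence classification works the same way.

```py
from transformers import pipeline

# Any encoder checkpoint finetuned for sequence classification works here.
classifier = pipeline("text-classification", model="distilbert/distilbert-base-uncased-finetuned-sst-2-english")
print(classifier("Adding a classification head to a pretrained encoder is surprisingly easy!"))
```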
To use BERT for token classification tasks like named entity recognition (NER), add a token classification head on top of the base BERT model. The token classification head is a linear layer that accepts the final hidden states and performs a linear transformation to convert them into logits. The cross-entropy loss is calculated between the logits and each token to find the most likely label.

Ready to try your hand at token classification? Check out our complete [token classification guide](tasks/token_classification) to learn how to finetune DistilBERT and use it for inference!
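Here is a minimal sketch of token classification in action. The checkpoint `dslim/bert-base-NER` is one BERT model finetuned for NER; substitute whichever finetuned checkpoint you prefer.

```py
from transformers import pipeline

# "dslim/bert-base-NER" is one BERT checkpoint finetuned for named entity recognition.
ner = pipeline("token-classification", model="dslim/bert-base-NER", aggregation_strategy="simple")
print(ner("Hugging Face is based in New York City."))  # grouped entities with labels and scores
```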
To use BERT for question answering, add a span classification head on top of the base BERT model. This linear layer accepts the final hidden states and performs a linear transformation to compute the `span` start and end logits corresponding to the answer. The cross-entropy loss is calculated between the logits and the label position to find the most likely span of text corresponding to the answer.

Ready to try your hand at question answering? Check out our complete [question answering guide](tasks/question_answering) to learn how to finetune DistilBERT and use it for inference!

<Tip>

💡 Notice how easy it is to use BERT for different tasks once it's been pretrained. You only need to add a specific head to the pretrained model to manipulate the hidden states into your desired output!

</Tip>
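As a concrete illustration of that point, here is a hedged sketch of extractive question answering with a SQuAD-finetuned DistilBERT checkpoint; the question and context strings are only examples.

```py
from transformers import pipeline

# "distilbert/distilbert-base-cased-distilled-squad" is a DistilBERT checkpoint finetuned on SQuAD.
qa = pipeline("question-answering", model="distilbert/distilbert-base-cased-distilled-squad")
result = qa(
    question="What does the span classification head predict?",
    context="The span classification head predicts the start and end positions of the answer span.",
)
print(result["answer"], result["score"])
```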
[GPT-2](model_doc/gpt2) is a decoder-only model pretrained on a large amount of text. It can generate convincing (though not always true!) text given a prompt and complete other NLP tasks like question answering despite not being explicitly trained to.

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gpt2_architecture.png"/>
</div>

1. GPT-2 uses [byte pair encoding (BPE)](tokenizer_summary#bytepair-encoding-bpe) to tokenize words and generate a token embedding. Positional encodings are added to the token embeddings to indicate the position of each token in the sequence. The input embeddings are passed through multiple decoder blocks to output some final hidden state. Within each decoder block, GPT-2 uses a *masked self-attention* layer which means GPT-2 can't attend to future tokens. It is only allowed to attend to tokens on the left. This is different from BERT's [`mask`] token because, in masked self-attention, an attention mask is used to set the score to `0` for future tokens.

2. The output from the decoder is passed to a language modeling head, which performs a linear transformation to convert the hidden states into logits. The label for each position is simply the next token in the sequence, so the labels are created by shifting the input tokens over by one position. The cross-entropy loss is calculated between the logits and the shifted labels to output the next most likely token.

GPT-2's pretraining objective is based entirely on [causal language modeling](glossary#causal-language-modeling), predicting the next word in a sequence. This makes GPT-2 especially good at tasks that involve generating text.

Ready to try your hand at text generation? Check out our complete [causal language modeling guide](tasks/language_modeling#causal-language-modeling) to learn how to finetune DistilGPT-2 and use it for inference!

<Tip>

For more information about text generation, check out the [text generation strategies](generation_strategies) guide!

</Tip>
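For a quick demonstration of next-token prediction in practice, here is a minimal sketch using the original GPT-2 checkpoint; the prompt is just an example, and any causal language model works the same way.

```py
from transformers import pipeline

# "openai-community/gpt2" is the original GPT-2 checkpoint.
generator = pipeline("text-generation", model="openai-community/gpt2")
print(generator("The secret to good text generation is", max_new_tokens=30)[0]["generated_text"])
```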
Encoder-decoder models like [BART](model_doc/bart) and [T5](model_doc/t5) are designed for the sequence-to-sequence pattern of a summarization task. We'll explain how BART works in this section, and then you can try finetuning T5 at the end.

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bart_architecture.png"/>
</div>

1. BART's encoder architecture is very similar to BERT and accepts a token and positional embedding of the text. BART is pretrained by corrupting the input and then reconstructing it with the decoder. Unlike other encoders with specific corruption strategies, BART can apply any type of corruption. The *text infilling* corruption strategy works the best though. In text infilling, a number of text spans are replaced with a **single** [`mask`] token. This is important because the model has to predict the masked tokens, and it teaches the model to predict the number of missing tokens. The input embeddings and masked spans are passed through the encoder to output some final hidden states, but unlike BERT, BART doesn't add a final feedforward network at the end to predict a word.

2. The encoder's output is passed to the decoder, which must predict the masked tokens and any uncorrupted tokens from the encoder's output. This gives additional context to help the decoder restore the original text. The output from the decoder is passed to a language modeling head, which performs a linear transformation to convert the hidden states into logits. The cross-entropy loss is calculated between the logits and the label, which is just the token shifted to the right.

Ready to try your hand at summarization? Check out our complete [summarization guide](tasks/summarization) to learn how to finetune T5 and use it for inference!

<Tip>

For more information about text generation, check out the [text generation strategies](generation_strategies) guide!

</Tip>
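If you want to see an encoder-decoder model summarize text before training your own, a pipeline call is the fastest route. This is a sketch: `facebook/bart-large-cnn` is one BART checkpoint finetuned for summarization, and the input text is only a stand-in for a real document.

```py
from transformers import pipeline

# "facebook/bart-large-cnn" is a BART checkpoint finetuned for summarization.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
long_text = "BART is pretrained by corrupting the input text and reconstructing it with the decoder. " * 10
print(summarizer(long_text, max_length=40, min_length=10)[0]["summary_text"])
```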
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks_explained.md
.md
13_17
Translation is another example of a sequence-to-sequence task, which means you can use an encoder-decoder model like [BART](model_doc/bart) or [T5](model_doc/t5) to do it. We'll explain how BART works in this section, and then you can try finetuning T5 at the end. BART adapts to translation by adding a separate randomly initialized encoder to map a source language to an input that can be decoded into the target language. This new encoder's embeddings are passed to the pretrained encoder instead of the original word embeddings. The source encoder is trained by updating the source encoder, positional embeddings, and input embeddings with the cross-entropy loss from the model output. The model parameters are frozen in this first step, and all the model parameters are trained together in the second step. BART has since been followed up by a multilingual version, mBART, intended for translation and pretrained on many different languages. Ready to try your hand at translation? Check out our complete [translation guide](tasks/translation) to learn how to finetune T5 and use it for inference! <Tip> For more information about text generation, check out the [text generation strategies](generation_strategies) guide! </Tip>
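As a minimal sketch of sequence-to-sequence translation (the checkpoint and sentence here are only examples), T5 frames the language pair as part of the text prompt:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google-t5/t5-small")

# T5 treats translation as a text-to-text task, so the language pair is part of the prompt.
inputs = tokenizer("translate English to German: The weather is nice today.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```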
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/tasks_explained.md
.md
13_18
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/benchmarks.md
.md
14_0
<Tip warning={true}> Hugging Face's benchmarking tools are deprecated, and it is advised to use external benchmarking libraries to measure the speed and memory complexity of Transformer models. </Tip> [[open-in-colab]] Let's take a look at how 🤗 Transformers models can be benchmarked, the relevant best practices, and the benchmarks that are already available. A notebook explaining in more detail how to benchmark 🤗 Transformers models can be found [here](https://github.com/huggingface/notebooks/tree/main/examples/benchmark.ipynb).
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/benchmarks.md
.md
14_1
The classes [`PyTorchBenchmark`] and [`TensorFlowBenchmark`] allow you to flexibly benchmark 🤗 Transformers models. The benchmark classes allow us to measure the _peak memory usage_ and _required time_ for both _inference_ and _training_. <Tip> Here, _inference_ is defined by a single forward pass, and _training_ is defined by a single forward pass and backward pass. </Tip> The benchmark classes [`PyTorchBenchmark`] and [`TensorFlowBenchmark`] expect an object of type [`PyTorchBenchmarkArguments`] and [`TensorFlowBenchmarkArguments`], respectively, for instantiation. [`PyTorchBenchmarkArguments`] and [`TensorFlowBenchmarkArguments`] are data classes and contain all relevant configurations for their corresponding benchmark class. The following example shows how a BERT model of type _bert-base-uncased_ can be benchmarked. <frameworkcontent> <pt> ```py >>> from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments >>> args = PyTorchBenchmarkArguments(models=["google-bert/bert-base-uncased"], batch_sizes=[8], sequence_lengths=[8, 32, 128, 512]) >>> benchmark = PyTorchBenchmark(args) ``` </pt> <tf> ```py >>> from transformers import TensorFlowBenchmark, TensorFlowBenchmarkArguments >>> args = TensorFlowBenchmarkArguments( ...     models=["google-bert/bert-base-uncased"], batch_sizes=[8], sequence_lengths=[8, 32, 128, 512] ... ) >>> benchmark = TensorFlowBenchmark(args) ``` </tf> </frameworkcontent> Here, three arguments are given to the benchmark argument data classes, namely `models`, `batch_sizes`, and `sequence_lengths`. The argument `models` is required and expects a `list` of model identifiers from the [model hub](https://huggingface.co/models). The `list` arguments `batch_sizes` and `sequence_lengths` define the size of the `input_ids` on which the model is benchmarked. There are many more parameters that can be configured via the benchmark argument data classes. For more detail on these, one can either directly consult the files `src/transformers/benchmark/benchmark_args_utils.py`, `src/transformers/benchmark/benchmark_args.py` (for PyTorch) and `src/transformers/benchmark/benchmark_args_tf.py` (for TensorFlow). Alternatively, running the following shell commands from root will print out a descriptive list of all configurable parameters for PyTorch and TensorFlow, respectively. <frameworkcontent> <pt> ```bash python examples/pytorch/benchmarking/run_benchmark.py --help ``` An instantiated benchmark object can then simply be run by calling `benchmark.run()`.
```py >>> results = benchmark.run() >>> print(results) ==================== INFERENCE - SPEED - RESULT ==================== -------------------------------------------------------------------------------- Model Name Batch Size Seq Length Time in s -------------------------------------------------------------------------------- google-bert/bert-base-uncased 8 8 0.006 google-bert/bert-base-uncased 8 32 0.006 google-bert/bert-base-uncased 8 128 0.018 google-bert/bert-base-uncased 8 512 0.088 -------------------------------------------------------------------------------- ==================== INFERENCE - MEMORY - RESULT ==================== -------------------------------------------------------------------------------- Model Name Batch Size Seq Length Memory in MB -------------------------------------------------------------------------------- google-bert/bert-base-uncased 8 8 1227 google-bert/bert-base-uncased 8 32 1281 google-bert/bert-base-uncased 8 128 1307 google-bert/bert-base-uncased 8 512 1539 -------------------------------------------------------------------------------- ==================== ENVIRONMENT INFORMATION ==================== - transformers_version: 2.11.0 - framework: PyTorch - use_torchscript: False - framework_version: 1.4.0 - python_version: 3.6.10 - system: Linux - cpu: x86_64 - architecture: 64bit - date: 2020-06-29 - time: 08:58:43.371351 - fp16: False - use_multiprocessing: True - only_pretrain_model: False - cpu_ram_mb: 32088 - use_gpu: True - num_gpus: 1 - gpu: TITAN RTX - gpu_ram_mb: 24217 - gpu_power_watts: 280.0 - gpu_performance_state: 2 - use_tpu: False ``` </pt> <tf> ```bash python examples/tensorflow/benchmarking/run_benchmark_tf.py --help ``` An instantiated benchmark object can then simply be run by calling `benchmark.run()`. 
```py >>> results = benchmark.run() >>> print(results) ==================== INFERENCE - SPEED - RESULT ==================== -------------------------------------------------------------------------------- Model Name Batch Size Seq Length Time in s -------------------------------------------------------------------------------- google-bert/bert-base-uncased 8 8 0.005 google-bert/bert-base-uncased 8 32 0.008 google-bert/bert-base-uncased 8 128 0.022 google-bert/bert-base-uncased 8 512 0.105 -------------------------------------------------------------------------------- ==================== INFERENCE - MEMORY - RESULT ==================== -------------------------------------------------------------------------------- Model Name Batch Size Seq Length Memory in MB -------------------------------------------------------------------------------- google-bert/bert-base-uncased 8 8 1330 google-bert/bert-base-uncased 8 32 1330 google-bert/bert-base-uncased 8 128 1330 google-bert/bert-base-uncased 8 512 1770 -------------------------------------------------------------------------------- ==================== ENVIRONMENT INFORMATION ==================== - transformers_version: 2.11.0 - framework: Tensorflow - use_xla: False - framework_version: 2.2.0 - python_version: 3.6.10 - system: Linux - cpu: x86_64 - architecture: 64bit - date: 2020-06-29 - time: 09:26:35.617317 - fp16: False - use_multiprocessing: True - only_pretrain_model: False - cpu_ram_mb: 32088 - use_gpu: True - num_gpus: 1 - gpu: TITAN RTX - gpu_ram_mb: 24217 - gpu_power_watts: 280.0 - gpu_performance_state: 2 - use_tpu: False ``` </tf> </frameworkcontent> By default, the _time_ and the _required memory_ for _inference_ are benchmarked. In the example output above the first two sections show the result corresponding to _inference time_ and _inference memory_. In addition, all relevant information about the computing environment, _e.g._ the GPU type, the system, the library versions, etc... are printed out in the third section under _ENVIRONMENT INFORMATION_. This information can optionally be saved in a _.csv_ file when adding the argument `save_to_csv=True` to [`PyTorchBenchmarkArguments`] and [`TensorFlowBenchmarkArguments`] respectively. In this case, every section is saved in a separate _.csv_ file. The path to each _.csv_ file can optionally be defined via the argument data classes. Instead of benchmarking pre-trained models via their model identifier, _e.g._ `google-bert/bert-base-uncased`, the user can alternatively benchmark an arbitrary configuration of any available model class. In this case, a `list` of configurations must be inserted with the benchmark args as follows. <frameworkcontent> <pt> ```py >>> from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments, BertConfig >>> args = PyTorchBenchmarkArguments( ...     models=["bert-base", "bert-384-hid", "bert-6-lay"], batch_sizes=[8], sequence_lengths=[8, 32, 128, 512] ...
) >>> config_base = BertConfig() >>> config_384_hid = BertConfig(hidden_size=384) >>> config_6_lay = BertConfig(num_hidden_layers=6) >>> benchmark = PyTorchBenchmark(args, configs=[config_base, config_384_hid, config_6_lay]) >>> benchmark.run() ==================== INFERENCE - SPEED - RESULT ==================== -------------------------------------------------------------------------------- Model Name Batch Size Seq Length Time in s -------------------------------------------------------------------------------- bert-base 8 128 0.006 bert-base 8 512 0.006 bert-base 8 128 0.018 bert-base 8 512 0.088 bert-384-hid 8 8 0.006 bert-384-hid 8 32 0.006 bert-384-hid 8 128 0.011 bert-384-hid 8 512 0.054 bert-6-lay 8 8 0.003 bert-6-lay 8 32 0.004 bert-6-lay 8 128 0.009 bert-6-lay 8 512 0.044 -------------------------------------------------------------------------------- ==================== INFERENCE - MEMORY - RESULT ==================== -------------------------------------------------------------------------------- Model Name Batch Size Seq Length Memory in MB -------------------------------------------------------------------------------- bert-base 8 8 1277 bert-base 8 32 1281 bert-base 8 128 1307 bert-base 8 512 1539 bert-384-hid 8 8 1005 bert-384-hid 8 32 1027 bert-384-hid 8 128 1035 bert-384-hid 8 512 1255 bert-6-lay 8 8 1097 bert-6-lay 8 32 1101 bert-6-lay 8 128 1127 bert-6-lay 8 512 1359 -------------------------------------------------------------------------------- ==================== ENVIRONMENT INFORMATION ==================== - transformers_version: 2.11.0 - framework: PyTorch - use_torchscript: False - framework_version: 1.4.0 - python_version: 3.6.10 - system: Linux - cpu: x86_64 - architecture: 64bit - date: 2020-06-29 - time: 09:35:25.143267 - fp16: False - use_multiprocessing: True - only_pretrain_model: False - cpu_ram_mb: 32088 - use_gpu: True - num_gpus: 1 - gpu: TITAN RTX - gpu_ram_mb: 24217 - gpu_power_watts: 280.0 - gpu_performance_state: 2 - use_tpu: False ``` </pt> <tf> ```py >>> from transformers import TensorFlowBenchmark, TensorFlowBenchmarkArguments, BertConfig >>> args = TensorFlowBenchmarkArguments( ... models=["bert-base", "bert-384-hid", "bert-6-lay"], batch_sizes=[8], sequence_lengths=[8, 32, 128, 512] ... 
) >>> config_base = BertConfig() >>> config_384_hid = BertConfig(hidden_size=384) >>> config_6_lay = BertConfig(num_hidden_layers=6) >>> benchmark = TensorFlowBenchmark(args, configs=[config_base, config_384_hid, config_6_lay]) >>> benchmark.run() ==================== INFERENCE - SPEED - RESULT ==================== -------------------------------------------------------------------------------- Model Name Batch Size Seq Length Time in s -------------------------------------------------------------------------------- bert-base 8 8 0.005 bert-base 8 32 0.008 bert-base 8 128 0.022 bert-base 8 512 0.106 bert-384-hid 8 8 0.005 bert-384-hid 8 32 0.007 bert-384-hid 8 128 0.018 bert-384-hid 8 512 0.064 bert-6-lay 8 8 0.002 bert-6-lay 8 32 0.003 bert-6-lay 8 128 0.0011 bert-6-lay 8 512 0.074 -------------------------------------------------------------------------------- ==================== INFERENCE - MEMORY - RESULT ==================== -------------------------------------------------------------------------------- Model Name Batch Size Seq Length Memory in MB -------------------------------------------------------------------------------- bert-base 8 8 1330 bert-base 8 32 1330 bert-base 8 128 1330 bert-base 8 512 1770 bert-384-hid 8 8 1330 bert-384-hid 8 32 1330 bert-384-hid 8 128 1330 bert-384-hid 8 512 1540 bert-6-lay 8 8 1330 bert-6-lay 8 32 1330 bert-6-lay 8 128 1330 bert-6-lay 8 512 1540 -------------------------------------------------------------------------------- ==================== ENVIRONMENT INFORMATION ==================== - transformers_version: 2.11.0 - framework: Tensorflow - use_xla: False - framework_version: 2.2.0 - python_version: 3.6.10 - system: Linux - cpu: x86_64 - architecture: 64bit - date: 2020-06-29 - time: 09:38:15.487125 - fp16: False - use_multiprocessing: True - only_pretrain_model: False - cpu_ram_mb: 32088 - use_gpu: True - num_gpus: 1 - gpu: TITAN RTX - gpu_ram_mb: 24217 - gpu_power_watts: 280.0 - gpu_performance_state: 2 - use_tpu: False ``` </tf> </frameworkcontent> Again, _inference time_ and _required memory_ for _inference_ are measured, but this time for customized configurations of the `BertModel` class. This feature can especially be helpful when deciding for which configuration the model should be trained.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/benchmarks.md
.md
14_2
This section lists a couple of best practices one should be aware of when benchmarking a model. - Currently, only single device benchmarking is supported. When benchmarking on GPU, it is recommended that the user specifies on which device the code should be run by setting the `CUDA_VISIBLE_DEVICES` environment variable in the shell, _e.g._ `export CUDA_VISIBLE_DEVICES=0` before running the code. - The option `no_multi_processing` should only be set to `True` for testing and debugging. To ensure accurate memory measurement, it is recommended to run each memory benchmark in a separate process by making sure `no_multi_processing` is set to `False`. - One should always state the environment information when sharing the results of a model benchmark. Results can vary heavily between different GPU devices, library versions, etc. As a consequence, benchmark results on their own are not very useful for the community.
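As a rough sketch of how these practices fit together (the argument values are only illustrative), you could pin the benchmark to a single GPU and keep the default multiprocessing behavior:

```python
import os

# Pin the run to a single GPU before any CUDA-backed code is initialized.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments

# `no_multi_processing` is left untouched so memory benchmarks run in separate processes.
args = PyTorchBenchmarkArguments(
    models=["google-bert/bert-base-uncased"],
    batch_sizes=[8],
    sequence_lengths=[8, 32],
    save_to_csv=True,  # keep the environment information section alongside the numbers
)
benchmark = PyTorchBenchmark(args)
results = benchmark.run()
```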
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/benchmarks.md
.md
14_3
Previously, all available core models (10 at the time) were benchmarked for _inference time_, across many different settings: using PyTorch, with and without TorchScript, using TensorFlow, with and without XLA. All of those tests were done across CPUs (except for TensorFlow XLA) and GPUs. The approach is detailed in the [following blogpost](https://medium.com/huggingface/benchmarking-transformers-pytorch-and-tensorflow-e2917fb891c2) and the results are available [here](https://docs.google.com/spreadsheets/d/1sryqufw2D0XlUH4sq3e9Wnxu5EAQkaohzrJbd5HdQ_w/edit?usp=sharing). With the new _benchmark_ tools, it is easier than ever to share your benchmark results with the community: - [PyTorch Benchmarking Results](https://github.com/huggingface/transformers/tree/main/examples/pytorch/benchmarking/README.md). - [TensorFlow Benchmarking Results](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/benchmarking/README.md).
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/benchmarks.md
.md
14_4
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md
.md
15_0
Text generation is essential to many NLP tasks, such as open-ended text generation, summarization, translation, and more. It also plays a role in a variety of mixed-modality applications that have text as an output, like speech-to-text and vision-to-text. Some of the models that can generate text include GPT2, XLNet, OpenAI GPT, CTRL, TransformerXL, XLM, Bart, T5, GIT, Whisper. Check out a few examples that use the [`~generation.GenerationMixin.generate`] method to produce text outputs for different tasks: * [Text summarization](./tasks/summarization#inference) * [Image captioning](./model_doc/git#transformers.GitForCausalLM.forward.example) * [Audio transcription](./model_doc/whisper#transformers.WhisperForConditionalGeneration.forward.example) Note that the inputs to the generate method depend on the model's modality. They are returned by the model's preprocessor class, such as AutoTokenizer or AutoProcessor. If a model's preprocessor creates more than one kind of input, pass all the inputs to generate(). You can learn more about the individual model's preprocessor in the corresponding model's documentation. The process of selecting output tokens to generate text is known as decoding, and you can customize the decoding strategy that the `generate()` method will use. Modifying a decoding strategy does not change the values of any trainable parameters. However, it can have a noticeable impact on the quality of the generated output. It can help reduce repetition in the text and make it more coherent. This guide describes: * the default generation configuration * common decoding strategies and their main parameters * saving and sharing custom generation configurations with your fine-tuned model on 🤗 Hub
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md
.md
15_1
A decoding strategy for a model is defined in its generation configuration. When using pre-trained models for inference within a [`pipeline`], the models call the `PreTrainedModel.generate()` method that applies a default generation configuration under the hood. The default configuration is also used when no custom configuration has been saved with the model. When you load a model explicitly, you can inspect the generation configuration that comes with it through `model.generation_config`: ```python >>> from transformers import AutoModelForCausalLM >>> model = AutoModelForCausalLM.from_pretrained("distilbert/distilgpt2") >>> model.generation_config GenerationConfig { "bos_token_id": 50256, "eos_token_id": 50256 } <BLANKLINE> ``` Printing out the `model.generation_config` reveals only the values that are different from the default generation configuration, and does not list any of the default values. The default generation configuration limits the size of the output combined with the input prompt to a maximum of 20 tokens to avoid running into resource limitations. The default decoding strategy is greedy search, which is the simplest decoding strategy that picks a token with the highest probability as the next token. For many tasks and small output sizes this works well. However, when used to generate longer outputs, greedy search can start producing highly repetitive results.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md
.md
15_2
You can override any `generation_config` by passing the parameters and their values directly to the [`generate`] method: ```python >>> my_model.generate(**inputs, num_beams=4, do_sample=True)  # doctest: +SKIP ``` Even if the default decoding strategy mostly works for your task, you can still tweak a few things. Some of the commonly adjusted parameters include: - `max_new_tokens`: the maximum number of tokens to generate. In other words, the size of the output sequence, not including the tokens in the prompt. As an alternative to using the output's length as a stopping criterion, you can choose to stop generation whenever the full generation exceeds some amount of time. To learn more, check [`StoppingCriteria`]. - `num_beams`: by specifying a number of beams higher than 1, you are effectively switching from greedy search to beam search. This strategy evaluates several hypotheses at each time step and eventually chooses the hypothesis that has the overall highest probability for the entire sequence. This has the advantage of identifying high-probability sequences that start with lower probability initial tokens and would've been ignored by the greedy search. Visualize how it works [here](https://huggingface.co/spaces/m-ric/beam_search_visualizer). - `do_sample`: if set to `True`, this parameter enables decoding strategies such as multinomial sampling, beam-search multinomial sampling, Top-K sampling and Top-p sampling. All these strategies select the next token from the probability distribution over the entire vocabulary with various strategy-specific adjustments. - `num_return_sequences`: the number of sequence candidates to return for each input. This option is only available for the decoding strategies that support multiple sequence candidates, e.g. variations of beam search and sampling. Decoding strategies like greedy search and contrastive search return a single output sequence. It is also possible to extend `generate()` with external libraries or handcrafted code. The `logits_processor` argument allows you to pass custom [`LogitsProcessor`] instances, allowing you to manipulate the next token probability distributions. Likewise, the `stopping_criteria` argument lets you set custom [`StoppingCriteria`] to stop text generation. The [`logits-processor-zoo`](https://github.com/NVIDIA/logits-processor-zoo) library contains examples of external `generate()`-compatible extensions.
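To make the `logits_processor` extension point concrete, here is a minimal, illustrative sketch of a custom [`LogitsProcessor`] that bans a single token id at every generation step; the checkpoint, prompt, and banned token are arbitrary choices:

```python
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    LogitsProcessor,
    LogitsProcessorList,
)


class BanTokenLogitsProcessor(LogitsProcessor):
    """Illustrative processor that forbids one token id at every generation step."""

    def __init__(self, banned_token_id: int):
        self.banned_token_id = banned_token_id

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        scores[:, self.banned_token_id] = -float("inf")
        return scores


tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
inputs = tokenizer("The meaning of life is", return_tensors="pt")

banned_id = tokenizer.encode("\n")[0]  # ban the newline token, as an example
outputs = model.generate(
    **inputs,
    max_new_tokens=20,
    logits_processor=LogitsProcessorList([BanTokenLogitsProcessor(banned_id)]),
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```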
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md
.md
15_3
If you would like to share your fine-tuned model with a specific generation configuration, you can: * Create a [`GenerationConfig`] class instance * Specify the decoding strategy parameters * Save your generation configuration with [`GenerationConfig.save_pretrained`], making sure to leave its `config_file_name` argument empty * Set `push_to_hub` to `True` to upload your config to the model's repo ```python >>> from transformers import AutoModelForCausalLM, GenerationConfig >>> model = AutoModelForCausalLM.from_pretrained("my_account/my_model") # doctest: +SKIP >>> generation_config = GenerationConfig( ... max_new_tokens=50, do_sample=True, top_k=50, eos_token_id=model.config.eos_token_id ... ) >>> generation_config.save_pretrained("my_account/my_model", push_to_hub=True) # doctest: +SKIP ``` You can also store several generation configurations in a single directory, making use of the `config_file_name` argument in [`GenerationConfig.save_pretrained`]. You can later instantiate them with [`GenerationConfig.from_pretrained`]. This is useful if you want to store several generation configurations for a single model (e.g. one for creative text generation with sampling, and one for summarization with beam search). You must have the right Hub permissions to add configuration files to a model. ```python >>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, GenerationConfig >>> tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-small") >>> model = AutoModelForSeq2SeqLM.from_pretrained("google-t5/t5-small") >>> translation_generation_config = GenerationConfig( ... num_beams=4, ... early_stopping=True, ... decoder_start_token_id=0, ... eos_token_id=model.config.eos_token_id, ... pad_token=model.config.pad_token_id, ... ) >>> # Tip: add `push_to_hub=True` to push to the Hub >>> translation_generation_config.save_pretrained("/tmp", "translation_generation_config.json") >>> # You could then use the named generation config file to parameterize generation >>> generation_config = GenerationConfig.from_pretrained("/tmp", "translation_generation_config.json") >>> inputs = tokenizer("translate English to French: Configuration files are easy to use!", return_tensors="pt") >>> outputs = model.generate(**inputs, generation_config=generation_config) >>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)) ['Les fichiers de configuration sont faciles à utiliser!'] ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md
.md
15_4
The `generate()` method supports streaming through its `streamer` input. The `streamer` input is compatible with any instance from a class that has the following methods: `put()` and `end()`. Internally, `put()` is used to push new tokens and `end()` is used to flag the end of text generation. <Tip warning={true}> The API for the streamer classes is still under development and may change in the future. </Tip> In practice, you can craft your own streaming class for all sorts of purposes! We also have basic streaming classes ready for you to use. For example, you can use the [`TextStreamer`] class to stream the output of `generate()` into your screen, one word at a time: ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer >>> tok = AutoTokenizer.from_pretrained("openai-community/gpt2") >>> model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2") >>> inputs = tok(["An increasing sequence: one,"], return_tensors="pt") >>> streamer = TextStreamer(tok) >>> # Despite returning the usual output, the streamer will also print the generated text to stdout. >>> _ = model.generate(**inputs, streamer=streamer, max_new_tokens=20) An increasing sequence: one, two, three, four, five, six, seven, eight, nine, ten, eleven, ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md
.md
15_5
The `generate()` method supports watermarking the generated text by randomly marking a portion of tokens as "green". When generating, the "green" tokens will have a small 'bias' value added to their logits and therefore have a higher chance of being generated. The watermarked text can be detected by calculating the proportion of "green" tokens in the text and estimating how likely it is statistically to obtain that amount of "green" tokens for human-generated text. This watermarking strategy was proposed in the paper ["On the Reliability of Watermarks for Large Language Models"](https://arxiv.org/abs/2306.04634). For more information on the inner functioning of watermarking, it is recommended to refer to the paper. The watermarking can be used with any generative model in `transformers` and does not require an extra classification model to detect watermarked text. To trigger watermarking, pass in a [`WatermarkingConfig`] with the needed arguments directly to the `.generate()` method or add it to the [`GenerationConfig`]. Watermarked text can later be detected with a [`WatermarkDetector`]. <Tip warning={true}> The WatermarkDetector internally relies on the proportion of "green" tokens, and whether generated text follows the coloring pattern. That is why it is recommended to strip off the prompt text, if it is much longer than the generated text. This also can have an effect when one sequence in the batch is a lot longer, causing other rows to be padded. Additionally, the detector **must** be initialized with the identical watermark configuration arguments used when generating. </Tip> Let's generate some text with watermarking. In the below code snippet, we set the bias to 2.5, which is the value that will be added to the "green" tokens' logits. After generating watermarked text, we can pass it directly to the `WatermarkDetector` to check if the text is machine-generated (outputs `True` for machine-generated and `False` otherwise). ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM, WatermarkDetector, WatermarkingConfig >>> model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2") >>> tok = AutoTokenizer.from_pretrained("openai-community/gpt2") >>> tok.pad_token_id = tok.eos_token_id >>> tok.padding_side = "left" >>> inputs = tok(["This is the beginning of a long story", "Alice and Bob are"], padding=True, return_tensors="pt") >>> input_len = inputs["input_ids"].shape[-1] >>> watermarking_config = WatermarkingConfig(bias=2.5, seeding_scheme="selfhash") >>> out = model.generate(**inputs, watermarking_config=watermarking_config, do_sample=False, max_length=20) >>> detector = WatermarkDetector(model_config=model.config, device="cpu", watermarking_config=watermarking_config) >>> detection_out = detector(out, return_dict=True) >>> detection_out.prediction array([True, True]) ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md
.md
15_6
Certain combinations of the `generate()` parameters, and ultimately `generation_config`, can be used to enable specific decoding strategies. If you are new to this concept, we recommend reading [this blog post that illustrates how common decoding strategies work](https://huggingface.co/blog/how-to-generate). Here, we'll show some of the parameters that control the decoding strategies and illustrate how you can use them. <Tip> Selecting a given decoding strategy is not the only way you can influence the outcome of `generate()` with your model. The decoding strategies act based (mostly) on the logits, the distribution of probabilities for the next token, and thus selecting a good logits manipulation strategy can go a long way! In other words, manipulating the logits is another dimension you can act upon, in addition to selecting a decoding strategy. Popular logits manipulation strategies include `top_p`, `min_p`, and `repetition_penalty` -- you can check the full list in the [`GenerationConfig`] class. </Tip>
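For example, a sampling run can be combined with a couple of logits manipulation parameters; the values below are arbitrary and only meant to show where these knobs plug into `generate()`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
inputs = tokenizer("The old lighthouse keeper", return_tensors="pt")

# do_sample selects the decoding strategy (multinomial sampling); top_p and
# repetition_penalty then reshape the logits that the strategy samples from.
outputs = model.generate(
    **inputs,
    do_sample=True,
    top_p=0.9,
    repetition_penalty=1.2,
    max_new_tokens=30,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```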
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md
.md
15_7
[`generate`] uses greedy search decoding by default, so you don't have to pass any parameters to enable it. This means that the parameter `num_beams` is set to 1 and `do_sample=False`. ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer >>> prompt = "I look forward to" >>> checkpoint = "distilbert/distilgpt2" >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) >>> inputs = tokenizer(prompt, return_tensors="pt") >>> model = AutoModelForCausalLM.from_pretrained(checkpoint) >>> outputs = model.generate(**inputs) >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) ['I look forward to seeing you all again!\n\n\n\n\n\n\n\n\n\n\n'] ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md
.md
15_8
The contrastive search decoding strategy was proposed in the 2022 paper [A Contrastive Framework for Neural Text Generation](https://arxiv.org/abs/2202.06417). It demonstrates superior results for generating non-repetitive yet coherent long outputs. To learn how contrastive search works, check out [this blog post](https://huggingface.co/blog/introducing-csearch). The two main parameters that enable and control the behavior of contrastive search are `penalty_alpha` and `top_k`: ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM >>> checkpoint = "openai-community/gpt2-large" >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) >>> model = AutoModelForCausalLM.from_pretrained(checkpoint) >>> prompt = "Hugging Face Company is" >>> inputs = tokenizer(prompt, return_tensors="pt") >>> outputs = model.generate(**inputs, penalty_alpha=0.6, top_k=4, max_new_tokens=100) >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) ['Hugging Face Company is a family owned and operated business. We pride ourselves on being the best in the business and our customer service is second to none.\n\nIf you have any questions about our products or services, feel free to contact us at any time. We look forward to hearing from you!'] ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md
.md
15_9
As opposed to greedy search that always chooses a token with the highest probability as the next token, multinomial sampling (also called ancestral sampling) randomly selects the next token based on the probability distribution over the entire vocabulary given by the model. Every token with a non-zero probability has a chance of being selected, thus reducing the risk of repetition. To enable multinomial sampling set `do_sample=True` and `num_beams=1`. ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM, set_seed >>> set_seed(0) # For reproducibility >>> checkpoint = "openai-community/gpt2-large" >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) >>> model = AutoModelForCausalLM.from_pretrained(checkpoint) >>> prompt = "Today was an amazing day because" >>> inputs = tokenizer(prompt, return_tensors="pt") >>> outputs = model.generate(**inputs, do_sample=True, num_beams=1, max_new_tokens=100) >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) ["Today was an amazing day because we received these wonderful items by the way of a gift shop. The box arrived on a Thursday and I opened it on Monday afternoon to receive the gifts. Both bags featured pieces from all the previous years!\n\nThe box had lots of surprises in it, including some sweet little mini chocolate chips! I don't think I'd eat all of these. This was definitely one of the most expensive presents I have ever got, I actually got most of them for free!\n\nThe first package came"] ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md
.md
15_10
Unlike greedy search, beam-search decoding keeps several hypotheses at each time step and eventually chooses the hypothesis that has the overall highest probability for the entire sequence. This has the advantage of identifying high-probability sequences that start with lower probability initial tokens and would've been ignored by the greedy search. <a href="https://huggingface.co/spaces/m-ric/beam_search_visualizer" class="flex flex-col justify-center"> <img style="max-width: 90%; margin: auto;" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/beam_search.png"/> </a> You can visualize how beam-search decoding works in [this interactive demo](https://huggingface.co/spaces/m-ric/beam_search_visualizer): type your input sentence, and play with the parameters to see how the decoding beams change. To enable this decoding strategy, specify the `num_beams` (aka number of hypotheses to keep track of) that is greater than 1. ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer >>> prompt = "It is astonishing how one can" >>> checkpoint = "openai-community/gpt2-medium" >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) >>> inputs = tokenizer(prompt, return_tensors="pt") >>> model = AutoModelForCausalLM.from_pretrained(checkpoint) >>> outputs = model.generate(**inputs, num_beams=5, max_new_tokens=50) >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) ['It is astonishing how one can have such a profound impact on the lives of so many people in such a short period of time."\n\nHe added: "I am very proud of the work I have been able to do in the last few years.\n\n"I have'] ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md
.md
15_11
As the name implies, this decoding strategy combines beam search with multinomial sampling. You need to specify the `num_beams` greater than 1, and set `do_sample=True` to use this decoding strategy. ```python >>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, set_seed >>> set_seed(0) # For reproducibility >>> prompt = "translate English to German: The house is wonderful." >>> checkpoint = "google-t5/t5-small" >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) >>> inputs = tokenizer(prompt, return_tensors="pt") >>> model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint) >>> outputs = model.generate(**inputs, num_beams=5, do_sample=True) >>> tokenizer.decode(outputs[0], skip_special_tokens=True) 'Das Haus ist wunderbar.' ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md
.md
15_12
The diverse beam search decoding strategy is an extension of the beam search strategy that allows for generating a more diverse set of beam sequences to choose from. To learn how it works, refer to [Diverse Beam Search: Decoding Diverse Solutions from Neural Sequence Models](https://arxiv.org/pdf/1610.02424.pdf). This approach has three main parameters: `num_beams`, `num_beam_groups`, and `diversity_penalty`. The diversity penalty ensures the outputs are distinct across groups, and beam search is used within each group. ```python >>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM >>> checkpoint = "google/pegasus-xsum" >>> prompt = ( ... "The Permaculture Design Principles are a set of universal design principles " ... "that can be applied to any location, climate and culture, and they allow us to design " ... "the most efficient and sustainable human habitation and food production systems. " ... "Permaculture is a design system that encompasses a wide variety of disciplines, such " ... "as ecology, landscape design, environmental science and energy conservation, and the " ... "Permaculture design principles are drawn from these various disciplines. Each individual " ... "design principle itself embodies a complete conceptual framework based on sound " ... "scientific principles. When we bring all these separate principles together, we can " ... "create a design system that both looks at whole systems, the parts that these systems " ... "consist of, and how those parts interact with each other to create a complex, dynamic, " ... "living system. Each design principle serves as a tool that allows us to integrate all " ... "the separate parts of a design, referred to as elements, into a functional, synergistic, " ... "whole system, where the elements harmoniously interact and work together in the most " ... "efficient way possible." ... ) >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) >>> inputs = tokenizer(prompt, return_tensors="pt") >>> model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint) >>> outputs = model.generate(**inputs, num_beams=5, num_beam_groups=5, max_new_tokens=30, diversity_penalty=1.0) >>> tokenizer.decode(outputs[0], skip_special_tokens=True) 'The Design Principles are a set of universal design principles that can be applied to any location, climate and culture, and they allow us to design the' ``` This guide illustrates the main parameters that enable various decoding strategies. More advanced parameters exist for the [`generate`] method, which gives you even further control over the [`generate`] method's behavior. For the complete list of the available parameters, refer to the [API documentation](./main_classes/text_generation).
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md
.md
15_13
Speculative decoding (also known as assisted decoding) is a modification of the decoding strategies above that uses an assistant model (ideally a much smaller one) to generate a few candidate tokens. The main model then validates the candidate tokens in a single forward pass, which speeds up the decoding process. If `do_sample=True`, then the token validation with resampling introduced in the [speculative decoding paper](https://arxiv.org/pdf/2211.17192.pdf) is used. Assisted decoding assumes the main and assistant models have the same tokenizer, otherwise, see Universal Assisted Decoding below. Currently, only greedy search and sampling are supported with assisted decoding, and assisted decoding doesn't support batched inputs. To learn more about assisted decoding, check [this blog post](https://huggingface.co/blog/assisted-generation). To enable assisted decoding, set the `assistant_model` argument with a model. ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer >>> prompt = "Alice and Bob" >>> checkpoint = "EleutherAI/pythia-1.4b-deduped" >>> assistant_checkpoint = "EleutherAI/pythia-160m-deduped" >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) >>> inputs = tokenizer(prompt, return_tensors="pt") >>> model = AutoModelForCausalLM.from_pretrained(checkpoint) >>> assistant_model = AutoModelForCausalLM.from_pretrained(assistant_checkpoint) >>> outputs = model.generate(**inputs, assistant_model=assistant_model) >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) ['Alice and Bob are sitting in a bar. Alice is drinking a beer and Bob is drinking a'] ``` <Tip> If you're using a `pipeline` object, all you need to do is to pass the assistant checkpoint under `assistant_model`. ```python >>> from transformers import pipeline >>> import torch >>> pipe = pipeline( ...     "text-generation", ...     model="meta-llama/Llama-3.1-8B", ...     assistant_model="meta-llama/Llama-3.2-1B",  # This extra line is all that's needed, also works with UAD ...     torch_dtype=torch.bfloat16 ... ) >>> pipe_output = pipe("Once upon a time, ", max_new_tokens=50, do_sample=False) >>> pipe_output[0]["generated_text"] 'Once upon a time, 3D printing was a niche technology that was only' ``` </Tip> When using assisted decoding with sampling methods, you can use the `temperature` argument to control the randomness, just like in multinomial sampling. However, in assisted decoding, reducing the temperature may help improve the latency. ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed >>> set_seed(42)  # For reproducibility >>> prompt = "Alice and Bob" >>> checkpoint = "EleutherAI/pythia-1.4b-deduped" >>> assistant_checkpoint = "EleutherAI/pythia-160m-deduped" >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) >>> inputs = tokenizer(prompt, return_tensors="pt") >>> model = AutoModelForCausalLM.from_pretrained(checkpoint) >>> assistant_model = AutoModelForCausalLM.from_pretrained(assistant_checkpoint) >>> outputs = model.generate(**inputs, assistant_model=assistant_model, do_sample=True, temperature=0.5) >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) ['Alice and Bob, a couple of friends of mine, who are both in the same office as'] ``` We recommend installing the `scikit-learn` library to enhance the candidate generation strategy and achieve an additional speedup.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md
.md
15_14
Universal Assisted Decoding (UAD) adds support for main and assistant models with different tokenizers. To use it, simply pass the tokenizers using the `tokenizer` and `assistant_tokenizer` arguments (see below). Internally, the main model input tokens are re-encoded into assistant model tokens, then candidate tokens are generated in the assistant encoding, which are in turn re-encoded into main model candidate tokens. Validation then proceeds as explained above. The re-encoding steps involve decoding token ids into text and then encoding the text using a different tokenizer. Since re-encoding the tokens may result in tokenization discrepancies, UAD finds the longest common subsequence between the source and target encodings, to ensure the new tokens include the correct prompt suffix. ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer >>> prompt = "Alice and Bob" >>> checkpoint = "google/gemma-2-9b" >>> assistant_checkpoint = "double7/vicuna-68m" >>> assistant_tokenizer = AutoTokenizer.from_pretrained(assistant_checkpoint) >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) >>> inputs = tokenizer(prompt, return_tensors="pt") >>> model = AutoModelForCausalLM.from_pretrained(checkpoint) >>> assistant_model = AutoModelForCausalLM.from_pretrained(assistant_checkpoint) >>> outputs = model.generate(**inputs, assistant_model=assistant_model, tokenizer=tokenizer, assistant_tokenizer=assistant_tokenizer) >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) ['Alice and Bob are sitting in a bar. Alice is drinking a beer and Bob is drinking a'] ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md
.md
15_15
Alternatively, you can also set the `prompt_lookup_num_tokens` to trigger n-gram based assisted decoding, as opposed to model based assisted decoding. You can read more about it [here](https://twitter.com/joao_gante/status/1747322413006643259).
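A minimal sketch of what this looks like in practice is shown below; the model and prompt are placeholders, and no assistant model is needed since the candidate tokens come from n-grams in the prompt itself:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")

# Prompt lookup shines when the output is likely to reuse spans of the input,
# e.g. summarization or question answering over a provided passage.
inputs = tokenizer("The quick brown fox jumps over the lazy dog. The quick brown", return_tensors="pt")
outputs = model.generate(**inputs, prompt_lookup_num_tokens=3, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```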
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md
.md
15_16
An LLM can be trained to also use its language modeling head with earlier hidden states as input, effectively skipping layers to yield a lower-quality output -- a technique called early exiting. We use the lower-quality early exit output as an assistant output, and apply self-speculation to fix the output using the remaining layers. The final generation of that self-speculative solution is the same (or has the same distribution) as the original model's generation. If the model you're using was trained to do early exit, you can pass `assistant_early_exit` (integer). In this case, the assistant model will be the same model but exiting early, hence the "self-speculative" name. Because the assistant model is a portion of the target model, caches and weights can be shared, which results in lower memory requirements. As in other assisted generation methods, the final generated result has the same quality as if no assistant had been used. ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer >>> prompt = "Alice and Bob" >>> checkpoint = "facebook/layerskip-llama3.2-1B" >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) >>> inputs = tokenizer(prompt, return_tensors="pt") >>> model = AutoModelForCausalLM.from_pretrained(checkpoint) >>> outputs = model.generate(**inputs, assistant_early_exit=4, do_sample=False, max_new_tokens=20) >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) ['Alice and Bob are sitting in a bar. Alice is drinking a beer and Bob is drinking a'] ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md
.md
15_17
**D**ecoding by C**o**ntrasting **La**yers (DoLa) is a contrastive decoding strategy to improve the factuality and reduce the hallucinations of LLMs, as described in the ICLR 2024 paper [DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models](https://arxiv.org/abs/2309.03883). DoLa is achieved by contrasting the differences in logits obtained from final layers versus earlier layers, thus amplifying the factual knowledge localized to particular parts of the transformer layers. Do the following two steps to activate DoLa decoding when calling the `model.generate` function: 1. Set the `dola_layers` argument, which can be either a string or a list of integers. - If set to a string, it can be one of `low`, `high`. - If set to a list of integers, it should be a list of layer indices between 0 and the total number of layers in the model. The 0-th layer is word embedding, and the 1st layer is the first transformer layer, and so on. 2. Setting `repetition_penalty = 1.2` is suggested to reduce repetition in DoLa decoding. See the following examples for DoLa decoding with the 32-layer LLaMA-7B model. ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM, set_seed >>> import torch >>> from accelerate.test_utils.testing import get_backend >>> tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b") >>> model = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b", torch_dtype=torch.float16) >>> device, _, _ = get_backend() # automatically detects the underlying device type (CUDA, CPU, XPU, MPS, etc.) >>> model.to(device) >>> set_seed(42) >>> text = "On what date was the Declaration of Independence officially signed?" >>> inputs = tokenizer(text, return_tensors="pt").to(device) # Vanilla greedy decoding >>> vanilla_output = model.generate(**inputs, do_sample=False, max_new_tokens=50) >>> tokenizer.batch_decode(vanilla_output[:, inputs.input_ids.shape[-1]:], skip_special_tokens=True) ['\nThe Declaration of Independence was signed on July 4, 1776.\nWhat was the date of the signing of the Declaration of Independence?\nThe Declaration of Independence was signed on July 4,'] # DoLa decoding with contrasting higher part of layers (layers 16,18,...,30) >>> dola_high_output = model.generate(**inputs, do_sample=False, max_new_tokens=50, dola_layers='high') >>> tokenizer.batch_decode(dola_high_output[:, inputs.input_ids.shape[-1]:], skip_special_tokens=True) ['\nJuly 4, 1776, when the Continental Congress voted to separate from Great Britain. The 56 delegates to the Continental Congress signed the Declaration on August 2, 1776.'] # DoLa decoding with contrasting specific layers (layers 28 and 30) >>> dola_custom_output = model.generate(**inputs, do_sample=False, max_new_tokens=50, dola_layers=[28,30], repetition_penalty=1.2) >>> tokenizer.batch_decode(dola_custom_output[:, inputs.input_ids.shape[-1]:], skip_special_tokens=True) ['\nIt was officially signed on 2 August 1776, when 56 members of the Second Continental Congress, representing the original 13 American colonies, voted unanimously for the resolution for independence. The 2'] ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md
.md
15_18
`dola_layers` stands for the candidate layers in premature layer selection, as described in the DoLa paper. The selected premature layer will be contrasted with the final layer. Setting `dola_layers` to `'low'` or `'high'` will select the lower or higher part of the layers to contrast, respectively. - For `N`-layer models with `N <= 40` layers, the layers of `range(0, N // 2, 2)` and `range(N // 2, N, 2)` are used for `'low'` and `'high'` layers, respectively. - For models with `N > 40` layers, the layers of `range(0, 20, 2)` and `range(N - 20, N, 2)` are used for `'low'` and `'high'` layers, respectively. - If the model has tied word embeddings, we skip the word embeddings (0-th) layer and start from the 2nd layer, as the early exit from word embeddings will become an identity function. - Set `dola_layers` to a list of integers to contrast manually specified layers. For example, setting `dola_layers=[28,30]` will contrast the final layer (the 32nd layer) with the 28th and 30th layers. The paper suggests contrasting `'high'` layers to improve short-answer tasks like TruthfulQA, and contrasting `'low'` layers to improve all the other long-answer reasoning tasks, such as GSM8K, StrategyQA, FACTOR, and VicunaQA. Applying DoLa to smaller models like GPT-2 is not recommended, as shown in Appendix N of the paper.
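As a quick sanity check of these ranges, the snippet below computes the `'low'` and `'high'` candidate layers for a 32-layer model; the `'high'` set matches the layers (16, 18, ..., 30) contrasted in the example above:

```python
# Illustrative only: reproduce the candidate-layer ranges described above for a
# 32-layer model such as LLaMA-7B (the N <= 40 case).
N = 32
low_layers = list(range(0, N // 2, 2))   # [0, 2, 4, ..., 14]
high_layers = list(range(N // 2, N, 2))  # [16, 18, ..., 30]
print(low_layers)
print(high_layers)
```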
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/generation_strategies.md
.md
15_19
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/glossary.md
.md
16_0
This glossary defines general machine learning and 🤗 Transformers terms to help you better understand the documentation.
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/glossary.md
.md
16_1
The attention mask is an optional argument used when batching sequences together. <Youtube id="M6adb1j2jPI"/> This argument indicates to the model which tokens should be attended to, and which should not. For example, consider these two sequences: ```python >>> from transformers import BertTokenizer >>> tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-cased") >>> sequence_a = "This is a short sequence." >>> sequence_b = "This is a rather long sequence. It is at least longer than the sequence A." >>> encoded_sequence_a = tokenizer(sequence_a)["input_ids"] >>> encoded_sequence_b = tokenizer(sequence_b)["input_ids"] ``` The encoded versions have different lengths: ```python >>> len(encoded_sequence_a), len(encoded_sequence_b) (8, 19) ``` Therefore, we can't put them together in the same tensor as-is. The first sequence needs to be padded up to the length of the second one, or the second one needs to be truncated down to the length of the first one. In the first case, the list of IDs will be extended by the padding indices. We can pass a list to the tokenizer and ask it to pad like this: ```python >>> padded_sequences = tokenizer([sequence_a, sequence_b], padding=True) ``` We can see that 0s have been added on the right of the first sentence to make it the same length as the second one: ```python >>> padded_sequences["input_ids"] [[101, 1188, 1110, 170, 1603, 4954, 119, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [101, 1188, 1110, 170, 1897, 1263, 4954, 119, 1135, 1110, 1120, 1655, 2039, 1190, 1103, 4954, 138, 119, 102]] ``` This can then be converted into a tensor in PyTorch or TensorFlow. The attention mask is a binary tensor indicating the position of the padded indices so that the model does not attend to them. For the [`BertTokenizer`], `1` indicates a value that should be attended to, while `0` indicates a padded value. This attention mask is in the dictionary returned by the tokenizer under the key "attention_mask": ```python >>> padded_sequences["attention_mask"] [[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]] ```
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/glossary.md
.md
16_2
See [encoder models](#encoder-models) and [masked language modeling](#masked-language-modeling-mlm)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/glossary.md
.md
16_3
See [causal language modeling](#causal-language-modeling) and [decoder models](#decoder-models)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/glossary.md
.md
16_4
The backbone is the network (embeddings and layers) that outputs the raw hidden states or features. It is usually connected to a [head](#head) which accepts the features as its input to make a prediction. For example, [`ViTModel`] is a backbone without a specific head on top. Other models, such as [DPT](model_doc/dpt), can also use [`ViTModel`] as a backbone.
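As a rough illustration (the checkpoint and the random input are chosen only for the example), a backbone such as [`ViTModel`] returns features that a head could then turn into predictions:

```python
import torch
from transformers import ViTModel

# A backbone only returns hidden states / features; there is no task-specific head on top.
backbone = ViTModel.from_pretrained("google/vit-base-patch16-224-in21k")

pixel_values = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed image
features = backbone(pixel_values=pixel_values).last_hidden_state
print(features.shape)  # patch (+ [CLS]) features that a head could consume, e.g. [1, 197, 768]
```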
A pretraining task where the model reads the texts in order and has to predict the next word. It's usually done by reading the whole sentence but using a mask inside the model to hide the future tokens at a certain timestep.
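To make this concrete, here is a minimal sketch of such a causal mask (a framework-level illustration, not the implementation of any particular model): each position may attend to itself and to earlier positions only.

```python
import torch

seq_length = 5
# Lower-triangular mask: row i is True for positions 0..i and False for future positions,
# so the model cannot "see" the tokens to the right of the one it is predicting.
causal_mask = torch.tril(torch.ones(seq_length, seq_length, dtype=torch.bool))
print(causal_mask[2])  # tensor([ True,  True,  True, False, False])
```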
Color images are made up of some combination of values in three channels (red, green, and blue: RGB), while grayscale images only have one channel. In 🤗 Transformers, the channel can be the first or last dimension of an image's tensor: [`n_channels`, `height`, `width`] or [`height`, `width`, `n_channels`].
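As a small illustration (the array shape below is made up for the example), converting between the two layouts is just a transpose:

```python
import numpy as np

# A fake RGB image in channels-last format: [height, width, n_channels]
image_channels_last = np.random.rand(224, 224, 3)

# Move the channel dimension to the front: [n_channels, height, width]
image_channels_first = image_channels_last.transpose(2, 0, 1)
print(image_channels_first.shape)  # (3, 224, 224)
```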
An algorithm which allows a model to learn without knowing exactly how the input and output are aligned; CTC calculates the distribution of all possible outputs for a given input and chooses the most likely output from it. CTC is commonly used in speech recognition tasks because speech doesn't always cleanly align with the transcript for a variety of reasons such as a speaker's different speech rates.
A type of layer in a neural network where the input matrix is multiplied element-wise by a smaller matrix (kernel or filter) and the values are summed up in a new matrix. This is known as a convolutional operation which is repeated over the entire input matrix. Each operation is applied to a different segment of the input matrix. Convolutional neural networks (CNNs) are commonly used in computer vision.
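A toy NumPy sketch of the operation described above (values chosen purely for illustration): each 2x2 segment of the input is multiplied element-wise by the kernel and the results are summed.

```python
import numpy as np

input_matrix = np.array([[1, 2, 3],
                         [4, 5, 6],
                         [7, 8, 9]])
kernel = np.array([[1, 0],
                   [0, 1]])

# Slide the 2x2 kernel over every 2x2 segment of the 3x3 input
output = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        segment = input_matrix[i : i + 2, j : j + 2]
        output[i, j] = np.sum(segment * kernel)  # element-wise product, then sum

print(output)  # [[ 6.  8.]
               #  [12. 14.]]
```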
Parallelism technique for training on multiple GPUs where the same setup is replicated multiple times, with each instance receiving a distinct data slice. The processing is done in parallel and all setups are synchronized at the end of each training step. Learn more about how DataParallel works [here](perf_train_gpu_many#dataparallel-vs-distributeddataparallel).
This input is specific to encoder-decoder models, and contains the input IDs that will be fed to the decoder. These inputs should be used for sequence to sequence tasks, such as translation or summarization, and are usually built in a way specific to each model. Most encoder-decoder models (BART, T5) create their `decoder_input_ids` on their own from the `labels`. In such models, passing the `labels` is the preferred way to handle training. Please check each model's docs to see how they handle these input IDs for sequence to sequence training.
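For example, with an encoder-decoder model such as T5 (`google-t5/t5-small` is used below purely as an example checkpoint), passing `labels` is enough; the model builds its own `decoder_input_ids` internally and returns the loss:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google-t5/t5-small")

inputs = tokenizer("translate English to German: Hello, how are you?", return_tensors="pt")
labels = tokenizer("Hallo, wie geht es dir?", return_tensors="pt").input_ids

# No decoder_input_ids are passed: the model derives them from the labels
outputs = model(**inputs, labels=labels)
print(outputs.loss)
```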
Also referred to as autoregressive models, decoder models involve a pretraining task (called causal language modeling) where the model reads the texts in order and has to predict the next word. It's usually done by reading the whole sentence with a mask to hide future tokens at a certain timestep. <Youtube id="d_ixlCubqQw"/>
Machine learning algorithms which use neural networks with several layers.
Also known as autoencoding models, encoder models take an input (such as text or images) and transform them into a condensed numerical representation called an embedding. Oftentimes, encoder models are pretrained using techniques like [masked language modeling](#masked-language-modeling-mlm), which masks parts of the input sequence and forces the model to create more meaningful representations. <Youtube id="H39Z_720T5s"/>
The process of selecting and transforming raw data into a set of features that are more informative and useful for machine learning algorithms. Some examples of feature extraction include transforming raw text into word embeddings and extracting important features such as edges or shapes from image/video data.
In each residual attention block in transformers, the self-attention layer is usually followed by 2 feed forward layers. The intermediate embedding size of the feed forward layers is often bigger than the hidden size of the model (e.g., 3072 vs. 768 for `google-bert/bert-base-uncased`).

For an input of size `[batch_size, sequence_length]`, the memory required to store the intermediate feed forward embeddings `[batch_size, sequence_length, config.intermediate_size]` can account for a large fraction of the memory use. The authors of [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) noticed that since the computation is independent of the `sequence_length` dimension, it is mathematically equivalent to compute the output embeddings of both feed forward layers `[batch_size, config.hidden_size]_0, ..., [batch_size, config.hidden_size]_n` individually and concatenate them afterward to `[batch_size, sequence_length, config.hidden_size]` with `n = sequence_length`. This trades increased computation time against reduced memory use, but yields a mathematically **equivalent** result.

For models employing the function [`apply_chunking_to_forward`], the `chunk_size` defines the number of output embeddings that are computed in parallel and thus defines the trade-off between memory and time complexity. If `chunk_size` is set to 0, no feed forward chunking is done.
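A rough sketch of how this could be used (the import path and the chunk size below are illustrative and may differ between library versions; check the API reference for [`apply_chunking_to_forward`]):

```python
import torch
from transformers.pytorch_utils import apply_chunking_to_forward

hidden_size, intermediate_size = 768, 3072
dense_in = torch.nn.Linear(hidden_size, intermediate_size)
dense_out = torch.nn.Linear(intermediate_size, hidden_size)


def feed_forward(hidden_states):
    return dense_out(torch.nn.functional.gelu(dense_in(hidden_states)))


hidden_states = torch.randn(2, 128, hidden_size)  # [batch_size, sequence_length, hidden_size]

# chunk_size=32 splits the sequence dimension (dim 1) into chunks of 32 positions,
# running the feed forward layers on one chunk at a time; chunk_size=0 would disable chunking.
output = apply_chunking_to_forward(feed_forward, 32, 1, hidden_states)
print(output.shape)  # torch.Size([2, 128, 768])
```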
Finetuning is a form of transfer learning which involves taking a pretrained model, freezing its weights, and replacing the output layer with a newly added [model head](#head). The model head is trained on your target dataset. See the [Fine-tune a pretrained model](https://huggingface.co/docs/transformers/training) tutorial for more details, and learn how to fine-tune models with 🤗 Transformers.
The model head refers to the last layer of a neural network that accepts the raw hidden states and projects them onto a different dimension. There is a different model head for each task. For example:

  * [`GPT2ForSequenceClassification`] is a sequence classification head - a linear layer - on top of the base [`GPT2Model`].
  * [`ViTForImageClassification`] is an image classification head - a linear layer on top of the final hidden state of the `CLS` token - on top of the base [`ViTModel`].
  * [`Wav2Vec2ForCTC`] is a language modeling head with [CTC](#connectionist-temporal-classification-ctc) on top of the base [`Wav2Vec2Model`].
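As an illustration (the checkpoint name is only an example), the same checkpoint can be loaded with or without a task head:

```python
from transformers import AutoModel, AutoModelForSequenceClassification

# Base model: outputs raw hidden states, no task head
base_model = AutoModel.from_pretrained("google-bert/bert-base-cased")

# Same backbone with a sequence classification head (a linear layer) on top
classifier = AutoModelForSequenceClassification.from_pretrained(
    "google-bert/bert-base-cased", num_labels=2
)
```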
Vision-based Transformers models split an image into smaller patches which are linearly embedded, and then passed as a sequence to the model. You can find the `patch_size` - or resolution - of the model in its configuration.
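For instance, for a ViT checkpoint such as `google/vit-base-patch16-224` (used here only as an illustration), the patch size can be read from the configuration:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("google/vit-base-patch16-224")
print(config.patch_size)  # 16: each image is split into 16x16 pixel patches
print(config.image_size)  # 224
```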
Inference is the process of evaluating a model on new data after training is complete. See the [Pipeline for inference](https://huggingface.co/docs/transformers/pipeline_tutorial) tutorial to learn how to perform inference with 🤗 Transformers.
The input ids are often the only required parameters to be passed to the model as input. They are token indices, numerical representations of tokens building the sequences that will be used as input by the model.

<Youtube id="VFp38yj8h3A"/>

Each tokenizer works differently but the underlying mechanism remains the same. Here's an example using the BERT tokenizer, which is a [WordPiece](https://arxiv.org/pdf/1609.08144.pdf) tokenizer:

```python
>>> from transformers import BertTokenizer

>>> tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-cased")

>>> sequence = "A Titan RTX has 24GB of VRAM"
```

The tokenizer takes care of splitting the sequence into tokens available in the tokenizer vocabulary.

```python
>>> tokenized_sequence = tokenizer.tokenize(sequence)
```

The tokens are either words or subwords. Here for instance, "VRAM" wasn't in the model vocabulary, so it's been split in "V", "RA" and "M". To indicate those tokens are not separate words but parts of the same word, a double-hash prefix is added for "RA" and "M":

```python
>>> print(tokenized_sequence)
['A', 'Titan', 'R', '##T', '##X', 'has', '24', '##GB', 'of', 'V', '##RA', '##M']
```

These tokens can then be converted into IDs which are understandable by the model. This can be done by directly feeding the sentence to the tokenizer, which leverages the Rust implementation of [🤗 Tokenizers](https://github.com/huggingface/tokenizers) for peak performance.

```python
>>> inputs = tokenizer(sequence)
```

The tokenizer returns a dictionary with all the arguments necessary for its corresponding model to work properly. The token indices are under the key `input_ids`:

```python
>>> encoded_sequence = inputs["input_ids"]
>>> print(encoded_sequence)
[101, 138, 18696, 155, 1942, 3190, 1144, 1572, 13745, 1104, 159, 9664, 2107, 102]
```

Note that the tokenizer automatically adds "special tokens" (if the associated model relies on them) which are special IDs the model sometimes uses.

If we decode the previous sequence of ids,

```python
>>> decoded_sequence = tokenizer.decode(encoded_sequence)
```

we will see

```python
>>> print(decoded_sequence)
[CLS] A Titan RTX has 24GB of VRAM [SEP]
```

because this is the way a [`BertModel`] is going to expect its inputs.
The labels are an optional argument which can be passed in order for the model to compute the loss itself. These labels should be the expected prediction of the model: it will use the standard loss in order to compute the loss between its predictions and the expected value (the label).

These labels are different according to the model head, for example:

- For sequence classification models, ([`BertForSequenceClassification`]), the model expects a tensor of dimension `(batch_size)` with each value of the batch corresponding to the expected label of the entire sequence.
- For token classification models, ([`BertForTokenClassification`]), the model expects a tensor of dimension `(batch_size, seq_length)` with each value corresponding to the expected label of each individual token.
- For masked language modeling, ([`BertForMaskedLM`]), the model expects a tensor of dimension `(batch_size, seq_length)` with each value corresponding to the expected label of each individual token: the labels being the token ID for the masked token, and values to be ignored for the rest (usually -100).
- For sequence to sequence tasks, ([`BartForConditionalGeneration`], [`MBartForConditionalGeneration`]), the model expects a tensor of dimension `(batch_size, tgt_seq_length)` with each value corresponding to the target sequences associated with each input sequence. During training, both BART and T5 will make the appropriate `decoder_input_ids` and decoder attention masks internally. They usually do not need to be supplied. This does not apply to models leveraging the Encoder-Decoder framework.
- For image classification models, ([`ViTForImageClassification`]), the model expects a tensor of dimension `(batch_size)` with each value of the batch corresponding to the expected label of each individual image.
- For semantic segmentation models, ([`SegformerForSemanticSegmentation`]), the model expects a tensor of dimension `(batch_size, height, width)` with each value of the batch corresponding to the expected label of each individual pixel.
- For object detection models, ([`DetrForObjectDetection`]), the model expects a list of dictionaries with a `class_labels` and `boxes` key where each value of the batch corresponds to the expected label and number of bounding boxes of each individual image.
- For automatic speech recognition models, ([`Wav2Vec2ForCTC`]), the model expects a tensor of dimension `(batch_size, target_length)` with each value corresponding to the expected label of each individual token.

<Tip>

Each model's labels may be different, so be sure to always check the documentation of each model for more information about their specific labels!

</Tip>

The base models ([`BertModel`]) do not accept labels, as these are the base transformer models, simply outputting features.
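For the task-specific heads that do accept labels, passing them makes the model compute and return the loss. A minimal sketch (the checkpoint is chosen only as an example):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased")
model = AutoModelForSequenceClassification.from_pretrained("google-bert/bert-base-cased", num_labels=2)

inputs = tokenizer("This is a short sequence.", return_tensors="pt")
labels = torch.tensor([1])  # one expected label per sequence in the batch

outputs = model(**inputs, labels=labels)
print(outputs.loss)  # the model computes the loss itself because labels were passed
```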
A generic term that refers to transformer language models (GPT-3, BLOOM, OPT) that were trained on a large quantity of data. These models also tend to have a large number of learnable parameters (e.g. 175 billion for GPT-3).
A pretraining task where the model sees a corrupted version of the texts, usually done by masking some tokens randomly, and has to predict the original text.
A task that combines texts with another kind of input (for instance, images).
All tasks related to generating text (for instance, [Write With Transformers](https://transformer.huggingface.co/), translation).
A generic way to say "deal with texts".
All tasks related to understanding what is in a text (for instance classifying the whole text, individual words).
A pipeline in 🤗 Transformers is an abstraction referring to a series of steps that are executed in a specific order to preprocess and transform data and return a prediction from a model. Some example stages found in a pipeline might be data preprocessing, feature extraction, and normalization. For more details, see [Pipelines for inference](https://huggingface.co/docs/transformers/pipeline_tutorial).
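For example, a text classification pipeline can be created and used in two lines (the exact model and score depend on the default checkpoint that is downloaded):

```python
from transformers import pipeline

# Preprocessing (tokenization), the model forward pass, and postprocessing are bundled together
classifier = pipeline("sentiment-analysis")
print(classifier("This glossary is really helpful!"))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```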
Parallelism technique in which the model is split up vertically (layer-level) across multiple GPUs, so that only one or several layers of the model are placed on a single GPU. Each GPU processes a different stage of the pipeline in parallel, working on a small chunk of the batch. Learn more about how PipelineParallel works [here](perf_train_gpu_many#from-naive-model-parallelism-to-pipeline-parallelism).
A tensor of the numerical representations of an image that is passed to a model. The pixel values have a shape of [`batch_size`, `num_channels`, `height`, `width`], and are generated from an image processor.
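For example (the checkpoint and image URL below are just commonly used illustrations), an image processor turns a PIL image into pixel values:

```python
import requests
from PIL import Image
from transformers import AutoImageProcessor

image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = image_processor(image, return_tensors="pt")
print(inputs["pixel_values"].shape)  # torch.Size([1, 3, 224, 224]) with this checkpoint's settings
```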
An operation that reduces a matrix into a smaller matrix, either by taking the maximum or average of the pooled dimension(s). Pooling layers are commonly found between convolutional layers to downsample the feature representation.
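A small PyTorch illustration (shapes and values are arbitrary): a 2x2 pooling window downsamples a 4x4 feature map to 2x2.

```python
import torch

feature_map = torch.arange(16.0).reshape(1, 1, 4, 4)  # [batch_size, channels, height, width]

max_pool = torch.nn.MaxPool2d(kernel_size=2)  # keeps the maximum of each 2x2 window
avg_pool = torch.nn.AvgPool2d(kernel_size=2)  # keeps the average of each 2x2 window

pooled = max_pool(feature_map)
print(pooled.shape)  # torch.Size([1, 1, 2, 2]): the 4x4 map is downsampled to 2x2
print(pooled)        # the maxima of the four windows: 5., 7., 13., 15.
print(avg_pool(feature_map))  # the averages: 2.5, 4.5, 10.5, 12.5
```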
Contrary to RNNs that have the position of each token embedded within them, transformers are unaware of the position of each token. Therefore, the position IDs (`position_ids`) are used by the model to identify each token's position in the list of tokens. They are an optional parameter. If no `position_ids` are passed to the model, the IDs are automatically created as absolute positional embeddings. Absolute positional embeddings are selected in the range `[0, config.max_position_embeddings - 1]`. Some models use other types of positional embeddings, such as sinusoidal position embeddings or relative position embeddings.
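As a small sketch of that default behavior (the exact tensors a given model builds internally may differ), absolute position IDs are simply a range over the sequence length, batched:

```python
import torch

seq_length = 6
# One row of position indices per sequence in the batch
position_ids = torch.arange(seq_length).unsqueeze(0)
print(position_ids)  # tensor([[0, 1, 2, 3, 4, 5]])
```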
The task of preparing raw data into a format that can be easily consumed by machine learning models. For example, text is typically preprocessed by tokenization. To gain a better idea of what preprocessing looks like for other input types, check out the [Preprocess](https://huggingface.co/docs/transformers/preprocessing) tutorial.
A model that has been pretrained on some data (for instance all of Wikipedia). Pretraining methods involve a self-supervised objective, which can be reading the text and trying to predict the next word (see [causal language modeling](#causal-language-modeling)) or masking some words and trying to predict them (see [masked language modeling](#masked-language-modeling-mlm)). Speech and vision models have their own pretraining objectives. For example, Wav2Vec2 is a speech model pretrained on a contrastive task which requires the model to identify the "true" speech representation from a set of "false" speech representations. On the other hand, BEiT is a vision model pretrained on a masked image modeling task which masks some of the image patches and requires the model to predict the masked patches (similar to the masked language modeling objective).
A type of model that uses a loop over a layer to process texts.