```python
Raised by `hf_raise_for_status` when the server returns an HTTP 400 error.
Example:
```py
>>> resp = requests.post("hf.co/api/check", ...)
>>> hf_raise_for_status(resp, endpoint_name="check")
huggingface_hub.utils._errors.BadRequestError: Bad request for check endpoint: {details} (Request ID: XXX)
```
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/utilities.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/utilities/#huggingfacehubutilsbadrequesterror | #huggingfacehubutilsbadrequesterror | .md | 15_38 |
```python
Raised when trying to access a file or snapshot that is not on disk while the network is
disabled or unavailable (connection issue). The entry may exist on the Hub.
Note: the `ValueError` type is to ensure backward compatibility.
Note: `LocalEntryNotFoundError` derives from `HTTPError` (through `EntryNotFoundError`)
even when the failure is not a network issue.
Example:
```py
>>> from huggingface_hub import hf_hub_download
>>> hf_hub_download('bert-base-cased', '<non-cached-file>', local_files_only=True)
(...)
huggingface_hub.utils._errors.LocalEntryNotFoundError: Cannot find the requested files in the disk cache and outgoing traffic has been disabled. To enable hf.co look-ups and downloads online, set 'local_files_only' to False.
```
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/utilities.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/utilities/#huggingfacehubutilslocalentrynotfounderror | #huggingfacehubutilslocalentrynotfounderror | .md | 15_39 |
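The two notes above describe a multiple-inheritance design: the error stays catchable both as an `HTTPError` and as a `ValueError`. A minimal sketch of the idea with stand-in classes (this is not the library's actual code; `HTTPError` below is a placeholder for `requests.HTTPError`):

```python
class HTTPError(Exception):  # stand-in for requests.HTTPError
    pass

class EntryNotFoundError(HTTPError):
    pass

class LocalEntryNotFoundError(EntryNotFoundError, FileNotFoundError, ValueError):
    """Derives from HTTPError via EntryNotFoundError, and from
    FileNotFoundError/ValueError for backward compatibility."""

err = LocalEntryNotFoundError("not in cache")
print(isinstance(err, HTTPError), isinstance(err, ValueError))  # True True
```

Because of this hierarchy, older code written with `except ValueError:` or `except FileNotFoundError:` keeps working unchanged.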
```python
Raised when a request is made but `HF_HUB_OFFLINE=1` is set as environment variable.
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/utilities.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/utilities/#huggingfacehubutilsofflinemodeisenabled | #huggingfacehubutilsofflinemodeisenabled | .md | 15_40 |
`huggingface_hub` includes a helper to send telemetry data. This information helps us debug issues and prioritize new features.
Users can disable telemetry collection at any time by setting the `HF_HUB_DISABLE_TELEMETRY=1` environment variable.
Telemetry is also disabled in offline mode (i.e. when setting `HF_HUB_OFFLINE=1`).
If you are the maintainer of a third-party library, sending telemetry data is as simple as making a call to [`send_telemetry`].
Data is sent in a separate thread to minimize the impact on users. | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/utilities.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/utilities/#telemetry | #telemetry | .md | 15_41 |
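The "separate thread" behavior described above is a fire-and-forget pattern. Here is a simplified sketch of it (the `_send_in_background` helper and the trimmed signature are assumptions for illustration; only the environment variables and the `topic`/`library_name` parameters mirror the real helper):

```python
import os
import threading
from typing import Optional

def send_telemetry(topic: str, library_name: Optional[str] = None) -> None:
    """Fire-and-forget sketch: the caller is never blocked on network I/O."""
    if os.environ.get("HF_HUB_DISABLE_TELEMETRY") == "1" or os.environ.get("HF_HUB_OFFLINE") == "1":
        return  # telemetry disabled: no thread is started, nothing is sent
    # Daemon thread so a pending send never keeps the interpreter alive.
    threading.Thread(target=_send_in_background, args=(topic, library_name), daemon=True).start()

def _send_in_background(topic: str, library_name: Optional[str]) -> None:
    ...  # the real helper performs a small HTTP request here (hypothetical placeholder)

send_telemetry("examples", library_name="my-lib")  # returns immediately
```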
[[autodoc]] utils.send_telemetry | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/utilities.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/utilities/#utilssendtelemetry | #utilssendtelemetry | .md | 15_42 |
`huggingface_hub` includes custom validators to validate method arguments automatically.
Validation is inspired by the work done in [Pydantic](https://pydantic-docs.helpmanual.io/)
to validate type hints but with more limited features. | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/utilities.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/utilities/#validators | #validators | .md | 15_43 |
[`~utils.validate_hf_hub_args`] is a generic decorator that wraps
methods whose arguments follow `huggingface_hub`'s naming. By default, every
argument that has a validator implemented is validated.
If an input is not valid, an [`~utils.HFValidationError`] is raised. Only
the first invalid value raises an error and stops the validation process.
Usage:
```py
>>> from huggingface_hub.utils import validate_hf_hub_args
>>> @validate_hf_hub_args
... def my_cool_method(repo_id: str):
... print(repo_id)
>>> my_cool_method(repo_id="valid_repo_id")
valid_repo_id
>>> my_cool_method("other..repo..id")
huggingface_hub.utils._validators.HFValidationError: Cannot have -- or .. in repo_id: 'other..repo..id'.
>>> my_cool_method(repo_id="other..repo..id")
huggingface_hub.utils._validators.HFValidationError: Cannot have -- or .. in repo_id: 'other..repo..id'.
>>> @validate_hf_hub_args
... def my_cool_auth_method(token: str):
... print(token)
>>> my_cool_auth_method(token="a token")
"a token"
>>> my_cool_auth_method(use_auth_token="a use_auth_token")
"a use_auth_token"
>>> my_cool_auth_method(token="a token", use_auth_token="a use_auth_token")
UserWarning: Both `token` and `use_auth_token` are passed (...). `use_auth_token` value will be ignored.
"a token"
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/utilities.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/utilities/#generic-decorator | #generic-decorator | .md | 15_44 |
[[autodoc]] utils.validate_hf_hub_args | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/utilities.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/utilities/#utilsvalidatehfhubargs | #utilsvalidatehfhubargs | .md | 15_45 |
[[autodoc]] utils.HFValidationError | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/utilities.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/utilities/#utilshfvalidationerror | #utilshfvalidationerror | .md | 15_46 |
Validators can also be used individually. Here is a list of all arguments that can be
validated. | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/utilities.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/utilities/#argument-validators | #argument-validators | .md | 15_47 |
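To illustrate the kind of rule a repo-id validator enforces, here is a hedged, simplified re-implementation. The real `validate_repo_id` has more rules; the regex and limits below are assumptions for this sketch, not the library's exact pattern:

```python
import re

class HFValidationError(ValueError):  # stand-in for huggingface_hub.utils.HFValidationError
    pass

def validate_repo_id(repo_id: str) -> None:
    """Simplified checks: at most one '/', a restricted alphabet,
    no '..' or '--' sequences, length capped at 96 characters."""
    if repo_id.count("/") > 1:
        raise HFValidationError(
            f"Repo id must be in the form 'repo_name' or 'namespace/repo_name': '{repo_id}'."
        )
    if not re.fullmatch(r"[A-Za-z0-9][A-Za-z0-9._/-]{0,95}", repo_id):
        raise HFValidationError(f"Repo id contains invalid characters: '{repo_id}'.")
    if ".." in repo_id or "--" in repo_id:
        raise HFValidationError(f"Cannot have -- or .. in repo_id: '{repo_id}'.")

validate_repo_id("bert-base-cased")  # passes silently
```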
[[autodoc]] utils.validate_repo_id | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/utilities.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/utilities/#utilsvalidaterepoid | #utilsvalidaterepoid | .md | 15_48 |
Not exactly a validator, but run as well. | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/utilities.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/utilities/#smoothlydeprecateuseauthtoken | #smoothlydeprecateuseauthtoken | .md | 15_49 |
[[autodoc]] utils.smoothly_deprecate_use_auth_token | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/utilities.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/utilities/#utilssmoothlydeprecateuseauthtoken | #utilssmoothlydeprecateuseauthtoken | .md | 15_50 |
<!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/space_runtime.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/space_runtime/ | .md | 16_0 |
|
Check the [`HfApi`] documentation page for the reference of methods to manage your Space on the Hub.
- Duplicate a Space: [`duplicate_space`]
- Fetch current runtime: [`get_space_runtime`]
- Manage secrets: [`add_space_secret`] and [`delete_space_secret`]
- Manage hardware: [`request_space_hardware`]
- Manage state: [`pause_space`], [`restart_space`], [`set_space_sleep_time`] | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/space_runtime.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/space_runtime/#managing-your-space-runtime | #managing-your-space-runtime | .md | 16_1 |
```python
Contains information about the current runtime of a Space.
Args:
stage (`str`):
Current stage of the space. Example: RUNNING.
hardware (`str` or `None`):
Current hardware of the space. Example: "cpu-basic". Can be `None` if Space
is `BUILDING` for the first time.
requested_hardware (`str` or `None`):
Requested hardware. Can be different from `hardware`, especially if the request
has just been made. Example: "t4-medium". Can be `None` if no hardware has
been requested yet.
sleep_time (`int` or `None`):
Number of seconds the Space will be kept alive after the last request. By default (if value is `None`), the
Space will never go to sleep if it's running on an upgraded hardware, while it will go to sleep after 48
hours on a free 'cpu-basic' hardware. For more details, see https://huggingface.co/docs/hub/spaces-gpus#sleep-time.
raw (`dict`):
Raw response from the server. Contains more information about the Space
runtime, such as the number of replicas, number of CPUs, and memory size.
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/space_runtime.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/space_runtime/#spaceruntime | #spaceruntime | .md | 16_2 |
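A sketch of how such an object could be populated from the raw server response. The attribute names follow the docstring above, but the parsing code and the raw key names (`requestedHardware`, `gcTimeout`) are assumptions for illustration, not the library's actual implementation:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SpaceRuntime:
    stage: str
    hardware: Optional[str]
    requested_hardware: Optional[str]
    sleep_time: Optional[int]
    raw: dict = field(default_factory=dict)

    @classmethod
    def from_raw(cls, data: dict) -> "SpaceRuntime":
        # Keep the full payload in `raw` so callers can access extra fields
        # (replicas, cpu, memory, ...) that are not modeled as attributes.
        return cls(
            stage=data["stage"],
            hardware=data.get("hardware"),
            requested_hardware=data.get("requestedHardware"),  # hypothetical key
            sleep_time=data.get("gcTimeout"),                  # hypothetical key
            raw=data,
        )

runtime = SpaceRuntime.from_raw({"stage": "RUNNING", "hardware": "cpu-basic"})
print(runtime.stage, runtime.hardware)  # RUNNING cpu-basic
```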
```python
Enumeration of hardware options available to run your Space on the Hub.
Value can be compared to a string:
```py
assert SpaceHardware.CPU_BASIC == "cpu-basic"
```
Taken from https://github.com/huggingface/moon-landing/blob/main/server/repo_types/SpaceInfo.ts#L73 (private url).
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/space_runtime.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/space_runtime/#spacehardware | #spacehardware | .md | 16_3 |
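The string comparison shown above works because these enums inherit from `str`. A minimal sketch of the pattern (only a subset of the real hardware values is shown):

```python
from enum import Enum

class SpaceHardware(str, Enum):
    # Inheriting from str first makes members compare equal to plain strings.
    CPU_BASIC = "cpu-basic"
    CPU_UPGRADE = "cpu-upgrade"
    T4_SMALL = "t4-small"

assert SpaceHardware.CPU_BASIC == "cpu-basic"
assert SpaceHardware("t4-small") is SpaceHardware.T4_SMALL
```

The same `str`-mixin trick applies to `SpaceStage` and `SpaceStorage` below, which is why their docstrings show identical string comparisons.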
```python
Enumeration of the possible stages of a Space on the Hub.
Value can be compared to a string:
```py
assert SpaceStage.BUILDING == "BUILDING"
```
Taken from https://github.com/huggingface/moon-landing/blob/main/server/repo_types/SpaceInfo.ts#L61 (private url).
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/space_runtime.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/space_runtime/#spacestage | #spacestage | .md | 16_4 |
```python
Enumeration of persistent storage available for your Space on the Hub.
Value can be compared to a string:
```py
assert SpaceStorage.SMALL == "small"
```
Taken from https://github.com/huggingface/moon-landing/blob/main/server/repo_types/SpaceHardwareFlavor.ts#L24 (private url).
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/space_runtime.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/space_runtime/#spacestorage | #spacestorage | .md | 16_5 |
```python
Contains information about the current variables of a Space.
Args:
key (`str`):
Variable key. Example: `"MODEL_REPO_ID"`
value (`str`):
Variable value. Example: `"the_model_repo_id"`.
description (`str` or `None`):
Description of the variable. Example: `"Model Repo ID of the implemented model"`.
updatedAt (`datetime` or `None`):
Datetime of the last update of the variable (if the variable has been updated at least once).
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/space_runtime.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/space_runtime/#spacevariable | #spacevariable | .md | 16_6 |
<!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/webhooks_server.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/webhooks_server/ | .md | 17_0 |
|
Webhooks are a foundation for MLOps-related features. They allow you to listen for new changes on specific repos or to
all repos belonging to particular users/organizations you're interested in following. To learn
more about webhooks on the Hugging Face Hub, you can read the Webhooks [guide](https://huggingface.co/docs/hub/webhooks).
<Tip>
Check out this [guide](../guides/webhooks_server) for a step-by-step tutorial on how to set up your webhooks server and
deploy it as a Space.
</Tip>
<Tip warning={true}>
This is an experimental feature. This means that we are still working on improving the API. Breaking changes might be
introduced in the future without prior notice. Make sure to pin the version of `huggingface_hub` in your requirements.
A warning is triggered when you use an experimental feature. You can disable it by setting `HF_HUB_DISABLE_EXPERIMENTAL_WARNING=1` as an environment variable.
</Tip> | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/webhooks_server.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/webhooks_server/#webhooks-server | #webhooks-server | .md | 17_1 |
The server is a [Gradio](https://gradio.app/) app. It has a UI to display instructions for you or your users and an API
to listen to webhooks. Implementing a webhook endpoint is as simple as decorating a function. You can then debug it
by redirecting the Webhooks to your machine (using a Gradio tunnel) before deploying it to a Space. | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/webhooks_server.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/webhooks_server/#server | #server | .md | 17_2 |
```python
The [`WebhooksServer`] class lets you create an instance of a Gradio app that can receive Hugging Face webhooks.
These webhooks can be registered using the [`~WebhooksServer.add_webhook`] decorator. Webhook endpoints are added to
the app as a POST endpoint to the FastAPI router. Once all the webhooks are registered, the `launch` method has to be
called to start the app.
It is recommended to accept [`WebhookPayload`] as the first argument of the webhook function. It is a Pydantic
model that contains all the information about the webhook event. The data will be parsed automatically for you.
Check out the [webhooks guide](../guides/webhooks_server) for a step-by-step tutorial on how to set up your
WebhooksServer and deploy it on a Space.
<Tip warning={true}>
`WebhooksServer` is experimental. Its API is subject to change in the future.
</Tip>
<Tip warning={true}>
You must have `gradio` installed to use `WebhooksServer` (`pip install --upgrade gradio`).
</Tip>
Args:
ui (`gradio.Blocks`, optional):
A Gradio UI instance to be used as the Space landing page. If `None`, a UI displaying instructions
about the configured webhooks is created.
webhook_secret (`str`, optional):
A secret key to verify incoming webhook requests. You can set this value to any secret you want as long as
you also configure it in your [webhooks settings panel](https://huggingface.co/settings/webhooks). You
can also set this value as the `WEBHOOK_SECRET` environment variable. If no secret is provided, the
webhook endpoints are opened without any security.
Example:
```python
import gradio as gr
from huggingface_hub import WebhooksServer, WebhookPayload
with gr.Blocks() as ui:
...
app = WebhooksServer(ui=ui, webhook_secret="my_secret_key")
@app.add_webhook("/say_hello")
async def hello(payload: WebhookPayload):
return {"message": "hello"}
app.launch()
```
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/webhooks_server.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/webhooks_server/#huggingfacehubwebhooksserver | #huggingfacehubwebhooksserver | .md | 17_3 |
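The `webhook_secret` described above is verified against a header on each incoming request (the Hub sends it as `X-Webhook-Secret`). A hedged sketch of such a check, not the library's actual code, using a constant-time comparison:

```python
import hmac

def check_webhook_secret(header_secret: str, expected_secret: str) -> bool:
    # Compare in constant time to avoid leaking information via timing.
    return hmac.compare_digest(header_secret.encode(), expected_secret.encode())

assert check_webhook_secret("my_secret_key", "my_secret_key")
assert not check_webhook_secret("wrong_guess", "my_secret_key")
```

A plain `==` comparison would also work functionally; `hmac.compare_digest` is the idiomatic choice whenever secrets are compared.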
```python
Decorator to start a [`WebhooksServer`] and register the decorated function as a webhook endpoint.
This is a helper to get started quickly. If you need more flexibility (custom landing page or webhook secret),
you can use [`WebhooksServer`] directly. You can register multiple webhook endpoints (to the same server) by using
this decorator multiple times.
Check out the [webhooks guide](../guides/webhooks_server) for a step-by-step tutorial on how to set up your
server and deploy it on a Space.
<Tip warning={true}>
`webhook_endpoint` is experimental. Its API is subject to change in the future.
</Tip>
<Tip warning={true}>
You must have `gradio` installed to use `webhook_endpoint` (`pip install --upgrade gradio`).
</Tip>
Args:
path (`str`, optional):
The URL path to register the webhook function. If not provided, the function name will be used as the path.
In any case, all webhooks are registered under `/webhooks`.
Examples:
The default usage is to register a function as a webhook endpoint. The function name will be used as the path.
The server will be started automatically at exit (i.e. at the end of the script).
```python
from huggingface_hub import webhook_endpoint, WebhookPayload
@webhook_endpoint
async def trigger_training(payload: WebhookPayload):
if payload.repo.type == "dataset" and payload.event.action == "update": | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/webhooks_server.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/webhooks_server/#huggingfacehubwebhookendpoint | #huggingfacehubwebhookendpoint | .md | 17_4 |
... | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/webhooks_server.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/webhooks_server/#trigger-a-training-job-if-a-dataset-is-updated | #trigger-a-training-job-if-a-dataset-is-updated | .md | 17_5 |
```
Advanced usage: register a function as a webhook endpoint and start the server manually. This is useful if you
are running it in a notebook.
```python
from huggingface_hub import webhook_endpoint, WebhookPayload
@webhook_endpoint
async def trigger_training(payload: WebhookPayload):
if payload.repo.type == "dataset" and payload.event.action == "update": | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/webhooks_server.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/webhooks_server/#server-is-automatically-started-at-the-end-of-the-script | #server-is-automatically-started-at-the-end-of-the-script | .md | 17_6 |
... | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/webhooks_server.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/webhooks_server/#trigger-a-training-job-if-a-dataset-is-updated | #trigger-a-training-job-if-a-dataset-is-updated | .md | 17_7 |
trigger_training.launch()
```
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/webhooks_server.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/webhooks_server/#start-the-server-manually | #start-the-server-manually | .md | 17_8 |
[`WebhookPayload`] is the main data structure that contains the payload from Webhooks. This is
a `pydantic` class which makes it very easy to use with FastAPI. If you pass it as a parameter to a webhook endpoint, it
will be automatically validated and parsed as a Python object.
For more information about webhooks payload, you can refer to the Webhooks Payload [guide](https://huggingface.co/docs/hub/webhooks#webhook-payloads). | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/webhooks_server.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/webhooks_server/#payload | #payload | .md | 17_9 |
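To illustrate the nesting that the payload classes model, here is a simplified stdlib sketch. The real classes are Pydantic models with more fields; the field names below follow the documented payload shape but this code is an illustration, not the library's implementation:

```python
from dataclasses import dataclass

@dataclass
class WebhookPayloadRepo:
    type: str
    name: str

@dataclass
class WebhookPayloadEvent:
    action: str
    scope: str

@dataclass
class WebhookPayload:
    event: WebhookPayloadEvent
    repo: WebhookPayloadRepo

# Parse a (simplified) raw webhook body into typed objects.
raw = {
    "event": {"action": "update", "scope": "repo.content"},
    "repo": {"type": "dataset", "name": "username/my-dataset"},
}
payload = WebhookPayload(
    event=WebhookPayloadEvent(**raw["event"]),
    repo=WebhookPayloadRepo(**raw["repo"]),
)
print(payload.repo.type, payload.event.action)  # dataset update
```

With the real Pydantic models, this parsing and validation happens automatically when FastAPI injects the payload into your endpoint function.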
[[autodoc]] huggingface_hub.WebhookPayload | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/webhooks_server.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/webhooks_server/#huggingfacehubwebhookpayload | #huggingfacehubwebhookpayload | .md | 17_10 |
[[autodoc]] huggingface_hub.WebhookPayloadComment | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/webhooks_server.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/webhooks_server/#huggingfacehubwebhookpayloadcomment | #huggingfacehubwebhookpayloadcomment | .md | 17_11 |
[[autodoc]] huggingface_hub.WebhookPayloadDiscussion | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/webhooks_server.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/webhooks_server/#huggingfacehubwebhookpayloaddiscussion | #huggingfacehubwebhookpayloaddiscussion | .md | 17_12 |
[[autodoc]] huggingface_hub.WebhookPayloadDiscussionChanges | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/webhooks_server.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/webhooks_server/#huggingfacehubwebhookpayloaddiscussionchanges | #huggingfacehubwebhookpayloaddiscussionchanges | .md | 17_13 |
[[autodoc]] huggingface_hub.WebhookPayloadEvent | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/webhooks_server.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/webhooks_server/#huggingfacehubwebhookpayloadevent | #huggingfacehubwebhookpayloadevent | .md | 17_14 |
[[autodoc]] huggingface_hub.WebhookPayloadMovedTo | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/webhooks_server.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/webhooks_server/#huggingfacehubwebhookpayloadmovedto | #huggingfacehubwebhookpayloadmovedto | .md | 17_15 |
[[autodoc]] huggingface_hub.WebhookPayloadRepo | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/webhooks_server.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/webhooks_server/#huggingfacehubwebhookpayloadrepo | #huggingfacehubwebhookpayloadrepo | .md | 17_16 |
[[autodoc]] huggingface_hub.WebhookPayloadUrl | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/webhooks_server.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/webhooks_server/#huggingfacehubwebhookpayloadurl | #huggingfacehubwebhookpayloadurl | .md | 17_17 |
[[autodoc]] huggingface_hub.WebhookPayloadWebhook | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/webhooks_server.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/webhooks_server/#huggingfacehubwebhookpayloadwebhook | #huggingfacehubwebhookpayloadwebhook | .md | 17_18 |
<!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/ | .md | 18_0 |
|
The `huggingface_hub` library offers a range of mixins that can be used as a parent class for your objects, in order to
provide simple uploading and downloading functions. Check out our [integration guide](../guides/integrations) to learn
how to integrate any ML framework with the Hub. | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#mixins | #mixins | .md | 18_1 |
```python
A generic mixin to integrate ANY machine learning framework with the Hub.
To integrate your framework, your model class must inherit from this class. Custom logic for saving/loading models
has to be implemented in [`_from_pretrained`] and [`_save_pretrained`]. [`PyTorchModelHubMixin`] is a good example
of mixin integration with the Hub. Check out our [integration guide](../guides/integrations) for more instructions.
When inheriting from [`ModelHubMixin`], you can define class-level attributes. These attributes are not passed to
`__init__` but to the class definition itself. This is useful to define metadata about the library integrating
[`ModelHubMixin`].
For more details on how to integrate the mixin with your library, check out the [integration guide](../guides/integrations).
Args:
repo_url (`str`, *optional*):
URL of the library repository. Used to generate model card.
docs_url (`str`, *optional*):
URL of the library documentation. Used to generate model card.
model_card_template (`str`, *optional*):
Template of the model card. Used to generate model card. Defaults to a generic template.
language (`str` or `List[str]`, *optional*):
Language supported by the library. Used to generate model card.
library_name (`str`, *optional*):
Name of the library integrating ModelHubMixin. Used to generate model card.
license (`str`, *optional*):
License of the library integrating ModelHubMixin. Used to generate model card.
E.g: "apache-2.0"
license_name (`str`, *optional*):
Name of the license. Used to generate model card.
Only used if `license` is set to `other`.
E.g: "coqui-public-model-license".
license_link (`str`, *optional*):
URL to the license of the library integrating ModelHubMixin. Used to generate model card.
Only used if `license` is set to `other` and `license_name` is set.
E.g: "https://coqui.ai/cpml".
pipeline_tag (`str`, *optional*):
Tag of the pipeline. Used to generate model card. E.g. "text-classification".
tags (`List[str]`, *optional*):
Tags to be added to the model card. Used to generate model card. E.g. ["x-custom-tag", "arxiv:2304.12244"]
coders (`Dict[Type, Tuple[Callable, Callable]]`, *optional*):
Dictionary of custom types and their encoders/decoders. Used to encode/decode arguments that are not
jsonable by default. E.g dataclasses, argparse.Namespace, OmegaConf, etc.
Example:
```python
>>> from huggingface_hub import ModelHubMixin | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#modelhubmixin | #modelhubmixin | .md | 18_2 |
>>> class MyCustomModel(
... ModelHubMixin,
... library_name="my-library",
... tags=["x-custom-tag", "arxiv:2304.12244"],
... repo_url="https://github.com/huggingface/my-cool-library",
... docs_url="https://huggingface.co/docs/my-cool-library",
... # ^ optional metadata to generate model card
... ):
... def __init__(self, size: int = 512, device: str = "cpu"):
... # define how to initialize your model
... super().__init__()
... ...
...
... def _save_pretrained(self, save_directory: Path) -> None:
... # define how to serialize your model
... ...
...
... @classmethod
... def from_pretrained(
... cls: Type[T],
... pretrained_model_name_or_path: Union[str, Path],
... *,
... force_download: bool = False,
... resume_download: Optional[bool] = None,
... proxies: Optional[Dict] = None,
... token: Optional[Union[str, bool]] = None,
... cache_dir: Optional[Union[str, Path]] = None,
... local_files_only: bool = False,
... revision: Optional[str] = None,
... **model_kwargs,
... ) -> T:
... # define how to deserialize your model
... ...
>>> model = MyCustomModel(size=256, device="gpu") | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#inherit-from-modelhubmixin | #inherit-from-modelhubmixin | .md | 18_3 |
>>> model.save_pretrained("my-awesome-model") | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#save-model-weights-to-local-directory | #save-model-weights-to-local-directory | .md | 18_4 |
>>> model.push_to_hub("my-awesome-model") | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#push-model-weights-to-the-hub | #push-model-weights-to-the-hub | .md | 18_5 |
>>> reloaded_model = MyCustomModel.from_pretrained("username/my-awesome-model")
>>> reloaded_model.size
256 | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#download-and-initialize-weights-from-the-hub | #download-and-initialize-weights-from-the-hub | .md | 18_6 |
>>> from huggingface_hub import ModelCard
>>> card = ModelCard.load("username/my-awesome-model")
>>> card.data.tags
["x-custom-tag", "pytorch_model_hub_mixin", "model_hub_mixin"]
>>> card.data.library_name
"my-library"
```
```
- all
- _save_pretrained
- _from_pretrained | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#model-card-has-been-correctly-populated | #model-card-has-been-correctly-populated | .md | 18_7 |
```python
Implementation of [`ModelHubMixin`] to provide model Hub upload/download capabilities to PyTorch models. The model
is set in evaluation mode by default using `model.eval()` (dropout modules are deactivated). To train the model,
you should first set it back in training mode with `model.train()`.
See [`ModelHubMixin`] for more details on how to use the mixin.
Example:
```python
>>> import torch
>>> import torch.nn as nn
>>> from huggingface_hub import PyTorchModelHubMixin
>>> class MyModel(
... nn.Module,
... PyTorchModelHubMixin,
... library_name="keras-nlp",
... repo_url="https://github.com/keras-team/keras-nlp",
... docs_url="https://keras.io/keras_nlp/",
... # ^ optional metadata to generate model card
... ):
... def __init__(self, hidden_size: int = 512, vocab_size: int = 30000, output_size: int = 4):
... super().__init__()
... self.param = nn.Parameter(torch.rand(hidden_size, vocab_size))
... self.linear = nn.Linear(output_size, vocab_size)
... def forward(self, x):
... return self.linear(x + self.param)
>>> model = MyModel(hidden_size=256) | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#pytorchmodelhubmixin | #pytorchmodelhubmixin | .md | 18_8 |
>>> model.save_pretrained("my-awesome-model") | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#save-model-weights-to-local-directory | #save-model-weights-to-local-directory | .md | 18_9 |
>>> model.push_to_hub("my-awesome-model") | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#push-model-weights-to-the-hub | #push-model-weights-to-the-hub | .md | 18_10 |
>>> model = MyModel.from_pretrained("username/my-awesome-model")
>>> model.hidden_size
256
```
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#download-and-initialize-weights-from-the-hub | #download-and-initialize-weights-from-the-hub | .md | 18_11 |
```python
Implementation of [`ModelHubMixin`] to provide model Hub upload/download
capabilities to Keras models.
```python
>>> import tensorflow as tf
>>> from huggingface_hub import KerasModelHubMixin
>>> class MyModel(tf.keras.Model, KerasModelHubMixin):
... def __init__(self, **kwargs):
... super().__init__()
... self.config = kwargs.pop("config", None)
... self.dummy_inputs = ...
... self.layer = ...
... def call(self, *args):
... return ...
>>> # Initialize and compile the model as you normally would
>>> model = MyModel()
>>> model.compile(...)
>>> # Build the graph by training it or passing dummy inputs
>>> _ = model(model.dummy_inputs)
>>> # Save model weights to local directory
>>> model.save_pretrained("my-awesome-model")
>>> # Push model weights to the Hub
>>> model.push_to_hub("my-awesome-model")
>>> # Download and initialize weights from the Hub
>>> model = MyModel.from_pretrained("username/super-cool-model")
```
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#kerasmodelhubmixin | #kerasmodelhubmixin | .md | 18_12 |
```python
Instantiate a pretrained Keras model from a pre-trained model from the Hub.
The model is expected to be in `SavedModel` format.
Args:
pretrained_model_name_or_path (`str` or `os.PathLike`):
Can be either:
- A string, the `model id` of a pretrained model hosted inside a
model repo on huggingface.co. Valid model ids can be located
at the root-level, like `bert-base-uncased`, or namespaced
under a user or organization name, like
`dbmdz/bert-base-german-cased`.
- You can add a `revision` by appending `@` to the end of `model_id`,
e.g. `dbmdz/bert-base-german-cased@main`. The revision
is the specific model version to use. It can be a branch name,
a tag name, or a commit id, since we use a git-based system
for storing models and other artifacts on huggingface.co, so
`revision` can be any identifier allowed by git.
- A path to a `directory` containing model weights saved using
[`~transformers.PreTrainedModel.save_pretrained`], e.g.,
`./my_model_directory/`.
- `None` if you are providing both the configuration and state
dictionary (with keyword arguments `config` and
`state_dict`, respectively).
force_download (`bool`, *optional*, defaults to `False`):
Whether to force the (re-)download of the model weights and
configuration files, overriding the cached versions if they exist.
proxies (`Dict[str, str]`, *optional*):
A dictionary of proxy servers to use by protocol or endpoint, e.g.,
`{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The
proxies are used on each request.
token (`str` or `bool`, *optional*):
The token to use as HTTP bearer authorization for remote files. If
`True`, will use the token generated when running `transformers-cli
login` (stored in `~/.huggingface`).
cache_dir (`Union[str, os.PathLike]`, *optional*):
Path to a directory in which a downloaded pretrained model
configuration should be cached if the standard cache should not be
used.
local_files_only (`bool`, *optional*, defaults to `False`):
Whether to only look at local files (i.e., do not try to download
the model).
model_kwargs (`Dict`, *optional*):
model_kwargs will be passed to the model during initialization
<Tip>
Passing `token=True` is required when you want to use a private
model.
</Tip>
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#frompretrainedkeras | #frompretrainedkeras | .md | 18_13 |
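The `model_id@revision` convention described above can be illustrated with a small helper. Note that `split_revision` is a hypothetical function written for this example, not part of `huggingface_hub`:

```python
from typing import Optional, Tuple


def split_revision(model_id: str) -> Tuple[str, Optional[str]]:
    """Split 'namespace/name@revision' into (repo_id, revision)."""
    if "@" in model_id:
        repo_id, revision = model_id.rsplit("@", 1)
        return repo_id, revision
    # No '@' present: use the repo's default revision (typically "main")
    return model_id, None


print(split_revision("dbmdz/bert-base-german-cased@main"))
# -> ('dbmdz/bert-base-german-cased', 'main')
```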
```python
Upload model checkpoint to the Hub.
Use `allow_patterns` and `ignore_patterns` to precisely filter which files should be pushed to the hub. Use
`delete_patterns` to delete existing remote files in the same commit. See [`upload_folder`] reference for more
details.
Args:
model (`Keras.Model`):
The [Keras model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) you'd like to push to the
Hub. The model must be compiled and built.
repo_id (`str`):
ID of the repository to push to (example: `"username/my-model"`).
commit_message (`str`, *optional*, defaults to "Add Keras model"):
Message to commit while pushing.
private (`bool`, *optional*):
Whether the repository created should be private.
If `None` (default), the repo will be public unless the organization's default is private.
api_endpoint (`str`, *optional*):
The API endpoint to use when pushing the model to the hub.
token (`str`, *optional*):
The token to use as HTTP bearer authorization for remote files. If
not set, will use the token set when logging in with
`huggingface-cli login` (stored in `~/.huggingface`).
branch (`str`, *optional*):
The git branch on which to push the model. This defaults to
the default branch as specified in your repository, which
defaults to `"main"`.
create_pr (`boolean`, *optional*):
Whether or not to create a Pull Request from `branch` with that commit.
Defaults to `False`.
config (`dict`, *optional*):
Configuration object to be saved alongside the model weights.
allow_patterns (`List[str]` or `str`, *optional*):
If provided, only files matching at least one pattern are pushed.
ignore_patterns (`List[str]` or `str`, *optional*):
If provided, files matching any of the patterns are not pushed.
delete_patterns (`List[str]` or `str`, *optional*):
If provided, remote files matching any of the patterns will be deleted from the repo.
log_dir (`str`, *optional*):
TensorBoard logging directory to be pushed. The Hub automatically
hosts and displays a TensorBoard instance if log files are included
in the repository.
include_optimizer (`bool`, *optional*, defaults to `False`):
Whether or not to include optimizer during serialization.
tags (Union[`list`, `str`], *optional*):
List of tags related to the model, or a single tag as a string. See example tags
[here](https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1).
plot_model (`bool`, *optional*, defaults to `True`):
Setting this to `True` will plot the model and put it in the model
card. Requires graphviz and pydot to be installed.
model_save_kwargs (`dict`, *optional*):
model_save_kwargs will be passed to
[`tf.keras.models.save_model()`](https://www.tensorflow.org/api_docs/python/tf/keras/models/save_model).
Returns:
The url of the commit of your model in the given repository.
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#pushtohubkeras | #pushtohubkeras | .md | 18_14 |
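The `allow_patterns` / `ignore_patterns` semantics described above can be sketched with glob matching. This is an illustrative approximation using `fnmatch`, not the library's actual implementation:

```python
from fnmatch import fnmatch
from typing import List, Optional


def select_files(
    files: List[str],
    allow_patterns: Optional[List[str]] = None,
    ignore_patterns: Optional[List[str]] = None,
) -> List[str]:
    """Keep files matching at least one allow pattern and no ignore pattern."""
    selected = []
    for f in files:
        if allow_patterns is not None and not any(fnmatch(f, p) for p in allow_patterns):
            continue  # must match at least one allow pattern
        if ignore_patterns is not None and any(fnmatch(f, p) for p in ignore_patterns):
            continue  # excluded by an ignore pattern
        selected.append(f)
    return selected


files = ["saved_model.pb", "keras_metadata.pb", "logs/train.txt", "notes.md"]
print(select_files(files, allow_patterns=["*.pb", "logs/*"], ignore_patterns=["logs/*"]))
# -> ['saved_model.pb', 'keras_metadata.pb']
```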
```python
Saves a Keras model to save_directory in SavedModel format. Use this if
you're using the Functional or Sequential APIs.
Args:
model (`Keras.Model`):
The [Keras
model](https://www.tensorflow.org/api_docs/python/tf/keras/Model)
you'd like to save. The model must be compiled and built.
save_directory (`str` or `Path`):
Specify directory in which you want to save the Keras model.
config (`dict`, *optional*):
Configuration object to be saved alongside the model weights.
include_optimizer (`bool`, *optional*, defaults to `False`):
Whether or not to include optimizer in serialization.
plot_model (`bool`, *optional*, defaults to `True`):
Setting this to `True` will plot the model and put it in the model
card. Requires graphviz and pydot to be installed.
tags (Union[`str`,`list`], *optional*):
List of tags related to the model, or a single tag as a string. See example tags
[here](https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1).
model_save_kwargs (`dict`, *optional*):
model_save_kwargs will be passed to
[`tf.keras.models.save_model()`](https://www.tensorflow.org/api_docs/python/tf/keras/models/save_model).
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#savepretrainedkeras | #savepretrainedkeras | .md | 18_15 |
```python
Load pretrained fastai model from the Hub or from a local directory.
Args:
repo_id (`str`):
The location of the pickled `fastai.Learner`. It can be either of the two:
- Hosted on the Hugging Face Hub. E.g.: 'espejelomar/fastai-pet-breeds-classification' or 'distilgpt2'.
You can add a `revision` by appending `@` at the end of `repo_id`. E.g.: `dbmdz/bert-base-german-cased@main`.
Revision is the specific model version to use. Since we use a git-based system for storing models and other
artifacts on the Hugging Face Hub, it can be a branch name, a tag name, or a commit id.
- Hosted locally. `repo_id` would be a directory containing the pickle and a pyproject.toml
indicating the fastai and fastcore versions used to build the `fastai.Learner`. E.g.: `./my_model_directory/`.
revision (`str`, *optional*):
Revision at which the repo's files are downloaded. See documentation of `snapshot_download`.
Returns:
The `fastai.Learner` model in the `repo_id` repo.
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#frompretrainedfastai | #frompretrainedfastai | .md | 18_16 |
```python
Upload learner checkpoint files to the Hub.
Use `allow_patterns` and `ignore_patterns` to precisely filter which files should be pushed to the hub. Use
`delete_patterns` to delete existing remote files in the same commit. See [`upload_folder`] reference for more
details.
Args:
learner (`Learner`):
The `fastai.Learner` you'd like to push to the Hub.
repo_id (`str`):
The repository ID for your model on the Hub, in the format "namespace/repo_name". The namespace can be your individual account or an organization to which you have write access (for example, 'stanfordnlp/stanza-de').
commit_message (`str`, *optional*):
Message to commit while pushing. Defaults to `"add model"`.
private (`bool`, *optional*):
Whether the repository created should be private.
If `None` (default), the repo will be public unless the organization's default is private.
token (`str`, *optional*):
The Hugging Face account token to use as HTTP bearer authorization for remote files. If `None`, you will be prompted for a token.
config (`dict`, *optional*):
Configuration object to be saved alongside the model weights.
branch (`str`, *optional*):
The git branch on which to push the model. This defaults to
the default branch as specified in your repository, which
defaults to `"main"`.
create_pr (`boolean`, *optional*):
Whether or not to create a Pull Request from `branch` with that commit.
Defaults to `False`.
api_endpoint (`str`, *optional*):
The API endpoint to use when pushing the model to the hub.
allow_patterns (`List[str]` or `str`, *optional*):
If provided, only files matching at least one pattern are pushed.
ignore_patterns (`List[str]` or `str`, *optional*):
If provided, files matching any of the patterns are not pushed.
delete_patterns (`List[str]` or `str`, *optional*):
If provided, remote files matching any of the patterns will be deleted from the repo.
Returns:
The url of the commit of your model in the given repository.
<Tip>
Raises the following error:
- [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
if the user is not logged in to the Hugging Face Hub.
</Tip>
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/mixins.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/mixins/#pushtohubfastai | #pushtohubfastai | .md | 18_17 |
<!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
<!--⚠️ Note that this file is auto-generated by `utils/generate_inference_types.py`. Do not modify it manually.--> | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/ | .md | 19_0 |
This page lists the types (e.g. dataclasses) available for each task supported on the Hugging Face Hub.
Each task is specified using a JSON schema, and the types are generated from these schemas - with some customization
due to Python requirements.
Visit [@huggingface.js/tasks](https://github.com/huggingface/huggingface.js/tree/main/packages/tasks/src/tasks)
to find the JSON schemas for each task.
This part of the library is still under development and will be improved in future releases.
```python
Inputs for Audio Classification inference
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubaudioclassificationinput | #huggingfacehubaudioclassificationinput | .md | 19_2 |
```python
Outputs for Audio Classification inference
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubaudioclassificationoutputelement | #huggingfacehubaudioclassificationoutputelement | .md | 19_3 |
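Each output element carries a `label` and a `score`; a minimal sketch (using plain dicts with made-up values rather than the generated dataclasses) of picking the most likely class:

```python
# Illustrative predictions, shaped like AudioClassificationOutputElement
predictions = [
    {"label": "dog", "score": 0.82},
    {"label": "cat", "score": 0.13},
    {"label": "bird", "score": 0.05},
]

# Select the element with the highest score
best = max(predictions, key=lambda p: p["score"])
print(best["label"])  # -> dog
```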
```python
Additional inference parameters for Audio Classification
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubaudioclassificationparameters | #huggingfacehubaudioclassificationparameters | .md | 19_4 |
```python
Inputs for Audio to Audio inference
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubaudiotoaudioinput | #huggingfacehubaudiotoaudioinput | .md | 19_5 |
```python
Outputs of inference for the Audio To Audio task
A generated audio file with its label.
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubaudiotoaudiooutputelement | #huggingfacehubaudiotoaudiooutputelement | .md | 19_6 |
```python
Parametrization of the text generation process
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubautomaticspeechrecognitiongenerationparameters | #huggingfacehubautomaticspeechrecognitiongenerationparameters | .md | 19_7 |
```python
Inputs for Automatic Speech Recognition inference
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubautomaticspeechrecognitioninput | #huggingfacehubautomaticspeechrecognitioninput | .md | 19_8 |
```python
Outputs of inference for the Automatic Speech Recognition task
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubautomaticspeechrecognitionoutput | #huggingfacehubautomaticspeechrecognitionoutput | .md | 19_9 |
```python
AutomaticSpeechRecognitionOutputChunk(text: str, timestamps: List[float])
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubautomaticspeechrecognitionoutputchunk | #huggingfacehubautomaticspeechrecognitionoutputchunk | .md | 19_10 |
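A chunk pairs a text fragment with `[start, end]` timestamps. A hedged sketch, using plain dicts shaped like the dataclass above, of turning chunks into readable lines:

```python
def format_chunks(chunks):
    """Render ASR chunks as '[start - end] text' lines."""
    lines = []
    for chunk in chunks:
        start, end = chunk["timestamps"]
        lines.append(f"[{start:.2f}s - {end:.2f}s] {chunk['text']}")
    return lines


chunks = [
    {"text": "hello", "timestamps": [0.0, 0.5]},
    {"text": "world", "timestamps": [0.5, 1.1]},
]
print(format_chunks(chunks))
# -> ['[0.00s - 0.50s] hello', '[0.50s - 1.10s] world']
```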
```python
Additional inference parameters for Automatic Speech Recognition
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubautomaticspeechrecognitionparameters | #huggingfacehubautomaticspeechrecognitionparameters | .md | 19_11 |
```python
Chat Completion Input.
Auto-generated from TGI specs.
For more details, check out
https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubchatcompletioninput | #huggingfacehubchatcompletioninput | .md | 19_12 |
```python
ChatCompletionInputFunctionDefinition(arguments: Any, name: str, description: Optional[str] = None)
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubchatcompletioninputfunctiondefinition | #huggingfacehubchatcompletioninputfunctiondefinition | .md | 19_13 |
```python
ChatCompletionInputFunctionName(name: str)
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubchatcompletioninputfunctionname | #huggingfacehubchatcompletioninputfunctionname | .md | 19_14 |
```python
ChatCompletionInputGrammarType(type: 'ChatCompletionInputGrammarTypeType', value: Any)
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubchatcompletioninputgrammartype | #huggingfacehubchatcompletioninputgrammartype | .md | 19_15 |
```python
ChatCompletionInputMessage(content: Union[List[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputMessageChunk], str], role: str, name: Optional[str] = None)
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubchatcompletioninputmessage | #huggingfacehubchatcompletioninputmessage | .md | 19_16 |
```python
ChatCompletionInputMessageChunk(type: 'ChatCompletionInputMessageChunkType', image_url: Optional[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputURL] = None, text: Optional[str] = None)
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubchatcompletioninputmessagechunk | #huggingfacehubchatcompletioninputmessagechunk | .md | 19_17 |
```python
ChatCompletionInputStreamOptions(include_usage: bool)
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubchatcompletioninputstreamoptions | #huggingfacehubchatcompletioninputstreamoptions | .md | 19_18 |
```python
ChatCompletionInputTool(function: huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputFunctionDefinition, type: str)
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubchatcompletioninputtool | #huggingfacehubchatcompletioninputtool | .md | 19_19 |
```python
ChatCompletionInputToolChoiceClass(function: huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputFunctionName)
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubchatcompletioninputtoolchoiceclass | #huggingfacehubchatcompletioninputtoolchoiceclass | .md | 19_20 |
```python
ChatCompletionInputURL(url: str)
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubchatcompletioninputurl | #huggingfacehubchatcompletioninputurl | .md | 19_21 |
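The message, chunk, and URL classes above mirror an OpenAI-style chat request. A plain-dict sketch of the payload shape they model (illustrative only — this dict is hand-written, not produced by `huggingface_hub`, and the example URL is made up):

```python
# A text-only message plus a multimodal message with text and image chunks
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
        ],
    },
]

payload = {"messages": messages, "stream": False, "max_tokens": 128}
print(payload["messages"][1]["content"][0]["text"])
# -> What is in this image?
```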
```python
Chat Completion Output.
Auto-generated from TGI specs.
For more details, check out
https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubchatcompletionoutput | #huggingfacehubchatcompletionoutput | .md | 19_22 |
```python
ChatCompletionOutputComplete(finish_reason: str, index: int, message: huggingface_hub.inference._generated.types.chat_completion.ChatCompletionOutputMessage, logprobs: Optional[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionOutputLogprobs] = None)
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubchatcompletionoutputcomplete | #huggingfacehubchatcompletionoutputcomplete | .md | 19_23 |
```python
ChatCompletionOutputFunctionDefinition(arguments: Any, name: str, description: Optional[str] = None)
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubchatcompletionoutputfunctiondefinition | #huggingfacehubchatcompletionoutputfunctiondefinition | .md | 19_24 |
```python
ChatCompletionOutputLogprob(logprob: float, token: str, top_logprobs: List[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionOutputTopLogprob])
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubchatcompletionoutputlogprob | #huggingfacehubchatcompletionoutputlogprob | .md | 19_25 |
```python
ChatCompletionOutputLogprobs(content: List[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionOutputLogprob])
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubchatcompletionoutputlogprobs | #huggingfacehubchatcompletionoutputlogprobs | .md | 19_26 |
```python
ChatCompletionOutputMessage(role: str, content: Optional[str] = None, tool_calls: Optional[List[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionOutputToolCall]] = None)
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubchatcompletionoutputmessage | #huggingfacehubchatcompletionoutputmessage | .md | 19_27 |
```python
ChatCompletionOutputToolCall(function: huggingface_hub.inference._generated.types.chat_completion.ChatCompletionOutputFunctionDefinition, id: str, type: str)
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubchatcompletionoutputtoolcall | #huggingfacehubchatcompletionoutputtoolcall | .md | 19_28 |
```python
ChatCompletionOutputTopLogprob(logprob: float, token: str)
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubchatcompletionoutputtoplogprob | #huggingfacehubchatcompletionoutputtoplogprob | .md | 19_29 |
```python
ChatCompletionOutputUsage(completion_tokens: int, prompt_tokens: int, total_tokens: int)
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubchatcompletionoutputusage | #huggingfacehubchatcompletionoutputusage | .md | 19_30 |
```python
Chat Completion Stream Output.
Auto-generated from TGI specs.
For more details, check out
https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubchatcompletionstreamoutput | #huggingfacehubchatcompletionstreamoutput | .md | 19_31 |
```python
ChatCompletionStreamOutputChoice(delta: huggingface_hub.inference._generated.types.chat_completion.ChatCompletionStreamOutputDelta, index: int, finish_reason: Optional[str] = None, logprobs: Optional[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionStreamOutputLogprobs] = None)
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubchatcompletionstreamoutputchoice | #huggingfacehubchatcompletionstreamoutputchoice | .md | 19_32 |
```python
ChatCompletionStreamOutputDelta(role: str, content: Optional[str] = None, tool_calls: Optional[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionStreamOutputDeltaToolCall] = None)
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubchatcompletionstreamoutputdelta | #huggingfacehubchatcompletionstreamoutputdelta | .md | 19_33 |
```python
ChatCompletionStreamOutputDeltaToolCall(function: huggingface_hub.inference._generated.types.chat_completion.ChatCompletionStreamOutputFunction, id: str, index: int, type: str)
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubchatcompletionstreamoutputdeltatoolcall | #huggingfacehubchatcompletionstreamoutputdeltatoolcall | .md | 19_34 |
```python
ChatCompletionStreamOutputFunction(arguments: str, name: Optional[str] = None)
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubchatcompletionstreamoutputfunction | #huggingfacehubchatcompletionstreamoutputfunction | .md | 19_35 |
```python
ChatCompletionStreamOutputLogprob(logprob: float, token: str, top_logprobs: List[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionStreamOutputTopLogprob])
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubchatcompletionstreamoutputlogprob | #huggingfacehubchatcompletionstreamoutputlogprob | .md | 19_36 |
```python
ChatCompletionStreamOutputLogprobs(content: List[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionStreamOutputLogprob])
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubchatcompletionstreamoutputlogprobs | #huggingfacehubchatcompletionstreamoutputlogprobs | .md | 19_37 |
```python
ChatCompletionStreamOutputTopLogprob(logprob: float, token: str)
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubchatcompletionstreamoutputtoplogprob | #huggingfacehubchatcompletionstreamoutputtoplogprob | .md | 19_38 |
```python
ChatCompletionStreamOutputUsage(completion_tokens: int, prompt_tokens: int, total_tokens: int)
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubchatcompletionstreamoutputusage | #huggingfacehubchatcompletionstreamoutputusage | .md | 19_39 |
```python
Inputs for Depth Estimation inference
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubdepthestimationinput | #huggingfacehubdepthestimationinput | .md | 19_40 |
```python
Outputs of inference for the Depth Estimation task
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubdepthestimationoutput | #huggingfacehubdepthestimationoutput | .md | 19_41 |
```python
Inputs for Document Question Answering inference
``` | /Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_types.md | https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_types/#huggingfacehubdocumentquestionansweringinput | #huggingfacehubdocumentquestionansweringinput | .md | 19_42 |