runtime error

Exit code: 1. Reason: ocal/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1294, in get_hf_file_metadata
    r = _request_wrapper(
  File "/usr/local/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 278, in _request_wrapper
    response = _request_wrapper(
  File "/usr/local/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 302, in _request_wrapper
    hf_raise_for_status(response)
  File "/usr/local/lib/python3.10/site-packages/huggingface_hub/utils/_http.py", line 423, in hf_raise_for_status
    raise _format(GatedRepoError, message, response) from e
huggingface_hub.errors.GatedRepoError: 401 Client Error. (Request ID: Root=1-67872649-6892e36115f8b17e7a369e98;976f15c1-abb5-4dba-a225-8ab4b8d503c4)

Cannot access gated repo for url https://huggingface.co/meta-llama/Llama-2-70b-chat-hf/resolve/main/tokenizer_config.json.
Access to model meta-llama/Llama-2-70b-chat-hf is restricted. You must have access to it and be authenticated to access it. Please log in.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/user/app/app.py", line 39, in <module>
    tokenizer = AutoTokenizer.from_pretrained(model_id, use_auth_token=os.environ["HUGGINGFACE_TOKEN"])
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 718, in from_pretrained
    tokenizer_config = get_tokenizer_config(pretrained_model_name_or_path, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 550, in get_tokenizer_config
    resolved_config_file = cached_file(
  File "/usr/local/lib/python3.10/site-packages/transformers/utils/hub.py", line 445, in cached_file
    raise EnvironmentError(
OSError: You are trying to access a gated repo. Make sure to request access at https://huggingface.co/meta-llama/Llama-2-70b-chat-hf and pass a token having permission to this repo either by logging in with `huggingface-cli login` or by passing `token=<your_token>`.
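For reference, a minimal sketch of the failing load step once access has been granted. It assumes the environment variable HUGGINGFACE_TOKEN (the name used in app.py above) holds a token from an account that has already accepted the meta-llama/Llama-2-70b-chat-hf license, and it passes the token via the current `token=` argument rather than the deprecated `use_auth_token=`; everything else here is an assumption, not the app's actual code.

    import os
    from transformers import AutoTokenizer

    # Gated repo: the token must belong to an account that has been granted
    # access at https://huggingface.co/meta-llama/Llama-2-70b-chat-hf.
    model_id = "meta-llama/Llama-2-70b-chat-hf"
    hf_token = os.environ.get("HUGGINGFACE_TOKEN")  # assumed Space secret name, mirroring the traceback

    # `token=` is the current transformers argument; `use_auth_token=` is deprecated.
    tokenizer = AutoTokenizer.from_pretrained(model_id, token=hf_token)

Alternatively, as the error message itself suggests, running `huggingface-cli login` in the environment makes the token available implicitly, in which case no token argument is needed.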
