Runtime error

Exit code: 1. Reason:

special_tokens_map.json: 100%|██████████| 335/335 [00:00<00:00, 2.34MB/s]
config.json: 100%|██████████| 884/884 [00:00<00:00, 6.47MB/s]
adapter_config.json: 100%|██████████| 761/761 [00:00<00:00, 6.49MB/s]
config.json: 100%|██████████| 1.41k/1.41k [00:00<00:00, 7.52MB/s]

The `load_in_4bit` and `load_in_8bit` arguments are deprecated and will be removed in the future versions. Please, pass a `BitsAndBytesConfig` object in `quantization_config` argument instead.
Unused kwargs: ['_load_in_4bit', '_load_in_8bit', 'quant_method']. These kwargs are not used in <class 'transformers.utils.quantization_config.BitsAndBytesConfig'>.
/usr/local/lib/python3.10/site-packages/transformers/quantizers/auto.py:195: UserWarning: You passed `quantization_config` or equivalent parameters to `from_pretrained` but the model you're loading already has a `quantization_config` attribute. The `quantization_config` from the model will be used.
  warnings.warn(warning_msg)
Traceback (most recent call last):
  File "/home/user/app/app.py", line 7, in <module>
    model = AutoModelForCausalLM.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 564, in from_pretrained
    return model_class.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3620, in from_pretrained
    hf_quantizer.validate_environment(
  File "/usr/local/lib/python3.10/site-packages/transformers/quantizers/quantizer_bnb_4bit.py", line 75, in validate_environment
    raise ImportError(
ImportError: Using `bitsandbytes` 4-bit quantization requires the latest version of bitsandbytes: `pip install -U bitsandbytes`
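The traceback points at two separate issues: the container's bitsandbytes is too old for 4-bit loading (the ImportError), and app.py still passes the deprecated `load_in_4bit` flag (the deprecation warning). A minimal sketch of the fix follows, assuming the Space installs its dependencies from requirements.txt; the model id below is a placeholder, not the one used by the original app, and this is not runnable here since it needs a GPU-enabled environment and network access.

```python
# Step 1 (assumption: deps come from requirements.txt) - upgrade bitsandbytes,
# e.g. add a line like:
#   bitsandbytes>=0.43.0
# or run `pip install -U bitsandbytes` manually, as the ImportError suggests.

# Step 2 - replace the deprecated load_in_4bit=True kwarg with an explicit
# BitsAndBytesConfig passed via quantization_config:
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # quantize weights to 4-bit via bitsandbytes
    bnb_4bit_compute_dtype=torch.float16,  # dtype used for the actual matmuls
)

model = AutoModelForCausalLM.from_pretrained(
    "your-org/your-model",  # placeholder; use the checkpoint from app.py
    quantization_config=quant_config,
    device_map="auto",      # let accelerate place layers on the GPU
)
```

Note also the UserWarning in the log: if the checkpoint already ships its own `quantization_config`, that config takes precedence over the one passed here, so the upgrade in step 1 may be the only change that actually matters.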
