Runtime error

Exit code: 1. Reason: all model shards downloaded successfully (Downloading shards: 100%|██████████| 4/4 [00:36<00:00, 9.14s/it]), after which model loading failed:

Traceback (most recent call last):
  File "/home/user/app/app.py", line 80, in <module>
    tokenizer, model, image_processor, max_length = load_pretrained_model(pretrained, None, model_name, torch_dtype="bfloat16", device_map=device_map)
  File "/usr/local/lib/python3.10/site-packages/llava/model/builder.py", line 228, in load_pretrained_model
    model = LlavaQwenForCausalLM.from_pretrained(model_path, low_cpu_mem_usage=True, attn_implementation=attn_implementation, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4124, in from_pretrained
    config = cls._autoset_attn_implementation(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 1586, in _autoset_attn_implementation
    cls._check_and_enable_flash_attn_2(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 1727, in _check_and_enable_flash_attn_2
    raise ValueError(
ValueError: FlashAttention2 has been toggled on, but it cannot be used due to the following error: Flash Attention 2 is not available on CPU. Please make sure torch can access a CUDA device.
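The failure is not in the download: the traceback shows that load_pretrained_model passes attn_implementation through to from_pretrained, and transformers rejects FlashAttention 2 whenever no CUDA device is visible, as on a CPU-only container. A minimal sketch of a hardware-aware workaround for app.py, assuming load_pretrained_model accepts an attn_implementation keyword that it forwards as the traceback indicates; the model identifiers below are placeholders, not the values the app actually uses:

import torch
from llava.model.builder import load_pretrained_model

# FlashAttention 2 requires a CUDA device, so choose the attention
# backend based on what the runtime actually provides.
if torch.cuda.is_available():
    attn_implementation = "flash_attention_2"
    device_map = "auto"
else:
    attn_implementation = "sdpa"  # CPU-safe PyTorch scaled-dot-product attention
    device_map = "cpu"

# Hypothetical placeholders -- substitute the identifiers from app.py.
pretrained = "org/model-repo-id"
model_name = "llava_qwen"

tokenizer, model, image_processor, max_length = load_pretrained_model(
    pretrained,
    None,
    model_name,
    torch_dtype="bfloat16",
    device_map=device_map,
    # Assumed to override the builder's flash-attention default,
    # per builder.py line 228 in the traceback above.
    attn_implementation=attn_implementation,
)

Alternatively, if FlashAttention 2 is actually wanted, the container needs GPU hardware so that torch.cuda.is_available() returns True.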
