Error when fine-tuning with Unsloth

#7
by chengregy - opened

Hello Professor, thank you very much for providing such a great model for students to use.
However, when I tried to fine-tune it on Colab with Unsloth, I got an error:

model, tokenizer = FastLanguageModel.from_pretrained(
#model_name = "yentinglin/Taiwan-LLM-7B-v2.1-chat",
model_name = "yentinglin/Llama-3-Taiwan-8B-Instruct",
max_seq_length = max_seq_length,
dtype = dtype,
load_in_4bit = load_in_4bit,
# token = "hf_...", # use one if using gated models like meta-llama/Llama-2-7b-hf
)

RuntimeError Traceback (most recent call last)
in <cell line: 20>()
18 ] # More models at https://huggingface.co/unsloth
19
---> 20 model, tokenizer = FastLanguageModel.from_pretrained(
21 #model_name = "yentinglin/Taiwan-LLM-7B-v2.1-chat",
22 model_name = "yentinglin/Llama-3-Taiwan-8B-Instruct",

/usr/local/lib/python3.10/dist-packages/unsloth/models/loader.py in from_pretrained(model_name, max_seq_length, dtype, load_in_4bit, token, device_map, rope_scaling, fix_tokenizer, trust_remote_code, use_gradient_checkpointing, resize_model_vocab, revision, *args, **kwargs)
115 )
116 elif not is_model and not is_peft:
--> 117 raise RuntimeError(
118 f"Unsloth: {model_name} is not a base model or a PEFT model.\n"
119 "We could not locate a config.json or adapter_config.json file.\n"\

RuntimeError: Unsloth: yentinglin/Llama-3-Taiwan-8B-Instruct is not a base model or a PEFT model.
We could not locate a config.json or adapter_config.json file.
Are you certain the model name is correct? Does it actually exist?

Professor, could you please advise: is this error caused by a permissions issue that prevents the model from being loaded directly?
As a cross-check, fine-tuning with your yentinglin/Taiwan-LLM-7B-v2.1-chat model works without any problem.

I would greatly appreciate your guidance.

Thank you very much.

Update: I have confirmed that this was indeed a permissions issue.
Sorry for posting this question.
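
For anyone hitting the same error: when a repo is gated, the missing-`config.json` `RuntimeError` above typically means the request was unauthorized, not that the model doesn't exist. The fix is to accept the license on the model's Hugging Face page and pass an access token when loading, as the commented-out `token = "hf_..."` line in the snippet hints. A minimal sketch (the `get_hf_token` helper and the `HF_TOKEN` environment variable name are my own conventions, not part of Unsloth):

```python
import os


def get_hf_token():
    """Return a Hugging Face access token from the HF_TOKEN env var.

    Without authorization, a gated repo can surface as a "could not
    locate a config.json" error rather than an explicit permission error,
    so failing fast with a clear message here saves debugging time.
    """
    token = os.environ.get("HF_TOKEN")
    if not token:
        raise RuntimeError(
            "No HF_TOKEN set. Create a token under your Hugging Face "
            "account settings and accept the model's license on its page."
        )
    return token


# Then pass the token through when loading (from_pretrained already
# accepts a `token` argument, per its signature in the traceback above):
#
# model, tokenizer = FastLanguageModel.from_pretrained(
#     model_name = "yentinglin/Llama-3-Taiwan-8B-Instruct",
#     max_seq_length = max_seq_length,
#     dtype = dtype,
#     load_in_4bit = load_in_4bit,
#     token = get_hf_token(),
# )
```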

Thanks again, Professor, for providing the model.
