
Blaze.1-32B-Instruct

Blaze.1-32B-Instruct is based on the QwQ-32B-Preview model and fine-tuned on synthetic data for mathematical and conditional reasoning, targeting complex reasoning problems. Because it focuses on maintaining a continuous chain of thought, the model may unexpectedly mix languages or switch between them, which can reduce response clarity, and it may enter recursive reasoning loops that produce lengthy responses without reaching a conclusive answer.

Quickstart Chat Template

Here is a code snippet using apply_chat_template that shows how to load the tokenizer and the model and how to generate content.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Blaze.1-32B-Instruct"

# Load the model weights in their native dtype and shard them across available devices
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "How many r in strawberry."
messages = [
    {"role": "system", "content": "You are a helpful and harmless assistant. You are Qwen developed by Alibaba. You should think step-by-step."},
    {"role": "user", "content": prompt}
]
# Build the chat-formatted prompt string from the message list
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate a response, capped at 512 new tokens
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated tokens remain
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
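
For interactive use, you can stream tokens to the console as they are generated instead of waiting for the full completion. The snippet below is a minimal sketch using transformers' TextStreamer and reuses the model, tokenizer, and model_inputs from the quickstart above; it is an illustrative addition, not part of the original quickstart.

from transformers import TextStreamer

# Print decoded tokens as they arrive, skipping the prompt and special tokens
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
_ = model.generate(
    **model_inputs,
    max_new_tokens=512,
    streamer=streamer,
)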

Intended Use

Blaze.1-32B-Instruct is designed to assist with complex reasoning tasks, including mathematical problem-solving, logical reasoning, and step-by-step explanations. It is particularly useful for applications requiring conditional reasoning, structured content generation, and language understanding across multiple domains. The model is also fine-tuned for conversational AI, making it well-suited for virtual assistants, educational tools, and research purposes. Additionally, it supports tasks involving multilingual understanding, making it valuable in environments where language switching or code-mixed text processing is required.
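
As an illustration of the conversational use case, the sketch below extends the quickstart into a simple multi-turn exchange by appending the model's reply to the message history and asking a follow-up question. The follow-up prompt and variable names are hypothetical examples, not part of the original card.

# Hypothetical multi-turn continuation of the quickstart above
messages.append({"role": "assistant", "content": response})
messages.append({"role": "user", "content": "Now explain your counting step by step."})

text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(**model_inputs, max_new_tokens=512)
generated_ids = [out[len(inp):] for inp, out in zip(model_inputs.input_ids, generated_ids)]
follow_up = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(follow_up)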

Limitations

  1. Language Mixing and Code-Switching Issues: The model may unexpectedly switch between languages or mix them within a single response, potentially reducing the clarity of outputs.
  2. Recursive Reasoning Loops: During complex reasoning, the model may enter circular reasoning patterns, resulting in overly lengthy responses without arriving at a definitive conclusion. A hedged mitigation sketch for this and the previous limitation follows the list.
  3. Overfitting to Training Data: Since Blaze.1-32B-Instruct is fine-tuned on specific synthetic datasets, its performance might be biased toward certain types of problems and may generalize poorly on entirely new tasks.
  4. Context Sensitivity: While the model is trained for step-by-step reasoning, it may occasionally lose track of the context in longer conversations, leading to irrelevant or incomplete answers.
  5. Resource Intensity: As a large model (32B parameters), it requires significant computational resources for both inference and deployment, which may limit its usability in low-resource environments.
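
The first two limitations can often be reduced at inference time. The sketch below shows one possible set of generation settings, reusing the transformers generate call from the quickstart: a repetition penalty and a hard token cap to curb runaway reasoning loops, plus a system prompt that pins the response language. The specific values are illustrative assumptions, not tuned recommendations.

# Illustrative mitigation settings; values are assumptions, not tuned recommendations
messages = [
    {"role": "system", "content": "You are a helpful assistant. Think step by step and answer in English only."},
    {"role": "user", "content": "How many r in strawberry."},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=1024,      # hard cap so looping chains of thought terminate
    repetition_penalty=1.1,   # discourage recursive, repetitive reasoning
    temperature=0.6,
    top_p=0.9,
    do_sample=True,
)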

Model size: 32.8B params (Safetensors) · Tensor type: BF16

Model tree for prithivMLmods/Blaze.1-32B-Instruct

Base model: Qwen/Qwen2.5-32B