# Model Card for Lucie-7B-Instruct

## Model Description
Lucie-7B-Instruct is a fine-tuned version of Lucie-7B, an open-source, multilingual causal language model created by OpenLLM-France.
Lucie-7B-Instruct is fine-tuned on synthetic instructions produced by ChatGPT and Gemma, together with a small set of customized prompts about OpenLLM and Lucie.
## Training details

### Training data
Lucie-7B-Instruct is trained on the following datasets:
- Alpaca-cleaned (English; 51604 samples)
- Alpaca-cleaned-fr (French; 51655 samples)
- Magpie-Gemma (English; 195167 samples)
- Wildchat (French subset; 26436 samples)
- Hard-coded prompts concerning OpenLLM and Lucie (based on allenai/tulu-3-hard-coded-10x)
  - French: openllm_french.jsonl (24x10 samples)
  - English: openllm_english.jsonl (24x10 samples)
### Preprocessing
- Filtering by language: Magpie-Gemma and Wildchat were filtered to keep only English and French samples, respectively.
- Filtering by keyword: Examples were filtered out of the four synthetic datasets if the assistant response contained a keyword from the list `filter_strings`. This filter is designed to remove examples in which the assistant is presented as a model other than Lucie (e.g., ChatGPT, Gemma, Llama, ...).
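As an illustration, here is a minimal sketch of such a keyword filter in Python. The `filter_strings` entries and the chat-message schema below are assumptions for the example, not the exact ones used for Lucie-7B-Instruct.

```python
# Hypothetical keyword filter: drop any example whose assistant response
# mentions a model other than Lucie. The real filter_strings list is longer;
# these entries are just an illustrative subset.
filter_strings = ["ChatGPT", "Gemma", "Llama"]

def keep_example(example: dict) -> bool:
    """Return True if no assistant turn contains a filtered keyword."""
    return not any(
        turn["role"] == "assistant"
        and any(kw.lower() in turn["content"].lower() for kw in filter_strings)
        for turn in example["messages"]
    )

# Example: the second sample would be removed.
dataset = [
    {"messages": [{"role": "assistant", "content": "I am Lucie. How can I help?"}]},
    {"messages": [{"role": "assistant", "content": "As ChatGPT, I cannot do that."}]},
]
filtered = [ex for ex in dataset if keep_example(ex)]
```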
### Training procedure
The model architecture and hyperparameters are the same as those used for Lucie-7B during the annealing phase, with the following exceptions:
- context length: 4096
- batch size: 1024
- max learning rate: 3e-5
- min learning rate: 3e-6
## Testing the model

### Test with Ollama
- Download and install Ollama.
- Download the GGUF model.
- Copy the `Modelfile`, adapting if necessary the path to the GGUF file (the line starting with `FROM`); a minimal example is given after this list.
- Run in a shell:

  ```
  ollama create -f Modelfile Lucie
  ollama run Lucie
  ```

- Once the ">>>" prompt appears, type your prompt(s) and press Enter.
- Optionally, restart the conversation by typing `/clear`.
- End the session by typing `/bye`.
Useful for debugging:
- How to print input requests and output responses in Ollama server?
- Documentation on Modelfile
- Examples: Ollama model library
  - Llama 3 example: https://ollama.com/library/llama3.1
- Add a GUI: https://docs.openwebui.com/
### Test with vLLM
#### 1. Run vLLM Docker Container
Use the following command to deploy the model, replacing `INSERT_YOUR_HF_TOKEN` with your Hugging Face Hub token.
```bash
docker run --runtime nvidia --gpus=all \
    --env "HUGGING_FACE_HUB_TOKEN=INSERT_YOUR_HF_TOKEN" \
    -p 8000:8000 \
    --ipc=host \
    vllm/vllm-openai:latest \
    --model OpenLLM-France/Lucie-7B-Instruct
```
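Once the container is running, you can check that the model is being served. A minimal sketch, assuming the port mapping from the command above; vLLM exposes an OpenAI-compatible API, so no real API key is required:

```python
from openai import OpenAI

# Query the OpenAI-compatible /v1/models endpoint exposed by vLLM
client = OpenAI(base_url="http://localhost:8000/v1", api_key="empty")
for model in client.models.list():
    print(model.id)  # should include OpenLLM-France/Lucie-7B-Instruct
```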
#### 2. Test using OpenAI Client in Python
To test the deployed model, use the OpenAI Python client as follows:
```python
from openai import OpenAI

# Initialize the client
client = OpenAI(base_url='http://localhost:8000/v1', api_key='empty')

# Define the input content
content = "Hello Lucie"

# Generate a response
chat_response = client.chat.completions.create(
    model="OpenLLM-France/Lucie-7B-Instruct",
    messages=[
        {"role": "user", "content": content}
    ],
)
print(chat_response.choices[0].message.content)
```
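For longer generations you may prefer to stream tokens as they are produced. A sketch reusing the `client` and `content` defined above; `stream=True` is standard in the OpenAI Python API:

```python
# Streaming variant: print tokens as they arrive instead of waiting
# for the full completion
stream = client.chat.completions.create(
    model="OpenLLM-France/Lucie-7B-Instruct",
    messages=[{"role": "user", "content": content}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```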
## Citation
When using the Lucie-7B-Instruct model, please cite the following paper:
✍ Olivier Gouvert, Julie Hunter, Jérôme Louradour, Evan Dufraisse, Yaya Sy, Pierre-Carl Langlais, Anastasia Stasenko, Laura Rivière, Christophe Cerisara, Jean-Pierre Lorré (2025). The Lucie-7B LLM and the Lucie Training Dataset: open resources for multilingual language generation.
```bibtex
@misc{gouvert2025lucie,
  title={The Lucie-7B LLM and the Lucie Training Dataset: open resources for multilingual language generation},
  author={Olivier Gouvert and Julie Hunter and Jérôme Louradour and Evan Dufraisse and Yaya Sy and Pierre-Carl Langlais and Anastasia Stasenko and Laura Rivière and Christophe Cerisara and Jean-Pierre Lorré},
  year={2025},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
## Acknowledgements
This work was performed using HPC resources from GENCI–IDRIS (Grant 2024-GC011015444).
Lucie-7B was created by members of LINAGORA and the OpenLLM-France community, including in alphabetical order: Olivier Gouvert (LINAGORA), Ismaïl Harrando (LINAGORA/SciencesPo), Julie Hunter (LINAGORA), Jean-Pierre Lorré (LINAGORA), Jérôme Louradour (LINAGORA), Michel-Marie Maudet (LINAGORA), and Laura Rivière (LINAGORA).
We thank Clément Bénesse (Opsci), Christophe Cerisara (LORIA), Evan Dufraisse (CEA), Guokan Shang (MBZUAI), Joël Gombin (Opsci), Jordan Ricker (Opsci), and Olivier Ferret (CEA) for their helpful input.
## Contact