LLMs quantized with GPTQ
Irina Proskurina
iproskurina
AI & ML interests: LLMs (quantization, pre-training)
Recent Activity
- New activity on TheBloke/Mistral-7B-Instruct-v0.2-GPTQ (about 1 month ago): "weights not used when initializing MistralForCausalLM"
- Updated a model (about 1 month ago): iproskurina/Mistral-7B-v0.3-GPTQ2-4bit-g3
- Updated a model (about 1 month ago): iproskurina/Mistral-7B-v0.3-GPTQ2-4bit-g2
Collections: 4
Models: 43
- iproskurina/Mistral-7B-v0.3-GPTQ2-4bit-g3 · Text Generation · 31 downloads
- iproskurina/Mistral-7B-v0.3-GPTQ2-4bit-g2 · Text Generation · 6 downloads
- iproskurina/Mistral-7B-v0.3-GPTQ2-4bit-g1 · Text Generation · 5 downloads
- iproskurina/opt-125m-gptq2 · Text Generation · 7 downloads
- iproskurina/distilbert-base-alternate-layers
- iproskurina/en_grammar_checker · 7 downloads · 4 likes
- iproskurina/Mistral-7B-v0.3-gptq-3bit · Text Generation · 13 downloads
- iproskurina/Mistral-7B-v0.3-GPTQ-8bit-g128 · Text Generation · 8 downloads
- iproskurina/Mistral-7B-v0.3-GPTQ-4bit-g128 · Text Generation · 17 downloads
- iproskurina/Mistral-7B-v0.1-GPTQ-8bit-g64 · Text Generation · 6 downloads