Requesting Support for GGUF Quantization of MiniMax-Text-01 through llama.cpp
#1 by Doctor-Chad-PhD
Dear MiniMax Team,
I would like to request support for GGUF quantization through the llama.cpp library, as this would allow many more users to run your new model.
The repo for llama.cpp can be found here: https://github.com/ggerganov/llama.cpp.
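For context, once llama.cpp gains architecture support, the usual workflow is to convert the Hugging Face checkpoint with llama.cpp's convert_hf_to_gguf.py script, quantize it with the llama-quantize tool, and then run the result through llama.cpp or bindings such as llama-cpp-python. Below is a minimal sketch of what end users could then do; the GGUF filename is hypothetical, since no such file exists yet for MiniMax-Text-01.

```python
# Sketch only: assumes a future GGUF build of MiniMax-Text-01 exists.
from llama_cpp import Llama

llm = Llama(
    model_path="MiniMax-Text-01-Q4_K_M.gguf",  # hypothetical quantized file
    n_ctx=4096,        # context window to allocate
    n_gpu_layers=-1,   # offload all layers to GPU if available
)

output = llm(
    "Explain GGUF quantization in one sentence.",
    max_tokens=64,
)
print(output["choices"][0]["text"])
```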
Thank you for considering this request.