Check out my collection of pre-made GGUF LoRA adapters!
These let you run both the normal and the abliterated versions of popular models like Llama, Qwen, etc., without doubling your VRAM usage.
ngxson/gguf_lora_collection
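As a rough sketch of how this saves VRAM: you load the base model once and apply the adapter on top with llama.cpp's `--lora` flag. The file names below are placeholders, not actual files from the collection.

```shell
# Keep one copy of the base model in VRAM and layer the
# abliterated behavior on top via a small GGUF LoRA adapter,
# instead of loading a second full abliterated model.
llama-cli \
  -m base-model.Q4_K_M.gguf \
  --lora abliterated-adapter.gguf \
  -p "Hello"
```

Dropping the `--lora` argument runs the same base model with its normal behavior, so switching between the two variants costs only the adapter file rather than a second full set of weights.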