phi-4-4.0bpw-exl2

Original Model: microsoft/phi-4
Quantization Method: EXL2

Overview

This is an EXL2 quantization of microsoft/phi-4 at 4.0 bits per weight (bpw).
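As a quick usage sketch, EXL2 models load with the exllamav2 Python package. The snippet below is a minimal example, assuming `pip install exllamav2`, a CUDA-capable GPU, and that the quantized weights have been downloaded locally; the model path and prompt are placeholders:

```python
# Minimal ExLlamaV2 loading sketch (assumes exllamav2 is installed and
# the quantized weights are available in a local directory).
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

model_dir = "/path/to/phi-4-4.0bpw-exl2"  # placeholder: local weights directory

config = ExLlamaV2Config(model_dir)
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)   # allocate cache as layers are loaded
model.load_autosplit(cache)                # split the model across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(prompt="Hello, phi-4!", max_new_tokens=128))
```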

Quantization By

I often have idle A100 GPUs while building, testing, and training for the RolePlai app, so I put them to use quantizing models.

I hope the community finds these quantizations useful.

Andrew Webby @ RolePlai
