SD3.5-Large-IP-Adapter
This repository contains the checkpoints for the diffusers implementation of InstantX/SD3.5-Large-IP-Adapter, an IP-Adapter for the SD3.5-Large model released by researchers from the InstantX Team. The image prompt works just like a text prompt, so it may occasionally be unresponsive or interfere with the text prompt, but we do hope you enjoy this model. Have fun, and share your creative works with us on Twitter.
Model Card
This is a regular IP-Adapter, with new layers added to all 38 transformer blocks. We use google/siglip-so400m-patch14-384 to encode the reference image for its superior performance, and adopt a TimeResampler to project the image features. The number of image tokens is set to 64.
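For intuition, here is a minimal stand-alone sketch of the first stage of that pipeline: encoding a reference image with the same SigLIP checkpoint. The shape comment is an assumption based on the encoder configuration, not a value asserted by this repository; inside the pipeline, the adapter's TimeResampler then projects these patch features down to 64 image tokens.

import torch
from PIL import Image
from transformers import SiglipVisionModel, SiglipImageProcessor

image_encoder_path = "google/siglip-so400m-patch14-384"
feature_extractor = SiglipImageProcessor.from_pretrained(image_encoder_path)
image_encoder = SiglipVisionModel.from_pretrained(image_encoder_path)

ref_img = Image.open("image.jpg").convert("RGB")  # any RGB reference image
inputs = feature_extractor(images=ref_img, return_tensors="pt")
with torch.no_grad():
    features = image_encoder(**inputs).last_hidden_state
# features has shape (1, num_patches, hidden_dim); the adapter's TimeResampler
# projects these patch features to 64 image tokens, which are injected into
# all 38 blocks of the SD3.5-Large transformer.
print(features.shape)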
Inference
import torch
from PIL import Image
from diffusers import StableDiffusion3Pipeline
from transformers import SiglipVisionModel, SiglipImageProcessor

model_path = "stabilityai/stable-diffusion-3.5-large"
image_encoder_path = "google/siglip-so400m-patch14-384"
ip_adapter_path = "guiyrt/InstantX-SD3.5-Large-IP-Adapter-diffusers"

# Image encoder and feature extractor used to embed the reference image
feature_extractor = SiglipImageProcessor.from_pretrained(
    image_encoder_path, torch_dtype=torch.bfloat16
)
image_encoder = SiglipVisionModel.from_pretrained(
    image_encoder_path, torch_dtype=torch.bfloat16
)

pipe = StableDiffusion3Pipeline.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    feature_extractor=feature_extractor,
    image_encoder=image_encoder,
).to(torch.device("cuda"))
pipe.load_ip_adapter(ip_adapter_path)

ref_img = Image.open("image.jpg").convert('RGB')

# Please note that SD3.5-Large is sensitive to high-resolution generation (e.g. 1536x1536)
image = pipe(
    width=1024,
    height=1024,
    prompt="a cat",
    negative_prompt="lowres, low quality, worst quality",
    num_inference_steps=24,
    guidance_scale=5.0,
    generator=torch.manual_seed(42),
    ip_adapter_image=ref_img
).images[0]
image.save("result.jpg")
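If the reference image lives at a URL rather than on disk, you can use the load_image helper from diffusers instead of PIL. A minimal sketch with a placeholder URL:

from diffusers.utils import load_image

# load_image accepts a local path or an HTTP(S) URL and returns a PIL RGB image
ref_img = load_image("https://example.com/reference.jpg")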
GPU Memory Constraints
If you run out of GPU memory, you can use sequential CPU offloading (it should work even on 8GB GPUs, assuming enough system RAM). It comes at the cost of longer inference time, as parameters are only copied to the GPU when strictly required, but the output is exactly the same as on a larger GPU that fits the entire pipeline in memory. Refer to Memory Optimisations for SD3 for additional ways to reduce GPU memory usage, such as removing the T5-XXL text encoder or using a quantized version of it.
To use sequential CPU offloading, instantiate the pipeline as such instead:
pipe = StableDiffusion3Pipeline.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    feature_extractor=feature_extractor,
    image_encoder=image_encoder,
)
pipe.load_ip_adapter(ip_adapter_path)

# Keep the image encoder out of sequential offloading; everything else is moved
# to the GPU layer by layer, only when needed
pipe._exclude_from_cpu_offload.append("image_encoder")
pipe.enable_sequential_cpu_offload()
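As mentioned above, another option is to skip loading the T5-XXL text encoder entirely, which trades some prompt adherence for a large memory saving. A minimal sketch of that variant (everything else stays the same as in the inference example):

pipe = StableDiffusion3Pipeline.from_pretrained(
    model_path,
    text_encoder_3=None,  # skip loading T5-XXL
    tokenizer_3=None,
    torch_dtype=torch.bfloat16,
    feature_extractor=feature_extractor,
    image_encoder=image_encoder,
).to(torch.device("cuda"))
pipe.load_ip_adapter(ip_adapter_path)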
Community ComfyUI Support
Please refer to Slickytail/ComfyUI-InstantX-IPAdapter-SD3.
License
The model is released under the stabilityai-ai-community license. All rights reserved.
Acknowledgements
This project is sponsored by HuggingFace and fal.ai. Thanks to Slickytail for the community ComfyUI node.
Citation
If you find this project useful in your research, please cite us via
@misc{sd35-large-ipa,
author = {InstantX Team},
title = {InstantX SD3.5-Large IP-Adapter Page},
year = {2024},
}