Apply for community grant: Academic project (GPU and storage)

#1
by libokj - opened

I am writing to apply for GPU support to enhance GenFBDD, a web-based pipeline that transforms Fragment-Based Drug Design (FBDD) by automating its core processes with state-of-the-art diffusion-based molecular generation models. We are a team of PhD students working in AI-assisted drug discovery, and we have submitted a proposal for this work to the Nucleic Acids Research 2025 Web Server Issue, where it is currently under review.

FBDD traditionally depends on experience-demanding, intuition-driven workflows, involving fragment docking, hotspot identification, and linker design, to identify weak-binding chemical fragments and optimize them into lead compounds and drug candidates. GenFBDD reimagines this process as a diffusion model-driven Dock-Link-ReDock pipeline, integrating state-of-the-art models such as DiffDock and DiffLinker to streamline the workflow and accelerate drug discovery.

GenFBDD currently runs on CPU infrastructure, which constrains its speed and scalability given the computational demands of the diffusion models in our pipeline. GPU acceleration with at least 24 GB of VRAM would significantly improve performance, enabling faster computations and making this first-ever web-based, generative model-empowered FBDD pipeline more accessible to researchers globally. Additionally, a small amount of persistent storage for historical predictions—programmatically retained for up to 48 hours—would address the challenges posed by the extended runtime of submitted jobs and further improve usability.

Website for embedding this space: https://www.ciddr-lab.ac.cn/genfbdd/

Thank you for considering this application.

@hysts Happy New Year! Hope you had a great holiday season. Just wanted to check in and make sure my application didn't get lost in the holiday shuffle 🙂. By the way, I duplicated the Space and tested it on ZeroGPU, and there seems to be a compatibility issue between the spaces.GPU decorator and torch_geometric. The maximum duration on ZeroGPU might also not be enough for the pipeline, as it takes around 10–30 minutes per submission (on a local RTX 4090), depending on the dataset.
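For context, this is roughly how the decorator gets applied in a ZeroGPU Space. This is a minimal sketch, not the GenFBDD code: the function name `run_pipeline`, its body, and the duration value are placeholders, and the import fallback simply makes the same file runnable on CPU-only hardware where the spaces package is absent.

```python
# Minimal sketch of the ZeroGPU decorator pattern (illustrative only).
# `run_pipeline` and duration=300 are placeholders, not GenFBDD internals.
try:
    import spaces  # available inside a Hugging Face Space

    gpu_decorator = spaces.GPU(duration=300)  # requested seconds of GPU time
except ImportError:
    def gpu_decorator(fn):
        # No-op fallback so the same code runs locally without a GPU queue.
        return fn

@gpu_decorator
def run_pipeline(fragments):
    # Dock / link / re-dock steps would run here in the real pipeline.
    return f"processed {len(fragments)} fragments"
```

The try/except fallback also helps narrow down decorator-related issues: if the error disappears when the no-op path is used, the problem lies in the interaction between spaces.GPU and the decorated function rather than in the model code itself.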

Hi @libokj , the community grant is intended for demos and doesn't cover research project support. Also, even for demos, those that take longer than 10 minutes or so to run are not eligible for the grant because only 5 or 6 people can run it in an hour. For such demos, we usually recommend creating a CPU demo and suggesting that users duplicate it and assign a paid GPU for their own use.


Hey, thank you for the feedback. I understand the need for apps on HF to be as accessible as possible. Would it be possible for us to work with reduced user-provided datasets, keeping the inference time per job to around 5 minutes, on a dedicated GPU with 16 or 24 GB of VRAM?
