All HF Hub posts

alibabasglab posted an update 2 days ago
🎉 ClearerVoice-Studio New Feature: Speech Super-Resolution with MossFormer2! 🚀
We’re excited to announce that ClearerVoice-Studio now supports speech super-resolution, powered by our latest MossFormer2-based model!
What’s New?

🔊 Convert Low-Resolution to High-Resolution Audio:
Transform low-resolution audio (effective sampling rate ≥ 16 kHz) into crystal-clear, high-resolution audio at 48 kHz.

🤖 Cutting-Edge Technology:
Leverages the MossFormer2 model plus HiFi-GAN, optimised for generating high-quality audio with enhanced perceptual clarity.

🎧 Enhanced Listening Experience:
Perfect for speech enhancement, content restoration, and high-fidelity audio applications.

🌟 Try It Out!
Upgrade to the latest version of ClearerVoice-Studio (https://github.com/modelscope/ClearerVoice-Studio) to experience this powerful feature. Check out the updated documentation and examples in our repository.

Let us know your thoughts, feedback, or feature requests in the Issues section.
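A quick way to sanity-check the ≥ 16 kHz effective-sampling-rate requirement is to look at where the spectral energy actually stops, since audio stored at 48 kHz may still be band-limited to 8 kHz. A rough NumPy sketch (the 99%-energy cutoff heuristic is an assumption for illustration, not part of ClearerVoice-Studio):

```python
import numpy as np

def effective_sample_rate(signal: np.ndarray, sr: int,
                          energy_frac: float = 0.99) -> float:
    """Estimate the effective sample rate of a mono signal: twice the
    frequency below which `energy_frac` of the spectral energy lies."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    cumulative = np.cumsum(spectrum) / np.sum(spectrum)
    cutoff_hz = freqs[np.searchsorted(cumulative, energy_frac)]
    return 2.0 * cutoff_hz

# A 4 kHz tone stored at 48 kHz has an effective rate of only ~8 kHz,
# so it would not qualify for super-resolution under the 16 kHz rule.
tone = np.sin(2 * np.pi * 4000 * np.arange(48000) / 48000)
print(effective_sample_rate(tone, 48000))
```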
MonsterMMORPG posted an update 2 days ago
It is now possible to generate 16-megapixel (4096x4096) raw images with the SANA 4K model using under 8 GB VRAM, 4-megapixel (2048x2048) images using under 6 GB VRAM, and 1-megapixel (1024x1024) images using under 4 GB VRAM, thanks to new optimizations.

13 January 2024 Update

Installers : https://www.patreon.com/posts/from-nvidia-labs-116474081

New 4K Tutorial Video : https://youtu.be/GjENQfHF4W8

The app now uses the Diffusers pipeline, which brings major VRAM optimizations.

You need to reinstall

The models will be downloaded into your Hugging Face cache folder the first time you generate something.

How to Get Installation Logs and How to Change Hugging Face Cache Folder :
https://www.patreon.com/posts/108419878

Please make a fresh install

When you enable all 4 optimizations, VRAM usage is as follows:

Make sure shared VRAM is enabled, because the initial model load needs more VRAM.

Enable VAE Tiling + Enable VAE Slicing + Enable Model CPU Offload + Enable Sequential CPU Offload

1K (1024x1024) : 4 GB GPUs
2K (2048x2048) : 6 GB GPUs
4K (4096x4096) : 8 GB GPUs

Even if your GPU has less VRAM than listed, it may still work, so test it.

Enabling just VAE Tiling + Model CPU Offload works great in many cases.
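For reference, the four toggles map onto standard Diffusers memory helpers. A minimal sketch, assuming a Diffusers pipeline object; note that the two CPU-offload modes are alternatives in Diffusers, so this sketch applies only one of them:

```python
def apply_memory_optimizations(pipe, vae_tiling=True, vae_slicing=True,
                               model_cpu_offload=True,
                               sequential_cpu_offload=False):
    """Enable the VRAM-saving options on a diffusers pipeline."""
    # VAE tiling decodes the latent in tiles; slicing decodes one image
    # of a batch at a time. Guarded, since not every VAE supports both.
    if vae_tiling and hasattr(pipe.vae, "enable_tiling"):
        pipe.vae.enable_tiling()
    if vae_slicing and hasattr(pipe.vae, "enable_slicing"):
        pipe.vae.enable_slicing()
    # Sequential offload is slower but more aggressive than model
    # offload; apply one of the two, not both.
    if sequential_cpu_offload:
        pipe.enable_sequential_cpu_offload()
    elif model_cpu_offload:
        pipe.enable_model_cpu_offload()
    return pipe
```

With a real pipeline you would first build it, e.g. `pipe = SanaPipeline.from_pretrained(...)` with a checkpoint id from the official repo (the class name and id here are assumptions; check the Diffusers docs), then call `apply_memory_optimizations(pipe)` before generating.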

All images attached below were generated with the SANA 4K model; they are raw outputs at 5376x3072 resolution.

Official repo page : https://github.com/NVlabs/Sana
hexgrad posted an update 9 days ago
📣 Looking for labeled, high-quality synthetic audio/TTS data 📣 Have you been, or are you currently, calling API endpoints from OpenAI, ElevenLabs, etc.? Do you have labeled audio data sitting around gathering dust? Let's talk! Join https://discord.gg/QuGxSWBfQy or comment down below.

If your data exceeds quantity & quality thresholds and is approved into the next hexgrad/Kokoro-82M training mix, and you permissively DM me the data under an effective Apache license, then I will DM back the corresponding voicepacks for YOUR data if/when the next Apache-licensed Kokoro base model drops.

What does this mean? If you've been calling closed-source TTS or audio API endpoints to:
- Build voice agents
- Make long-form audio, like audiobooks or podcasts
- Handle customer support, etc
Then YOU can contribute to the training mix and get useful artifacts in return. ❀️

More details at hexgrad/Kokoro-82M#21
merve posted an update 1 day ago
there's a new multimodal retrieval model in town 🤠
LlamaIndex released vdr-2b-multi-v1
> uses 70% fewer image tokens, yet outperforms other dse-qwen2-based models
> 3x faster inference with less VRAM 💨
> shrinkable with matryoshka 🪆
> can do cross-lingual retrieval!
Collection: llamaindex/visual-document-retrieval-678151d19d2758f78ce910e1 (with models and datasets)
Demo: llamaindex/multimodal_vdr_demo
Learn more from their blog post here: https://huggingface.co/blog/vdr-2b-multilingual 📖
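The matryoshka point means the embedding's leading dimensions already carry most of the signal, so vectors can be truncated and re-normalized for cheaper storage and search. An illustrative NumPy sketch (not the model's actual API):

```python
import numpy as np

def shrink(embeddings: np.ndarray, dim: int) -> np.ndarray:
    """Keep the first `dim` dimensions of each row and L2-normalize,
    matryoshka-style."""
    truncated = embeddings[:, :dim]
    norms = np.linalg.norm(truncated, axis=1, keepdims=True)
    return truncated / np.clip(norms, 1e-12, None)

def cosine_scores(queries: np.ndarray, docs: np.ndarray) -> np.ndarray:
    """After normalization, cosine similarity is a plain dot product."""
    return queries @ docs.T
```

Shrinking, say, 1536-dim vectors to 512 dims cuts index size 3x at a modest retrieval-quality cost; which prefix lengths work well depends on how the model was trained.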
davanstrien posted an update 1 day ago
Introducing scandi-fine-web-cleaner davanstrien/scandi-fine-web-cleaner, the first model trained on FineWeb-C community annotations!

FineWeb2 is a massive multilingual dataset for pre-training language models. Like any web-scale dataset, it contains low-quality content. How can we improve it?

Over the past months, an amazing community of 400+ annotators has been labelling content quality (using Argilla) across 23 languages through the FineWeb-C initiative.

Today, I'm happy to share the first classifier trained on this data.

πŸ” What we've built:

- A lightweight classifier that efficiently removes low-quality content
- 90%+ precision demonstrated on Danish & Swedish
- Can process the 43M+ documents in Danish FineWeb2 with minimal compute

🌍 Why this matters: The approach can be reproduced for any of the 23 languages in FineWeb-C (data-is-better-together/fineweb-c). We can improve training data quality at scale without massive compute resources by starting with community annotations and training small, efficient classifiers.

Want to build a classifier for your language? Check out the full blog post with code examples and implementation details: https://danielvanstrien.xyz/posts/2025/FineWeb-c/scandinavian-content-filtering-fineweb.html
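The filtering step itself is simple once a classifier exists. A hedged sketch of the idea (the label name and threshold below are illustrative assumptions, not values from the model card):

```python
def filter_documents(docs, classify, keep_label="clean", threshold=0.9):
    """Keep documents the classifier confidently assigns `keep_label`.

    `classify` is any callable returning {"label": ..., "score": ...},
    e.g. a wrapped transformers text-classification pipeline.
    """
    kept = []
    for doc in docs:
        result = classify(doc)
        if result["label"] == keep_label and result["score"] >= threshold:
            kept.append(doc)
    return kept
```

With transformers, `classify` could wrap `pipeline("text-classification", model="davanstrien/scandi-fine-web-cleaner")` to return a single dict per document; check the model card for the actual label names.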
meg posted an update 1 day ago
💫...And we're live!💫 Seasonal newsletter from ethicsy folks at Hugging Face, exploring the ethics of "AI Agents"
https://huggingface.co/blog/ethics-soc-7
Our analyses found:
- There's a spectrum of "agent"-ness
- *Safety* is a key issue, leading to many other value-based concerns
Read for details & what to do next!
With @evijit , @giadap , and @sasha
alibabasglab posted an update about 21 hours ago
We are thrilled to present the improved "ClearerVoice-Studio", an open-source platform designed to make speech processing easy to use for everyone! Whether you’re working on speech enhancement, speech separation, speech super-resolution, or target speaker extraction, this unified platform has you covered.

**Why Choose ClearerVoice-Studio?**

- Pre-Trained Models: Includes cutting-edge pre-trained models, fine-tuned on extensive, high-quality datasets. No need to start from scratch!
- Ease of Use: Designed for seamless integration with your projects, offering a simple yet flexible interface for inference and training.

**Where to Find Us?**

- GitHub Repository: ClearerVoice-Studio (https://github.com/modelscope/ClearerVoice-Studio)
- Try Our Demo: Hugging Face Space (alibabasglab/ClearVoice)

**What Can You Do with ClearerVoice-Studio?**

- Enhance noisy speech recordings to achieve crystal-clear quality.
- Separate speech from complex audio mixtures with ease.
- Transform low-resolution audio into high-resolution audio. A full upscaled LJSpeech-1.1-48kHz dataset can be downloaded from alibabasglab/LJSpeech-1.1-48kHz.
- Extract target speaker voices with precision using audio-visual models.
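Each task above maps to a default pretrained checkpoint. A small dispatch sketch; the task and model names follow the project README at the time of writing and may change, so treat them as assumptions and verify against the repository:

```python
# Assumed task -> default pretrained model mapping (verify in the repo).
TASK_MODELS = {
    "speech_enhancement": "MossFormer2_SE_48K",
    "speech_separation": "MossFormer2_SS_16K",
    "speech_super_resolution": "MossFormer2_SR_48K",
    "target_speaker_extraction": "AV_MossFormer2_TSE_16K",
}

def pick_model(task: str) -> str:
    """Return the default pretrained model name for a supported task."""
    try:
        return TASK_MODELS[task]
    except KeyError:
        raise ValueError(f"unsupported task: {task!r}") from None
```

You would then instantiate something like `ClearVoice(task="speech_enhancement", model_names=[pick_model("speech_enhancement")])`; the `ClearVoice` constructor shown here is also taken from the README and should be double-checked.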

**Join Us in Growing ClearerVoice-Studio!**

We believe in the power of open-source collaboration. By starring our GitHub repository and sharing ClearerVoice-Studio with your network, you can help us grow this community-driven platform.

**Support us by:**

- Starring it on GitHub.
- Exploring and contributing to our codebase.
- Sharing your feedback and use cases to make the platform even better.
- Joining our community discussions to exchange ideas and innovations.
Together, let’s push the boundaries of speech processing! Thank you for your support! 💖
fdaudens posted an update about 23 hours ago
🔥 The AI Agent hype is real! This blog post deep dives into everything you need to know before deploying them: from key definitions to practical recommendations. A must-read for anyone building the future of autonomous systems.

📊 Key insight: A clear table breaking down the 5 levels of AI agents, from simple processors to fully autonomous systems. An essential framework for understanding where your agent stands on the autonomy spectrum.

βš–οΈ Deep analysis of 15 core values reveals critical trade-offs: accuracy, privacy, safety, equity & more. The same features that make agents powerful can make them risky. Understanding these trade-offs is crucial for responsible deployment

🎯 6 key recommendations for the road ahead:
- Create rigorous evaluation protocols
- Study societal effects
- Understand ripple effects
- Improve transparency
- Open source can make a positive difference
- Monitor base model evolution

Read the blog post: https://huggingface.co/blog/ethics-soc-7
Brilliant work by @meg @evijit @sasha @giadap
nroggendorff posted an update 1 day ago
AdinaY posted an update about 12 hours ago
MiniCPM-o 2.6 🔥 an on-device multimodal LLM released by OpenBMB from the Chinese community
Model: openbmb/MiniCPM-o-2_6
✨ Real-time English/Chinese conversation, emotion control and ASR/STT
✨ Real-time video/audio understanding
✨ Processes up to 1.8M pixels, leads OCRBench & supports 30+ languages