Alibaba_Speech_Lab_SG PRO

alibabasglab

AI & ML interests

speech enhancement, separation, and codec

Recent Activity

updated a Space about 4 hours ago
alibabasglab/ClearVoice-SR
liked a dataset 1 day ago
alibabasglab/LJSpeech-1.1-48kHz
posted an update 1 day ago

Organizations

Alibaba-PAI

alibabasglab's activity

posted an update 1 day ago
We are thrilled to present the improved "ClearerVoice-Studio", an open-source platform designed to make speech processing easy for everyone! Whether you’re working on speech enhancement, speech separation, speech super-resolution, or target speaker extraction, this unified platform has you covered.

**Why Choose ClearerVoice-Studio?**

- Pre-Trained Models: Includes cutting-edge pre-trained models, fine-tuned on extensive, high-quality datasets. No need to start from scratch!
- Ease of Use: Designed for seamless integration with your projects, offering a simple yet flexible interface for inference and training.
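
As a rough sketch of that inference interface: the `ClearVoice` class and the `task`/`model_names` arguments follow the project README, but the exact names and the task-to-checkpoint mapping below are assumptions that may differ between versions.

```python
# Hedged sketch of the ClearVoice inference interface from ClearerVoice-Studio.
# The checkpoint names below are illustrative assumptions, not an exhaustive list.
TASKS = {
    "speech_enhancement": "MossFormer2_SE_48K",
    "speech_separation": "MossFormer2_SS_16K",
    "speech_super_resolution": "MossFormer2_SR_48K",
    "target_speaker_extraction": "AV_MossFormer2_TSE_16K",
}

def model_for(task: str) -> str:
    """Return the assumed default checkpoint name for a supported task."""
    if task not in TASKS:
        raise ValueError(f"unknown task: {task}")
    return TASKS[task]

# Typical inference call (requires installing ClearerVoice-Studio):
#   from clearvoice import ClearVoice
#   cv = ClearVoice(task="speech_enhancement",
#                   model_names=[model_for("speech_enhancement")])
#   cv(input_path="noisy.wav", online_write=True, output_path="enhanced/")
```

Swapping the task string selects a different pre-trained model without changing the rest of the call.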

**Where to Find Us?**

- GitHub Repository: ClearerVoice-Studio (https://github.com/modelscope/ClearerVoice-Studio)
- Try Our Demo: Hugging Face Space (https://huggingface.co/spaces/alibabasglab/ClearVoice)

**What Can You Do with ClearerVoice-Studio?**

- Enhance noisy speech recordings to achieve crystal-clear quality.
- Separate speech from complex audio mixtures with ease.
- Transform low-resolution audio into high-resolution audio. The full upscaled LJSpeech-1.1-48kHz dataset can be downloaded from https://huggingface.co/datasets/alibabasglab/LJSpeech-1.1-48kHz.
- Extract target speaker voices with precision using audio-visual models.

**Join Us in Growing ClearerVoice-Studio!**

We believe in the power of open-source collaboration. By starring our GitHub repository and sharing ClearerVoice-Studio with your network, you can help us grow this community-driven platform.

**Support us by:**

- Starring it on GitHub.
- Exploring and contributing to our codebase.
- Sharing your feedback and use cases to make the platform even better.
- Joining our community discussions to exchange ideas and innovations.

Together, let’s push the boundaries of speech processing! Thank you for your support! 💖
reacted to their post with πŸ‘ 2 days ago
posted an update 2 days ago
🎉 ClearerVoice-Studio New Feature: Speech Super-Resolution with MossFormer2! 🚀
We’re excited to announce that ClearerVoice-Studio now supports speech super-resolution, powered by our latest MossFormer2-based model!
What’s New?

πŸ”Š Convert Low-Resolution to High-Resolution Audio:
Transform low-resolution audio (effective sampling rate β‰₯ 16 kHz) into crystal-clear, high-resolution audio at 48 kHz.
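
To make the sampling-rate jump concrete, here is a minimal, model-free sketch: plain interpolation from 16 kHz to 48 kHz triples the sample count but cannot restore frequency content above the input's 8 kHz Nyquist limit, which is exactly the gap a learned super-resolution model targets. Only NumPy is used, and the function name is our own.

```python
import numpy as np

def naive_upsample(x, sr_in=16_000, sr_out=48_000):
    """Linear-interpolation resampling: changes the rate, not the bandwidth.

    Unlike learned super-resolution, this cannot reconstruct energy above
    the input's Nyquist frequency (sr_in / 2 = 8 kHz here).
    """
    n_out = int(len(x) * sr_out / sr_in)
    t_in = np.arange(len(x)) / sr_in
    t_out = np.arange(n_out) / sr_out
    return np.interp(t_out, t_in, x)

# One second of a 440 Hz tone: 16,000 samples in, 48,000 samples out.
tone = np.sin(2 * np.pi * 440 * np.arange(16_000) / 16_000)
upsampled = naive_upsample(tone)
print(len(tone), len(upsampled))  # 16000 48000
```

The duration is unchanged; only the sample count triples, so any perceived quality gain has to come from the model generating the missing high-band content.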

πŸ€– Cutting-Edge Technology:
Leverages the MossFormer2 model plus HiFi-GAN, optimised for generating high-quality audio with enhanced perceptual clarity.

🎧 Enhanced Listening Experience:
Perfect for speech enhancement, content restoration, and high-fidelity audio applications.

🌟 Try It Out!
Upgrade to the latest version of ClearerVoice-Studio (https://github.com/modelscope/ClearerVoice-Studio) to experience this powerful feature. Check out the updated documentation and examples in our repository.

Let us know your thoughts, feedback, or feature requests in the Issues section.