Update README.md
# BiomedCLIP-PubMedBERT_256-vit_base_patch16_224

[BiomedCLIP](https://aka.ms/biomedclip-paper) is a biomedical vision-language foundation model pretrained with contrastive learning on [PMC-15M](https://github.com/microsoft/BiomedCLIP_data_pipeline), a dataset of 15 million figure-caption pairs extracted from biomedical research articles in PubMed Central.
It uses PubMedBERT as the text encoder and a Vision Transformer as the image encoder, with domain-specific adaptations.
It can perform various vision-language processing (VLP) tasks such as cross-modal retrieval, image classification, and visual question answering.
BiomedCLIP establishes a new state of the art on a wide range of standard datasets and substantially outperforms prior VLP approaches:

![](biomed-vlp-eval.svg)
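
As a quick illustration of the zero-shot image classification use case, the following is a minimal sketch that assumes the checkpoint is loadable through the `open_clip` (open_clip_torch) Hugging Face Hub integration. The image path, candidate labels, and prompt template are placeholders; see [Model Use](#model-use) for the intended usage guidance.

```python
import torch
from PIL import Image
import open_clip

# Load BiomedCLIP from the Hugging Face Hub via open_clip.
model, preprocess = open_clip.create_model_from_pretrained(
    "hf-hub:microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224"
)
tokenizer = open_clip.get_tokenizer(
    "hf-hub:microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224"
)
model.eval()

# Placeholder labels, prompt template, and input image; replace with your own.
labels = ["chest X-ray", "brain MRI", "histopathology slide"]
texts = tokenizer(
    [f"this is a photo of a {label}" for label in labels], context_length=256
)
image = preprocess(Image.open("example_figure.png")).unsqueeze(0)

with torch.no_grad():
    # The model returns image features, text features, and the learned logit scale.
    image_features, text_features, logit_scale = model(image, texts)
    probs = (logit_scale * image_features @ text_features.t()).softmax(dim=-1)

for label, prob in zip(labels, probs[0].tolist()):
    print(f"{label}: {prob:.3f}")
```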
## Contents

- [Training Data](#training-data)
- [Model Use](#model-use)
- [Reference](#reference)
- [Limitations](#limitations)
- [Further Information](#further-information)

## Training Data

We have released the BiomedCLIP Data Pipeline at [https://github.com/microsoft/BiomedCLIP_data_pipeline](https://github.com/microsoft/BiomedCLIP_data_pipeline), which automatically downloads and processes a set of articles from the PubMed Central Open Access dataset.
BiomedCLIP builds upon PMC-15M, a large-scale parallel image-text dataset generated by this data pipeline for biomedical vision-language processing. It contains 15 million figure-caption pairs extracted from biomedical research articles in PubMed Central and covers a diverse range of biomedical image types, such as microscopy, radiography, histology, and more.
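
The pipeline's exact output format is not documented in this card; purely as an illustration of what a figure-caption pair looks like downstream, the sketch below assumes hypothetical JSONL records with `image_path` and `caption` fields.

```python
import json
from pathlib import Path

from PIL import Image


def iter_figure_caption_pairs(jsonl_file: str):
    """Yield (image, caption) pairs from a hypothetical JSONL export.

    The field names "image_path" and "caption" are illustrative only,
    not the data pipeline's actual schema.
    """
    for line in Path(jsonl_file).read_text().splitlines():
        record = json.loads(line)
        image = Image.open(record["image_path"]).convert("RGB")
        yield image, record["caption"]


# Each (image, caption) pair is the unit a CLIP-style contrastive
# objective consumes during pretraining.
for image, caption in iter_figure_caption_pairs("pmc_figures.jsonl"):
    print(image.size, caption[:80])
    break
```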
## Model Use
[...]

**Any** deployed use case of the model --- commercial or otherwise --- is currently out of scope. Although we evaluated the models using a broad set of publicly-available research benchmarks, the models and evaluations are not intended for deployed use cases. Please refer to [the associated paper](https://aka.ms/biomedclip-paper) for more details.
## Reference

```bibtex
@article{zhang2024biomedclip,
  title={A Multimodal Biomedical Foundation Model Trained from Fifteen Million Image–Text Pairs},
  author={Sheng Zhang and Yanbo Xu and Naoto Usuyama and Hanwen Xu and Jaspreet Bagga and Robert Tinn and Sam Preston and Rajesh Rao and Mu Wei and Naveen Valluri and Cliff Wong and Andrea Tupini and Yu Wang and Matt Mazzola and Swadheen Shukla and Lars Liden and Jianfeng Gao and Angela Crabtree and Brian Piening and Carlo Bifulco and Matthew P. Lungren and Tristan Naumann and Sheng Wang and Hoifung Poon},
  journal={NEJM AI},
  year={2024},
  volume={2},
  number={1},
  doi={10.1056/AIoa2400640},
  url={https://ai.nejm.org/doi/full/10.1056/AIoa2400640}
}
```
## Limitations