Santiago Castro
bryant1410
3 followers · 13 following
https://santi.uy
AI & ML interests
Vision and Language, Humor, Sarcasm.
Recent Activity
reacted to rwightman's post · about 2 months ago
Want to validate some hparams or figure out what `timm` model to use before committing to downloading or training with a large dataset? Try mini-imagenet: https://huggingface.co/datasets/timm/mini-imagenet

I had this sitting on my drive and forgot where I pulled it together from. It's 100 classes of ImageNet: 50k train and 10k val images (from the ImageNet-1k train set), and 5k test images (from the ImageNet-1k val set). 7.4GB instead of >100GB for the full ImageNet-1k. This version is not reduced resolution like some other 'mini' versions. Super easy to use with the timm train/val scripts; check out the dataset card.

I often check fine-tuning with even smaller datasets like:
* https://huggingface.co/datasets/timm/resisc45
* https://huggingface.co/datasets/timm/oxford-iiit-pet

But those are a bit small to train any modest-size model w/o starting from pretrained weights.
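A minimal sketch of pulling this dataset from the Hub with the `datasets` library and wrapping it for a quick PyTorch run. The split name and the `image`/`label` column names are assumptions based on typical Hub image datasets; check the dataset card before relying on them:

```python
# Hedged sketch: load timm/mini-imagenet from the Hub and iterate one batch.
# Assumes a standard "train" split with "image" and "label" columns.
import torch
import torchvision.transforms as T
from datasets import load_dataset
from torch.utils.data import DataLoader

ds = load_dataset("timm/mini-imagenet", split="train")  # ~50k images, 100 classes

preprocess = T.Compose([T.Resize(160), T.CenterCrop(160), T.ToTensor()])

def collate(batch):
    # Each item is a dict; images are PIL objects under the assumed schema.
    images = torch.stack([preprocess(ex["image"].convert("RGB")) for ex in batch])
    labels = torch.tensor([ex["label"] for ex in batch])
    return images, labels

loader = DataLoader(ds, batch_size=32, collate_fn=collate)
images, labels = next(iter(loader))
print(images.shape, labels.shape)  # e.g. [32, 3, 160, 160], [32]
```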
reacted to rwightman's post with ❤️ · 4 months ago
The `timm` leaderboard https://huggingface.co/spaces/timm/leaderboard has been updated with the ability to select different hardware benchmark sets: RTX4090, RTX3090, and two different CPUs, along with some NCHW / NHWC layout and torch.compile (dynamo) variations.

Also worth pointing out: there are three newish 'test' models that you'll see at the top of any samples/sec comparison:
* test_vit (https://huggingface.co/timm/test_vit.r160_in1k)
* test_efficientnet (https://huggingface.co/timm/test_efficientnet.r160_in1k)
* test_byobnet (https://huggingface.co/timm/test_byobnet.r160_in1k, a mix of resnet, darknet, effnet/regnet-like blocks)

They are < 0.5M params, insanely fast, and originally intended for unit testing w/ real weights. They have awful ImageNet top-1; it's rare for anyone to bother training a model this small on ImageNet (the classifier is roughly 30-70% of the param count!). However, they are FAST on very limited hardware and you can fine-tune them well on small data. Could be the model you're looking for?
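A short sketch of instantiating one of these test models for fast experimentation. Loading Hub checkpoints via the `hf_hub:` prefix is standard `timm` usage; the `num_classes` value here is just an illustrative assumption (e.g. re-heading for a 45-class dataset like resisc45):

```python
# Hedged sketch: create a tiny timm test model and run a forward pass.
# timm.create_model with an "hf_hub:" prefix pulls config + weights from the Hub.
import timm
import torch

model = timm.create_model(
    "hf_hub:timm/test_vit.r160_in1k",
    pretrained=True,
    num_classes=45,  # assumption: replace the head for a small fine-tuning target
)
model.eval()

x = torch.randn(1, 3, 160, 160)  # 160x160 inputs, per the r160 in the model name
with torch.no_grad():
    logits = model(x)

print(sum(p.numel() for p in model.parameters()))  # well under 0.5M params
print(logits.shape)  # torch.Size([1, 45])
```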
reacted to rwightman's post with 🔥 · 5 months ago
The latest timm validation & test set results are now viewable in a leaderboard space: https://huggingface.co/spaces/timm/leaderboard

As of yesterday, I updated all of the results for the ImageNet, ImageNet-ReaL, ImageNet-V2, ImageNet-R, ImageNet-A, and Sketch sets. The csv files can be found in the GH repo https://github.com/huggingface/pytorch-image-models/tree/main/results

Unfortunately the latest benchmark csv files are not yet up to date; there are some gaps in dataset results vs. throughput/FLOP numbers that impact the plots.

h/t to @MohamedRashad for making the first timm leaderboard.
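A small sketch of loading those result CSVs straight from the GitHub repo with pandas. The exact filename `results-imagenet.csv` and the `model`/`top1` column names are assumptions based on the results/ directory described above; adjust to whatever the repo actually contains:

```python
# Hedged sketch: read timm's published ImageNet validation results with pandas.
import pandas as pd

url = (
    "https://raw.githubusercontent.com/huggingface/pytorch-image-models/"
    "main/results/results-imagenet.csv"  # assumed filename; see the results/ dir
)
df = pd.read_csv(url)

# Show the top models by top-1 accuracy (column names assumed: "model", "top1").
print(df.sort_values("top1", ascending=False)[["model", "top1"]].head(10))
```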
Papers (13)
arxiv: 2402.15021
arxiv: 2305.18786
arxiv: 2305.12544
arxiv: 2210.02399
Models (1)
bryant1410/xlm-roberta-base-finetuned-quales • Question Answering • Updated Oct 20, 2023 • 17
Datasets (1)
bryant1410/moments-in-time • Updated Jul 17, 2024 • 2