Local ML Model Inference
The cluster focuses on running machine learning models locally for inference and training, including support for downloading pretrained models, running models in the browser, hardware compatibility (GPUs/TPUs), and integration with HuggingFace.
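The recurring theme in this cluster, "can I download the weights and run the model myself", can be illustrated with a minimal sketch. The snippet below is purely illustrative and not tied to any project discussed here: it hard-codes a tiny set of "pretrained" weights (in practice these would be downloaded, e.g. from HuggingFace) and runs a forward pass entirely locally, with no framework or network access.

```python
import math

# Illustrative "pretrained" weights for a tiny 2-layer network.
# In a real workflow these would be downloaded once, then reused offline.
W1 = [[0.5, -0.2], [0.1, 0.4]]   # input(2) -> hidden(2)
b1 = [0.0, 0.1]
W2 = [[0.3], [-0.6]]             # hidden(2) -> output(1)
b2 = [0.05]

def matvec(W, x):
    """Compute x @ W: entry j is the dot product of x with column j of W."""
    return [sum(W[i][j] * x[i] for i in range(len(x))) for j in range(len(W[0]))]

def relu(v):
    return [max(0.0, a) for a in v]

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def predict(x):
    """Run local inference: two affine layers with ReLU, then a sigmoid."""
    h = relu([a + b for a, b in zip(matvec(W1, x), b1)])
    out = [a + b for a, b in zip(matvec(W2, h), b2)]
    return sigmoid(out[0])

print(predict([1.0, 2.0]))  # a probability-like score in (0, 1)
```

Hardware acceleration (the GPU/TPU questions above) and browser execution change where this forward pass runs, but not its structure: the same weights and the same arithmetic, executed by a different backend.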
Activity Over Time
Top Contributors
Keywords
Sample Comments
This is very useful! Are you planning to support model training as well?
Cool project! Does it support local models?
Nice. Any ML inference or training by chance?
Hey google, if your model is so lightweight, let me run it!
dumb question - but - can we actually use this right now? Like download the model and run it locally?
Train and deploy Neural Nets and Transformers in the browser with one simple tag!
Build high-performance AI models with modular and re-usable building blocks 80% faster than PyTorch, Tensorflow, and others.
Does this run on GPUs and/or TPUs?
First party support for models hosted on HuggingFace for optimisation/conversion! Pretty stoked about playing with it
don't they provide (pre)trained models? someone on github does.