Local ML Model Inference

This cluster covers running machine learning models locally for inference and training, including downloading pretrained models, in-browser execution, hardware support (GPUs/TPUs), and integration with HuggingFace.
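The cluster's core theme — loading pretrained weights and running a forward pass entirely on the local machine, with no network calls at inference time — can be sketched with a toy classifier. The weights below are hypothetical stand-ins for a downloaded checkpoint; a real workflow would instead load a model via a library such as `transformers` or ONNX Runtime.

```python
import math

# Hypothetical stand-in for a downloaded checkpoint: weights and bias
# for a 2-feature, 3-class linear classifier (illustrative values only).
WEIGHTS = [[0.8, -0.5], [0.1, 0.9], [-0.6, 0.2]]
BIAS = [0.0, -0.2, 0.1]

def softmax(logits):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict(features):
    # Local inference: one forward pass, no network access required.
    logits = [sum(w * x for w, x in zip(row, features)) + b
              for row, b in zip(WEIGHTS, BIAS)]
    return softmax(logits)

probs = predict([1.0, 2.0])
print(probs.index(max(probs)))  # index of the most probable class
```

The same shape generalizes to real checkpoints: fetch weights once (e.g. from a model hub), then all predictions are pure local computation.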

Trend: ➡️ Stable (0.7x) · Category: AI & Machine Learning

Comments: 4,189
Years Active: 17
Top Authors: 5
Topic ID: #5070

Activity Over Time

2010: 2
2011: 3
2012: 2
2013: 11
2014: 14
2015: 23
2016: 131
2017: 287
2018: 221
2019: 217
2020: 336
2021: 216
2022: 363
2023: 852
2024: 704
2025: 746
2026: 61

Keywords

HuggingFace, AI, S3, TensorFlow, ML, IMU, PyTorch, ARM, youtu.be, ONNX, model, inference, models, weights, run, tensorflow, training, ml, trained, prediction

Sample Comments

gbieler · Jan 15, 2025

This is very useful! Are you planning to support model training as well?

grandma_tea · Nov 3, 2023

Cool project! Does it support local models?

sliken · Mar 1, 2023

Nice. Any ML inference or training by chance?

Tepix · Feb 6, 2023

Hey google, if your model is so lightweight, let me run it!

jejeyyy77 · Feb 24, 2023

dumb question - but - can we actually use this right now? Like download the model and run it locally?

eduardoleao052 · Nov 1, 2024

Train and deploy Neural Nets and Transformers in the browser with one simple tag!

Reclaimer · Jan 6, 2024

Build high-performance AI models with modular and re-usable building blocks 80% faster than PyTorch, Tensorflow, and others.

desmap · Dec 4, 2020

Does this run on GPUs and/or TPUs?

navanchauhan · Jun 8, 2022

First party support for models hosted on HuggingFace for optimisation/conversion! Pretty stoked about playing with it

singularity2001 · Sep 14, 2018

don't they provide (pre)trained models? someone on github does.