Related searches: BitNet cpp Microsoft · BitNet · BitNet b1.58 · BitNet b1.58 2B4T on Docker Desktop · BitNet b1.58 2B4T · BitNet 1.58 · How to Train BitNet b1.58 Large · Microsoft Introduces b1.58 2B4T · BitNet Aleksandar Haber PhD · BitNet with C# · 3B/1B Large Language Models · BitNet b1.58 CPU · llama.cpp Koboldcpp · 1.58-Bit Model · How to Deploy Local LLM Free · Super Weight in Large Language Models · Bit Tensor · 1 Bit · Big Bits
0:13
Microsoft made 100B parameter models run on a single CPU. bitnet.cpp: the official inference framework for 1-bit LLMs. The math behind 1-bit LLMs is what makes them revolutionary. Traditional LLMs use 16-bit floating-point weights; every parameter is a number like 0.0023847 or -1.4729. When you run inference, you multiply these floats together, billions of times. That's why you need GPUs: they're optimized for floating-point matrix multiplication. BitNet b1.58 uses ternary weights: {-1, 0, 1}. That's…
22.9K views
1 month ago
x.com
Tech with Mak
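The post above notes that with ternary weights {-1, 0, 1}, every "multiplication" in a matrix product collapses to an add, a subtract, or a skip. A minimal plain-Python sketch of that idea (illustrative only, not the optimized bitnet.cpp kernels):

```python
# Ternary-weight matrix-vector product: no floating-point multiplies needed,
# because each weight can only be -1, 0, or 1.
def ternary_matvec(W, x):
    """W: rows of ternary weights in {-1, 0, 1}; x: activation vector."""
    out = []
    for row in W:
        acc = 0.0
        for w, xi in zip(row, x):
            if w == 1:        # +1 weight: add the activation
                acc += xi
            elif w == -1:     # -1 weight: subtract the activation
                acc -= xi
            # 0 weight: skip the activation entirely
        out.append(acc)
    return out

W = [[1, 0, -1],
     [-1, 1, 1]]
x = [0.5, 2.0, -1.5]
print(ternary_matvec(W, x))  # [2.0, 0.0]
```

This is why CPU inference becomes practical: integer adds and subtracts are far cheaper than the dense floating-point multiply-accumulates that GPUs are built to accelerate.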
6:48
1-Bit LLM 🤯 The Most Efficient Language Model Ever? BitNet Explained
60 views
4 months ago
YouTube
Subramanyam KMV
1 bit LLMs: 1-bit Bonsai vs BitNet. Large Language Models Extreme Quantization. Who wins? | Byte Goose AI
220 views
1 month ago
linkedin.com
0:28
Running massive LLMs on a single CPU - actually works #AI #LLM
1.5K views
2 weeks ago
YouTube
Amplify Imagination
Microsoft's BitNet Framework Revolutionizes AI with CPU-Only LLMs | Insights Node posted on the topic | LinkedIn
2 months ago
linkedin.com
10:40
Presentation on BitNet b1.58: The Era of 1-bit Large Language Models
1 view
2 weeks ago
YouTube
Al-Amin Farhad
6:50
BITNET - 1 Bit LLM inferencing on Mac - step by step installing Bitnet on Apple silicon #macbook
1.3K views
May 9, 2025
YouTube
Tech-Practice
6:14
Run 1 Bit LLM on Apple Silicon iPhone iPad and Macbook - MLX Bitnet
862 views
May 10, 2024
YouTube
Fahd Mirza
0:34
Microsoft researchers release bitnet.cpp, the official inference framework for 1-bit LLMs like BitNet b1.58. It has optimized kernels for fast, lossless inference on CPUs, achieving impressive speedups on ARM and x86 CPUs and significant energy reductions. https://t.co/mWmG58bFhK
75.1K views
Oct 23, 2024
x.com
Microsoft Research
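The "b1.58" in the model name comes from the information content of a ternary weight: log2(3) ≈ 1.58 bits. A quick check of that number, plus the idealized memory ratio against 16-bit weights for the 100B-parameter scale mentioned in these posts (back-of-envelope arithmetic, not a bitnet.cpp API):

```python
import math

# A ternary weight has three possible states (-1, 0, 1),
# so it carries log2(3) bits of information.
bits_per_ternary_weight = math.log2(3)
print(round(bits_per_ternary_weight, 2))  # 1.58

# Idealized weight-storage footprint for a 100B-parameter model:
params = 100e9
fp16_gb    = params * 16 / 8 / 1e9                       # 16-bit floats
ternary_gb = params * bits_per_ternary_weight / 8 / 1e9  # ternary packing
print(round(fp16_gb), round(ternary_gb))  # 200 20
```

That roughly 10x reduction in weight storage is what makes fitting very large models into ordinary CPU memory plausible at all.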
7:18
The Era of 1-bit LLMs | BitNet | Microsoft | Paper Explained
3.3K views
Mar 6, 2024
YouTube
Kalyan KS
16:23
BitNet.cpp Tutorial for 1-Bit LLMs - for Efficient Inference
2.6K views
Oct 20, 2024
YouTube
LLM Master Cursos
6:00
bitnet.cpp from Microsoft: Run LLMs locally on CPU! (hands-on)
4.4K views
Oct 29, 2024
YouTube
AI Bites
The Era of 1-bit LLMs by Microsoft | AI Paper Explained | daily.dev
3 weeks ago
daily.dev
5:37
1 Bit LLMs BitNet, ARM & the End of GPUs
1 week ago
YouTube
DEEPTECH AI LABS
20:39
BitNet b1.58 2B4T: Scalable 1-bit LLM
157 views
7 months ago
YouTube
The Times of AI
Microsoft open sourced an inference framework that runs a 100B parameter LLM on a single CPU. It's called BitNet. And it does what was supposed to be impossible. No GPU. No cloud. No $10K hardware… | Mariano Aloi
2 months ago
linkedin.com
14:35
1-Bit LLM: The Most Efficient LLM Possible?
375.2K views
11 months ago
YouTube
bycloud
13:45
The Era of 1-bit LLMs BitNet b1.58 No Floating point operations for LLM
1.4K views
Feb 29, 2024
YouTube
AI WITH Rithesh
Holy ****... Microsoft open sourced an inference framework that runs a 100B parameter LLM on a single CPU. It's called BitNet. And it does what was supposed to be impossible. No GPU. No cloud. No… | George Spanidis | 45 comments
45 views
2 months ago
linkedin.com
5:21
Microsoft Just Fixed Local AI (BitNet 1.58-bit LLMs)
189 views
2 months ago
YouTube
NewTechWorld
8:25
BitNet Implementation Guide
137 views
3 months ago
YouTube
CognoAi
25:08
how to run microsoft bitnet-b1.58-2B-4T locally
2.2K views
Apr 18, 2025
YouTube
Total Technology Zonne
3:49
BitNet Explained: LLMs Run Efficiently on a Single CPU
101 views
3 months ago
YouTube
_UndrScor
14:45
BitNet b1.58 2B4T : Install on Windows Microsoft's 1-bit revolutionary LLM
5.6K views
Apr 20, 2025
YouTube
Aleksandar Haber PhD
12:25
I Built a Jarvis AI That Runs on CPU With NO GPU — Microsoft BitNet 1-Bit LLM
335 views
1 month ago
YouTube
Jownology
4:57
Microsoft Bitnet
56 views
1 month ago
YouTube
Sonsie Face
0:37
32K+ Stars • AI | BitNet — Fast 1-bit LLM Inference #shorts
1K views
2 months ago
YouTube
neural-nexus
8:13
Testing BitNet: a 1-bit LLM that runs on a CPU? Local setup and test (open-source LLM)
410 views
11 months ago
YouTube
Pythonia Formation
4:07
Bitnet.cpp Explained: 6.25x Faster Lossless Inference for Ternary LLMs on Edge Devices
42 views
2 months ago
YouTube
rayyy
6:15
bitnet.cpp The Era of 1-bit LLMs is HERE!
169 views
4 months ago
YouTube
Eddy Says Hi #EddySaysHi