M4 (10-core GPU, 16 GB) — LLM Benchmarks
Measured LLM inference benchmarks for M4 (10-core GPU, 16 GB). Tokens per second across 4 models and multiple quantizations. Real runs, not estimates.
- 4 benchmark rows
- 4 models tested
- 76.2 fastest avg tok/s (Llama 3.2 1B Instruct)
- 0 factory-lab verified rows
Benchmark results for M4 (10-core GPU, 16 GB)
Rows are sorted by avg tok/s, descending. The source column links to the original measurement page.
| Model | Quant | Avg tok/s | Runtime | Source |
|---|---|---|---|---|
| Llama 3.2 1B Instruct | Q4_K - Medium | 76.2 tok/s | — | ref |
| Llama 2 7B | Q4_0 | 24.1 tok/s | llama.cpp | ref |
| Llama 3.1 8B Instruct | Q4_K - Medium | 16.0 tok/s | — | ref |
| Qwen 2.5 14B Instruct | Q4_K - Medium | 8.7 tok/s | — | ref |
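The avg tok/s figures above are typically computed as tokens generated divided by wall-clock decode time, averaged over repeated runs. A minimal sketch of that calculation (the run counts and timings below are hypothetical, not this page's raw data):

```python
def avg_tok_per_s(runs):
    """Average tokens/second over repeated (tokens_generated, seconds) runs."""
    rates = [tokens / seconds for tokens, seconds in runs]
    return sum(rates) / len(rates)

# Hypothetical runs: (tokens generated, elapsed seconds) per run
runs = [(128, 1.7), (128, 1.6), (128, 1.8)]
print(round(avg_tok_per_s(runs), 1))  # → 75.5
```

Averaging the per-run rates (rather than dividing total tokens by total time) weights each run equally, which is the usual convention for reporting a benchmark's average throughput.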
Models tested on this chip
Llama 3.2 1B Instruct · Llama 2 7B · Llama 3.1 8B Instruct · Qwen 2.5 14B Instruct
Data
benchmarks.json — full dataset · chips.json — chip summaries · benchmarks.csv — CSV export
Data sourced from factory-lab measurements and community reference runs.