M4 (10-core GPU) — LLM Benchmarks
Measured LLM inference benchmarks for M4 (10-core GPU) across all RAM configurations (16 GB, 24 GB, 32 GB). 10 benchmark rows across 4 models. Compare how RAM affects throughput. Real runs, not estimates.
- 10 benchmark rows
- 4 models tested
- 3 RAM configurations
- 76.2 fastest avg tok/s
RAM configurations
Each configuration differs only in unified memory. More RAM = larger models fit. Throughput is similar across RAM tiers at the same model size.
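To estimate whether a given model fits in a RAM tier before downloading it, a back-of-the-envelope footprint calculation is enough. This is a rough sketch, not a measured value: the 4.5 effective bits per weight for Q4_K - Medium and the 20% runtime/KV-cache overhead factor are assumptions.

```python
def approx_model_size_gb(params_billion: float, bits_per_weight: float,
                         overhead: float = 1.2) -> float:
    """Rough quantized-model footprint in GB.

    overhead=1.2 assumes ~20% extra for KV cache and runtime buffers
    (an assumption, not a measurement).
    """
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# Q4_K - Medium averages roughly 4.5 bits per weight (approximation).
for name, params in [("Llama 3.2 1B", 1.2), ("Llama 3.1 8B", 8.0),
                     ("Qwen 2.5 14B", 14.8)]:
    print(f"{name}: ~{approx_model_size_gb(params, 4.5):.1f} GB")
```

By this estimate even the 14B model fits comfortably in the 16 GB tier, which is consistent with the 16 GB rows in the table below.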
All benchmark rows — M4 (10-core GPU)
Sorted by avg tok/s descending.
| Chip (RAM) | Model | Quant | Avg tok/s | Runtime | Source |
|---|---|---|---|---|---|
| M4 (10-core GPU, 16 GB) | Llama 3.2 1B Instruct | Q4_K - Medium | 76.2 tok/s | — | ref |
| M4 (10-core GPU, 32 GB) | Llama 3.2 1B Instruct | Q4_K - Medium | 75.6 tok/s | — | ref |
| M4 (10-core GPU, 24 GB) | Llama 3.2 1B Instruct | Q4_K - Medium | 75.4 tok/s | — | ref |
| M4 (10-core GPU, 16 GB) | Llama 2 7B | Q4_0 | 24.1 tok/s | llama.cpp | ref |
| M4 (10-core GPU, 32 GB) | Llama 3.1 8B Instruct | Q4_K - Medium | 16.8 tok/s | — | ref |
| M4 (10-core GPU, 16 GB) | Llama 3.1 8B Instruct | Q4_K - Medium | 16.0 tok/s | — | ref |
| M4 (10-core GPU, 24 GB) | Llama 3.1 8B Instruct | Q4_K - Medium | 15.9 tok/s | — | ref |
| M4 (10-core GPU, 24 GB) | Qwen 2.5 14B Instruct | Q4_K - Medium | 9.2 tok/s | — | ref |
| M4 (10-core GPU, 16 GB) | Qwen 2.5 14B Instruct | Q4_K - Medium | 8.7 tok/s | — | ref |
| M4 (10-core GPU, 32 GB) | Qwen 2.5 14B Instruct | Q4_K - Medium | 8.6 tok/s | — | ref |
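The claim that throughput is similar across RAM tiers can be checked directly from the table. A minimal sketch, using the avg tok/s values above for the three models measured at all three tiers (Llama 2 7B is excluded because it only has a 16 GB row):

```python
# Avg tok/s from the table, keyed by (model, RAM in GB).
results = {
    ("Llama 3.2 1B Instruct", 16): 76.2,
    ("Llama 3.2 1B Instruct", 24): 75.4,
    ("Llama 3.2 1B Instruct", 32): 75.6,
    ("Llama 3.1 8B Instruct", 16): 16.0,
    ("Llama 3.1 8B Instruct", 24): 15.9,
    ("Llama 3.1 8B Instruct", 32): 16.8,
    ("Qwen 2.5 14B Instruct", 16): 8.7,
    ("Qwen 2.5 14B Instruct", 24): 9.2,
    ("Qwen 2.5 14B Instruct", 32): 8.6,
}

# Per-model spread across RAM tiers, as a percentage of the per-model mean.
for model in sorted({m for m, _ in results}):
    vals = [v for (m, _), v in results.items() if m == model]
    spread = (max(vals) - min(vals)) / (sum(vals) / len(vals)) * 100
    print(f"{model}: {spread:.1f}% spread")
```

The spreads come out in the low single digits, small enough to sit within run-to-run noise, which supports picking RAM for model size rather than for speed.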
Models tested on M4 (10-core GPU): Llama 3.2 1B Instruct, Llama 2 7B, Llama 3.1 8B Instruct, Qwen 2.5 14B Instruct
Data
benchmarks.json — full dataset · chips.json — chip summaries · benchmarks.csv — CSV export
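The CSV export can be filtered and ranked with the standard library alone. A minimal sketch; the column names below (`chip`, `ram_gb`, `model`, `quant`, `avg_tok_s`) are assumed for illustration and may not match the real `benchmarks.csv` layout.

```python
import csv
import io

# Inline sample standing in for benchmarks.csv; the real file's
# column names may differ (assumed layout, not confirmed).
sample = """chip,ram_gb,model,quant,avg_tok_s
M4 (10-core GPU),16,Llama 3.2 1B Instruct,Q4_K - Medium,76.2
M4 (10-core GPU),32,Llama 3.1 8B Instruct,Q4_K - Medium,16.8
M4 (10-core GPU),24,Qwen 2.5 14B Instruct,Q4_K - Medium,9.2
"""

rows = list(csv.DictReader(io.StringIO(sample)))
# Rank rows by throughput, fastest first.
rows.sort(key=lambda r: float(r["avg_tok_s"]), reverse=True)
fastest = rows[0]
print(fastest["model"], fastest["avg_tok_s"])
```

For the real file, swap `io.StringIO(sample)` for `open("benchmarks.csv")`.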
Data sourced from factory lab measurements and community reference runs.