M2 Pro (16-core GPU) — LLM Benchmarks
Measured LLM inference benchmarks for the M2 Pro (16-core GPU) across both RAM configurations (16 GB and 32 GB): 5 benchmark rows across 3 models, showing how RAM affects throughput. All figures come from real runs, not estimates.
- 5 benchmark rows
- 3 models tested
- 2 RAM configurations
- 91.5 tok/s fastest average
RAM configurations
The two configurations differ only in unified memory capacity. More RAM lets larger models fit; at a given model size, throughput is similar across RAM tiers. A rough way to check whether a quantized model fits is sketched below.
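As a minimal sketch, the fit check can be approximated from parameter count and quantization density. The ~4.8 bits-per-weight figure for Q4_K_M and the fixed overhead for KV cache and runtime buffers are illustrative assumptions, not values from this dataset:

```python
# Rough check: will a quantized GGUF model fit in unified memory?
# Heuristic only; bits_per_weight and overhead_gb are illustrative
# assumptions, and parameter counts below are nominal sizes.

def fits_in_ram(params_billion: float, ram_gb: float,
                bits_per_weight: float = 4.8,   # ~Q4_K_M average (assumed)
                overhead_gb: float = 2.0) -> bool:
    """Estimate whether model weights plus overhead fit in RAM.

    overhead_gb loosely covers KV cache, runtime buffers, and the OS
    share of unified memory; tune it for your context length.
    """
    weights_gb = params_billion * bits_per_weight / 8  # GB of weights
    return weights_gb + overhead_gb <= ram_gb

for params, name in [(1, "Llama 3.2 1B"), (8, "Llama 3.1 8B"), (14, "Qwen 2.5 14B")]:
    for ram in (16, 32):
        verdict = "fits" if fits_in_ram(params, ram) else "does not fit"
        print(f"{name} on {ram} GB: {verdict}")
```

By this heuristic, Qwen 2.5 14B at Q4_K_M needs roughly 8.4 GB of weights, consistent with it appearing in the 16 GB rows below.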
All benchmark rows — M2 Pro (16-core GPU)
Sorted by avg tok/s, descending.
| Chip (RAM) | Model | Quant | Avg tok/s | Runtime |
|---|---|---|---|---|
| M2 Pro (16-core GPU, 32 GB) | Llama 3.2 1B Instruct | Q4_K - Medium | 91.5 | — |
| M2 Pro (16-core GPU, 16 GB) | Llama 3.2 1B Instruct | Q4_K - Medium | 91.1 | — |
| M2 Pro (16-core GPU, 16 GB) | Llama 3.1 8B Instruct | Q4_K - Medium | 24.3 | — |
| M2 Pro (16-core GPU, 32 GB) | Llama 3.1 8B Instruct | Q4_K - Medium | 23.8 | — |
| M2 Pro (16-core GPU, 16 GB) | Qwen 2.5 14B Instruct | Q4_K - Medium | 13.4 | — |
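Comparable numbers can be reproduced locally. The sketch below uses llama-cpp-python, an assumption since the dataset does not state the original measurement tool, and the model path is hypothetical; point it at any Q4_K_M GGUF file:

```python
# Minimal decode-throughput timer using llama-cpp-python.
# Assumptions: llama-cpp-python is installed with Metal support, and
# the model path below is hypothetical.
import time
from llama_cpp import Llama

llm = Llama(
    model_path="llama-3.2-1b-instruct-q4_k_m.gguf",  # hypothetical path
    n_gpu_layers=-1,   # offload all layers to the GPU (Metal on Apple silicon)
    verbose=False,
)

start = time.perf_counter()
out = llm("Explain unified memory in one paragraph.", max_tokens=256)
elapsed = time.perf_counter() - start

generated = out["usage"]["completion_tokens"]
print(f"{generated} tokens in {elapsed:.1f}s -> {generated / elapsed:.1f} tok/s")
```

Note that this times prompt processing and decoding together, so it will read slightly below a pure decode figure like the ones in the table.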
Models tested on M2 Pro (16-core GPU)
- Llama 3.2 1B Instruct
- Llama 3.1 8B Instruct
- Qwen 2.5 14B Instruct
Data
benchmarks.json — full dataset · chips.json — chip summaries · benchmarks.csv — CSV export
Data sourced from factory lab measurements and community reference runs.
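For programmatic analysis, the CSV export is the easiest entry point. A minimal sketch, assuming benchmarks.csv uses column names matching the table above (the actual schema may differ; adjust to the real header row):

```python
# Load the CSV export and report the fastest run per model.
# Assumption: column names mirror the table above.
import csv
from collections import defaultdict

fastest = defaultdict(float)
with open("benchmarks.csv", newline="") as f:
    for row in csv.DictReader(f):
        # Handles values stored either as "91.5" or "91.5 tok/s".
        tok_s = float(row["Avg tok/s"].replace(" tok/s", ""))
        fastest[row["Model"]] = max(fastest[row["Model"]], tok_s)

for model, tok_s in sorted(fastest.items(), key=lambda kv: -kv[1]):
    print(f"{model}: {tok_s:.1f} tok/s")
```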