
M4 (10-core GPU, 16 GB) — LLM Benchmarks

Measured LLM inference benchmarks for the M4 (10-core GPU, 16 GB): tokens per second across 4 models and multiple quantizations. Real runs, not estimates.

- Benchmark rows: 4
- Models tested: 4
- Fastest avg tok/s: 76.2 (Llama 3.2 1B Instruct)
- Factory-lab verified rows: 0

This chip is part of a family. View all M4 (10-core GPU) RAM variants →

Benchmark results for M4 (10-core GPU, 16 GB)

Rows are sorted by avg tok/s, descending. Click a source badge to see the original measurement page.

| Model | Quant | RAM req. | Context | Avg tok/s | Prompt tok/s | Runtime | Source |
|---|---|---|---|---|---|---|---|
| Llama 3.2 1B Instruct | Q4_K - Medium | | | 76.2 | 1091.1 | | ref |
| Llama 2 7B | Q4_0 | 3.6 GB | 512 | 24.1 | 221.3 | llama.cpp | ref |
| Llama 3.1 8B Instruct | Q4_K - Medium | | | 16.0 | 166.8 | | ref |
| Qwen 2.5 14B Instruct | Q4_K - Medium | | | 8.7 | 83.1 | | ref |
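Where the RAM-requirement cell is filled in (3.6 GB for Llama 2 7B at Q4_0), it roughly matches a back-of-envelope estimate from parameter count and bits per weight. A minimal sketch of that arithmetic, assuming ~4.5 effective bits per weight for a Q4_0-style quantization (4-bit weights plus per-block scales); real file sizes vary with metadata, and KV-cache memory grows with context, so treat this as a lower bound:

```python
def approx_model_ram_gb(n_params: float, bits_per_weight: float = 4.5) -> float:
    """Rough RAM to hold quantized weights, in GB (10^9 bytes).

    bits_per_weight ~4.5 approximates a 4-bit block quantization with
    per-block scales; this ignores tokenizer/metadata and the KV cache.
    """
    return n_params * bits_per_weight / 8 / 1e9

# Llama 2 7B has ~6.7e9 parameters; the estimate lands in the same
# ballpark as the 3.6 GB shown in the table above.
print(round(approx_model_ram_gb(6.7e9), 1))
```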

benchmarks.json — full dataset  ·  chips.json — chip summaries  ·  benchmarks.csv — CSV export
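For programmatic use, the CSV export can be consumed with the standard library. A minimal sketch: the column names below are assumptions for illustration, so check the real header row of benchmarks.csv before relying on them; the inline sample stands in for the downloaded file.

```python
import csv
import io

# Stand-in for open("benchmarks.csv"); column names are assumed, not
# taken from the actual export.
sample = """model,quant,avg_tok_s,prompt_tok_s
Llama 2 7B,Q4_0,24.1,221.3
Llama 3.2 1B Instruct,Q4_K - Medium,76.2,1091.1
"""

rows = list(csv.DictReader(io.StringIO(sample)))
# Sort fastest-first, matching the table on this page.
rows.sort(key=lambda r: float(r["avg_tok_s"]), reverse=True)
for r in rows:
    print(f'{r["model"]}: {r["avg_tok_s"]} tok/s')
```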

Data sourced from factory lab measurements and community reference runs.