
M4 Max (24-core GPU) — LLM Benchmarks

Measured LLM inference benchmarks for the M4 Max (24-core GPU): tokens per second for 1 model across multiple quantizations. Real runs, not estimates.

- 1 benchmark row
- 1 model tested
- 7.1 fastest avg tok/s (Llama 3.3 70B)
- 0 factory-lab verified rows

Benchmark results for M4 Max (24-core GPU)

Rows are sorted by avg tok/s, descending. Click a source badge to open the original measurement page.

| Model | Quant | RAM req. | Context | Avg tok/s | Prompt tok/s | Runtime | Source |
|---|---|---|---|---|---|---|---|
| Llama 3.3 70B | Q5_K_M | 50.0 GB | 8k | 7.1 tok/s | — | — | ref |

benchmarks.json — full dataset  ·  chips.json — chip summaries  ·  benchmarks.csv — CSV export
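As a minimal sketch of working with the exported dataset, the snippet below parses benchmark rows and sorts them by average tokens per second, as the table does. The field names (`model`, `quant`, `avg_tok_s`, etc.) are assumptions inferred from the table columns; check the actual `benchmarks.json` schema before relying on them.

```python
import json

# Sample rows mirroring the table above. Field names are assumed,
# not taken from the real benchmarks.json schema.
sample = json.loads("""
[
  {"model": "Llama 3.3 70B", "quant": "Q5_K_M",
   "ram_gb": 50.0, "context": "8k", "avg_tok_s": 7.1}
]
""")

# Sort by average tokens/second, descending, matching the table order.
rows = sorted(sample, key=lambda r: r["avg_tok_s"], reverse=True)

for r in rows:
    print(f'{r["model"]} ({r["quant"]}): {r["avg_tok_s"]} tok/s')
```

The same approach works on `benchmarks.csv` with `csv.DictReader`, converting the tok/s column to `float` before sorting.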

Data sourced from factory lab measurements and community reference runs. See all chips →