
Llama 3.1 8B Instruct — Apple Silicon Benchmarks

Measured inference speed for Llama 3.1 8B Instruct across 52 Apple Silicon chips. Tokens per second at multiple quantization levels. Real runs, not estimates.

Quantizations measured: Q4_K - Medium

Benchmark rows: 52
Chip tiers covered: 52
Fastest avg tok/s: 63.3 (M3 Ultra, 80-core GPU, 256 GB)
Minimum RAM observed
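As a sanity check on RAM figures, the weight footprint of a Q4_K - Medium model can be estimated from parameter count times average bits per weight. The numbers below are assumptions, not values from the table: Llama 3.1 8B has roughly 8.03 billion parameters, and Q4_K - Medium averages somewhere near 4.8 bits per weight across tensors.

```python
# Rough weight-memory estimate for Llama 3.1 8B at Q4_K - Medium.
# Both constants are approximations (assumptions), not measured values.
PARAMS = 8.03e9          # ~8.03B parameters (assumed)
BITS_PER_WEIGHT = 4.8    # approx. average for Q4_K - Medium (assumed)

weight_bytes = PARAMS * BITS_PER_WEIGHT / 8
weight_gib = weight_bytes / 2**30
print(f"~{weight_gib:.1f} GiB of weights")
```

This covers weights only; the KV cache and runtime overhead add to the actual RAM requirement, which is why even 8 GB machines in the table sit close to their limit.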

Benchmark results for Llama 3.1 8B Instruct

Rows are sorted by avg tok/s, descending. Click a source badge ("ref") to open the original measurement page.

| Chip | Quant | Avg tok/s | Prompt tok/s | Source |
|---|---|---|---|---|
| M3 Ultra (80-core GPU, 256 GB) | Q4_K - Medium | 63.3 | 1062.2 | ref |
| M3 Ultra (80-core GPU, 512 GB) | Q4_K - Medium | 62.7 | 1109.5 | ref |
| M5 Max (32-core GPU, 36 GB) | Q4_K - Medium | 61.6 | 630.3 | ref |
| M2 Ultra (60-core GPU, 64 GB) | Q4_K - Medium | 59.5 | 703.3 | ref |
| M4 Max (40-core GPU, 48 GB) | Q4_K - Medium | 55.1 | 663.4 | ref |
| M1 Ultra (64-core GPU, 128 GB) | Q4_K - Medium | 54.3 | 667.6 | ref |
| M4 Max (40-core GPU, 128 GB) | Q4_K - Medium | 51.6 | 617.6 | ref |
| M1 Ultra (48-core GPU, 128 GB) | Q4_K - Medium | 48.9 | 534.4 | ref |
| M4 Max (32-core GPU, 36 GB) | Q4_K - Medium | 48.1 | 570.3 | ref |
| M4 Max (40-core GPU, 64 GB) | Q4_K - Medium | 47.1 | 557.1 | ref |
| M2 Max (38-core GPU, 96 GB) | Q4_K - Medium | 46.4 | 484.1 | ref |
| M3 Max (40-core GPU, 128 GB) | Q4_K - Medium | 45.8 | 587.1 | ref |
| M2 Max (38-core GPU, 32 GB) | Q4_K - Medium | 44.7 | 473.7 | ref |
| M1 Max (32-core GPU, 64 GB) | Q4_K - Medium | 37.8 | 380.7 | ref |
| M3 Max (30-core GPU, 96 GB) | Q4_K - Medium | 37.7 | 456.8 | ref |
| M3 Max (30-core GPU, 36 GB) | Q4_K - Medium | 37.5 | 443.5 | ref |
| M1 Max (32-core GPU, 32 GB) | Q4_K - Medium | 35.4 | 359.9 | ref |
| M4 Pro (20-core GPU, 64 GB) | Q4_K - Medium | 32.9 | 349.4 | ref |
| M4 Pro (20-core GPU, 48 GB) | Q4_K - Medium | 32.7 | 361.9 | ref |
| M4 Pro (20-core GPU, 24 GB) | Q4_K - Medium | 32.5 | 359.6 | ref |
| M1 Max (24-core GPU, 64 GB) | Q4_K - Medium | 32.1 | 306.2 | ref |
| M2 Max (30-core GPU, 32 GB) | Q4_K - Medium | 31.2 | 325.0 | ref |
| M4 Pro (16-core GPU, 24 GB) | Q4_K - Medium | 30.5 | 298.0 | ref |
| M4 Pro (16-core GPU, 48 GB) | Q4_K - Medium | 30.2 | 302.4 | ref |
| M2 Pro (19-core GPU, 32 GB) | Q4_K - Medium | 26.3 | 261.8 | ref |
| M3 Max (40-core GPU, 64 GB) | Q4_K - Medium | 25.4 | 377.0 | ref |
| M2 Pro (16-core GPU, 16 GB) | Q4_K - Medium | 24.3 | 224.6 | ref |
| M2 Pro (16-core GPU, 32 GB) | Q4_K - Medium | 23.8 | 208.7 | ref |
| M5 (10-core GPU, 32 GB) | Q4_K - Medium | 22.3 | 210.7 | ref |
| M3 Pro (18-core GPU, 36 GB) | Q4_K - Medium | 22.1 | 283.8 | ref |
| M1 Pro (16-core GPU, 16 GB) | Q4_K - Medium | 21.9 | 204.5 | ref |
| M1 Pro (16-core GPU, 32 GB) | Q4_K - Medium | 21.7 | 200.3 | ref |
| M3 Pro (14-core GPU, 36 GB) | Q4_K - Medium | 21.5 | 223.4 | ref |
| M3 Pro (18-core GPU, 18 GB) | Q4_K - Medium | 20.8 | 279.2 | ref |
| M1 Pro (14-core GPU, 16 GB) | Q4_K - Medium | 20.1 | 177.0 | ref |
| M1 Pro (14-core GPU, 32 GB) | Q4_K - Medium | 20.0 | 173.1 | ref |
| M3 Pro (14-core GPU, 18 GB) | Q4_K - Medium | 19.1 | 199.5 | ref |
| M2 (8-core GPU, 8 GB) | Q4_K - Medium | 18.3 | 148.8 | ref |
| M4 (10-core GPU, 32 GB) | Q4_K - Medium | 16.8 | 166.1 | ref |
| M4 (10-core GPU, 16 GB) | Q4_K - Medium | 16.0 | 166.8 | ref |
| M4 (10-core GPU, 24 GB) | Q4_K - Medium | 15.9 | 149.0 | ref |
| M4 (8-core GPU, 16 GB) | Q4_K - Medium | 15.3 | 134.0 | ref |
| M1 Ultra (GPU count not published, 128 GB) | Q4_K - Medium | 15.2 | 71.6 | ref |
| M2 (10-core GPU, 16 GB) | Q4_K - Medium | 14.7 | 140.7 | ref |
| M2 (10-core GPU, 24 GB) | Q4_K - Medium | 14.7 | 141.7 | ref |
| M1 (8-core GPU, 8 GB) | Q4_K - Medium | 14.6 | 133.9 | ref |
| M3 (10-core GPU, 16 GB) | Q4_K - Medium | 13.5 | 138.7 | ref |
| M1 (7-core GPU, 8 GB) | Q4_K - Medium | 13.4 | 108.7 | ref |
| M2 (8-core GPU, 16 GB) | Q4_K - Medium | 12.9 | 114.0 | ref |
| M3 (GPU count not published, 16 GB) | Q4_K - Medium | 11.8 | 33.6 | ref |
| M3 (10-core GPU, 24 GB) | Q4_K - Medium | 10.2 | 109.9 | ref |
| M1 (7-core GPU, 16 GB) | Q4_K - Medium | 9.4 | 83.9 | ref |
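The ordering in the table tracks memory bandwidth more than GPU core count: batch-1 decode is largely bandwidth-bound, since each generated token must stream the full quantized weight set from memory at least once. A rough ceiling is bandwidth divided by weight size. The sketch below uses published bandwidth specs quoted from memory and an assumed ~4.9 GB weight size; treat both as assumptions to verify, not figures from this dataset.

```python
# Bandwidth-roofline ceiling on decode tok/s: bandwidth / bytes-per-token,
# where bytes-per-token is approximated by the full quantized weight size.
WEIGHT_GB = 4.9  # approx. Q4_K - Medium size for Llama 3.1 8B (assumption)

# Published unified-memory bandwidths in GB/s (quoted from memory -- verify).
bandwidth_gbs = {
    "M3 Ultra": 819.2,
    "M2 Ultra": 800.0,
    "M1 Max": 400.0,
    "M4 Pro": 273.0,
}

ceilings = {chip: bw / WEIGHT_GB for chip, bw in bandwidth_gbs.items()}
for chip, ceiling in ceilings.items():
    print(f"{chip}: <= {ceiling:.0f} tok/s theoretical ceiling")
```

Measured results sit well below these ceilings (63.3 tok/s on the M3 Ultra versus a ceiling above 150), which is expected: compute overhead and imperfect bandwidth utilization keep real runs at a fraction of the roofline.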

benchmarks.json — full dataset  ·  models.json — model summaries  ·  benchmarks.csv — CSV export
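The CSV export can be re-sorted the same way the table above is (by avg tok/s, descending). A minimal sketch, using an inline sample with three rows taken from the table; the column names `chip`, `quant`, and `avg_tok_s` are hypothetical, so check the actual header of benchmarks.csv before relying on them. Note the quoting: chip names contain commas.

```python
import csv
import io

# Hypothetical benchmarks.csv layout; chip names are quoted because they
# contain commas. In practice, replace the sample with open("benchmarks.csv").
sample = '''chip,quant,avg_tok_s
"M1 Pro (16-core GPU, 16 GB)",Q4_K - Medium,21.9
"M3 Ultra (80-core GPU, 256 GB)",Q4_K - Medium,63.3
"M4 (10-core GPU, 16 GB)",Q4_K - Medium,16.0
'''

rows = list(csv.DictReader(io.StringIO(sample)))
# Sort by average tokens/sec, fastest first, matching the table ordering.
rows.sort(key=lambda r: float(r["avg_tok_s"]), reverse=True)
print([r["chip"] for r in rows])
```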
