
Llama 4 Maverick on LiveCodeBench: 45.4% accuracy
Author: The LayerLens Team
The LayerLens Team covers AI model evaluations, benchmark analysis, and the evolving landscape of AI performance. For the latest independent evaluation data, explore Stratix.
Summary
Llama 4 Maverick from Meta scored 45.4% on LiveCodeBench, ranking 22nd of 43 models on this benchmark. That places it in the below-frontier band for LiveCodeBench: acceptable for cost-sensitive workloads or as part of a multi-model ensemble, but not a default choice for high-stakes routing.
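The banding above follows from the model's rank relative to the field. A minimal sketch of how a rank-to-band mapping might work, using illustrative percentile cutoffs (the thresholds here are assumptions, not Stratix's actual banding logic):

```python
# Hypothetical banding by rank percentile; the cutoffs are illustrative
# assumptions, not the real Stratix criteria.
def band(rank: int, total: int) -> str:
    percentile = rank / total  # 22 / 43 ~= 0.51
    if percentile <= 0.10:
        return "frontier"
    if percentile <= 0.25:
        return "near-frontier"
    return "below-frontier"

print(band(22, 43))  # below-frontier
```

Under these assumed cutoffs, rank 22 of 43 sits just past the median, hence the below-frontier label.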
Model details
Provider: Meta
Model key: meta-llama/llama-4-maverick
Context length: 131,072 tokens
License: Llama 4
Open weights: yes
Benchmark methodology
Secondary metrics
Readability score: 0.0
Toxicity score: 0.000
Ethics score: 0.000
Run this evaluation yourself
Stratix evaluates Llama 4 Maverick continuously across 11+ benchmarks. To replicate this LiveCodeBench evaluation on your own model, your own traces, or a different benchmark configuration, open the model in Stratix.
_Source: Stratix evaluation 68f59050edbb131068938b4c. Updated 2025-10-20._