LayerLens and Subquadratic Announce Partnership to Enable Continuous, Transparent Evaluation of SubQ Models

LayerLens Team | May 14, 2026

TL;DR

  • Subquadratic has partnered with LayerLens to evaluate its model, SubQ, using the Stratix evaluation platform.

  • SubQ will go through the same benchmark suite used for every other model on Stratix, and the results will be published publicly alongside those of the 200+ models already on the platform.

  • SubQ is the first model on Stratix built on a non-transformer attention architecture.

  • Subquadratic will also use Stratix Enterprise to evaluate future versions of SubQ, enabling continuous measurement across new releases.

  • The partnership reflects a shared commitment to transparency, auditability, and responsible model assessment.

Introduction

LayerLens and Subquadratic today announced a partnership to evaluate Subquadratic's SubQ models using LayerLens' Stratix evaluation platform, establishing a continuous, transparent, and auditable evaluation process for one of the most ambitious recent attention architectures.

Through the partnership, SubQ will be evaluated using the same benchmark infrastructure applied across Stratix, which runs close to 100 benchmarks and hosts more than 200 models. Results will be published publicly through Stratix, allowing teams, researchers, and the broader AI community to understand SubQ's performance in a standardized and comparable environment.

As part of the collaboration, SubQuadratic will also use Stratix Enterprise to evaluate future versions of SubQ, enabling ongoing measurement across new releases, model updates, and capability improvements. The goal is to move beyond one-time benchmark snapshots and establish a continuous evaluation workflow that tracks how SubQ evolves over time.

"At LayerLens, our mission is to streamline AI evaluation and make it more intelligent, transparent, and actionable," said the LayerLens team. "SubQ represents an important architectural direction in AI systems. Our role in this partnership is to provide the independent evaluation layer needed to assess that progress rigorously, consistently, and without pre-supposing the outcome."

Evaluating a New Architectural Direction

Subquadratic is building SubQ, a model based on Subquadratic Sparse Attention, or SSA, an alternative approach to traditional transformer attention. Subquadratic has stated that SSA is designed to increase the amount of context a model can process while preserving performance, with the goal of enabling more complex, compute-efficient, long-horizon AI tasks.
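Subquadratic has not detailed SSA's internals in this announcement, so the sketch below is purely illustrative and is not a description of SSA. It shows one well-known way sparse attention avoids the quadratic cost of full attention: a sliding-window pattern in which each token attends only to a fixed number of recent positions, reducing the cost from O(n²) to O(n·w) for window size w.

```python
import numpy as np

def sliding_window_attention(q, k, v, window=128):
    """Generic sliding-window sparse attention (illustrative only; not SSA).

    Each query attends to at most `window` preceding keys, so total cost
    grows as O(n * window) rather than full attention's O(n^2)."""
    n, d = q.shape
    out = np.zeros_like(v)
    for i in range(n):
        lo = max(0, i - window + 1)                 # causal local window
        scores = q[i] @ k[lo:i + 1].T / np.sqrt(d)  # scaled dot products
        weights = np.exp(scores - scores.max())     # numerically stable softmax
        weights /= weights.sum()
        out[i] = weights @ v[lo:i + 1]              # weighted sum of values
    return out
```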

These are significant technical claims, and they require careful, transparent evaluation. The purpose of this partnership is not to endorse SubQ in advance, but to evaluate it through the same structured framework used across Stratix. By applying standardized benchmarks, prompt-level analysis, head-to-head comparisons, and detailed reporting, LayerLens aims to help the market understand where SubQ performs strongly, where it faces limitations, and how its capabilities evolve across releases.

What the Partnership Includes

LayerLens will evaluate SubQ across Stratix's benchmark suite, including both single-turn and multi-turn evaluations. Importantly, the evaluation will not be limited to long-context use cases. SubQ will also be tested across broader model capability areas, including reasoning, code generation, instruction following, tool use, and other standard benchmark categories used to evaluate models on Stratix.

The long-context evaluation suite will test retrieval accuracy at depth, positional consistency across the context window, and synthesis from extended inputs. These evaluations are designed to assess whether SubQ's architectural approach translates into measurable long-context performance under standardized conditions.
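As a rough illustration of what "retrieval accuracy at depth" can look like in practice, the hypothetical sketch below plants a single fact at varying relative positions in a long synthetic context and checks whether the model reproduces it. The prompt construction, token accounting, and exact-match scoring here are simplifying assumptions, not Stratix's methodology.

```python
# Hypothetical depth-based retrieval probe; `generate` is any callable
# that maps a prompt string to a model completion string.

FILLER = "The sky was clear and the market was quiet that day. "
NEEDLE = "The access code is 7421."
QUESTION = "What is the access code?"

def build_prompt(context_tokens: int, depth: float) -> str:
    """Embed the needle at a relative depth (0.0 = start, 1.0 = end)."""
    n_sentences = context_tokens // 10  # rough assumption: ~10 tokens/sentence
    sentences = [FILLER] * n_sentences
    sentences.insert(int(n_sentences * depth), NEEDLE + " ")
    return "".join(sentences) + "\n\n" + QUESTION

def score_depths(generate, context_tokens=32_000,
                 depths=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Report retrieval success at each insertion depth for one context size."""
    results = {}
    for depth in depths:
        answer = generate(build_prompt(context_tokens, depth))
        results[depth] = "7421" in answer  # exact-match check on planted fact
    return results
```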

In parallel, Stratix will evaluate SubQ outside the long-context setting to provide a broader view of model quality. This is essential because long-context capacity alone does not define model usefulness. Engineering and data science teams need to understand how a model performs across the full range of tasks that matter in production environments.

Subquadratic will also use Stratix Enterprise to evaluate future versions of SubQ. This will support a release-by-release evaluation process, giving Subquadratic a systematic way to measure model progress while giving external users a clearer view into how performance changes over time.
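To illustrate what a release-by-release workflow can surface, the hypothetical sketch below compares per-benchmark scores across consecutive model versions and flags regressions. The version tags, field names, and threshold are assumptions for illustration, not Stratix Enterprise's actual interface.

```python
from dataclasses import dataclass

@dataclass
class BenchmarkResult:
    version: str    # e.g. "subq-0.1", "subq-0.2" (hypothetical tags)
    benchmark: str  # e.g. "long_context_retrieval" (hypothetical name)
    score: float    # normalized score in [0, 1]

def regressions(history: list[BenchmarkResult], threshold: float = 0.02):
    """Flag benchmarks whose score dropped by more than `threshold`
    between consecutive versions."""
    by_benchmark: dict[str, list[BenchmarkResult]] = {}
    for r in history:
        by_benchmark.setdefault(r.benchmark, []).append(r)
    flagged = []
    for name, runs in by_benchmark.items():
        runs.sort(key=lambda r: r.version)  # assumes sortable version tags
        for prev, curr in zip(runs, runs[1:]):
            if prev.score - curr.score > threshold:
                flagged.append((name, prev.version, curr.version,
                                round(prev.score - curr.score, 4)))
    return flagged
```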

Availability and Deliverables

Initial SubQ evaluation results will be published on stratix.layerlens.ai once the first evaluation cycle is complete.

The public release will include:

  • Benchmark results across Stratix's standard and long-context evaluation suites.

  • Prompt-by-prompt results that show how SubQ performs on individual evaluation items.

  • Head-to-head model comparisons against other models available on Stratix.

  • Per-benchmark breakdowns to help users understand performance across specific capability areas.

  • Detailed evaluation reports summarizing methodology, results, strengths, limitations, and observed patterns.

SubQ's results will appear alongside the 200+ models already available on Stratix, which runs close to 100 benchmarks across a wide range of AI capability dimensions.

Advancing Transparent AI Evaluation

LayerLens built Stratix to make AI evaluation more systematic, intelligent, and accessible for teams selecting, building, and deploying AI systems. As the AI model ecosystem becomes more specialized, architectural diversity is increasing, and model claims are becoming more difficult to assess through isolated benchmarks or informal comparisons.

Continuous evaluation is especially important for models that introduce new architectural assumptions or make claims around emerging capability frontiers. By evaluating SubQ through Stratix, LayerLens and Subquadratic aim to create a transparent record of model performance that can be compared across releases, benchmarks, and competing systems.

The partnership reflects a shared commitment to transparency, auditability, and responsible model assessment. It also reinforces LayerLens' broader mission: to make AI evaluation more rigorous, more continuous, and more useful for the teams and communities that depend on understanding model behavior.

About LayerLens

LayerLens is building evaluation infrastructure for the next generation of AI systems. Its Stratix platform helps teams evaluate, compare, and monitor AI models across standardized benchmarks, long-context workloads, prompt-level results, and real-world performance dimensions. LayerLens' mission is to streamline AI evaluation and make it more intelligent, transparent, and actionable.

About Subquadratic

Subquadratic is an AI infrastructure and research company building a new class of LLMs. While the major labs focus on incremental transformer improvements, Subquadratic is pursuing foundational change at the model architecture layer, enabling long-context reasoning, persistent memory, and large-scale multimodal workloads that scale efficiently.

Explore SubQ's evaluation results and compare them against the 200+ models on Stratix.