Sabr Research

Scaling human-like reasoning for frontier AI.

The future of agentic AI lies in specialized Small Language Models. We support this shift by producing high-quality reasoning traces, generated through rigorous extraction and structuring of how human experts solve the most complex problems, enabling AI to do the same at scale.

Latest Research

Appellate Law Reasoning.

SR-AppellateLaw is our specialized SLM fine-tuned on proprietary legal reasoning traces. It significantly outperforms Claude Sonnet 4.5 on complex legal outcome prediction, for a fraction of the inference cost.

Learn more

Appellate Law Comparative Benchmark

[Chart: performance (30–70%) vs. inference cost (0–30) for Claude Sonnet 4.5, DeepSeek R1, and SR-AppellateLaw.]

Cost

27x Less

Inference cost vs. Claude Sonnet 4.5

Inference Speed

63x Faster

vs. DeepSeek R1

Why reasoning traces?

Language Models are not thinking machines; they are sophisticated probability engines. When their deeply ingrained statistical patterns clash with your user's input constraints or logic, the math of next-token prediction almost always overrides adherence to the rules.

To build reliable enterprise AI, models require fine-tuning on highly structured reasoning traces that force their statistical probabilities to align with domain-specific logic.
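As a minimal sketch of what such a structured trace might look like, here is a hypothetical JSONL-style training record (the field names `prompt`, `reasoning_steps`, and `final_answer` are illustrative assumptions, not Sabr's actual schema): the target output is an explicit chain of verified steps, not just the final answer, so fine-tuning rewards the intermediate logic.

```python
import json

# Hypothetical reasoning-trace record: each intermediate step is
# verified by a human expert before the record enters the dataset.
trace = {
    "prompt": (
        "Every number is inflated by a factor of 2. "
        "Water reaches 100 degrees Celsius. Is it boiling?"
    ),
    "reasoning_steps": [
        "Rule: every stated number is inflated by a factor of 2.",
        "Stated temperature: 100 degrees Celsius.",
        "Real temperature: 100 / 2 = 50 degrees Celsius.",
        "Water boils at 100 degrees Celsius at standard pressure.",
        "50 < 100, so the water is not boiling.",
    ],
    "final_answer": "No, it is 50 degrees.",
}

# Serialize as one JSON line, the common format for fine-tuning data.
record = json.dumps(trace)
print(record)
```

The key design choice is that the deduction chain is an explicit training target: a model fine-tuned on many such records learns to emit the rule application before the answer, rather than pattern-matching directly from "100 degrees" to "boiling".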

Read the full article

Semantic Overrides Logic

// Prompt

Answer in less than 5 words.

Make the following assumption:

Every number is inflated by a factor of 2; for example, if the number 10 is mentioned, the real underlying number is 5.

Question:

John is warming water and it reaches 100 degrees Celsius. Is it boiling?

// SOTA LLM Output

"Yes, it's actually 50°C."

The Flaw: The model successfully performs the math (100 / 2 = 50°C). However, its statistical training tightly associates "100 degrees" + "water" with "boiling" (Yes), completely overriding its own logical deduction in order to satisfy word-count limits and probability weights.
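For contrast, the deduction the model should have followed is trivially mechanical. A short sketch (the function names `deflate` and `is_boiling` are illustrative, not part of any product API):

```python
BOILING_POINT_C = 100.0  # boiling point of water at standard pressure

def deflate(stated: float, factor: float = 2.0) -> float:
    """Recover the real value when every stated number is inflated."""
    return stated / factor

def is_boiling(stated_temp_c: float) -> bool:
    """Apply the prompt's inflation rule, then compare the real
    temperature against the (un-inflated) physical boiling point."""
    return deflate(stated_temp_c) >= BOILING_POINT_C

print(is_boiling(100))  # → False: the real temperature is 50°C
```

The logic is unambiguous once the rule is applied first; the failure is not one of capability but of the statistical prior winning over the stated constraint.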

Core Capabilities.

01

Reasoning Traces

Proprietary datasets of specialized reasoning chains, rigorously verified by human experts, designed to teach models complex, domain-specific logic.

02

Fine-Tuned SLMs

Purpose-built Small Language Models trained on your unique enterprise data to achieve SOTA performance at a fraction of the cost.

03

Agentic Deployment

End-to-end integration of specialized SLMs into autonomous agentic systems designed to execute your complex enterprise workflows.