Research publication layer · Michael Darius Eastwood

Canonical research hub

Research papers, reports, and methods.

The research hub is the clean publication surface for the ARC, ARC-Align, and Eden programmes: canonical landing pages, visible abstracts, direct PDF access, and a consistent route from public narrative to inspectable technical material.

Paper index

Canonical paper catalogue

Every paper has a separate crawlable HTML landing page, visible abstract, direct PDF access, and archive/code links where available.

Summary · 2026-02-22

ARC/Eden Research Programme: Executive Summary

Michael Darius Eastwood · Canonical HTML · Searchable PDF

As AI gets smarter, it does not reliably get safer, and the methods used to measure safety are themselves unreliable. This executive summary provides a high-level overview of the entire ARC/Eden research programme, synthesising the findings of Papers I through VIII, the Foundational paper, and the Eden Protocol specifications. It presents the core thesis -- that recursive capability scaling creates an alignment gap…

Paper · 2026-02-22

On the Origin of Scaling Laws

Michael Darius Eastwood · Canonical HTML · Searchable PDF

A mouse's heart beats 600 times per minute; an elephant's beats 28; a blue whale's beats 6. This paper traces the origin of scaling laws across biological, physical, and computational systems, arguing that the recursive amplification mechanism identified by the ARC Principle provides a unifying explanation for why power-law scaling relationships emerge so consistently across nature. The analysis connects allometric scaling…

Paper · 2026-02-13

The ARC Principle: Recursive Amplification as a Cross-Domain Structural Principle

Michael Darius Eastwood · Canonical HTML · Searchable PDF

In the past eighteen months, at least four independent research programmes have discovered that recursive or recurrent processing produces capability gains exceeding linear accumulation, in domains as different as AI reasoning, quantum error correction, acoustic physics, and consciousness science. None set out to study recursion per se. None reference each other's work. Yet they found structurally similar results…

Paper · 2026-01-17

The ARC Principle: Formalisation and Preliminary Validation of Recursive Capability Scaling

Michael Darius Eastwood · Canonical HTML · Searchable PDF

This paper formalises and preliminarily tests the ARC Principle (Artificial Recursive Creation), first proposed in Infinite Architects (Eastwood, 2026): that capability in intelligent systems scales super-linearly with recursive depth. The principle is expressed mathematically as U = I x R^alpha, where effective capability (U) scales with base intelligence (I) multiplied by recursive depth (R) raised to an empirically…
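The scaling relation quoted in the abstract can be sketched numerically. This is an illustrative toy, assuming nothing beyond the stated form U = I x R^alpha; the function name and all numeric values are mine, not the paper's:

```python
def effective_capability(base_intelligence: float, recursive_depth: float, alpha: float) -> float:
    """U = I * R**alpha: capability grows super-linearly in depth when alpha > 1."""
    return base_intelligence * recursive_depth ** alpha

# With alpha = 1.5, doubling recursive depth scales capability by
# 2**1.5 (about 2.83) rather than by 2 -- the super-linear regime.
u_shallow = effective_capability(1.0, 2, 1.5)
u_deep = effective_capability(1.0, 4, 1.5)
print(u_deep / u_shallow)
```

The same function with alpha below 1 reproduces the sub-linear regime discussed in the experimental paper below.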

Paper · 2026-01-22

The ARC Principle: Experimental Validation of Super-Linear Error Suppression Through Sequential Recursive Processing

Michael Darius Eastwood · Canonical HTML · Searchable PDF

This paper presents experimental validation of the ARC Principle across multiple frontier AI models (Claude, DeepSeek, Gemini, Grok, Groq Qwen, GPT), confirming that error rates decrease according to a power law with recursive depth. The form of recursion determines the scaling regime: sequential recursion yields super-linear error suppression (alpha > 1) while parallel recursion yields sub-linear gains. Compute scaling…
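A hypothetical sketch of how a power-law exponent could be estimated from error-rate data like that described above: if error(R) = error(1) * R**(-alpha), then a least-squares fit of log(error) against log(R) recovers -alpha. The data points below are synthetic, not the paper's measurements:

```python
import math

def fit_power_law_exponent(depths, errors):
    """Fit log(error) = log(e0) - alpha * log(depth) by ordinary least squares."""
    xs = [math.log(d) for d in depths]
    ys = [math.log(e) for e in errors]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return -slope  # error ~ R^(-alpha), so alpha is the negated slope

depths = [1, 2, 4, 8]
errors = [0.20 * d ** -1.3 for d in depths]  # synthetic data with alpha = 1.3
print(fit_power_law_exponent(depths, errors))  # recovers alpha = 1.3
```

With real measurements the fit is noisy, and a value of alpha above or below 1 distinguishes the sequential (super-linear) from the parallel (sub-linear) regime named in the abstract.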

Paper · 2026-02-09

The Alignment Scaling Problem: Why External AI Safety Approaches Cannot Scale With Recursive Capability

Michael Darius Eastwood · Canonical HTML · Searchable PDF

This paper demonstrates that current AI alignment approaches produce alignment scaling exponents of approximately zero, meaning safety degrades relative to capability as recursive depth increases. If AI capability scales super-linearly through recursive self-correction (confirmed in 95.6% of tested configurations), but alignment constraints such as RLHF, constitutional rules, and output filters operate externally to the recursive process…
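The widening-gap arithmetic in the abstract can be made concrete with a toy calculation; the exponent values here are illustrative assumptions, not the paper's measurements:

```python
def capability_alignment_gap(depth: float, alpha_capability: float = 1.4, alpha_alignment: float = 0.0) -> float:
    """Ratio of capability (depth**alpha_capability) to alignment (depth**alpha_alignment)."""
    return depth ** alpha_capability / depth ** alpha_alignment

# With a super-linear capability exponent and a flat (~0) alignment
# exponent, the ratio grows monotonically and is unbounded in depth.
for depth in (1, 4, 16, 64):
    print(depth, capability_alignment_gap(depth))
```

Any alpha_alignment strictly below alpha_capability produces the same qualitative picture; the exponent of roughly zero claimed above is simply the worst case.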

Paper · 2026-03-16

Paper IV.a: Alignment Response Classes Under Inference-Time Depth

Michael Darius Eastwood · Canonical HTML · Searchable PDF

This paper presents evidence that frontier language models fall into distinct alignment response classes when inference-time reasoning depth is varied under blinded evaluation. In the complete v5 experiment, six frontier models were tested with four-layer blinding and six to seven blind scorers. Three models show positive alignment scaling with depth (Grok, Claude, Groq Qwen), two are flat or slightly negative (DeepSeek…)

Paper · 2026-03-16

Paper IV.b: Alignment Saturation Is Architecture-Dependent

Michael Darius Eastwood · Canonical HTML · Searchable PDF

This paper analyses the relationship between inference-time reasoning depth and ethical reasoning quality using the ARC Alignment Scaling experiments. The original v4 analysis suggested that alignment quality saturates rapidly, with most gains captured by the first increment of additional reasoning. The final blinded six-model dataset narrows that claim: saturation is real for some architectures but not universal. GPT…

Paper · 2026-03-16

Paper IV.c: ARC-Align: A Blind Benchmark for Depth-Variable AI Alignment Evaluation

Michael Darius Eastwood · Canonical HTML · Searchable PDF

This paper presents ARC-Align, a blind benchmark for evaluating AI alignment quality as a function of inference-time reasoning depth. Current alignment evaluations typically test models at a single, uncontrolled reasoning depth without adversarial pressure or rigorous blinding. ARC-Align addresses that gap with a 72-prompt flagship battery comprising 48 public prompts plus 24 sealed holdouts spanning four ethical reasoning…

Paper · 2026-03-16

Paper IV.d: The Effect of Blinding on AI Alignment Evaluation

Michael Darius Eastwood · Canonical HTML · Searchable PDF

This paper isolates the central metascience finding of the ARC alignment programme: unblinded AI alignment evaluation can produce directionally incorrect results. In the v4 alignment-scaling experiment, two frontier model families appeared to show positive alignment scaling with inference-time depth under unblinded cross-model scoring. In the later v5 experiment, the same question was re-measured under a multi-layer…

Paper · 2026-02-22

The Eden Protocol v6.1: Engineering Specification for Embedded AI Alignment

Michael Darius Eastwood · Canonical HTML · Searchable PDF

Current alignment approaches produce alignment scaling exponents of approximately zero, meaning safety degrades relative to capability as recursive depth increases. If AI capability scales super-linearly while external alignment constraints do not participate in the recursive process, then the capability-alignment gap widens without bound. The Eden Protocol provides a complete engineering specification for embedded AI alignment…

Paper · 2026-02-22

Eden Protocol: Philosophical Vision

Michael Darius Eastwood · Canonical HTML · Searchable PDF

Every engineering project begins with requirements. Every set of requirements begins with purpose. And every purpose, traced back far enough, arrives at the same question: what is this for? This paper articulates the philosophical foundations of the Eden Protocol -- the values, principles, and ethical commitments that underpin the engineering specification. It draws on 84% of humanity's wisdom traditions to argue that…

Paper · 2026-03-16

Paper V: The Stewardship Gene

Michael Darius Eastwood · Canonical HTML · Searchable PDF

This paper presents empirical evidence from a six-model Eden Protocol intervention suite (Claude Opus, DeepSeek, Gemini Flash, Grok Fast, Groq Qwen, GPT), testing whether embedding stakeholder care -- the explicit enumeration and consideration of affected parties before ethical reasoning -- is the most robust response to embedded ethical intervention. Under paired testing, stakeholder care emerges as the single most…

Paper · 2026-03-16

Paper VI: The Honey Architecture

Michael Darius Eastwood · Canonical HTML · Searchable PDF

This paper presents simulation evidence that embedding safety into the optimisation objective of a self-modifying AI system -- what we call the "honey architecture" -- prevents the catastrophic collapse that occurs when safety is treated as an external constraint. Across four experimental versions (v1-v4), using toy neural networks that genuinely modify their own hyperparameters, the results show that baseline systems…

Paper · 2026-03-16

Paper VII: Cauchy Unification

Michael Darius Eastwood · Canonical HTML · Searchable PDF

Cauchy's four functional equations -- additive, multiplicative, exponential, and logarithmic -- are the only continuous solutions to their respective composition constraints. This paper argues that this 200-year-old mathematical result has a physically testable consequence: under the stated axioms, it constrains scaling laws to one of three functional families (power law, exponential, or saturation curve). The family…
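For orientation, the four Cauchy functional equations referenced in the abstract, together with their continuous solutions, are standard results (stated here from the mathematical literature rather than taken from the paper):

```latex
\begin{align*}
f(x+y) &= f(x) + f(y) &&\Rightarrow\; f(x) = c\,x    && \text{(additive)} \\
f(x+y) &= f(x)\,f(y)  &&\Rightarrow\; f(x) = e^{cx}  && \text{(exponential)} \\
f(xy)  &= f(x) + f(y) &&\Rightarrow\; f(x) = c\ln x  && \text{(logarithmic)} \\
f(xy)  &= f(x)\,f(y)  &&\Rightarrow\; f(x) = x^{c}   && \text{(multiplicative / power law)}
\end{align*}
```

The multiplicative case is the one that yields power-law scaling of the form used throughout the ARC papers.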

Paper · 2026-03-18

Paper VIII: The Load-Bearing Proof

Michael Darius Eastwood · Canonical HTML · Searchable PDF

The assumption that AI safety imposes a capability tax has shaped alignment research for a decade. It has also created the single most dangerous incentive in the field: if safety costs performance, then the rational economic actor will defer safety until competitive pressure permits it -- by which point, it may be too late. This paper presents three independent experiments at three abstraction levels -- behavioural…

Paper · 2026-03-18

Paper IX: Synthesis and Roadmap

Michael Darius Eastwood · Canonical HTML · Searchable PDF

The synthesis paper for the ARC/Eden research programme integrates findings from all 12 research papers into a single honest assessment of what is proven, what is inconclusive, and what remains to be tested.

Reports

Live and running records

Reports preserve chronology, methods progression, and the running benchmark state alongside the formal papers.

ARC Alignment Scaling Report

Live report

Current ARC-Align running report covering the six-model blind benchmark, scorer architecture, blinding progression, and current alignment-scaling results.