
Stanford HAI AI Index Report 2025

  1. Efficiency > Scale
Capability / Params ↑ ⇒ Smaller models → frontier
  2. AI as Infrastructure
Adoption(t) ≈ e^(kt)
  3. Governance Lag
Risk(t) ∝ C(t) − G(t)
  4. Benchmark Decay
lim t→∞ B(t) = 0

AI is shifting from tool → substrate. Institutions are now the bottleneck.

By Del

Comments (1)

Del

Notes on the summary above, for the curious.

1. Efficiency > Scale

Capability / Params ↑ ⇒ Smaller models → frontier

Each parameter is becoming more useful than before.
Progress is now driven less by model size and more by capability density — better architectures, data, and training make smaller models competitive with giant ones.
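As a toy sketch of this idea (all model names and numbers below are hypothetical, not from the report), capability density can be read as a capability-per-parameter ratio:

```python
# Toy illustration: capability density = capability score / parameter count.
# Both models and all numbers are hypothetical, chosen only to show the trend.
models = [
    {"name": "big_2023",   "params_b": 500, "capability": 80.0},
    {"name": "small_2025", "params_b": 30,  "capability": 78.0},
]

def capability_density(model):
    """Capability points per billion parameters."""
    return model["capability"] / model["params_b"]

densities = {m["name"]: capability_density(m) for m in models}

# The smaller, newer model delivers far more capability per parameter,
# even though its absolute capability is almost the same.
assert densities["small_2025"] > densities["big_2023"]
```

Under these made-up numbers, the 30B model is over 15× denser than the 500B one, which is the sense in which smaller models move toward the frontier.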


2. AI as Infrastructure

Adoption(t) ≈ e^(kt)

AI adoption is following an exponential curve, like electricity or the internet.
Model capability scales automatically and in parallel, while society adapts through slow, sequential systems like education, policy, and culture.
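The exponential form above has one useful consequence worth making concrete: growth at a constant rate k implies a fixed doubling time of ln(2)/k. A minimal sketch (k and the starting level are arbitrary illustration values):

```python
import math

def adoption(t, k=0.7, a0=1.0):
    """Adoption(t) ≈ a0 * e^(k*t): growth rate proportional to current level."""
    return a0 * math.exp(k * t)

def doubling_time(k):
    """Time for adoption to double under exponential growth: ln(2) / k."""
    return math.log(2) / k

# With an illustrative k = 0.7 per year, adoption doubles roughly every year,
# regardless of how large it already is -- the signature of exponential growth.
t2 = doubling_time(0.7)
assert abs(adoption(t2) - 2 * adoption(0)) < 1e-9
```

The contrast with slow, sequential institutions is exactly that they have no such compounding: their response time does not shrink as adoption grows.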


3. Governance Lag

Risk(t) ∝ C(t) − G(t)

AI capability is advancing faster than governance.
The wider this gap becomes, the greater the systemic risk — from misuse, labor disruption, and unregulated deployment.
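The gap dynamic can be sketched with a toy model: compounding capability versus linear governance. The growth rates here are hypothetical and chosen only to show the shape of the divergence:

```python
# Toy sketch of governance lag: capability C(t) compounds while governance
# G(t) grows linearly, so Risk(t) ∝ C(t) - G(t) widens over time.
# All rates are hypothetical illustration values.

def capability(t, c0=1.0, growth=0.5):
    """Compounding capability: 50% improvement per period (illustrative)."""
    return c0 * (1 + growth) ** t

def governance(t, g0=1.0, step=0.5):
    """Slow, linear institutional response (illustrative)."""
    return g0 + step * t

def risk_gap(t):
    """Risk proxy: how far capability has outrun governance."""
    return capability(t) - governance(t)

gaps = [risk_gap(t) for t in range(6)]

# Under these assumptions the gap never shrinks, and accelerates once
# compounding capability pulls away from the linear response.
assert all(b >= a for a, b in zip(gaps, gaps[1:]))
```

Any compounding process eventually outruns any linear one, which is why the argument does not depend on the particular rates chosen.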


4. Benchmark Decay

lim t→∞ B(t) = 0

Here, B(t) means benchmark usefulness, not accuracy score.
As models saturate benchmarks (95–100%), benchmarks stop differentiating real intelligence and start measuring test familiarity.
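One way to make "usefulness" concrete is to measure how well a benchmark separates models. A minimal sketch, using best-minus-worst score spread as the usefulness proxy (the scores are hypothetical):

```python
# Toy sketch of benchmark decay: once every model scores near the ceiling,
# the benchmark's ability to differentiate models, B(t), collapses toward 0.
# Scores are hypothetical illustration values.

def benchmark_usefulness(scores):
    """Simple usefulness proxy: spread between best and worst model scores."""
    return max(scores) - min(scores)

early_scores = [55.0, 70.0, 85.0]   # benchmark still separates models
late_scores = [97.0, 98.5, 99.0]    # saturated: everyone near 100%

# The saturated benchmark barely distinguishes models at all.
assert benchmark_usefulness(late_scores) < benchmark_usefulness(early_scores)
```

With everyone compressed into a few points below the ceiling, remaining differences are more likely to reflect test familiarity than genuine capability gaps, which is the decay the limit expresses.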


Final Takeaway

AI is shifting from tool → substrate. Institutions are now the bottleneck.
