Thu Mar 27 2025

Scale changes everything

AI model capabilities are evolving rapidly. Anyone following the field can see that tomorrow's models will likely surpass what's available today. This isn't mere speculation; it's grounded in well-documented empirical regularities¹. AI scaling laws reveal a predictable pattern: as we add more data, compute, and parameters, LLM capabilities improve in consistent, quantifiable ways.
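
To make that pattern concrete, the second paper cited in footnote 1 (Hoffmann et al., 2022) fits language-model training loss with a simple parametric form. This is a sketch of that fitted relationship, with the constants left symbolic:

```latex
% Parametric scaling law from Hoffmann et al. (2022):
% predicted loss L as a function of parameter count N and training tokens D.
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
% E: irreducible loss of the data distribution
% A, B, \alpha, \beta: constants fitted empirically to training runs
```

Loss falls predictably as N and D grow, which is the sense in which tomorrow's models surpassing today's is a trend line rather than a guess.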

This creates an interesting challenge for today’s AI products. Many current offerings essentially wrap API calls to an LLM in a user-friendly interface. As newer, more capable models emerge, products without deeper foundations may find themselves quickly outdated.
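
To see how thin such a wrapper can be, here is a minimal sketch of a hypothetical "email polishing" product built on the OpenAI Python SDK; the product niche, prompt, and model name are illustrative placeholders, not a real offering:

```python
# A minimal "wrapper product": one system prompt around one API call.
# Sketch only; the niche, prompt, and model name are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are an expert email assistant. Rewrite the user's draft "
    "to be clear, concise, and polite."
)

def polish_email(draft: str) -> str:
    """The entire 'product': a prompt wrapped around a model call."""
    response = client.chat.completions.create(
        model="gpt-4o",  # swap in whichever base model is current
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content or ""

if __name__ == "__main__":
    print(polish_email("hey can u send the report asap thx"))
```

Everything of value here lives behind the API: when a more capable base model ships, the feature improves for free, but so does every competitor's identical wrapper.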

What does this mean for product strategy? Competitive advantages based solely on current AI capabilities might be temporary. Features that seem impressive today could become standard offerings tomorrow as base models continue to improve.

For AI products to endure, they likely need more substantial differentiators than just clever prompts or interfaces. They need elements that remain valuable even as base models improve: proprietary data sets, strong network effects, or specialized workflows that effectively capture human expertise and domain knowledge².

New waves of AI capabilities are coming — that’s the nature of the field. As product builders, we face a choice: build for what AI can do today, or build for where it’s headed. The companies that survive will be the ones that create value that persists regardless of how powerful the underlying models become.

Footnotes

  1. If you’re interested in learning about scaling laws, these papers provide excellent background: “Scaling Laws for Neural Language Models” (Kaplan et al., 2020) and “Training Compute-Optimal Large Language Models” (Hoffmann et al., 2022).

  2. Does this perspective align with your work? I’d love to discuss — if you would too, get in touch.