AI model capabilities are evolving rapidly. Anyone following the field can see that tomorrow’s models will likely surpass those available today¹. AI scaling laws reveal a predictable pattern: as we add more data, compute, and parameters, LLM capabilities improve in consistent ways.
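To make that "predictable pattern" concrete, here is a minimal sketch of the power-law form these scaling laws take, using the approximate parameter-scaling fit reported in Kaplan et al. (2020), which is cited in the footnotes. The constants are illustrative values from that paper, not a forecasting tool.

```python
# Illustrative sketch of the Kaplan et al. (2020) parameter scaling law:
# test loss falls as a power law in model size, L(N) ~ (N_c / N)^alpha_N.
# Constants are the approximate fits reported in that paper; treat them
# as illustrative only.

N_C = 8.8e13      # fitted "critical" parameter count from the paper
ALPHA_N = 0.076   # fitted power-law exponent for model size

def loss_from_params(n_params: float) -> float:
    """Predicted test loss (nats per token) for a model with n_params parameters."""
    return (N_C / n_params) ** ALPHA_N

for n in (1e9, 1e10, 1e11, 1e12):
    print(f"{n:.0e} params -> predicted loss ~ {loss_from_params(n):.2f}")
```

The product-strategy point is the shape of the curve: each order-of-magnitude increase in scale buys a further, fairly predictable drop in loss, so capabilities that depend only on what the best model can do today are a depreciating asset.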
This creates an interesting challenge for today’s AI products and product thinkers. Many current offerings essentially wrap API calls to an LLM in a user-friendly interface, and the interface alone is not a moat. As more capable models emerge, products with no moat will simply become obsolete.
What does this mean for product strategy? Competitive advantages based solely on current AI capabilities are temporary, and features that seem impressive today could simply be rolled into the base offerings of foundation model labs like OpenAI and Anthropic.
For AI products to endure, they need to be more than LLM wrappers. They need elements that remain valuable even as base models improve: proprietary data sets, strong network effects, or specialized workflows that effectively capture human expertise and domain knowledge².
As product builders, we face a choice: build for what AI can do today, or build for where it’s headed. The companies that survive will be the ones that create value that persists regardless of how powerful the underlying models become.
Footnotes
1. If you’re interested in learning about scaling laws, these papers provide excellent background: Scaling Laws for Neural Language Models (Kaplan et al., 2020) and Training Compute-Optimal Large Language Models (Hoffmann et al., 2022).
2. Does this perspective align with your work? I’d love to discuss — if you would too, get in touch.