Beyond the Billboards
Last weekend, I drove a single mile through San Francisco and counted twelve AI billboards.
Twelve.
Every other sign was promoting a different kind of intelligence: Databricks, Poper SDR Agent, Hive, and half a dozen startups I'd never even heard of. It felt less like marketing and more like a declaration: AI lives here.
But here’s what the billboards don’t tell you: San Francisco isn’t just the capital of AI innovation; it’s also ground zero for AI illiteracy.
The Numbers Make the Case
San Francisco has claimed its crown through sheer concentration of resources. The Bay Area attracts nearly half of all U.S. AI venture capital, about $29 billion in 2025 alone. OpenAI, Anthropic, and Databricks all maintain major offices here. Hayes Valley has been nicknamed “Cerebral Valley” for its density of AI founders, engineers, and investors. The city government even runs an OpenAI-powered chatbot at City Hall.
By every measurable standard, this is the epicenter of AI development.
Yet for all this talent and capital, there’s a striking gap: most people building AI tools don’t understand how their customers will use them. And most customers buying AI tools don’t understand what they’re actually getting.
That gap isn’t just inconvenient. It’s a design flaw in how we’re building the future.
The Adoption Crisis No One’s Talking About
Here’s the uncomfortable truth: AI adoption rates are high, but AI literacy rates are abysmal.
A recent survey found that 72% of companies have adopted AI tools in some capacity. However, when asked basic questions about how those tools work—what data they’re trained on, what their limitations are, how to verify their outputs—most users couldn’t answer.
We’re in a strange moment: everyone has AI, but nobody really knows what it is.
This isn’t just an education problem. It’s a values problem. The industry has optimized for adoption over understanding, for deployment over deliberation, for speed over safety.
Look at what’s being sold on those twelve billboards: transformation, disruption, efficiency, scale. Every promise is about output. None are about comprehension.
The Three Questions That Expose the Gap
The companies that will survive the next phase of AI aren’t the ones with the flashiest demos. They’re the ones willing to grapple with three questions most of the industry keeps avoiding.
What should we actually train it on?
Most AI companies treat their training data as a trade secret. They’ll tell you their model is powerful, but not what it learned or where it learned it. That creates a trust problem. How can users evaluate a tool when they don’t know what shaped its understanding?
The companies getting this right are radically transparent. They document data sources, explain filtering processes, and list what their models don’t know. They treat data provenance as a feature, not a risk.
Who’s responsible for the answers it gives?
This is the question that makes boardrooms squirm. When an AI generates a confident but incorrect answer, who’s accountable? The company that built it? The one that deployed it? The person who asked the question?
Right now, most terms of service boil down to one answer: “Not us.” That’s unsustainable. The companies building for the long term are creating accountability frameworks from day one, not as legal cover, but as operational necessity.
How do we keep it useful as things change?
AI models learn from the past. The world moves forward. Facts shift, language evolves, context changes. A model trained in 2024 doesn’t know what happened in 2025, yet many AI products act like they do.
The smart companies acknowledge this. They flag when information may be outdated, connect to real-time data, and train users to verify rather than blindly trust.
These aren’t academic questions. They’re the difference between AI that helps and AI that harms.
What Literacy Actually Looks Like
Real AI literacy isn’t about writing better prompts. It’s about thinking critically.
It’s knowing that AI doesn’t know things; it predicts patterns.
It’s recognizing that confidence and accuracy aren’t the same.
It’s asking: Where did this answer come from? What might it be missing?
The most effective AI implementations I’ve seen aren’t the most automated. They’re the ones with the most thoughtful human oversight. They’re designed not to replace judgment, but to inform it.
This requires a shift in how we talk about AI. Not as artificial intelligence, but as augmented intelligence. Not as replacing work, but as amplifying it.
The Capital We Actually Need
San Francisco can keep its title as the capital of AI development. But we need a new capital, a capital of AI literacy.
That doesn’t mean more tutorials on how to use ChatGPT. It means institutions, frameworks, and standards for responsible deployment. It means companies competing on transparency, not just capability. It means users who demand understanding, not just results.
The billboards tell us that AI is inevitable. That’s probably true, but whether it makes us more capable or more dependent? That’s still up to us.
Right now, San Francisco is building the future. The question is whether it’s building it on knowledge or just on buzz.
The companies that will define the next decade of AI won’t be the ones raising the biggest rounds or deploying the fastest. They’ll be the ones educating their users, documenting their limitations, and designing for human understanding.
Because in a world full of artificial intelligence, the smartest thing we can create is genuine literacy.