
Why SQLite Is a Better Default Database Than You Think

SQLite gets dismissed as a toy database. Here's why that's wrong — and why it's the right default for most web apps in 2025.

MindStudio Team

The Database That Runs the World (But Gets No Respect)

SQLite is probably the most widely deployed database engine on the planet. It ships inside Android, iOS, Firefox, Chrome, and even Airbus flight software. By SQLite's own estimate, there are over one trillion SQLite databases in active use.

And yet when developers reach for a database for a new web app, they almost always skip it. They go straight to Postgres, spin up a managed instance, pay for infrastructure they don’t need yet, and add operational complexity before they’ve shipped a single feature.

That choice is worth questioning. For most web apps — especially early-stage ones, internal tools, and single-server deployments — SQLite is not a toy. It’s a well-engineered, production-capable database that’s simpler to run, faster for many workloads, and far less expensive than alternatives. The reputation it has is outdated.

This post explains why.


What Developers Get Wrong About SQLite

The dismissal usually sounds like: “SQLite is fine for local development, but you need a real database for production.”

That framing is wrong in two ways. First, it implies SQLite isn’t real. Second, it assumes “production” automatically means a separate database server.

SQLite is a full SQL database engine — it supports transactions, foreign keys, indexes, views, triggers, and most of standard SQL. It isn’t a stripped-down toy. It’s a different architecture: instead of a client-server model, the database lives as a single file on disk, and your application talks to it directly through a library rather than over a network socket.

That architectural difference is why it gets dismissed. Developers assume "no server" means "not serious." But that assumption conflates complexity with capability. SQLite has no server because its design doesn't require one: the file is the database, and your application is both client and server.

Here’s what that means in practice:

  • Zero network latency. Every query hits a local file, not a TCP connection.
  • Zero infrastructure overhead. No connection pooling, no auth credentials for the database layer, no separate process to manage.
  • Trivially simple backups. Copy the file. That’s it.
  • Straightforward schema migrations. You run your migration scripts directly against the file at deploy time.

None of this makes SQLite better in every situation. But it makes the tradeoffs worth understanding rather than reflexively avoiding.
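The "copy the file" point is concrete enough to show. Below is a minimal sketch using Python's stdlib sqlite3 module (the filenames and table are illustrative, not from any particular app); Connection.backup() performs a consistent online copy that is safe even while other connections are writing:

```python
import sqlite3

# Create a throwaway database file and write one row.
src = sqlite3.connect("app.db")
src.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)")
src.execute("INSERT INTO users (name) VALUES (?)", ("ada",))
src.commit()

# Online backup: a consistent copy of the live database file.
dst = sqlite3.connect("backup.db")
with dst:
    src.backup(dst)

# The copy is a complete, independent database.
print(dst.execute("SELECT name FROM users").fetchone()[0])  # -> ada
src.close()
dst.close()
```

For cold backups it really is just a file copy; the backup API matters only when the database is being written at the same time.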


The Performance Reality

Here’s something that surprises most people: for read-heavy workloads typical of web apps, SQLite is often faster than Postgres or MySQL.

Not because it’s more powerful — but because local disk reads are dramatically faster than even low-latency network connections. A query over localhost adds microseconds. A query over a network, even within the same datacenter, adds milliseconds. At scale, that gap matters.

Cloudflare, whose D1 edge database product is built on SQLite, has published benchmarks showing SQLite outperforming Postgres on many standard query patterns simply because the data locality is better. The same dynamic applies to any application running on a single server.

For write-heavy workloads, the story is more nuanced. In its default rollback-journal mode, SQLite takes a single writer lock, which serializes writes. That's a real limitation. But Write-Ahead Logging changes it significantly.

WAL Mode: The Feature That Made SQLite Viable for Web Apps

WAL (Write-Ahead Logging) is a storage mode that changes how SQLite handles concurrent reads and writes. In WAL mode:

  • Readers don’t block writers.
  • Writers don’t block readers.
  • Multiple readers can run concurrently.
  • A single writer can run alongside any number of readers.

This eliminates the most common complaint about SQLite in web contexts. As long as you have one write path (one application server writing to the database), WAL mode handles concurrent traffic just fine. Most web apps, even ones with significant traffic, fit this profile.

Enable WAL mode with a single pragma: PRAGMA journal_mode=WAL;. That’s it. You get non-blocking reads immediately.
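Here's a quick sketch of what that looks like from application code, using Python's stdlib sqlite3 module (the filename and table are illustrative). The pragma returns the journal mode now in effect, and a reader can query while a writer holds an open transaction:

```python
import sqlite3

con = sqlite3.connect("wal_demo.db")
# One pragma flips the database into write-ahead logging;
# it returns the mode now in effect.
mode = con.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)  # -> wal

con.execute("CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY, msg TEXT)")
con.commit()

# A writer with an uncommitted transaction in flight...
writer = sqlite3.connect("wal_demo.db", isolation_level=None)
writer.execute("BEGIN")
writer.execute("INSERT INTO events (msg) VALUES ('in-flight')")

# ...does not block a concurrent reader in WAL mode: the reader
# sees a consistent snapshot from before the uncommitted write.
reader = sqlite3.connect("wal_demo.db")
count = reader.execute("SELECT COUNT(*) FROM events").fetchone()[0]

writer.execute("COMMIT")
for c in (con, writer, reader):
    c.close()
```

The WAL setting is persistent: once set, the database file stays in WAL mode for all future connections.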

Pair that with WAL's periodic checkpointing, which folds the log back into the main database file, and you have a durable, performant, production-suitable database. This is why SQLite is the right default for a full-stack app that's early-stage or single-server.


The Hidden Cost of Starting With Postgres

Most developers default to Postgres without questioning it. Postgres is a great database — mature, feature-rich, and the right choice for many situations. But “great database” doesn’t mean “right default.”

Here’s what spinning up a managed Postgres instance actually costs when you’re starting out:

Money. A managed Postgres database on most platforms starts around $10–25/month. That might sound trivial, but it’s real ongoing overhead before you’ve validated a single user.

Setup time. You need to provision the instance, configure connection credentials, set up connection pooling (because Postgres doesn’t handle thousands of concurrent connections well without it), and manage environment variables across your environments.

Operational complexity. Backups, restores, migrations — all require tooling. Not impossible, but all of it takes time and attention. There’s a real hidden cost to wiring up your own infrastructure before you’ve built the actual product.

Over-engineering. Most apps don’t need distributed reads, replication, or the concurrent write capacity that justifies a database server. Running Postgres for a 100-user internal tool is like renting a warehouse to store three boxes.

None of this means you should never use Postgres. It means Postgres is a deliberate choice you should make when your requirements actually call for it — not the automatic default.


What SQLite Is Actually Good At

Let’s be specific about the workloads where SQLite excels.

Single-server web apps

If your application runs on one server (even a powerful one), SQLite is hard to beat. All the data lives on the same machine as the application code. Queries are fast, there’s no network hop, and operations are simple.

A single SQLite database on modern NVMe storage in WAL mode can handle thousands of read queries per second and hundreds of writes per second. For most web apps — including ones with real user bases — that’s more than enough.
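Those throughput numbers depend heavily on how you commit. A small illustrative microbenchmark (stdlib sqlite3, hypothetical filename; absolute numbers vary by hardware) showing that batching inserts into one transaction amortizes the per-commit fsync, which is where most of SQLite's write cost lives:

```python
import sqlite3
import time

con = sqlite3.connect("bench.db")
con.execute("PRAGMA journal_mode=WAL")
con.execute("CREATE TABLE IF NOT EXISTS log (id INTEGER PRIMARY KEY, body TEXT)")
con.commit()

# One transaction for all 10,000 inserts: a single commit and
# a single fsync, instead of one per row.
start = time.perf_counter()
with con:  # the connection context manager commits on exit
    con.executemany(
        "INSERT INTO log (body) VALUES (?)",
        (("row",) for _ in range(10_000)),
    )
elapsed = time.perf_counter() - start
print(f"{10_000 / elapsed:,.0f} batched inserts/sec")
```

The "hundreds of writes per second" figure describes individually committed writes; batched writes go orders of magnitude faster.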

Internal tools and dashboards

Internal tools typically have low concurrency and moderate data volumes. Building a dashboard app for a team of 50 doesn’t require a distributed database. SQLite handles it easily, with none of the operational overhead. This applies equally well if you’re building an internal tool without a dedicated dev team — fewer moving parts means fewer things to break.

SaaS apps in early stages

When you’re building a SaaS app and haven’t yet validated product-market fit, your database architecture should not be the thing you’re spending time on. SQLite gets you to production faster, with less infrastructure to manage. You can always migrate later if you outgrow it — and most apps don’t.

Read-heavy applications

Content sites, catalogs, dashboards, reporting tools — anything where reads dramatically outnumber writes. SQLite’s local read performance is genuinely difficult to match with a remote database.

Edge deployments and embedded use

Cloudflare D1, Turso, and similar platforms have built distributed SQLite products specifically because the single-file model is perfect for edge deployments. You can replicate a SQLite database to the edge much more easily than Postgres.


When SQLite Is the Wrong Choice

Being fair about limitations matters. SQLite is not the right tool in every situation.

Multiple write-heavy application servers. If you need to scale horizontally across multiple servers that all write to the database simultaneously, SQLite doesn't work. Every writer needs direct access to the same local file, and writes are serialized, so there's no way to fan write traffic out across machines.

Very high write concurrency. Even with WAL mode, if you’re processing thousands of writes per second with high contention, Postgres or MySQL will serve you better. WAL mode handles concurrent reads fine but writes are still serialized.

Large-scale analytics workloads. For analytical queries over billions of rows, dedicated solutions like ClickHouse or even DuckDB (which is also embedded, interestingly) are better suited.

Multi-region distributed writes. If you need geo-distributed writes where multiple regions write to a shared database simultaneously, you need something purpose-built for that. Platforms like Supabase or PlanetScale handle this better. If you’re choosing between managed database providers, it’s worth reading through Supabase vs PlanetScale to understand the tradeoffs.

The honest summary: if you’re running on a single server and haven’t hit performance limits, SQLite will likely serve you fine. When you need to scale horizontally or handle very high write concurrency, that’s when you migrate.


Schema Design Isn’t Different — and That Matters

One thing that surprises developers moving to SQLite: your SQL knowledge transfers completely. SQLite supports standard SQL including JOINs, subqueries, aggregations, indexes, and foreign keys. Understanding your database schema — how tables relate, what indexes you need, how to model your data — is exactly the same whether you’re on SQLite or Postgres.
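To make that concrete, here's a hypothetical two-table schema via Python's stdlib sqlite3 module; every statement is the same SQL you'd write for Postgres. One SQLite-specific wrinkle worth knowing: foreign-key enforcement is off by default and must be enabled per connection.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys=ON")  # FK enforcement is opt-in in SQLite

con.executescript("""
CREATE TABLE authors (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
CREATE TABLE posts (
    id        INTEGER PRIMARY KEY,
    author_id INTEGER NOT NULL REFERENCES authors(id),
    title     TEXT NOT NULL
);
CREATE INDEX idx_posts_author ON posts(author_id);
""")

con.execute("INSERT INTO authors (id, name) VALUES (1, 'ada')")
con.execute("INSERT INTO posts (author_id, title) VALUES (1, 'Hello')")

# A plain JOIN with aggregation, exactly as it would read on Postgres.
row = con.execute("""
    SELECT a.name, COUNT(p.id)
    FROM authors a JOIN posts p ON p.author_id = a.id
    GROUP BY a.id
""").fetchone()
print(row)  # -> ('ada', 1)
```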

You can also use an ORM with SQLite without any special configuration. Prisma, Drizzle, TypeORM, Sequelize — they all support SQLite natively. Your application code doesn’t need to change much, if at all, when switching between SQLite and Postgres. The differences are mostly in connection setup.

This is important because it means choosing SQLite isn’t a bet-the-company architectural decision. It’s a starting point that you can move away from when you have a concrete reason to.


The Production-Readiness Question

A common concern: “Is SQLite really production-ready?”

Being production-ready isn’t about which database you’re using — it’s about whether your system is reliable, durable, and recoverable. SQLite with WAL mode and proper journaling is durable. It writes atomically. It recovers cleanly from crashes. It has been deployed in mission-critical systems — aviation, medical devices, telecommunications infrastructure — for decades.

The things that make an app not production-ready are usually not the choice of database. They’re missing auth, broken error handling, no backup strategy, or fragile infrastructure. Those problems exist regardless of whether you’re on SQLite or Postgres.

If you’re checking off what makes an app truly ready to ship, the technical founder’s checklist before launch covers the things that actually matter — and database choice is one of the lower-stakes decisions.


How Remy Uses SQLite by Default

This is worth mentioning directly because it’s a deliberate choice, not an oversight.

Remy uses SQLite as the default database for apps it builds. The database runs in WAL journaling mode. Schema migrations happen automatically on deploy. Backups are handled as part of the infrastructure.

The decision was made for exactly the reasons covered in this article: SQLite is the right default for the majority of applications. It’s fast for typical web app workloads, simple to operate, zero-cost in terms of separate infrastructure, and fully capable for single-server deployments.

For apps that genuinely need something else, you can configure that. But starting with SQLite means you’re not paying for or managing infrastructure you don’t need yet. This aligns with the broader philosophy behind Remy: you describe what your application does, and the infrastructure is compiled from that. You don’t spend time wiring up things that should just work.

If you want to see what this looks like in practice — a real full-stack app with a proper backend, SQL database, auth, and deployment — try Remy at mindstudio.ai/remy.


Frequently Asked Questions

Can SQLite handle real production traffic?

Yes. With WAL mode enabled, SQLite handles thousands of read queries per second and hundreds of writes per second on modern hardware. Most web apps — including ones with significant user traffic — are well within these limits. The constraint is concurrent writes from multiple processes or servers, not raw throughput on a single server.

What’s the difference between SQLite and Postgres?

Both are SQL databases that support the same core query language and features. The main difference is architecture: Postgres runs as a separate server process and accepts connections over a network, while SQLite runs as a library inside your application and reads/writes a local file. Postgres handles high concurrent write loads and multi-server deployments better. SQLite is simpler to operate, faster for local reads, and has zero infrastructure overhead. For most backend architectures, the right choice depends on your scale and deployment model.

Is SQLite safe to use for user data?

Yes. SQLite is ACID-compliant — it guarantees atomicity, consistency, isolation, and durability. With WAL mode, it handles crashes cleanly and doesn’t corrupt data. It’s been used in safety-critical systems for decades. Your user data is as safe in SQLite as in any other properly configured database, as long as you have a backup strategy in place.
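Atomicity is easy to demonstrate. In this sketch (stdlib sqlite3, in-memory database, hypothetical table), a simulated crash mid-transfer rolls the whole transaction back, so no partial write survives:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE balances (user TEXT PRIMARY KEY, cents INTEGER)")
con.execute("INSERT INTO balances VALUES ('alice', 1000), ('bob', 0)")
con.commit()

# The connection used as a context manager commits on success
# and rolls back on any exception.
try:
    with con:
        con.execute("UPDATE balances SET cents = cents - 500 WHERE user = 'alice'")
        raise RuntimeError("simulated crash mid-transaction")
except RuntimeError:
    pass

# The debit never happened: the transaction was atomic.
print(con.execute("SELECT cents FROM balances WHERE user = 'alice'").fetchone()[0])
# -> 1000
```

The same guarantee holds against real process crashes: on the next open, SQLite replays or discards the journal so the file reflects only committed transactions.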

What happens when I outgrow SQLite?

You migrate. If you’re using an ORM like Prisma or Drizzle, the migration is mostly a configuration change — swap the connection string and adapter, run your migrations against the new database, and you’re done. Most well-written apps that start on SQLite can migrate to Postgres without changing application logic. The fact that migration is possible and not catastrophic is part of why starting with SQLite is a reasonable bet.
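One low-tech migration path, if you aren't leaning on an ORM's tooling: Python's stdlib sqlite3 can dump the whole database as SQL text via Connection.iterdump(), which you can then replay against Postgres after adjusting any SQLite-specific syntax. A sketch with a hypothetical table:

```python
import sqlite3

src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
src.execute("INSERT INTO users (name) VALUES ('ada')")
src.commit()

# iterdump() yields the database as SQL statements
# (CREATE TABLE ..., INSERT ...) wrapped in a transaction.
dump = "\n".join(src.iterdump())
print(dump)
```

For large databases you'd stream this to a file instead of joining it in memory, but the shape of the migration is the same.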

Does SQLite work with modern JavaScript/TypeScript stacks?

Fully. Prisma, Drizzle, Sequelize, TypeORM — all support SQLite. The better-sqlite3 package provides a synchronous, fast Node.js driver that works well in server environments. If you’re using TypeScript for full-stack development, SQLite integrates cleanly with typed ORMs and gives you the same type safety you’d get with any other database.

Can multiple users write to a SQLite database at the same time?

Multiple users can write through a single application server — that’s the normal case. What SQLite doesn’t support well is multiple servers writing to the same database file simultaneously, because they’d all need access to the same file. In WAL mode, writes are serialized but readers never block, so a single-server app with many concurrent users works fine. If you need multi-server writes, that’s when you’d look at managed database options like Supabase or Turso (which extends SQLite for multi-region scenarios).


Key Takeaways

  • SQLite is the most widely deployed database on the planet — it is not a toy.
  • WAL mode enables non-blocking reads and makes SQLite practical for concurrent web traffic.
  • For single-server deployments and most early-stage web apps, SQLite outperforms remote databases because there’s no network latency.
  • The operational simplicity of SQLite — no server to manage, trivial backups, automatic migrations — reduces overhead significantly.
  • SQLite’s limits are real but specific: it’s not the right choice for multi-server write scenarios or very high write concurrency.
  • Starting with SQLite is a low-stakes decision. Migrating to Postgres later, if you need to, is straightforward.

If you want to build a full-stack app with a real SQL database, real auth, and a real backend — without spending days wiring up infrastructure — try Remy at mindstudio.ai/remy.

Presented by MindStudio
