Npgsql + Connection Pooling Done Right in ASP.NET Core

DataSource, DI, Timeouts, and Resiliency

If your first Postgres article was about unlearning SQL Server assumptions, this one is about avoiding your first production incident.

Because when .NET teams struggle with PostgreSQL, it’s rarely the SQL.

It’s almost always connection handling: pooling, DI lifetimes, timeouts, and resiliency.

The good news:
Npgsql gives you everything you need to do this properly, as long as you use the modern APIs.

This article shows how.


The Core Problem: Postgres Connections Are Not Free

In SQL Server:

  • connections are cheap

  • idle connections don’t hurt much

  • you can often “get away with it”

In PostgreSQL:

  • every connection is a separate backend process on the server

  • each connection holds memory even when idle

  • max_connections is a hard, server-wide limit

So connection discipline is not optional.


Old Way vs New Way (Important)

❌ Old pattern (still common in examples)

services.AddDbContext<AppDbContext>(options => options.UseNpgsql(connectionString));

This works, but:

  • hides pooling configuration

  • creates multiple internal pools

  • makes tuning harder

  • limits observability

✅ Modern pattern (recommended)

Use NpgsqlDataSource explicitly.

This is the foundation for:

  • predictable pooling

  • shared configuration

  • better performance

  • clearer DI lifetimes


What Is NpgsqlDataSource?

Think of NpgsqlDataSource as:

  • a configured, reusable connection factory

  • with one pool

  • shared across your app

It replaces:

  • ad-hoc NpgsqlConnection

  • implicit pooling

  • hidden defaults

And it integrates cleanly with:

  • ASP.NET Core dependency injection

  • EF Core

  • Dapper and raw ADO.NET


Step 1: Register the DataSource Correctly

Minimal, production-safe setup

// Provided by the Npgsql.DependencyInjection package.
builder.Services.AddNpgsqlDataSource(
    builder.Configuration.GetConnectionString("Default"),
    dataSourceBuilder =>
    {
        dataSourceBuilder.EnableParameterLogging(false);
    });

This registers:

  • a singleton NpgsqlDataSource

  • with one shared pool

  • managed by DI

⚠️ Never register it as scoped or transient.
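
If you are not using the Npgsql.DependencyInjection helper, or you want full control over the builder, a roughly equivalent manual registration is sketched below; the builder options are the same illustrative ones as above.

// Build one NpgsqlDataSource up front and register it as a singleton.
var connectionString = builder.Configuration.GetConnectionString("Default")!;

var dataSourceBuilder = new NpgsqlDataSourceBuilder(connectionString);
dataSourceBuilder.EnableParameterLogging(false); // keep parameter values out of logs

var dataSource = dataSourceBuilder.Build();

builder.Services.AddSingleton(dataSource);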


Step 2: Configure Pooling Explicitly

Npgsql’s pooling defaults are sensible, but you should still be explicit about them.

Recommended baseline

Connection string:

Pooling=true; Minimum Pool Size=0; Maximum Pool Size=100; Timeout=15; Command Timeout=30;

What these actually mean

  • Minimum Pool Size
    Connections kept open even when idle. 0 means the pool can drain completely.

  • Maximum Pool Size
    Hard cap. Once it is reached, additional requests wait.

  • Timeout
    How long to wait for a pooled connection before failing.

  • Command Timeout
    How long a single query may execute before it is cancelled.

Fail fast > hang forever.
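
If you prefer to build the string programmatically, NpgsqlConnectionStringBuilder exposes the same settings as typed properties. A minimal sketch, with placeholder host, database, and user names:

var csb = new NpgsqlConnectionStringBuilder
{
    Host = "localhost",   // placeholder
    Database = "app",     // placeholder
    Username = "app",     // placeholder
    Pooling = true,
    MinPoolSize = 0,
    MaxPoolSize = 100,
    Timeout = 15,         // seconds to wait for a pooled connection
    CommandTimeout = 30   // seconds a command may run
};

builder.Services.AddNpgsqlDataSource(csb.ConnectionString);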


Step 3: EF Core Integration (The Right Way)

Instead of letting EF create its own connections:

builder.Services.AddDbContext<AppDbContext>((sp, options) =>
{
    var dataSource = sp.GetRequiredService<NpgsqlDataSource>();
    options.UseNpgsql(dataSource);
});

Why this matters

  • EF now uses your configured pool

  • No duplicate pools

  • One place to tune

  • Predictable behaviour under load
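
For completeness, a minimal AppDbContext that the registration above assumes could look like this; the User entity and its mapping are illustrative, not prescribed:

public class AppDbContext : DbContext
{
    public AppDbContext(DbContextOptions<AppDbContext> options) : base(options) { }

    // Illustrative entity; map it to whatever table your app actually uses.
    public DbSet<User> Users => Set<User>();
}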


Step 4: Dapper and Raw SQL (Same Pool)

One of the biggest wins of NpgsqlDataSource is shared infrastructure.

public class UserRepository
{
    private readonly NpgsqlDataSource _dataSource;

    public UserRepository(NpgsqlDataSource dataSource)
    {
        _dataSource = dataSource;
    }

    public async Task<User?> GetAsync(Guid id)
    {
        // The connection comes from the same pool EF Core uses.
        await using var conn = await _dataSource.OpenConnectionAsync();

        // Dapper's QuerySingleOrDefaultAsync maps the row to User.
        return await conn.QuerySingleOrDefaultAsync<User>(
            "select * from users where id = @id",
            new { id });
    }
}

EF Core, Dapper, raw SQL: same pool, same limits.
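
For one-off commands you can even skip opening a connection yourself: NpgsqlDataSource can create commands directly and handles the pooled connection for you. A small sketch, reusing the same illustrative users table:

public async Task<long> CountUsersAsync()
{
    // The data source opens, uses, and returns a pooled connection internally.
    await using var cmd = _dataSource.CreateCommand("select count(*) from users");
    var result = await cmd.ExecuteScalarAsync();
    return (long)result!;
}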


Timeouts: Your First Line of Defense

SQL Server habit

“Let it run. It’ll finish.”

Postgres reality

Blocked queries + no timeouts = pool exhaustion.

Always set:

  • connection timeout

  • command timeout

EF Core command timeout

options.UseNpgsql(dataSource, o =>
{
    o.CommandTimeout(30);
});

30 seconds is already generous for most OLTP workloads.
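
For Dapper and raw ADO.NET, the Command Timeout from the connection string is only the default; individual queries can override it when one call deserves a tighter budget. A sketch, with an illustrative query and an assumed 5-second budget:

await using var conn = await _dataSource.OpenConnectionAsync();

// Dapper: override the default command timeout for this call only.
var recentUsers = await conn.QueryAsync<User>(
    "select * from users where created_at > @since",
    new { since = DateTime.UtcNow.AddDays(-1) },
    commandTimeout: 5);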


Resiliency: What to Retry (and What Not To)

Enable transient retry (carefully)

options.UseNpgsql(dataSource, o =>
{
    o.EnableRetryOnFailure(
        maxRetryCount: 3,
        maxRetryDelay: TimeSpan.FromSeconds(2),
        errorCodesToAdd: null);
});

Important rule

Retries are safe for:

  • transient network issues

  • failovers

  • connection resets

Retries are not safe for:

  • non-idempotent writes

  • side-effect-heavy operations

If you retry:

  • use idempotency keys

  • design defensively
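
One practical pattern is to key each write on a caller-supplied idempotency value and let the database ignore duplicates. A minimal sketch, assuming a hypothetical payments table with a unique idempotency_key column:

public async Task RecordPaymentAsync(Guid idempotencyKey, decimal amount)
{
    await using var conn = await _dataSource.OpenConnectionAsync();

    // ON CONFLICT DO NOTHING makes the insert a no-op if this key was already processed,
    // so a retry after a transient failure cannot create a duplicate payment.
    await conn.ExecuteAsync(
        """
        insert into payments (idempotency_key, amount)
        values (@idempotencyKey, @amount)
        on conflict (idempotency_key) do nothing
        """,
        new { idempotencyKey, amount });
}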


Pool Exhaustion: The Silent Killer

Symptoms

  • Requests hang

  • CPU is low

  • Database looks “fine”

  • Everything times out at once

Causes

  • Too many concurrent requests

  • Long-running queries

  • Missing timeouts

  • Leaked connections (see the sketch after the next list)

How to avoid it

  • Keep queries short

  • Index properly

  • Use async everywhere

  • Fail fast on timeouts

  • Set sensible pool limits
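
Leaked connections deserve a closer look because they stay invisible until the pool runs dry. The sketch below contrasts the leak with the fix; both methods are illustrative:

// Leaks: the connection is never disposed, so it is never returned to the pool.
public async Task<IEnumerable<User>> GetUsersLeakyAsync()
{
    var conn = await _dataSource.OpenConnectionAsync();
    return await conn.QueryAsync<User>("select * from users");
}

// Safe: await using guarantees the connection goes back to the pool, even on exceptions.
public async Task<IEnumerable<User>> GetUsersAsync()
{
    await using var conn = await _dataSource.OpenConnectionAsync();
    return await conn.QueryAsync<User>("select * from users");
}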


Observability: Know When You’re in Trouble

At minimum, monitor:

  • active connections

  • idle connections

  • wait times

  • slow queries

Postgres:

select * from pg_stat_activity;

App-side:

  • log timeouts

  • log retry events

  • surface pool wait failures

If you don’t observe your pool, you don’t control it.
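
App-side, even a crude measurement of how long you wait for a pooled connection will surface pressure before it becomes an outage. A minimal sketch, assuming an injected logger and data source and an illustrative 500 ms threshold:

public async Task<NpgsqlConnection> OpenWithLoggingAsync(CancellationToken ct = default)
{
    var sw = Stopwatch.StartNew();
    try
    {
        var conn = await _dataSource.OpenConnectionAsync(ct);

        // A long wait here usually means the pool is saturated, not that Postgres is slow.
        if (sw.ElapsedMilliseconds > 500)
            _logger.LogWarning("Waited {Ms} ms for a pooled connection", sw.ElapsedMilliseconds);

        return conn;
    }
    catch (Exception ex)
    {
        _logger.LogError(ex, "Failed to acquire a connection after {Ms} ms", sw.ElapsedMilliseconds);
        throw;
    }
}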


PgBouncer: When (Not If) You Need It

When you scale horizontally:

  • many app instances

  • limited Postgres connection budget

You will eventually need:

  • PgBouncer (transaction pooling mode)

  • smaller app-side pools

  • more aggressive timeouts

But:

PgBouncer does not replace good application pooling.

It amplifies it.


Common Anti-Patterns (Avoid These)

❌ Creating NpgsqlConnection manually everywhere
❌ Multiple independent pools per app
❌ No command timeouts
❌ “Just increase max connections”
❌ Long-running queries in web requests

All of these work… until they don’t.


Recommended Defaults (Production Baseline)

For most ASP.NET Core APIs:

  • NpgsqlDataSource singleton

  • Max pool size: 50–100

  • Command timeout: 15–30s

  • Retry count: low (2–3)

  • Async everywhere

  • PgBouncer planned, not reactive
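
Put together, that baseline fits in a few lines of Program.cs. The sketch below simply composes the snippets from the earlier steps; the exact numbers are a starting point, not a rule:

var builder = WebApplication.CreateBuilder(args);

// One data source, one pool, registered as a singleton.
// Maximum Pool Size, Timeout, etc. live in the connection string (see Step 2).
builder.Services.AddNpgsqlDataSource(
    builder.Configuration.GetConnectionString("Default")!);

// EF Core rides on the same pool, with an explicit timeout and modest retries.
builder.Services.AddDbContext<AppDbContext>((sp, options) =>
{
    var dataSource = sp.GetRequiredService<NpgsqlDataSource>();
    options.UseNpgsql(dataSource, o =>
    {
        o.CommandTimeout(30);
        o.EnableRetryOnFailure(maxRetryCount: 3);
    });
});

var app = builder.Build();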


Why This Matters More Than Any Index

You can fix:

  • slow queries

  • missing indexes

  • bad plans

But if your connection strategy is wrong, everything fails at once.

Getting pooling right:

  • increases throughput

  • improves latency

  • prevents cascading failures

  • makes scaling predictable


Summary

If you remember only one thing:

Treat Postgres connections as a limited resource — and manage them explicitly.

NpgsqlDataSource is not optional anymore.
It is the foundation for sane, scalable Postgres usage in .NET.
