May 14, 2025

6 min read
MongoDB · Node.js · Performance · Database

MongoDB Connection Pooling in Node.js: My First Dev Task (2018)

I still remember my first week at that promising startup back in 2018. New to the job, eager to impress, and completely unaware of what I was about to learn. My manager called me into a meeting and said: "Our app is slowing down during peak hours. I want you to implement connection pooling for our MongoDB instance."

I nodded confidently while panic-googling "what is connection pooling" on my phone under the table. If you're in the same boat I was, let me save you some time and share what I learned the hard way.

The Problem I Was Solving

Our Node.js application was experiencing significant performance issues. During morning rushes, the app would slow to a crawl, and sometimes even crash. After some investigation, I discovered what was happening:

  1. Each new user request was creating a fresh MongoDB connection
  2. These connections were expensive (each taking 100-200ms to establish)
  3. MongoDB was hitting connection limits during peak traffic
  4. Connections weren't being properly closed, creating memory leaks

The solution? Connection pooling - one of those things that sounds complicated but is surprisingly simple to implement and makes a massive difference.

What Connection Pooling Actually Is

In simple terms, connection pooling is like having a small team of workers ready to go instead of hiring and training a new worker every time you have a task.

Without pooling, the flow looks like this:

  • User makes request → Open database connection → Run query → Close connection → Respond
  • Next user? Repeat the whole process again

With pooling:

  • App starts → Create a pool of reusable connections
  • User makes request → Borrow connection from pool → Run query → Return connection to pool → Respond
  • Next user? Just grab an available connection from the pool

This eliminates the overhead of repeatedly establishing connections, which is especially important for MongoDB since the connection process involves several network roundtrips.
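To make the borrow/return cycle above concrete, here's a toy pool sketch in plain JavaScript. This is illustrative only, not how the MongoDB driver implements it; a real pool also adds health checks, a wait queue, and timeouts:

```javascript
// A minimal pool: connections are created once up front, then
// borrowed and returned instead of opened and closed per request.
class SimplePool {
  constructor(createConn, size) {
    // Create all connections when the pool starts
    this.available = Array.from({ length: size }, () => createConn());
  }

  acquire() {
    if (this.available.length === 0) {
      // A real pool would queue the request here instead of failing
      throw new Error('Pool exhausted');
    }
    return this.available.pop(); // borrow a connection
  }

  release(conn) {
    this.available.push(conn); // return it to the pool
  }
}

// Usage: no matter how many requests arrive, only `size`
// connections are ever created.
let created = 0;
const pool = new SimplePool(() => ({ id: ++created }), 3);

const conn = pool.acquire(); // borrow
// ... run query with conn ...
pool.release(conn);          // return to the pool

console.log(created); // only 3 connections were ever created
```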

How I Implemented It in Node.js and MongoDB

When I was given this task, I was using the native MongoDB Node.js driver. Here's how I set up the connection pool:

```javascript
// Before pooling - what I was doing wrong
const { MongoClient } = require('mongodb');

async function getMongoConnection() {
  const client = new MongoClient('mongodb://localhost:27017');
  await client.connect();
  return client;
}

// Each request was doing this:
async function getUserBalance(userId) {
  const client = await getMongoConnection();
  const db = client.db('finance_app');
  const result = await db.collection('accounts').findOne({ userId });
  await client.close(); // Often this wasn't even happening properly
  return result.balance;
}
```

And here's how I implemented the pooling solution:

```javascript
// After implementing pooling
const { MongoClient } = require('mongodb');

// Global connection pool
let clientPromise;

async function connectToDatabase() {
  if (clientPromise) return clientPromise;

  const client = new MongoClient('mongodb://localhost:27017', {
    // Pool size - I started with 10 and tuned based on our load
    maxPoolSize: 10,
    // How long a connection request can wait before timing out
    waitQueueTimeoutMS: 2000
  });

  // Create the initial connection pool when the app starts
  clientPromise = client.connect();
  return clientPromise;
}

// Now each request uses the pool
async function getUserBalance(userId) {
  const client = await connectToDatabase();
  const db = client.db('finance_app');
  const result = await db.collection('accounts').findOne({ userId });
  // No need to close the connection - it returns to the pool
  return result.balance;
}
```

The key difference? I was connecting once at application startup rather than for every request, and the driver managed the connection pool for me.

What I Learned Through Trial and Error

My initial implementation wasn't perfect. Here are some lessons I learned while debugging and optimizing:

1. Connection Pool Size Matters

My first attempt used the default pool size (5 connections), which wasn't enough for our load. After monitoring our peak concurrent users, I adjusted it to 20:

```javascript
const client = new MongoClient('mongodb://localhost:27017', {
  maxPoolSize: 20
  // Other options...
});
```

But be careful - too large a pool can overwhelm your MongoDB server. It's a balancing act.

2. Handle Connection Failures Gracefully

My early implementation didn't handle connection failures well:

```javascript
// How I improved error handling
async function connectToDatabase() {
  if (clientPromise) {
    // Added error recovery logic
    try {
      // Test if the existing pool is still valid
      await (await clientPromise).db('admin').command({ ping: 1 });
      return clientPromise;
    } catch (e) {
      console.log('Connection pool error, recreating...');
      clientPromise = null;
    }
  }

  // Create a new connection pool with retry logic
  const client = new MongoClient('mongodb://localhost:27017', {
    maxPoolSize: 20,
    waitQueueTimeoutMS: 2000,
    retryWrites: true,
    retryReads: true,
    serverSelectionTimeoutMS: 5000
  });

  clientPromise = client.connect();
  return clientPromise;
}
```

3. Monitor Pool Metrics

I added monitoring to understand our connection usage patterns:

```javascript
// Set up basic monitoring on our connection pool
setInterval(async () => {
  try {
    const client = await clientPromise;
    const poolStats = await client.db('admin').command({ serverStatus: 1 });
    console.log({
      connections: poolStats.connections,
      timestamp: new Date()
    });
  } catch (e) {
    console.error('Failed to get connection stats:', e);
  }
}, 60000); // Check every minute
```

This helped me tune the pool size and identify issues before users were affected.

Results That Made Me Look Good

After implementing connection pooling:

  1. Average response time dropped from 270ms to 85ms
  2. CPU usage on our MongoDB server decreased by about 40%
  3. The app handled 3x more concurrent users before showing signs of stress
  4. Morning crashes became a thing of the past

My manager was impressed, and I got a lot of credit for a relatively simple change.

When to Use Mongoose Instead

For many of our microservices, we eventually switched to Mongoose, which handles connection pooling automatically:

```javascript
// With Mongoose, connection pooling is even simpler
const mongoose = require('mongoose');

// Connection pool is automatically established
mongoose.connect('mongodb://localhost:27017/finance_app', {
  maxPoolSize: 20,
  serverSelectionTimeoutMS: 5000,
  socketTimeoutMS: 45000
});

// Create models
const Account = mongoose.model('Account', {
  userId: String,
  balance: Number
});

// Use the connection pool transparently
async function getUserBalance(userId) {
  const account = await Account.findOne({ userId }).exec();
  return account.balance;
}
```

Mongoose abstracts away a lot of the connection management details, which I appreciated as our app grew more complex.

Common Pitfalls I Hit

Some mistakes I made that you can avoid:

  1. Not handling connection errors - When MongoDB briefly went down, our pool didn't recover automatically
  2. Pool size too small - Under-provisioning connections led to request queuing
  3. Pool size too large - Over-provisioning wasted server resources
  4. Missing slow query monitoring - Some queries were blocking connections for too long
  5. Not considering serverless environments - Our AWS Lambda functions needed different pooling strategies
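On that last pitfall: in a serverless runtime the fix is to cache the client at module scope so warm invocations reuse the pool instead of reconnecting. Here's a sketch of that pattern; the handler shape and the injectable `createClient` factory are hypothetical stand-ins (in a real function you'd pass `() => new MongoClient(uri).connect()`), used so the caching logic is visible on its own:

```javascript
// Per-container caching for serverless (AWS Lambda-style) runtimes.
// The cached promise lives OUTSIDE the handler, so it survives
// between warm invocations of the same container.
let cachedClientPromise = null;
let coldConnects = 0;

function getClient(createClient) {
  if (!cachedClientPromise) {
    coldConnects += 1; // only the first (cold) invocation connects
    cachedClientPromise = createClient();
  }
  return cachedClientPromise;
}

// Hypothetical handler: every warm invocation shares one client.
// Pool sizes can stay small here - a serverless container typically
// serves one request at a time, so a big pool just wastes
// server-side connections.
async function handler(event, createClient) {
  const client = await getClient(createClient);
  return client; // a real handler would run its query here
}
```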

Conclusion

Implementing connection pooling was my introduction to performance optimization, and it taught me that sometimes the simplest changes have the biggest impact. Now it's one of the first things I check when I join a new project or troubleshoot performance issues.

If you take one thing away from this post, let it be this: don't create new database connections for every request. Set up proper connection pooling from day one, and your app (and future you) will thank you for it.