# Overall example

All examples are written for ExpressJS and a Redis store, but the same idea can be applied to all limiters in any Koa, Hapi, Nest, or pure NodeJS application.
- Create rate limiter
- Minimal protection against password brute-force
- Login endpoint protection
- Websocket connection prevent flooding
- Dynamic block duration
- Different limits for authorized users
- Different limits for different parts of application
- In memory Block Strategy example
- Insurance Strategy
- Third-party API, crawler, bot rate limiting
Any store limiter, such as Mongo, MySQL, etc., can be used in a distributed environment as well.
## Create rate limiter

```javascript
const express = require('express');
const Redis = require('ioredis');
const { RateLimiterRedis } = require('rate-limiter-flexible');

const redisClient = new Redis({ enableOfflineQueue: false });

const app = express();

const rateLimiterRedis = new RateLimiterRedis({
  storeClient: redisClient,
  points: 10, // Number of points
  duration: 1, // Per second
});

const rateLimiterMiddleware = (req, res, next) => {
  rateLimiterRedis.consume(req.ip)
    .then(() => {
      next();
    })
    .catch(_ => {
      res.status(429).send('Too Many Requests');
    });
};

app.use(rateLimiterMiddleware);
```
The rate limiter consumes 1 point per request by IP, limiting each user to 10 requests per second. It works in distributed environments, as all counters are stored in Redis.
A Memory limiter can be used if the application runs as a single process.
A Cluster limiter is available for an application launched on a single server.
## Minimal protection against password brute-force

Disallow too many wrong password tries. Block the user account for some period of time when the limit is reached.
The idea is simple:
- get the number of wrong tries and block the request if the limit is reached.
- on a correct password, reset the wrong-tries count.
- on a wrong password, count = count + 1.
```javascript
const http = require('http');
const express = require('express');
const Redis = require('ioredis');
const { RateLimiterRedis } = require('rate-limiter-flexible');

// You may also use Mongo, Memory or any other limiter type
const redisClient = new Redis({ enableOfflineQueue: false });

const maxConsecutiveFailsByUsername = 5;

const limiterConsecutiveFailsByUsername = new RateLimiterRedis({
  storeClient: redisClient,
  keyPrefix: 'login_fail_consecutive_username',
  points: maxConsecutiveFailsByUsername,
  duration: 60 * 60 * 3, // Store number for three hours since first fail
  blockDuration: 60 * 15, // Block for 15 minutes
});

async function loginRoute(req, res) {
  const username = req.body.email;
  const rlResUsername = await limiterConsecutiveFailsByUsername.get(username);

  if (rlResUsername !== null && rlResUsername.consumedPoints > maxConsecutiveFailsByUsername) {
    const retrySecs = Math.round(rlResUsername.msBeforeNext / 1000) || 1;
    res.set('Retry-After', String(retrySecs));
    res.status(429).send('Too Many Requests');
  } else {
    const user = authorise(username, req.body.password); // should be implemented in your project

    if (!user.isLoggedIn) {
      try {
        await limiterConsecutiveFailsByUsername.consume(username);
        res.status(400).end('email or password is wrong');
      } catch (rlRejected) {
        if (rlRejected instanceof Error) {
          throw rlRejected;
        } else {
          res.set('Retry-After', String(Math.round(rlRejected.msBeforeNext / 1000) || 1));
          res.status(429).send('Too Many Requests');
        }
      }
    }

    if (user.isLoggedIn) {
      if (rlResUsername !== null && rlResUsername.consumedPoints > 0) {
        // Reset on successful authorisation
        await limiterConsecutiveFailsByUsername.delete(username);
      }

      res.end('authorised');
    }
  }
}

const app = express();

app.post('/login', async (req, res) => {
  try {
    await loginRoute(req, res);
  } catch (err) {
    res.status(500).end();
  }
});
```
Note, this approach may be an issue for your users if somebody knows your service applies it. An attacker can schedule 5 wrong password tries every 15 minutes and keep a user account blocked indefinitely. This should not be a problem for an MVP or the early stages of a startup.
If you wish to avoid this issue, you may:
- Additionally implement a trusted-device approach: save a token on the client after successful authorisation and check it for the exact username before applying brute-force limiting.
- Apply limiting by IP over short and long periods of time, as in this example.
- Apply the Login endpoint protection approach from the example below.
## Login endpoint protection

A login endpoint should be protected against brute-force attacks. Additionally, it should be rate limited if rate limits are not set on a reverse proxy or load balancer. This example describes one possible way to protect against brute-force and does not include global rate limiting.
Create two limiters. The first counts the number of consecutive failed attempts and allows a maximum of 10 per username and IP pair. The second blocks an IP for a day after 100 failed attempts per day.
```javascript
const http = require('http');
const express = require('express');
const Redis = require('ioredis');
const { RateLimiterRedis } = require('rate-limiter-flexible');

const redisClient = new Redis({ enableOfflineQueue: false });

const maxWrongAttemptsByIPperDay = 100;
const maxConsecutiveFailsByUsernameAndIP = 10;

const limiterSlowBruteByIP = new RateLimiterRedis({
  storeClient: redisClient,
  keyPrefix: 'login_fail_ip_per_day',
  points: maxWrongAttemptsByIPperDay,
  duration: 60 * 60 * 24,
  blockDuration: 60 * 60 * 24, // Block for 1 day, if 100 wrong attempts per day
});

const limiterConsecutiveFailsByUsernameAndIP = new RateLimiterRedis({
  storeClient: redisClient,
  keyPrefix: 'login_fail_consecutive_username_and_ip',
  points: maxConsecutiveFailsByUsernameAndIP,
  duration: 60 * 60 * 24 * 90, // Store number for 90 days since first fail
  blockDuration: 60 * 60, // Block for 1 hour
});

const getUsernameIPkey = (username, ip) => `${username}_${ip}`;

async function loginRoute(req, res) {
  const ipAddr = req.ip;
  const usernameIPkey = getUsernameIPkey(req.body.email, ipAddr);

  const [resUsernameAndIP, resSlowByIP] = await Promise.all([
    limiterConsecutiveFailsByUsernameAndIP.get(usernameIPkey),
    limiterSlowBruteByIP.get(ipAddr),
  ]);

  let retrySecs = 0;

  // Check if IP or Username + IP is already blocked
  if (resSlowByIP !== null && resSlowByIP.consumedPoints > maxWrongAttemptsByIPperDay) {
    retrySecs = Math.round(resSlowByIP.msBeforeNext / 1000) || 1;
  } else if (resUsernameAndIP !== null && resUsernameAndIP.consumedPoints > maxConsecutiveFailsByUsernameAndIP) {
    retrySecs = Math.round(resUsernameAndIP.msBeforeNext / 1000) || 1;
  }

  if (retrySecs > 0) {
    res.set('Retry-After', String(retrySecs));
    res.status(429).send('Too Many Requests');
  } else {
    const user = authorise(req.body.email, req.body.password); // should be implemented in your project

    if (!user.isLoggedIn) {
      // Consume 1 point from limiters on wrong attempt and block if limits reached
      try {
        const promises = [limiterSlowBruteByIP.consume(ipAddr)];
        if (user.exists) {
          // Count failed attempts by Username + IP only for registered users
          promises.push(limiterConsecutiveFailsByUsernameAndIP.consume(usernameIPkey));
        }

        await Promise.all(promises);

        res.status(400).end('email or password is wrong');
      } catch (rlRejected) {
        if (rlRejected instanceof Error) {
          throw rlRejected;
        } else {
          res.set('Retry-After', String(Math.round(rlRejected.msBeforeNext / 1000) || 1));
          res.status(429).send('Too Many Requests');
        }
      }
    }

    if (user.isLoggedIn) {
      if (resUsernameAndIP !== null && resUsernameAndIP.consumedPoints > 0) {
        // Reset on successful authorisation
        await limiterConsecutiveFailsByUsernameAndIP.delete(usernameIPkey);
      }

      res.end('authorized');
    }
  }
}

const app = express();

app.post('/login', async (req, res) => {
  try {
    await loginRoute(req, res);
  } catch (err) {
    res.status(500).end();
  }
});
```
The example can be simplified by replacing the two `get` requests at the beginning with two `consume` calls, but there are concerns. First, `consume` calls are more expensive: imagine somebody DDoSes the login endpoint and the database gets millions of upsert requests. Second, if a `consume` call for a random username is allowed, it can overflow the storage with junk keys.
See more examples of login endpoint protection in the "Brute-force protection Node.js examples" article.
## Websocket connection prevent flooding

The simplest approach is rate limiting by IP.
```javascript
const app = require('http').createServer();
const io = require('socket.io')(app);
const { RateLimiterMemory } = require('rate-limiter-flexible');

app.listen(3000);

const rateLimiter = new RateLimiterMemory({
  points: 5, // 5 points
  duration: 1, // per second
});

io.on('connection', (socket) => {
  socket.on('bcast', async (data) => {
    try {
      await rateLimiter.consume(socket.handshake.address); // consume 1 point per event from IP
      socket.emit('news', { 'data': data });
      socket.broadcast.emit('news', { 'data': data });
    } catch (rejRes) {
      // no available points to consume
      // emit error or warning message
      socket.emit('blocked', { 'retry-ms': rejRes.msBeforeNext });
    }
  });
});
```
This may be an issue if there are many users behind one IP address. If there is a login procedure or a `uniqueUserId`, use it to limit on a per-user basis. Otherwise, you may limit by `socket.id` and restrict the number of allowed re-connections from the same IP.
If the websocket server is launched as a cluster or with PM2, you should use RateLimiterCluster or RateLimiterCluster with PM2.
A Cluster or PM2 limiter is also enough if you use sticky load balancing. However, if the cluster master process is restarted, all counters are reset.
Consider RateLimiterRedis or any other store limiter for multiple websocket server nodes.
## Dynamic block duration

A well-known authorisation protection technique is increasing the block duration on consecutive failed attempts.
Here is the logic:
- maximum 5 fails per 15 minutes; consume one point on each failed login attempt.
- if there are no remaining points, increment a counter N for the user who failed.
- block authorisation for the user for some period of time depending on N.
- clear counter N on successful login.
```javascript
const Ioredis = require('ioredis');
const { RateLimiterRedis } = require('rate-limiter-flexible');

const redisClient = new Ioredis({});

const loginLimiter = new RateLimiterRedis({
  storeClient: redisClient,
  keyPrefix: 'login',
  points: 5, // 5 attempts
  duration: 15 * 60, // within 15 minutes
});

const limiterConsecutiveOutOfLimits = new RateLimiterRedis({
  storeClient: redisClient,
  keyPrefix: 'login_consecutive_outoflimits',
  points: 99999, // doesn't matter much, this is just a counter
  duration: 0, // never expire
});

function getFibonacciBlockDurationMinutes(countConsecutiveOutOfLimits) {
  if (countConsecutiveOutOfLimits <= 1) {
    return 1;
  }

  return getFibonacciBlockDurationMinutes(countConsecutiveOutOfLimits - 1) + getFibonacciBlockDurationMinutes(countConsecutiveOutOfLimits - 2);
}

async function loginRoute(req, res) {
  const userId = req.body.email;
  const resById = await loginLimiter.get(userId);

  let retrySecs = 0;
  if (resById !== null && resById.remainingPoints <= 0) {
    retrySecs = Math.round(resById.msBeforeNext / 1000) || 1;
  }

  if (retrySecs > 0) {
    res.set('Retry-After', String(retrySecs));
    res.status(429).send('Too Many Requests');
  } else {
    const user = authorise(req.body.email, req.body.password); // should be implemented in your project

    if (!user.isLoggedIn) {
      if (user.exists) {
        try {
          const resConsume = await loginLimiter.consume(userId);
          if (resConsume.remainingPoints <= 0) {
            const resPenalty = await limiterConsecutiveOutOfLimits.penalty(userId);
            await loginLimiter.block(userId, 60 * getFibonacciBlockDurationMinutes(resPenalty.consumedPoints));
          }
        } catch (rlRejected) {
          if (rlRejected instanceof Error) {
            throw rlRejected;
          } else {
            res.set('Retry-After', String(Math.round(rlRejected.msBeforeNext / 1000) || 1));
            res.status(429).send('Too Many Requests');
          }
        }
      }

      res.status(400).end('email or password is wrong');
    }

    if (user.isLoggedIn) {
      await limiterConsecutiveOutOfLimits.delete(userId);

      res.end('authorized');
    }
  }
}
```
Note, this example may not be a good fit for every case. If a hacker attacks a user's account by email, the real user should have a way to prove they are real. Also, see a more flexible example of login protection here.
## Different limits for authorized users

Sometimes it is reasonable to differentiate between authorized and unauthorized requests. For example, an application must provide public access as well as serve registered and authorized users with different limits.
```javascript
const express = require('express');
const Redis = require('ioredis');
const { RateLimiterRedis } = require('rate-limiter-flexible');

const redisClient = new Redis({ enableOfflineQueue: false });

const app = express();

const rateLimiterRedis = new RateLimiterRedis({
  storeClient: redisClient,
  points: 300, // Number of points
  duration: 60, // Per 60 seconds
});

// req.userId should be set by someAuthMiddleware. It is up to you, how to do that
app.use(someAuthMiddleware);

const rateLimiterMiddleware = (req, res, next) => {
  // req.userId should be set
  const key = req.userId ? req.userId : req.ip;
  const pointsToConsume = req.userId ? 1 : 30;
  rateLimiterRedis.consume(key, pointsToConsume)
    .then(() => {
      next();
    })
    .catch(_ => {
      res.status(429).send('Too Many Requests');
    });
};

app.use(rateLimiterMiddleware);
```
This example is not perfectly clean, because in some odd cases a `userId` may be equal to a `remoteAddress`. Make sure this never happens.
It consumes 30 points for every unauthorized request, or 1 point if the application recognises a user by ID.
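One simple way to guarantee a `userId` can never collide with an IP-based key is to prefix the two key kinds differently. A sketch; the prefixes are arbitrary:

```javascript
// Distinct prefixes make collisions between user IDs and IP
// addresses impossible, even if a userId looks like an IP
const getRateLimiterKey = (req) =>
  req.userId ? `user_${req.userId}` : `ip_${req.ip}`;
```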
## Different limits for different parts of application

This can be achieved by creating independent limiters.
```javascript
const express = require('express');
const Redis = require('ioredis');
const { RateLimiterRedis } = require('rate-limiter-flexible');

const redisClient = new Redis({ enableOfflineQueue: false });

const app = express();

const rateLimiterRedis = new RateLimiterRedis({
  storeClient: redisClient,
  points: 300, // Number of points
  duration: 60, // Per 60 seconds
});

const rateLimiterRedisReports = new RateLimiterRedis({
  keyPrefix: 'rlreports',
  storeClient: redisClient,
  points: 10, // Only 10 points for reports per user
  duration: 60, // Per 60 seconds
});

// req.userId should be set by someAuthMiddleware. It is up to you, how to do that
app.use(someAuthMiddleware);

const rateLimiterMiddleware = (req, res, next) => {
  const key = req.userId ? req.userId : req.ip;
  if (req.path.indexOf('/report') === 0) {
    const pointsToConsume = req.userId ? 1 : 5;
    rateLimiterRedisReports.consume(key, pointsToConsume)
      .then(() => {
        next();
      })
      .catch(_ => {
        res.status(429).send('Too Many Requests');
      });
  } else {
    const pointsToConsume = req.userId ? 1 : 30;
    rateLimiterRedis.consume(key, pointsToConsume)
      .then(() => {
        next();
      })
      .catch(_ => {
        res.status(429).send('Too Many Requests');
      });
  }
};

app.use(rateLimiterMiddleware);
```
Different limiters can be set at the per-endpoint level as well. It all depends on your requirements.
## In memory Block Strategy example

There is no need to increment a counter in the store if the key is already blocked within the current duration. This is also helpful against DDoS attacks.
```javascript
const express = require('express');
const Redis = require('ioredis');
const { RateLimiterRedis } = require('rate-limiter-flexible');

const redisClient = new Redis({ enableOfflineQueue: false });

const app = express();

const rateLimiterRedis = new RateLimiterRedis({
  storeClient: redisClient,
  points: 300, // Number of points
  duration: 60, // Per 60 seconds
  inMemoryBlockOnConsumed: 300, // If userId or IP consume >=300 points per minute
});

// req.userId should be set by someAuthMiddleware. It is up to you, how to do that
app.use(someAuthMiddleware);

const rateLimiterMiddleware = (req, res, next) => {
  // req.userId should be set
  const key = req.userId ? req.userId : req.ip;
  const pointsToConsume = req.userId ? 1 : 30;
  rateLimiterRedis.consume(key, pointsToConsume)
    .then(() => {
      next();
    })
    .catch(_ => {
      res.status(429).send('Too Many Requests');
    });
};

app.use(rateLimiterMiddleware);
```
A userId is blocked in memory by the `inMemoryBlockOnConsumed` option when 300 or more points are consumed. The block expires when points are reset in the store.
More details on the in-memory Block Strategy here.
## Insurance Strategy

There may be many reasons to take care of cases when a limits store like Redis is down:
- you have just started your project and do not want to spend time setting up a Redis Cluster or other stable infrastructure just to handle limits.
- you do not want to spend more money on setting up two or more database instances.
- you need to limit access to an application and you just want to sleep well over the weekend.
This example demonstrates a memory limiter used as insurance. Note that it behaves differently when Redis is down: the Redis limiter allows 300 points across all NodeJS processes, while the in-memory fallback allows its points per process, not overall. We can compensate for that by dividing the points by the number of processes.
```javascript
const express = require('express');
const Redis = require('ioredis');
const { RateLimiterRedis, RateLimiterMemory } = require('rate-limiter-flexible');

const redisClient = new Redis({ enableOfflineQueue: false });

const app = express();

const rateLimiterMemory = new RateLimiterMemory({
  points: 60, // 300 / 5, if there are 5 processes at all
  duration: 60,
});

const rateLimiterRedis = new RateLimiterRedis({
  storeClient: redisClient,
  points: 300, // Number of points
  duration: 60, // Per 60 seconds
  inMemoryBlockOnConsumed: 301, // If userId or IP consume >=301 points per minute
  inMemoryBlockDuration: 60, // Block it for a minute in memory, so no requests go to Redis
  insuranceLimiter: rateLimiterMemory,
});

// req.userId should be set by someAuthMiddleware. It is up to you, how to do that
app.use(someAuthMiddleware);

const rateLimiterMiddleware = (req, res, next) => {
  // req.userId should be set
  const key = req.userId ? req.userId : req.ip;
  const pointsToConsume = req.userId ? 1 : 30;
  rateLimiterRedis.consume(key, pointsToConsume)
    .then(() => {
      next();
    })
    .catch(_ => {
      res.status(429).send('Too Many Requests');
    });
};

app.use(rateLimiterMiddleware);
```
The added insurance `rateLimiterMemory` is used only when Redis cannot process a request for some reason. Any limiter from this package can be used as an insurance limiter. You can also have a second Redis instance up and running in case the first one goes down.
More details on the Insurance Strategy here.
## Third-party API, crawler, bot rate limiting

RateLimiterQueue limits the number of requests and queues extra requests.
```javascript
const { RateLimiterMemory, RateLimiterQueue } = require('rate-limiter-flexible');
const fetch = require('node-fetch');

const limiterFlexible = new RateLimiterMemory({
  points: 1,
  duration: 2,
});

const limiterQueue = new RateLimiterQueue(limiterFlexible, {
  maxQueueSize: 100,
});

for (let i = 0; i < 200; i++) {
  limiterQueue.removeTokens(1)
    .then(() => {
      fetch('https://github.com/animir/node-rate-limiter-flexible')
        .then(() => {
          console.log(Date.now());
        })
        .catch(err => console.error(err));
    })
    .catch(() => {
      console.log('queue is full');
    });
}
```
In this example, it makes one request every two seconds. `maxQueueSize` is set to 100, so if you run this code, you should see something like:

```
...
queue is full
queue is full
queue is full
queue is full
queue is full
queue is full
1569046899363
1569046901391
1569046903491
1569046905192
...
```
You can omit the `maxQueueSize` option to queue as many requests as possible.
Read more on RateLimiterQueue.