
flashQ vs BullMQ: Why We Built a Redis-Free Alternative

BullMQ is the most popular job queue for Node.js, and for good reason. It's battle-tested, feature-rich, and has an excellent API. We used it ourselves for years before building flashQ.

So why build an alternative? The short answer: Redis.

Redis is an incredible piece of software, but for job queues—especially AI workloads—it introduces friction that we wanted to eliminate. This article explores the key differences between flashQ and BullMQ, and helps you decide which is right for your project.

Quick Comparison

| Feature | flashQ | BullMQ |
|---|---|---|
| External dependency | None | Redis required |
| API compatibility | BullMQ-compatible | n/a |
| Max payload size | 10 MB | ~5 MB practical |
| Push throughput | 1.9M/sec | ~50K/sec |
| Processing throughput | 280K/sec | ~30K/sec |
| Latency (p99) | <1ms | 5-10ms |
| Job dependencies | ✅ | ✅ |
| Rate limiting | ✅ Built-in | ✅ |
| Priorities | ✅ | ✅ |
| Delayed jobs | ✅ | ✅ |
| Cron jobs | ✅ | ✅ (repeatable) |
| Written in | Rust | TypeScript + Redis |
| Persistence | PostgreSQL (optional) | Redis |
| License | MIT | MIT |

The Redis Problem

Don't get us wrong—Redis is amazing. It's fast, reliable, and incredibly versatile. But using it as a job queue backend comes with costs:

1. Operational Overhead

Running Redis in production means provisioning and securing another service, monitoring its memory usage, tuning eviction and persistence settings, and planning for failover and upgrades.

For a startup or small team, this is significant overhead just to run background jobs.

2. Memory Costs

Redis stores everything in RAM, and at scale that gets expensive: job payloads, completed-job history, and indexes all compete for memory you pay for by the gigabyte.

A moderately sized AI queue can easily cost $500-1,000/month just in Redis.
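To see where numbers like that come from, here is a back-of-envelope estimate. The per-job overhead and the price per GB are assumptions for illustration, not measurements of any specific provider:

```typescript
// Rough Redis memory cost for a job-queue backlog.
// Assumptions (hypothetical, adjust for your setup):
//   - each queued job stores its payload plus ~200 bytes of key/index overhead
//   - managed Redis priced at ~$10 per GB of RAM per month
function redisMonthlyCostUSD(
  queuedJobs: number,
  avgPayloadBytes: number,
  usdPerGBMonth = 10,
): number {
  const overheadBytes = 200;
  const totalBytes = queuedJobs * (avgPayloadBytes + overheadBytes);
  const gib = totalBytes / 1024 ** 3;
  return gib * usdPerGBMonth;
}

// A backlog of 1M jobs with 50 KB AI payloads needs ~48 GiB of RAM,
// which works out to roughly $480/month under these assumptions.
console.log(redisMonthlyCostUSD(1_000_000, 50 * 1024).toFixed(0));
```

Swap in your own payload sizes and hosting prices; the point is that queue depth translates directly into a RAM bill.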

3. Network Latency

Every job operation requires a network round-trip:

Your App → Network → Redis → Network → Your App
            ~0.5ms    ~0.1ms   ~0.5ms

Total: ~1-2ms per operation

This adds up when you're processing thousands of jobs per second.
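A quick way to see how it adds up: multiply your job rate by the round-trips each job needs. The three-operations-per-job figure (enqueue, fetch, ack) and the 1 ms per operation are illustrative assumptions:

```typescript
// Connection-seconds spent waiting on Redis per wall-clock second,
// assuming ~1 ms of round-trip latency per operation and ~3 operations
// per job (enqueue, fetch, ack) — both numbers are rough estimates.
function redisWaitSecondsPerSecond(
  jobsPerSecond: number,
  opsPerJob = 3,
  msPerOp = 1,
): number {
  return (jobsPerSecond * opsPerJob * msPerOp) / 1000;
}

// At 5,000 jobs/sec you accumulate ~15 connection-seconds of waiting
// per second — you need pipelining or many concurrent connections
// just to keep the queue fed.
console.log(redisWaitSecondsPerSecond(5000));
```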

4. Payload Limitations

While Redis technically supports values up to 512MB, performance degrades significantly above a few MB. For AI workloads that need to pass embeddings (1536+ floats), images, or long context windows, this becomes a problem.
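The arithmetic for embeddings is easy to check. Assuming float32 values serialized as raw bytes (JSON encoding would be larger still):

```typescript
// How much payload does a batch of embeddings need?
const DIMS = 1536;          // a 1536-dimensional embedding, as mentioned above
const BYTES_PER_FLOAT = 4;  // float32

const perEmbedding = DIMS * BYTES_PER_FLOAT; // 6,144 bytes ≈ 6 KB each
const batchOf1000 = 1000 * perEmbedding;     // 6,144,000 bytes ≈ 5.9 MB

// ~5.9 MB per batch: already past Redis's practical sweet spot of a few MB.
console.log(perEmbedding, batchOf1000);
```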

How flashQ Solves These Problems

Zero External Dependencies

flashQ is a single binary. Download it, run it, done:

# That's it. No Redis, no Docker, no configuration.
./flashq-server

For persistence, you can optionally connect to PostgreSQL. But for development or smaller workloads, the in-memory mode works perfectly.

Native Performance

flashQ is written in Rust and optimized specifically for job-queue workloads rather than general-purpose storage.

The result: roughly 10x higher throughput and sub-millisecond latency.

Large Payloads

flashQ natively supports payloads up to 10 MB, making it a good fit for embedding vectors, base64-encoded images, and long LLM context windows.

API Compatibility

We designed flashQ's API to be compatible with BullMQ. If you're migrating, most code works unchanged:

// BullMQ
import { Queue, Worker } from 'bullmq';
import Redis from 'ioredis';

const connection = new Redis();
const queue = new Queue('my-queue', { connection });

// flashQ
import { Queue, Worker } from 'flashq';

const queue = new Queue('my-queue'); // No connection needed!

The Queue and Worker APIs are nearly identical:

// Adding jobs (same API)
await queue.add('task-name', { data: 'value' }, {
  priority: 1,
  delay: 5000,
  attempts: 3,
  backoff: { type: 'exponential', delay: 1000 }
});

// Processing jobs (same API)
const worker = new Worker('my-queue', async (job) => {
  console.log(job.name, job.data);
  return { result: 'done' };
});
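The `backoff` option in the example above produces a growing retry schedule. A sketch of the arithmetic, assuming the common BullMQ formula `delay * 2^(attempt - 1)` (whether flashQ uses exactly this formula is an assumption here):

```typescript
// Retry delay produced by { type: 'exponential', delay: 1000 }.
// Formula assumed from BullMQ's built-in exponential strategy.
function exponentialBackoff(attempt: number, baseDelayMs: number): number {
  return baseDelayMs * 2 ** (attempt - 1);
}

// With attempts: 3, retries wait 1s, then 2s, then 4s before the job
// lands in the dead letter queue.
const schedule = [1, 2, 3].map((a) => exponentialBackoff(a, 1000));
console.log(schedule); // values: 1000, 2000, 4000
```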

Feature Parity

flashQ supports most BullMQ features:

| Feature | flashQ | Notes |
|---|---|---|
| Job priorities | ✅ | Same API |
| Delayed jobs | ✅ | Same API |
| Job retries | ✅ | Exponential backoff |
| Dead letter queue | ✅ | Automatic after max attempts |
| Job dependencies | ✅ | `depends_on` option |
| Rate limiting | ✅ | Token bucket per queue |
| Concurrency control | ✅ | Per-queue limits |
| Progress tracking | ✅ | `job.updateProgress()` |
| Cron/Repeatable | ✅ | 6-field cron expressions |
| Pause/Resume | ✅ | Same API |
| Events | ✅ | `completed`, `failed`, `progress` |
| Flows | ✅ | Parent-child job relationships |
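The rate-limiting row above names a token bucket. For readers unfamiliar with the algorithm, here is a minimal illustrative sketch — not flashQ's actual implementation:

```typescript
// Minimal token bucket: a queue gets `capacity` tokens, each dispatched
// job consumes one, and tokens refill continuously at `refillPerSec`.
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private capacity: number,
    private refillPerSec: number,
    now = Date.now(),
  ) {
    this.tokens = capacity;
    this.lastRefill = now;
  }

  // Returns true if a job may run now; refills based on elapsed time.
  tryAcquire(now = Date.now()): boolean {
    const elapsedSec = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.refillPerSec);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

A bucket with capacity 2 and 1 token/sec lets two jobs through immediately, rejects a third, and admits another one a second later — bursty but bounded, which is why it suits per-queue limits.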

When to Choose BullMQ

BullMQ is still a great choice if:

- You already run Redis and have the operational expertise to manage it
- You depend on BullMQ Pro features such as job groups
- You value its mature ecosystem, extensive documentation, and large community

When to Choose flashQ

flashQ is the better choice if:

- You want zero external dependencies and a single binary to deploy
- Your jobs carry large payloads such as embeddings, images, or long context windows
- You need very high throughput or sub-millisecond latency
- You are starting fresh, or can migrate easily thanks to the BullMQ-compatible API

Migration Guide

Migrating from BullMQ to flashQ is straightforward:

1. Install flashQ

npm install flashq

2. Start the flashQ server

docker run -d -p 6789:6789 flashq/flashq

3. Update your imports

// Before
import { Queue, Worker } from 'bullmq';

// After
import { Queue, Worker } from 'flashq';

4. Remove Redis connection

// Before
const queue = new Queue('my-queue', { connection: redis });

// After
const queue = new Queue('my-queue');

That's it! Most of your code should work unchanged.

📝 Note

Some advanced BullMQ features like job groups (Pro) and rate limiting per worker don't have direct equivalents in flashQ. Check the documentation for the full feature list.

Conclusion

Both BullMQ and flashQ are excellent job queues. BullMQ is mature, well-documented, and has a large community. flashQ offers simplicity, performance, and is purpose-built for modern AI workloads.

If you're starting a new project—especially one involving AI—we encourage you to give flashQ a try. The lack of Redis ops alone might make your life significantly easier.

Ready to try flashQ?

Migrate from BullMQ in 5 minutes.

Get Started →