
flashQ + Next.js: Background Jobs in Vercel

Next.js is fantastic for building modern web applications, but it has a fundamental limitation: serverless functions have a maximum execution time (typically 10-30 seconds on Vercel). This makes it challenging to handle long-running tasks like AI processing, email sending, or webhook handling.

Enter flashQ. By offloading work to a background job queue, you can keep your API routes fast while processing heavy tasks asynchronously. In this guide, we'll build a complete solution for adding background jobs to your Next.js application.

Architecture Overview

Here's how the pieces fit together:

┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│    Next.js      │     │    flashQ       │     │    Worker       │
│    (Vercel)     │────▶│    Server       │◀────│    (Railway)    │
│                 │     │    (Railway)    │     │                 │
│  - API Routes   │     │                 │     │  - AI Tasks     │
│  - Web App      │     │  - Job Queue    │     │  - Emails       │
│                 │     │  - Persistence  │     │  - Webhooks     │
└─────────────────┘     └─────────────────┘     └─────────────────┘

The key insight: your Next.js app only enqueues jobs (fast), while a separate worker process executes them (can take as long as needed).
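To make that split concrete, here is a toy in-memory model of the pattern (illustrative only, not flashQ's implementation): the producer returns as soon as the job description is stored, and the consumer drains the queue on its own schedule.

```typescript
// Toy model of the enqueue/process split. Producers store a job description
// and return immediately; a separate loop does the slow work later.
type ToyJob = { name: string; data: unknown };
const jobs: ToyJob[] = [];

function enqueue(name: string, data: unknown): number {
  jobs.push({ name, data }); // cheap: no heavy work happens here
  return jobs.length;        // toy "job id"
}

function runWorker(): string[] {
  const processed: string[] = [];
  while (jobs.length > 0) {
    const job = jobs.shift()!;
    // the slow work (AI call, email send, ...) would happen here,
    // without blocking any HTTP response
    processed.push(job.name);
  }
  return processed;
}

const id = enqueue('generate-content', { prompt: 'Write a poem' });
console.log(id);          // 1
console.log(runWorker()); // [ 'generate-content' ]
```

In the real setup the two halves run in different processes on different machines; the queue server in between is what lets them share state.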

Project Setup

1. Create a Next.js App

# Create new Next.js project
npx create-next-app@latest my-app --typescript --tailwind --app
cd my-app

# Install flashQ
npm install flashq

2. Environment Variables

Create a .env.local file:

# flashQ server connection
FLASHQ_HOST=your-flashq-server.railway.app
FLASHQ_PORT=6789
FLASHQ_TOKEN=your-secret-token

# Or use HTTP for serverless environments
FLASHQ_HTTP_URL=https://your-flashq-server.railway.app

3. Create the Queue Client

// lib/queue.ts
import { Queue } from 'flashq';

let queue: Queue | null = null;

export function getQueue(): Queue {
  if (!queue) {
    queue = new Queue('tasks', {
      connection: {
        host: process.env.FLASHQ_HOST!,
        port: parseInt(process.env.FLASHQ_PORT || '6789'),
        token: process.env.FLASHQ_TOKEN,
      },
      // Use HTTP mode for serverless (no persistent TCP connections)
      useHttp: true,
    });
  }
  return queue;
}

💡 Why HTTP Mode?

Serverless functions can't maintain persistent TCP connections. HTTP mode makes a fresh request for each operation, which works perfectly with Vercel's execution model.

Creating API Routes

Example 1: AI Content Generation

// app/api/generate/route.ts
import { NextRequest, NextResponse } from 'next/server';
import { getQueue } from '@/lib/queue';

export async function POST(request: NextRequest) {
  const { prompt, userId } = await request.json();

  // Validate input
  if (!prompt || !userId) {
    return NextResponse.json(
      { error: 'Missing required fields' },
      { status: 400 }
    );
  }

  // Enqueue the job (returns immediately)
  const queue = getQueue();
  const job = await queue.add('generate-content', {
    prompt,
    userId,
    createdAt: new Date().toISOString(),
  }, {
    // Job options
    priority: 10,
    attempts: 3,
    backoff: { type: 'exponential', delay: 1000 },
  });

  // Return job ID for tracking
  return NextResponse.json({
    success: true,
    jobId: job.id,
    message: 'Content generation started',
  });
}

Example 2: Check Job Status

// app/api/jobs/[id]/route.ts
import { NextRequest, NextResponse } from 'next/server';
import { getQueue } from '@/lib/queue';

export async function GET(
  request: NextRequest,
  { params }: { params: { id: string } }
) {
  const queue = getQueue();

  // Get job status
  const job = await queue.getJob(params.id);

  if (!job) {
    return NextResponse.json(
      { error: 'Job not found' },
      { status: 404 }
    );
  }

  // Get progress and result
  const state = await queue.getState(params.id);
  const progress = await queue.getProgress(params.id);
  const result = state === 'completed' ? await queue.getResult(params.id) : null;

  return NextResponse.json({
    id: job.id,
    state,
    progress,
    result,
    data: job.data,
  });
}

Example 3: Send Email Endpoint

// app/api/email/route.ts
import { NextRequest, NextResponse } from 'next/server';
import { getQueue } from '@/lib/queue';

export async function POST(request: NextRequest) {
  const { to, subject, template, data } = await request.json();

  const queue = getQueue();
  const job = await queue.add('send-email', {
    to,
    subject,
    template,
    data,
  }, {
    attempts: 5,  // Email delivery can be flaky
    backoff: { type: 'exponential', delay: 5000 },
  });

  return NextResponse.json({ queued: true, jobId: job.id });
}

Creating the Worker

The worker runs separately from your Next.js app. You can deploy it on Railway, Fly.io, or any server that supports long-running processes.

// worker/index.ts
import { Worker } from 'flashq';
import OpenAI from 'openai';
import { Resend } from 'resend';

const openai = new OpenAI();
const resend = new Resend(process.env.RESEND_API_KEY);

// Create worker for the tasks queue
const worker = new Worker('tasks', async (job) => {
  console.log(`Processing job ${job.id}: ${job.name}`);

  switch (job.name) {
    case 'generate-content':
      return await handleContentGeneration(job);

    case 'send-email':
      return await handleSendEmail(job);

    default:
      throw new Error(`Unknown job type: ${job.name}`);
  }
}, {
  connection: {
    host: process.env.FLASHQ_HOST,
    port: parseInt(process.env.FLASHQ_PORT || '6789'),
    token: process.env.FLASHQ_TOKEN,
  },
  concurrency: 5,
});

// Handler: AI Content Generation
async function handleContentGeneration(job) {
  const { prompt, userId } = job.data;

  // Update progress
  await job.updateProgress(10, 'Starting generation...');

  // Call OpenAI
  const response = await openai.chat.completions.create({
    model: 'gpt-4',
    messages: [{ role: 'user', content: prompt }],
    max_tokens: 2000,
  });

  await job.updateProgress(90, 'Saving result...');

  // Save to database (pseudo-code: `db` stands in for your ORM or query client)
  await db.content.create({
    userId,
    content: response.choices[0].message.content,
    jobId: job.id,
  });

  await job.updateProgress(100, 'Complete!');

  return {
    content: response.choices[0].message.content,
    tokens: response.usage?.total_tokens,
  };
}

// Handler: Send Email
async function handleSendEmail(job) {
  const { to, subject, template, data } = job.data;

  const { data: sent, error } = await resend.emails.send({
    from: 'noreply@example.com',
    to,
    subject,
    html: renderTemplate(template, data),
  });

  if (error) throw new Error(`Email failed: ${error.message}`);

  return { emailId: sent?.id };
}

// Event handlers
worker.on('completed', (job, result) => {
  console.log(`✓ Job ${job.id} completed`);
});

worker.on('failed', (job, error) => {
  console.error(`✗ Job ${job.id} failed:`, error.message);
});

console.log('Worker started, waiting for jobs...');
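The handleSendEmail function above calls a renderTemplate helper that this guide never defines. A minimal stand-in (purely illustrative; in production use a real templating or email-rendering library) could look like:

```typescript
// Illustrative stand-in for renderTemplate: naive {{key}} substitution.
// Not production-grade: no HTML escaping, no nesting, no partials.
function renderTemplate(template: string, data: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (_match, key: string) => data[key] ?? '');
}

console.log(renderTemplate('Hello {{name}}, order {{id}} shipped!', { name: 'Ada', id: '42' }));
// → Hello Ada, order 42 shipped!
```

Missing keys render as empty strings here; a stricter version might throw instead so a bad template fails the job and triggers a retry.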

Frontend Integration

React Hook for Job Tracking

// hooks/useJob.ts
import { useState, useEffect } from 'react';

export function useJob(jobId: string | null) {
  const [status, setStatus] = useState<{
    state: string;
    progress: number;
    result: any;
  } | null>(null);

  useEffect(() => {
    if (!jobId) return;

    let cancelled = false;
    let timer: ReturnType<typeof setTimeout> | undefined;

    const pollStatus = async () => {
      const res = await fetch(`/api/jobs/${jobId}`);
      const data = await res.json();
      if (cancelled) return;
      setStatus(data);

      // Keep polling until the job reaches a terminal state
      if (data.state !== 'completed' && data.state !== 'failed') {
        timer = setTimeout(pollStatus, 1000);
      }
    };

    pollStatus();

    // Stop polling when the component unmounts or jobId changes
    return () => {
      cancelled = true;
      if (timer) clearTimeout(timer);
    };
  }, [jobId]);

  return status;
}

// components/GenerateButton.tsx
'use client';
import { useState } from 'react';
import { useJob } from '@/hooks/useJob';

export function GenerateButton() {
  const [jobId, setJobId] = useState<string | null>(null);
  const job = useJob(jobId);

  const handleGenerate = async () => {
    const res = await fetch('/api/generate', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ prompt: 'Write a poem', userId: '123' }),
    });
    const { jobId } = await res.json();
    setJobId(jobId);
  };

  return (
    <div>
      <button onClick={handleGenerate}>Generate Content</button>
      {job && (
        <div>
          <p>Status: {job.state}</p>
          <progress value={job.progress} max={100} />
          {job.result && <pre>{JSON.stringify(job.result, null, 2)}</pre>}
        </div>
      )}
    </div>
  );
}

Deployment

Deploy flashQ Server to Railway

# railway.toml
[build]
builder = "dockerfile"

[deploy]
startCommand = "./flashq-server"
healthcheckPath = "/health"
healthcheckTimeout = 30

# Dockerfile
FROM debian:bookworm-slim
WORKDIR /app
COPY flashq-server .
RUN chmod +x flashq-server
EXPOSE 6789 6790
CMD ["./flashq-server"]

Deploy Worker to Railway

# worker/Dockerfile
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
CMD ["node", "dist/index.js"]

Vercel Environment Variables

In your Vercel dashboard, add these environment variables:

FLASHQ_HOST=your-flashq.railway.app
FLASHQ_PORT=6789
FLASHQ_TOKEN=your-secret-token

Common Patterns

Webhook Processing

// app/api/webhook/stripe/route.ts
import { NextRequest, NextResponse } from 'next/server';
import { getQueue } from '@/lib/queue';

// In production, verify the Stripe signature before trusting the payload
export async function POST(request: NextRequest) {
  const event = await request.json();

  // Enqueue for processing (respond to Stripe quickly!)
  await getQueue().add('stripe-webhook', event, {
    jobId: event.id,  // Idempotency
  });

  return NextResponse.json({ received: true });
}

Scheduled Tasks with Cron

// Register recurring jobs once at worker startup
// (queue is a flashQ Queue instance pointing at the same server)
await queue.addCron('daily-report', {
  queue: 'tasks',
  schedule: '0 0 9 * * *',  // 9 AM daily
  data: { type: 'daily-report' },
});

await queue.addCron('cleanup', {
  queue: 'tasks',
  schedule: '0 0 0 * * 0',  // Weekly on Sunday
  data: { type: 'cleanup' },
});

🚀 Pro Tip

Use job IDs for idempotency on webhooks. If Stripe retries a webhook, the second request will be a no-op since the job ID already exists.
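The enqueue side of that guarantee can be sketched with a plain in-memory guard (illustrative only; flashQ presumably performs this check server-side against its job store, so it also holds across multiple Vercel instances):

```typescript
// Illustrative in-memory model of jobId-based idempotency.
const enqueuedIds = new Set<string>();

function addJobOnce(jobId: string): boolean {
  if (enqueuedIds.has(jobId)) return false; // duplicate delivery: no-op
  enqueuedIds.add(jobId);
  return true;                              // first delivery: actually enqueued
}

console.log(addJobOnce('evt_abc')); // first Stripe delivery: true
console.log(addJobOnce('evt_abc')); // Stripe retry, same event ID: false
```

Because Stripe reuses the event ID on every retry of the same event, keying the job on it makes the handler safe to call any number of times.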

Conclusion

With flashQ, you can add powerful background processing to your Next.js app without the complexity of managing Redis. The key points:

  • Use HTTP mode for serverless environments
  • Enqueue fast, process separately - keep API routes under timeout limits
  • Track progress with job IDs and polling
  • Deploy worker separately on Railway, Fly.io, or your own server

Build Something Amazing

Get started with flashQ and Next.js in minutes.
