Framework-Specific Guides March 3, 2026 · 5 min read

Laravel Queue Workers in Production: A Survival Guide

You've built your Laravel app with AI assistance, your queues are handling everything from sending emails to processing images, and now you're ready to deploy. But then reality hits: production queue workers are beasts that need proper taming.

If you've ever had queue workers mysteriously stop processing jobs, crash under load, or drain your server resources, this guide is for you. Let's turn your queue chaos into a well-oiled machine.

The Queue Worker Reality Check

First, let's be honest about what queue workers actually are: long-running PHP processes that can be fragile, memory-hungry, and prone to silent failures. In development, restarting php artisan queue:work when something breaks is no big deal. In production? That's downtime, angry users, and 3 AM debugging sessions.

Here's what most developers get wrong: they treat queue workers like fire-and-forget background scripts. Wrong. They're critical infrastructure that needs the same attention as your web servers.

Process Management: Your First Line of Defense

Supervisor: The Old Reliable

Supervisor is the battle-tested process manager that keeps your queue workers alive. Here's a production-ready configuration:

[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /path/to/your/app/artisan queue:work --sleep=3 --tries=3 --max-time=3600
directory=/path/to/your/app
autostart=true
autorestart=true
startretries=3
user=www-data
numprocs=2
redirect_stderr=true
stdout_logfile=/var/log/supervisor/laravel-worker.log
stopwaitsecs=3600

Key settings explained:

  • --max-time=3600: Workers restart every hour to prevent memory bloat
  • numprocs=2: Run multiple workers for reliability and throughput
  • stopwaitsecs=3600: Give jobs time to finish before force-killing
  • startretries=3: Don't give up too easily on failed starts
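Once the config is saved (assuming a standard path like /etc/supervisor/conf.d/laravel-worker.conf — adjust for your distro), load it and bring the workers up:

```shell
# Re-read config files and apply any changes
sudo supervisorctl reread
sudo supervisorctl update

# Start every process in the laravel-worker group
sudo supervisorctl start "laravel-worker:*"

# Verify both workers are RUNNING
sudo supervisorctl status "laravel-worker:*"
```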

Systemd: The Modern Alternative

If you prefer systemd (and you should on modern Linux systems), here's the service file:

[Unit]
Description=Laravel queue worker
After=network.target

[Service]
User=www-data
Group=www-data
Restart=always
ExecStart=/usr/bin/php /path/to/your/app/artisan queue:work --sleep=3 --tries=3 --max-time=3600
WorkingDirectory=/path/to/your/app

[Install]
WantedBy=multi-user.target
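Assuming the unit is saved as /etc/systemd/system/laravel-worker.service, enable it the usual way. If you want systemd's equivalent of Supervisor's numprocs, rename the file to laravel-worker@.service and start numbered instances of the template instead:

```shell
# Pick up the new unit file
sudo systemctl daemon-reload

# Start the worker now and on every boot
sudo systemctl enable --now laravel-worker.service

# Templated alternative: two parallel workers from laravel-worker@.service
# sudo systemctl enable --now laravel-worker@1.service laravel-worker@2.service

# Tail the worker logs
journalctl -u laravel-worker.service -f
```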

Memory Management: The Silent Killer

Laravel queue workers are notorious for memory leaks. Even with --max-time, you need to be proactive:

Set Memory Limits

php artisan queue:work --memory=512 --max-time=3600

The worker checks its memory usage after each job and exits gracefully once it exceeds 512MB; Supervisor or systemd then restarts it with a clean slate. Adjust the limit based on your job complexity, and keep it below PHP's own memory_limit.

Monitor Memory Usage

Add this to your monitoring stack:

use Illuminate\Support\Facades\Log;

// In your job class
public function handle()
{
    // Your job logic here

    // Log memory usage for large jobs
    if (memory_get_usage(true) > 100 * 1024 * 1024) { // 100MB
        Log::warning('High memory usage in job', [
            'job' => static::class,
            'memory_bytes' => memory_get_usage(true),
        ]);
    }
}

Error Handling and Resilience

Failed Job Strategy

Don't let failed jobs disappear into the void:

// config/queue.php
'failed' => [
    'driver' => 'database-uuids',
    'database' => env('DB_CONNECTION', 'mysql'),
    'table' => 'failed_jobs',
],

Set up automated alerts for failed jobs:

// In a scheduled command
use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Notification;

$recentFailures = DB::table('failed_jobs')
    ->where('failed_at', '>', now()->subMinutes(5))
    ->count();

if ($recentFailures > 10) {
    // QueueFailureAlert is your own notification class
    Notification::route('slack', config('services.slack.webhook'))
        ->notify(new QueueFailureAlert($recentFailures));
}
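Once failures land in that table, Laravel's built-in artisan commands let you inspect and replay them by hand:

```shell
# List failed jobs with their UUIDs
php artisan queue:failed

# Retry a single job by UUID, or everything at once
php artisan queue:retry <uuid>
php artisan queue:retry all

# Remove one entry, or wipe the whole table
php artisan queue:forget <uuid>
php artisan queue:flush
```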

Graceful Degradation

For critical jobs, build in a fallback path that kicks in on retries (a lightweight cousin of the circuit breaker pattern):

use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Support\Facades\Log;

class ProcessPaymentJob implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue;

    public function handle()
    {
        if ($this->attempts() > 1) {
            // A previous attempt failed -- take the conservative path
            Log::warning('Payment retry detected, using fallback processing path');
            $this->processPaymentFallback();
            return;
        }

        $this->processPayment();
    }
}

Scaling Strategies

Queue Prioritization

Not all jobs are created equal. A worker drains queues in the order you list them, so critical jobs never wait behind bulk work:

# High priority worker for critical jobs
php artisan queue:work --queue=critical,high,default --sleep=1 --tries=3

# Separate worker for heavy background tasks
php artisan queue:work --queue=heavy --sleep=5 --tries=1 --timeout=1800
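For that split to matter, jobs have to land on the right queue when dispatched. A quick sketch (the job class names here are placeholders for your own):

```php
// Route critical work to the fast worker pool
ChargeCustomerJob::dispatch($order)->onQueue('critical');

// Heavy work goes to the dedicated slow pool
GenerateVideoPreviewJob::dispatch($upload)->onQueue('heavy');

// Or pin a job to its queue permanently, inside the class
class GenerateVideoPreviewJob implements ShouldQueue
{
    use Queueable;

    public $queue = 'heavy';
}
```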

Horizontal Scaling

When vertical scaling isn't enough, distribute workers across multiple servers:

// config/queue.php -- use Redis so every server shares one queue
'redis' => [
    'driver' => 'redis',
    'connection' => 'default',
    'queue' => env('REDIS_QUEUE', 'default'),
    // Must exceed your longest job's --timeout, or a second
    // worker will pick the job up while it's still running
    'retry_after' => 1830,
    'block_for' => null,
],

Monitoring and Observability

Essential Metrics

Track these metrics for queue health:

// Custom artisan command for queue metrics
use Illuminate\Console\Command;
use Illuminate\Support\Facades\Cache;
use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Queue;

class QueueMetricsCommand extends Command
{
    protected $signature = 'queue:metrics';

    public function handle()
    {
        $queueSize = Queue::size(); // pending jobs on the default queue
        $failedJobs = DB::table('failed_jobs')->count();
        $processingJobs = Cache::get('jobs_processing', 0); // assumes your jobs maintain this counter

        // sendMetric() is your own bridge to StatsD, CloudWatch, etc.
        $this->sendMetric('queue.size', $queueSize);
        $this->sendMetric('queue.failed', $failedJobs);
        $this->sendMetric('queue.processing', $processingJobs);
    }
}
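To emit those numbers continuously, register the command in Laravel's scheduler (the queue:metrics signature here is whatever you defined on the command class):

```php
// app/Console/Kernel.php
protected function schedule(Schedule $schedule)
{
    // Push queue metrics every minute; skip a run if the last one is still going
    $schedule->command('queue:metrics')
        ->everyMinute()
        ->withoutOverlapping();
}
```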

Health Checks

Implement queue health endpoints:

// routes/api.php
Route::get('/health/queues', function () {
    $queueSize = Queue::size();
    $isHealthy = $queueSize < 1000; // Adjust threshold
    
    return response()->json([
        'healthy' => $isHealthy,
        'queue_size' => $queueSize,
        'timestamp' => now(),
    ], $isHealthy ? 200 : 503);
});
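Point your uptime monitor at that endpoint. Because it returns a 503 when unhealthy, a plain HTTP check is enough (the hostname below is a placeholder):

```shell
# -f makes curl exit non-zero on the 503, which most monitors treat as "down"
curl -fsS https://your-app.example.com/api/health/queues
```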

Deployment Best Practices

Zero-Downtime Deployments

Queues complicate deployments. Here's the safe approach:

#!/bin/bash
# In your deployment script

# 1. Deploy the new code
git pull && composer install --no-dev --optimize-autoloader

# 2. Rebuild caches so workers boot with fresh config
php artisan config:cache

# 3. Signal running workers to exit gracefully after their current job
#    (queue:restart stores the signal in the cache, so a working cache driver is required)
php artisan queue:restart

# 4. Supervisor sees the workers exit and respawns them on the new code.
#    If you need a hard restart instead:
# sudo supervisorctl restart laravel-worker:*

Configuration Management

Use environment-specific queue configurations:

// config/queue.php
'connections' => [
    'production' => [
        'driver' => 'redis',
        'connection' => 'default',
        'queue' => 'production',
        'retry_after' => 300,
        'block_for' => null,
    ],
],
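Then select the connection per environment through .env, so the same codebase can run jobs inline locally and through Redis in production:

```shell
# .env on the production server
QUEUE_CONNECTION=production

# .env locally -- run jobs synchronously while developing
# QUEUE_CONNECTION=sync
```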

The DeployMyVibe Advantage

Managing all this infrastructure is exactly why services like DeployMyVibe exist. We handle the queue worker setup, monitoring, scaling, and maintenance so you can focus on building features, not babysitting processes.

Our managed Laravel hosting includes:

  • Pre-configured Supervisor with optimal settings
  • Automatic worker scaling based on queue depth
  • Built-in monitoring and alerting
  • Zero-downtime deployment pipelines
  • 24/7 infrastructure management

Wrapping Up

Queue workers in production aren't just about running php artisan queue:work. They're critical infrastructure that needs proper process management, monitoring, error handling, and scaling strategies.

The good news? Once you get this setup right, your queues become a superpower. Jobs process reliably, your app stays responsive, and you can sleep peacefully knowing your background tasks are handled.

Remember: treat your queue workers like the production services they are, and they'll serve you well. Neglect them, and they'll remind you at the worst possible moment.

Need help setting up bulletproof queue infrastructure? That's exactly what we built DeployMyVibe to solve. Let us handle the DevOps complexity while you focus on shipping features.

Alex Hackney

DeployMyVibe
