
Why Cron Jobs Fail Silently in Production and How to Fix It

Learn the most common reasons why cron jobs fail silently in production and how to debug, fix, and prevent them

Cron jobs are one of the easiest tools we use in production, but they can also be one of the most annoying when something goes wrong. You set up a script, schedule it, and then forget about it, thinking it will work. Then, after a few days or weeks, you find out that it didn’t run at all. No warnings. No records. No errors. Nothing but silence.

This guide will show the most common reasons why cron jobs fail silently in production. It will give you real-life examples of what went wrong, and show you exactly how to fix each problem.

Common Causes of Silent Cron Failures and How to Fix Them

Here are some of the reasons why your cron jobs might be failing silently in production and how to fix them:

1. Cron Doesn’t Use Your Shell Environment

This is the most common reason cron jobs fail silently. When you run a script manually on your terminal, it uses the following:

  • Your shell environment (this could be Bash, Zsh, etc. depending on your Linux system)
  • Your path (that is $PATH)
  • Your environment variables, aliases, and functions.

However, cron jobs get none of these. Cron runs each job with a minimal environment, which means a command that works in your terminal may fail when cron runs it.

For example, this command works when run manually on the terminal:

aws s3 sync /data s3://my-bucket

But when run as a cron job, it fails because aws is not in cron’s PATH.

/bin/sh: 1: aws: not found

Fix: Always use the full path to the command in your cron entry or script:

* * * * * /usr/local/bin/aws s3 sync /data s3://my-bucket

If you’re not sure of the location of the command, check the path with the which command:

which aws

Additionally, you can log the PATH cron is actually using by adding a temporary job:

* * * * * echo $PATH >> /tmp/cron_path.log
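
You can also set PATH at the top of the crontab itself so every job in that file inherits it. A minimal sketch; the exact directories depend on where your tools are installed:

PATH=/usr/local/bin:/usr/bin:/bin

* * * * * aws s3 sync /data s3://my-bucket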

2. Cron Uses /bin/sh, Not Bash

By default, cron runs jobs with /bin/sh, not Bash. If your job relies on Bash-specific syntax, it will fail under cron.

Here are common Bash-isms that break under a plain /bin/sh:

  • [[ ... ]]
  • source (POSIX sh uses . instead)
  • arrays
  • set -o pipefail
  • process substitution, e.g. <( ... )

For example, this command will work interactively on your terminal but fail when run as a cron job:

[[ -f /tmp/file ]] && echo "Exists"

Fix: You need to tell cron explicitly which shell to use. You can do this by adding a shebang (#!) at the top of your script:

#!/bin/bash

[[ -f /tmp/file ]] && echo "Exists"

You can also invoke Bash explicitly in the crontab entry:

* * * * * /bin/bash /opt/scripts/cleanup.sh
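
Crontab also supports a SHELL variable, so one option is to switch every job in the file to Bash at once:

SHELL=/bin/bash

* * * * * /opt/scripts/cleanup.sh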

3. Cron Output Is Being Discarded

By default, cron emails each job’s output to the local mail system. Most modern servers don’t have mail configured, so that output is silently discarded. As a result, jobs fail and you never see the error.

Fix: One of the most effective ways is to redirect stdout and stderr to a log file:

* * * * * /opt/scripts/cleanup.sh >> /var/log/cleanup.log 2>&1

To keep the log manageable, rotate it with logrotate so it doesn’t grow endlessly. As a rule, every cron job should run with explicit logging.
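
As a rough sketch, a logrotate policy for this log could look like the following, saved as a file under /etc/logrotate.d/. The weekly schedule and four-file retention are assumptions; tune them to your retention needs:

/var/log/cleanup.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}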

4. Permissions and Ownership Issues

Cron jobs often fail because the user running the job doesn’t have permission to execute the script or access the files it needs.

This usually happens when:

  • The script is not executable
  • The user can’t write to a directory
  • The job relies on Docker or system-level commands
  • The script was created by another user

Fix: Make sure your script is executable and owned correctly:

chmod +x /opt/scripts/cleanup.sh
chown cronuser:cronuser /opt/scripts/cleanup.sh

Always do a quick test as the cron user to confirm permissions are correct:

su - cronuser -c "/opt/scripts/cleanup.sh"

If it fails here, it will fail under cron too.
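
To pinpoint which permission is missing, it can help to inspect the script and every directory leading to it. namei ships with util-linux on most Linux distributions:

# Ownership and permissions of the script itself
ls -l /opt/scripts/cleanup.sh

# Permissions of every directory along the path
namei -l /opt/scripts/cleanup.sh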

5. Relative Paths Break Under Cron

Cron doesn’t run your job from your Git repo or project folder; the working directory is typically the user’s home directory. Relative paths therefore rarely point where you expect.

For example, this command will fail silently if run as a cron job:

* * * * * cd scripts && ./cleanup.sh

This is because cron has no idea where the scripts directory is.

Fix: You should always use absolute paths in your cron jobs:

* * * * * /opt/scripts/cleanup.sh

You can also explicitly set the working directory:

* * * * * cd /opt/scripts && ./cleanup.sh

Absolute paths are safer and far more predictable in production.
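
If the script itself depends on files relative to its own location, one defensive pattern is to have it change into its own directory first. A minimal sketch:

#!/usr/bin/env bash

# Change into the directory this script lives in so that
# relative paths inside it resolve the same way under cron
cd "$(dirname "$0")" || exit 1

# From here, relative references like ./config.yml behave as expected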

6. Missing Environment Variables

Cron does not load shell profile files like .bashrc, .zshrc, or .profile, so any environment variables your script depends on are missing by default. This is one major reason backup jobs, cloud uploads, and API calls fail silently in production.

For example, this command will work manually on the terminal:

export AWS_ACCESS_KEY_ID=xxx
export AWS_SECRET_ACCESS_KEY=yyy
./backup.sh

But will fail under cron because those variables are not defined in cron’s environment.

Fix: There are three options to fix this.

Option 1: Source a known environment file

* * * * * . /etc/profile && /opt/scripts/backup.sh

Option 2: Load variables inside the script

export AWS_ACCESS_KEY_ID="xxx"
export AWS_SECRET_ACCESS_KEY="yyy"

Option 3: Use a .env file with strict permissions and source it explicitly

* * * * * . /opt/scripts/.env && /opt/scripts/backup.sh
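
As a sketch of option 3, the .env file is just a list of exports that only the cron user can read. The variable values here are placeholders:

# /opt/scripts/.env
export AWS_ACCESS_KEY_ID="xxx"
export AWS_SECRET_ACCESS_KEY="yyy"

Lock it down so other users can’t read the credentials:

chmod 600 /opt/scripts/.env
chown cronuser:cronuser /opt/scripts/.env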

7. You’re Editing the Wrong Crontab

Even experienced engineers fall for this. The root user’s crontab is separate from each user’s crontab, so the job exists, but it never runs because it lives in the wrong one.

Fix: When setting up cron jobs, always verify which crontab you’re editing:

crontab -l
sudo crontab -l
sudo crontab -u devops -l

If the job isn’t listed there, it’s not running there.
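
To rule this out across the whole machine, a quick sketch is to list every user’s crontab along with the system-wide locations cron also reads (run as root):

# Every user's crontab
for u in $(cut -d: -f1 /etc/passwd); do
    echo "== $u =="
    sudo crontab -u "$u" -l 2>/dev/null
done

# System-wide schedules
cat /etc/crontab
ls /etc/cron.d/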

8. The Cron Daemon Isn’t Running

Sometimes the cron daemon itself isn’t running. This can happen for many reasons, such as the service crashing or being stopped during maintenance. When it does, none of your cron jobs run, and you might not even notice.

Fix: You can check if the cron daemon is running:

systemctl status cron
# or
systemctl status crond

If it’s not running, start it and enable it at boot:

sudo systemctl start cron
sudo systemctl enable cron
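
You can also confirm from the logs that cron is actually firing jobs. Where those logs live varies by distribution, so treat these as starting points:

# systemd-based systems
journalctl -u cron --since "1 hour ago"

# Debian/Ubuntu
grep CRON /var/log/syslog

# RHEL/CentOS
grep CROND /var/log/cron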

9. Long-Running Jobs Overlap and Kill Each Other

Cron doesn’t care whether the previous run of your job is still going.

If a job takes longer than its scheduled interval, it overlaps with itself and can fail in hard-to-predict ways.

For example, suppose you have a job scheduled every minute:

* * * * * /opt/scripts/report.sh

If report.sh takes 90 seconds, multiple instances stack up.

Fix: Use a lock to prevent multiple instances from running at the same time. Save a wrapper like the one below (for example, as /opt/scripts/report-wrapper.sh) and point the cron entry at the wrapper instead of report.sh directly:

#!/usr/bin/env bash

LOCKDIR="/tmp/report.lock"

# mkdir is atomic: it fails if the lock directory already exists,
# which means another instance is still running
if ! mkdir "$LOCKDIR" 2>/dev/null; then
    echo "Report is already running"
    exit 1
fi

# Remove the lock on exit, even if the report fails
trap "rmdir '$LOCKDIR'" EXIT

# Run the actual report
/opt/scripts/report.sh

This ensures only one instance runs at a time.
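
If util-linux is available (it is on most Linux servers), flock gives you the same guarantee directly from the crontab entry, without a wrapper script:

* * * * * flock -n /tmp/report.lock /opt/scripts/report.sh

The -n flag makes the new run exit immediately if the previous run still holds the lock.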

10. Cron Jobs Inside Docker Aren’t Running at All

Cron does not run inside a Docker container by default. You need to install cron explicitly, add your crontab, and run the daemon in the foreground so the container stays alive.

Fix: Install cron and run it in the foreground:

# Install cron
RUN apt-get update && apt-get install -y cron

# Start cron in foreground
CMD ["cron", "-f"]

Also verify the crontab is actually present inside the container:

cat /etc/crontab
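
Putting it together, a minimal Dockerfile sketch for a Debian-based image might look like this. The my-crontab file name is an assumption; it’s a standard five-field crontab in your build context:

FROM debian:bookworm-slim

# Install cron
RUN apt-get update && apt-get install -y cron && rm -rf /var/lib/apt/lists/*

# Install the schedule as root's crontab
COPY my-crontab /tmp/my-crontab
RUN crontab /tmp/my-crontab

# Run cron in the foreground so the container stays alive
CMD ["cron", "-f"]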

11. Failure to Add Error Logging

Some scripts fail, exit, and leave no record of what happened. Without proper error handling and logging, you have no visibility into what went wrong.

Fix: Add strict mode to every cron script so failures stop the script instead of being ignored:

#!/usr/bin/env bash
set -euo pipefail

# Your script logic here

Then log key details about each execution:

LOGFILE="/var/log/cron-debug.log"

{
  echo "----- $(date) -----"
  echo "User: $(whoami)"
  echo "Running: $0"
  echo "PATH: $PATH"
} >> "$LOGFILE" 2>&1

These two changes surface most silent failures.
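
For line-level detail when something does fail, Bash’s ERR trap can record where the script stopped. A minimal sketch that appends to the same log file:

#!/usr/bin/env bash
set -euo pipefail

LOGFILE="/var/log/cron-debug.log"

# When any command fails, log the timestamp and the failing line number
trap 'echo "$(date) ERROR on line $LINENO" >> "$LOGFILE"' ERR

# Your script logic here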

How CloudRay Helps You Avoid Silent Cron Failures

Cron jobs fail without anyone noticing because cron was never designed to be observed, audited, or debugged. Once a job is scheduled, you have no way of knowing whether it ran, failed halfway through, or never ran at all.

CloudRay takes a different approach to scheduled automation.

CloudRay lets you schedule scripts from one place instead of juggling crontabs across servers, and it keeps a full history of every run: the start time, the end time, the exit status, and the output in real time.

CloudRay’s Run Logs record every scheduled run, making it easy to review past executions, debug problems, and confirm that a job really ran. You don’t have to redirect output to log files or SSH into servers to find out what went wrong.

CloudRay also addresses many of the common cron problems covered above:

  • Scripts run in a controlled execution environment
  • Logs are saved automatically
  • Failures are visible immediately
  • Execution history is kept across all servers

You still write Bash scripts the same way. Instead of just hoping that a cron job worked, you can check to see if it did and know exactly what went wrong when it didn’t.

Get Started with CloudRay
Olusegun Durojaye

CloudRay Engineering Team