Cloud & Infrastructure

Linux Server Running Out of Inodes (Not Disk Space): How to Diagnose and Fix

Your server says "No space left on device" but df shows 60 percent free space. The real problem is inode exhaustion: you have too many files. Here is how to diagnose, fix, and prevent it.


Alex Thompson

CEO & Cloud Architecture Expert at ZeonEdge with 15+ years building enterprise infrastructure.

February 2, 2026
13 min read

You try to create a file, deploy an application, or start a service, and Linux returns the error "No space left on device." You check disk usage with df -h and see that you have 60 percent free space. You check again, try different commands, and get confused: there is plenty of disk space, yet the system insists the device is full.

The problem is not disk space. It is inodes. Every file and directory on a Linux filesystem requires an inode, a data structure that stores metadata about the file (permissions, ownership, timestamps, and pointers to the actual data blocks). The number of inodes is fixed when the filesystem is created, and when you run out of inodes, you cannot create new files even if you have terabytes of free space.

What Is an Inode?

An inode (index node) is a fundamental concept in Unix and Linux filesystems. Think of a filesystem as a library: the inode is the catalog card for each book. The catalog card does not contain the book itself; it contains information about the book (title, author, location) and tells you where to find it on the shelves. Similarly, an inode does not contain the file data; it contains metadata and pointers to the disk blocks where the data is stored.

Each inode stores: the file type (regular file, directory, symlink, socket, etc.), permissions and ownership (user, group, mode), timestamps (last access, last modification, and last inode change), file size, the number of hard links pointing to it, and pointers to the data blocks on disk.
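You can inspect all of this metadata directly with stat. A quick demo on a throwaway file (created with mktemp so nothing real is touched):

```shell
# Create a throwaway file and inspect its inode metadata
f=$(mktemp)
stat "$f"     # shows the inode number, type, permissions, link count, and timestamps
ls -i "$f"    # prints the inode number before the filename
rm -f "$f"
```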

Crucially, the inode does NOT store the filename. Filenames are stored in directory entries, which map a name to an inode number. This is why hard links work: multiple directory entries (names) can point to the same inode (file data).
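You can see this in action by creating a hard link and comparing inode numbers; a throwaway demo in a temporary directory:

```shell
# Two names, one inode: a hard link shares the inode of the original file
d=$(mktemp -d)
echo "hello" > "$d/original.txt"
ln "$d/original.txt" "$d/hardlink.txt"   # hard link (ln without -s)
ls -i "$d"                               # both entries show the same inode number
stat -c '%h' "$d/original.txt"           # link count is now 2
rm -rf "$d"
```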

Diagnosing Inode Exhaustion

Check inode usage with df -i:

$ df -i
Filesystem     Inodes   IUsed  IFree IUse% Mounted on
/dev/sda1     6553600 6553600      0  100% /
tmpfs         1024000       5 1023995    1% /dev/shm

If IUse% is at 100% for your root filesystem (or whichever filesystem is full), you have found the problem. Compare this with disk space usage:

$ df -h
Filesystem     Size  Used Avail Use% Mounted on
/dev/sda1       50G   20G   28G  42% /

This server has 28 GB of free disk space but zero free inodes; it cannot create a single new file.

Finding the Inode Consumers

To fix the problem, you need to find which directories contain the most files. Use this command to count files per directory:

# Count files per parent directory, largest counts first
sudo find / -xdev -printf '%h\n' | sort | uniq -c | sort -rn | head -30

This command may take several minutes on a large filesystem. It lists the 30 directories that directly contain the most files. Common culprits include:

/var/spool/mail or /var/mail: Millions of undelivered email notification files accumulate when a mail service is misconfigured. Each email is a separate file, and a server receiving spam or bounced mail can generate millions of tiny files.

/tmp: Applications that create temporary files without cleaning them up. PHP session files, compiled templates, temporary uploads, and cache files can accumulate to millions of entries.

/var/lib/docker/overlay2: Docker container filesystem layers. Each container and image layer creates thousands of files. A server running many containers or building many images can exhaust inodes through overlay2 alone.

/var/log: Log files that are rotated into millions of small files instead of being properly compressed and removed.

node_modules or vendor directories: A single Node.js project can create 50,000 to 100,000 files in node_modules. Ten projects on the same server mean 500,000 to 1,000,000 files just from JavaScript dependencies.

Cache directories: Package manager caches (pip, npm, apt), thumbnail caches, font caches, and application-specific caches can accumulate millions of small files.
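If you prefer per-directory totals that include subdirectories, GNU du (coreutils 8.22 and later) can count inodes instead of bytes. A sketch, demonstrated on a scratch directory so it is safe to run; on a real server you would point it at / with -x and sudo:

```shell
# du --inodes counts one inode per file, directory, and symlink
d=$(mktemp -d)
mkdir -p "$d/a" && touch "$d/a/f1" "$d/a/f2"
du --inodes "$d"   # each total includes the directory itself
rm -rf "$d"
# On a real server: sudo du --inodes -x / 2>/dev/null | sort -rn | head -20
```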

Fixing Inode Exhaustion

Once you identify the directories consuming the most inodes, clean them up:

# Remove old temp files (older than 7 days)
sudo find /tmp -type f -mtime +7 -delete

# Remove old PHP sessions
sudo find /var/lib/php/sessions -type f -mtime +1 -delete

# Remove old files from the mail spool
sudo find /var/spool/mail -type f -mtime +30 -delete

# Remove old log files (keep last 7 days)
sudo find /var/log -type f -name "*.log.*" -mtime +7 -delete
sudo find /var/log -type f -name "*.gz" -mtime +30 -delete

# Clean npm cache
npm cache clean --force

# Clean Docker
docker system prune -a --force

For the node_modules problem, use a build server or Docker containers for builds rather than installing dependencies directly on the server. If you must have node_modules on the server, ensure each deployment cleans up after itself.
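One way to script that cleanup; a sketch demonstrated on a scratch tree, so substitute your real deployment root (e.g. a path like /srv/apps) before using it:

```shell
# Find and remove node_modules trees in one pass
root=$(mktemp -d)
mkdir -p "$root/app1/node_modules/left-pad" "$root/app2/src"
find "$root" -type d -name node_modules -prune -print               # review the list first
find "$root" -type d -name node_modules -prune -exec rm -rf {} +    # then delete
find "$root" -type d -name node_modules | wc -l                     # prints 0
rm -rf "$root"
```

The -prune keeps find from descending into each node_modules tree, so it deletes each one whole instead of walking its tens of thousands of entries.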

Preventing Inode Exhaustion

Prevention is far better than remediation. Implement these measures on every server:

Configure log rotation properly. Ensure logrotate is configured for every application that writes log files. Set rotate 7 to keep only 7 rotated files, and use compress and delaycompress to compress old logs:

# /etc/logrotate.d/myapp
/var/log/myapp/*.log {
    daily
    missingok
    rotate 7
    compress
    delaycompress
    notifempty
    create 0640 www-data adm
    sharedscripts
    postrotate
        systemctl reload myapp 2>/dev/null || true
    endscript
}

Configure tmpwatch or systemd-tmpfiles. Automatically clean temporary directories:

# /etc/tmpfiles.d/cleanup.conf
# Remove files in /tmp older than 10 days
d /tmp 1777 root root 10d
# Remove files in /var/tmp older than 30 days
d /var/tmp 1777 root root 30d
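systemd applies these rules daily via systemd-tmpfiles-clean.timer, and you can trigger a pass manually with sudo systemd-tmpfiles --clean. On systems without systemd, a find-based sweep approximates the same policy; a sketch demonstrated on a scratch directory (a cron job would target /tmp and /var/tmp instead):

```shell
# Age-based cleanup with find; touch -d backdates a file to simulate staleness
d=$(mktemp -d)
touch "$d/fresh"
touch -d '15 days ago' "$d/stale"
find "$d" -mindepth 1 -mtime +10 -delete   # removes stale, keeps fresh
ls "$d"                                    # prints only "fresh"
rm -rf "$d"
```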

Monitor inode usage. Add inode monitoring to your monitoring stack. With Prometheus and node_exporter, the node_filesystem_files_free metric tracks available inodes. Set alerts when inode usage exceeds 80 percent.
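Without a full monitoring stack, even a small cron script can raise the alert. A minimal sketch using df -iP (the POSIX output format keeps the columns stable; the 80 percent threshold is an adjustable assumption):

```shell
# Warn when any filesystem exceeds the inode-usage threshold
df -iP | awk -v t=80 'NR > 1 {
    use = $5; sub(/%/, "", use)              # strip the % sign from IUse%
    if (use + 0 > t)
        printf "WARNING: %s at %s%% inode usage on %s\n", $1, use, $6
}'
```

Filesystems that report "-" for inode usage (some tmpfs and network mounts) evaluate to 0 in the numeric comparison and are skipped automatically.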

Use a filesystem with dynamic inodes. XFS and Btrfs allocate inodes dynamically, so they cannot run out of inodes as long as there is free disk space. When provisioning new servers, consider using XFS instead of ext4 if inode exhaustion is a recurring concern:

# Check current filesystem type
df -T /
# Format with XFS (on new partitions only!)
sudo mkfs.xfs /dev/sdb1

Inode exhaustion is a silent failure that catches experienced administrators off guard. By monitoring inodes alongside disk space and implementing proper cleanup policies, you prevent one of the most confusing errors in Linux server administration.

ZeonEdge provides Linux server administration, monitoring setup, and proactive maintenance services. Learn more about our infrastructure services.

