How to Fix “No Space Left on Device” on Linux

The “No space left on device” error on Linux usually appears when you least expect it. A service may refuse to start, log files stop updating, package installations fail, or the system can’t create new files.

In most cases, the first reaction is to run df -h, see a filesystem at 100%, and start deleting files as quickly as possible. Sometimes that works, but just as often it only hides the real cause of the problem.

This guide explains what the error actually means, how to diagnose it using standard Linux tools, and how to free disk space safely without putting system stability at risk.



What “No Space Left on Device” Really Means


Despite how it sounds, this error does not always mean your disk is completely full.

On Linux systems, the message usually points to one of two situations. The filesystem may have run out of available disk blocks (physical storage). Alternatively, the filesystem may have run out of inodes, even though disk space is still available.

The df command can help identify both cases—but only if you interpret its output correctly. Treating this error as a simple cleanup task often leads to temporary fixes and the same problem returning later.
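A quick way to tell the two situations apart is to check both block and inode usage on the affected path. In the sketch below, /var simply stands in for whichever filesystem is reporting the error; both commands are explained in the sections that follow:

df -h /var
df -i /var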



Check Disk Usage with df (What to Look At First)


The first step is to check disk usage with the standard command:

df -h

This displays disk usage in a human-readable format. When reviewing the output, focus on which filesystem is actually full rather than looking only at the root filesystem. Pay close attention to the Use% column and the corresponding mount points.

A common mistake is assuming that / is the problem while ignoring separate mount points such as /var or /home. If /var is full, logging, updates, and background services can fail—even if other filesystems still have free space.
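On servers with many virtual mounts (tmpfs, overlay, snap loop devices), the output can also be noisy. Assuming GNU coreutils, pseudo-filesystems can be excluded so that only real storage remains visible:

df -h -x tmpfs -x devtmpfs -x squashfs -x overlay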

Before moving on, make sure you’ve clearly identified which filesystem has reached its limit.



Find What Is Actually Eating Disk Space


Once the affected filesystem is known, the next step is to determine what is consuming the space.

You can get a high-level overview by running:

du -sh /*

This command shows how much disk space each top-level directory is using. On Linux servers, one directory is very often responsible for unexpected disk usage: /var.

Checking it separately helps narrow the issue:

du -sh /var/*
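If /var contains many subdirectories, sorting the results makes the biggest consumers obvious. The -h and -rh flags below assume GNU du and sort:

du -h --max-depth=1 /var 2>/dev/null | sort -rh | head -n 15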

Disk space problems are commonly caused by log files growing without proper rotation, package caches that were never cleaned, temporary files left behind by applications, or application data written faster than anticipated. At this stage, the goal isn’t to delete anything—it’s to understand why disk usage increased.
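When a directory total looks suspicious but no single subdirectory stands out, listing unusually large files can speed up the diagnosis. The /var path and the 100M threshold here are only examples; adjust both to your situation:

find /var -xdev -type f -size +100M -exec ls -lh {} + 2>/dev/null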



When Disk Space Is Free but Errors Persist (Inodes Issue)


In some cases, df -h shows available disk space, yet the system continues to return the same error. When that happens, inode exhaustion is a likely cause.

Check inode usage with:

df -i

If the IUse% value reaches 100%, the filesystem has run out of inodes. An inode stores metadata about a file, and every file consumes exactly one inode regardless of its size. That means a large number of very small files can exhaust inodes long before disk space itself is fully used.

This scenario is especially common on Linux VPS systems, where inode counts are fixed at filesystem creation and applications may generate thousands—or even millions—of small files. In that situation, deleting a few large files won’t help much. The fix usually requires removing large numbers of unnecessary small files instead.
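To locate where those small files accumulate, counting files per directory is usually more revealing than measuring size. A minimal sketch, assuming the affected filesystem is mounted under /var (adjust the path as needed):

for d in /var/*/; do printf '%s\t%s\n' "$(find "$d" -xdev -type f | wc -l)" "$d"; done | sort -rn | head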



Common Disk Space Killers on Linux Servers


Certain patterns appear repeatedly when investigating disk space issues.

Log files are one of the most frequent causes, particularly when log rotation is misconfigured or missing entirely. Package caches can also consume significant space over time, especially on systems that receive frequent updates. Temporary directories may silently accumulate files that are no longer needed, and container-based workloads can leave unused images, layers, or volumes behind. This often becomes noticeable shortly after you install Docker on Ubuntu 24.04, especially on servers with limited disk space and no cleanup policies in place.
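If these components are present on your server, each one offers a quick way to see how much space it currently holds. The examples below assume a systemd-based Debian/Ubuntu system with Docker installed; skip whichever does not apply:

journalctl --disk-usage
du -sh /var/cache/apt /tmp
docker system df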

Individually, none of these are “bad.” They become a problem when disk usage isn’t monitored and growth goes unnoticed.



How to Safely Free Disk Space


Before deleting any files, it helps to understand why they exist and whether they are still needed.

Safe cleanup actions typically include removing old or rotated log files, clearing package caches using the appropriate package manager, deleting obsolete temporary files, and removing application data that is no longer referenced. What you should avoid is blind deletion of system directories, removing files actively used by running services, or executing cleanup commands without verifying their impact.
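As an illustration only, the commands below cover the typical safe cases on a Debian/Ubuntu system with systemd and Docker; run only the ones that match your setup, and check what each will remove before confirming:

apt-get clean
journalctl --vacuum-size=200M
docker system prune

The journal size limit of 200M is an arbitrary example, and docker system prune asks for confirmation and only removes resources that are not in use.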

Freeing disk space should be a controlled process based on diagnosis, not guesswork.



Why This Happens So Often on VPS


Disk space issues occur frequently on Linux VPS environments because storage limits and inode counts are constrained by design. Monitoring is often minimal, and problems can go unnoticed until the filesystem hits its limit.

Once that happens, failures tend to cascade: logs stop writing, databases fail to update, and services begin returning misleading errors. In most cases, the underlying issue is simple—but delayed detection turns it into a critical problem.



Conclusion


The “No space left on device” error is not mysterious. It indicates that a filesystem limit has been reached—disk space, inodes, or both.

Using df is only the starting point. The real solution comes from understanding what filled the filesystem and addressing the cause directly. A few extra minutes spent diagnosing the root cause can prevent hours of downtime and recovery later.