Your production server crashed with "no space left on device" and you have no idea what to delete fast? Don't worry! Here I will show you the biggest sources of useless data and how to find and kill them.

☝ Save this post for later, it happens!
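
To confirm which filesystem is actually out of space, df -h gives a quick per-mount overview:

df -h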

Install ncdu to see what is using the space:

apt install ncdu
🤔 Yes, you will need ~100 kB of free space to install ncdu: just find some trash in your home directory to delete first

Run it:

ncdu /
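
If / contains several mounts (a separate /home, network shares and so on), ncdu's -x flag keeps the scan on a single filesystem, which is usually faster:

ncdu -x /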

I suggest not spending time browsing around for now; go straight into the /var/log folder:

Use ncdu to find the largest folders and files

In most cases the culprits are systemd's journal folder and the btmp file.

Trim the current oversized journal immediately:

journalctl --vacuum-size=50M
😣 After several months of work, our CI server had accumulated 3.6 GB of data that was useless to us, created by the OS's default tools. Now it is gone.
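
You can check how much disk space the journal takes before and after the cleanup:

journalctl --disk-usage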

Let's also make it trim itself automatically whenever the journal exceeds 50 MB. To limit the journal's maximum disk usage, open its config file:

nano /etc/systemd/journald.conf

And add SystemMaxUse under the [Journal] section:

[Journal]
SystemMaxUse=50M
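
To apply the new limit right away without rebooting, restart the journal service:

systemctl restart systemd-journald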
🛌 Sleep better after doing it

Fix the large btmp file in /var/log/

Most of the time this file contains records of failed login attempts from brute-force attacks. To empty it as a one-off:

echo '' > /var/log/btmp
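
If you are curious what keeps landing in there, lastb reads btmp and lists the failed login attempts (the output can be huge, so cut it with head):

lastb | head -n 20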

To keep it from growing back:

  • add the same command as a daily cron job (see the sketch after this list)
  • configure fail2ban or an alternative to fix the root cause of the issue and ban brute-forcers earlier
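
A minimal sketch of the cron option, assuming a Debian-style /etc/cron.daily directory (the script name clear-btmp is just an example):

cat > /etc/cron.daily/clear-btmp <<'EOF'
#!/bin/sh
# Empty the failed-login log once a day
echo '' > /var/log/btmp
EOF
chmod +x /etc/cron.daily/clear-btmp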

Are you using Docker?

It is one of the biggest sources of useless data.
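
First check what Docker is actually holding; docker system df breaks the usage down by images, containers, local volumes and build cache:

docker system df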

Remove most of the useless Docker data:

docker system prune -f

You might also want to remove dangling (untagged) images:

docker rmi $(docker images -f "dangling=true" -q)
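
If unused volumes are also eating space, the prune command accepts a --volumes flag; be careful, it deletes data in any volume not currently used by a container:

docker system prune -f --volumes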
👋 Stay clean and have fun