Linux Disk Space Analysis with ncdu and du
If you diligently keep tabs on your disk space utilization, or have a monitoring platform like LogicMonitor that bugs you about it (shameless plug), you might appreciate this quick guide on freeing up disk space in Linux. We’ll look at some commands and techniques you can use to quickly find large files to remove.
ncdu
If you’ve never used ncdu (NCurses Disk Usage) before, well, you should install it and use it. If you’re on Ubuntu/Debian you should be able to pull it in with a quick sudo apt-get install ncdu. If your package manager doesn’t have it, you can get it here, but consider switching distros unless you’re some kind of expert. Actually, I don’t really care what distro you run; it just seems like something that should be in the repos. Anyways, enough whining, let’s delete some files.
Start
ncdu is pretty damn simple. Pass it a directory, or just let it use the current directory.
$ ncdu ./test
It will crank through recursively and calculate the size of everything. This could take a bit of time on larger/slower drives, but it was able to jam through ~850 GiB on my SSD in under a minute, so don’t fret. It will be worth the wait, and it seems (very scientific) to be much faster than something like du, though that may just be because it shows progress better. ncdu is easier to work with interactively (duh), but we’ll look at du later too.
Here’s what you’ll see:
ncdu 1.12 ~ Use the arrow keys to navigate, press ? for help
--- /home/michael/test ------------------------------------------------------------------------------------
264.0 KiB [##########] /bin
72.0 KiB [## ] feed.xml
e 4.0 KiB [ ] /test
0.0 B [ ] 3
0.0 B [ ] 2
0.0 B [ ] 1
Clearly we want to start in /bin if our goal is to make space. The directory is only a few hundred KiB, but it’s clearly the largest, and that’s the point of ncdu. We can drill into /bin and see its contents sorted the same way.
If we want to (for example) delete feed.xml, we can hit the d key while it’s selected. You can also get more information about the file (like the actual vs apparent disk usage) by typing i. You can toggle directory child counts with c.
If you’re a little trigger shy and want to just browse without actually deleting anything, you can pass -r when you first start ncdu to put it into read-only mode:
$ ncdu -r ./test
Anyways, from there you just dig through the largest folders and find the stuff you’re willing to part with. You’ll have free space in no time. There are more helpful keyboard shortcuts later in this article, and you can see them all by typing ?.
Navigate
You can navigate with standard vim bindings:
- Up, k - Move cursor up
- Down, j - Move cursor down
- Left, h - Go up one directory
- Right, l - Enter directory
Sort
- s - Sort by size
- n - Sort by name
- C - Sort by item count
You can tap them repeatedly to toggle between ascending and descending order.
Anyways, ncdu is a great tool for finding large files and cleaning up disk space on Linux. However, if you’re looking for something more geared towards scripting, and less interactive, try out du. Read on to find out more.
du
du is the disk usage tool that ncdu takes after. It will crank through a folder and spit out file and folder sizes.
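If you want to follow along, here’s a rough sketch that recreates a similar tree (the file names come from the ncdu listing above; the sizes are approximations, so your totals may differ slightly):

```shell
# Recreate a test tree roughly matching the earlier examples.
# Names are from the ncdu listing; sizes are approximate.
mkdir -p ./test/bin ./test/test
head -c 250K /dev/zero > ./test/bin/index.cgi
head -c 2K /dev/zero > ./test/bin/ticker.sh
head -c 2K /dev/zero > ./test/bin/ticker.cgi
head -c 2K /dev/zero > ./test/bin/hello.py
head -c 70K /dev/zero > ./test/feed.xml
touch ./test/1 ./test/2 ./test/3
```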
Here it is run against the same directory from the ncdu examples above. We’ll pass -h to get human readable sizes.
$ du -h ./test
264K ./test/bin
4.0K ./test/test
344K ./test
As you can see, it’s almost identical to ncdu’s output (as you might expect). We’re not getting sizes for individual files, though; -a fixes that:
$ du -ah ./test
4.0K ./test/bin/ticker.sh
4.0K ./test/bin/ticker.cgi
4.0K ./test/bin/hello.py
248K ./test/bin/index.cgi
264K ./test/bin
72K ./test/feed.xml
0 ./test/1
4.0K ./test/test
0 ./test/2
0 ./test/3
344K ./test
Whoa! That’s a lot of stuff. We just want this directory though, not the whole recursive list of every file (useful as that may be):
$ du -ahd1 ./test
264K ./test/bin
72K ./test/feed.xml
0 ./test/1
4.0K ./test/test
0 ./test/2
0 ./test/3
344K ./test
Now that’s more like it! We added -a to show all files (not just directories), and we passed -d1 to tell du to only go down one directory level.
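If the short option feels cryptic, GNU du also accepts the long spelling --max-depth=1, which reads better in scripts (the mkdir below is just setup so the sketch runs on its own):

```shell
mkdir -p ./test/bin                 # setup so the sketch stands alone
# --max-depth=1 is the GNU long form of -d1: print only one level
# deep, while still counting everything below that level.
du -ah --max-depth=1 ./test
```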
However, notice anything? Yeah, that sorting, it sucks! I want to see the big stuff at the top so I can delete it, not pick through this visually. No problem, we can use sort. I’ve gotten bewildered reactions even from salty old Unix nerds with this little trick:
$ du -ahd1 ./test | sort -hr
344K ./test
264K ./test/bin
72K ./test/feed.xml
4.0K ./test/test
0 ./test/3
0 ./test/2
0 ./test/1
Awesome! It even sorted the human readable output. What is this wizardry? Well, it’s sort -h, short for --human-numeric-sort: it understands size suffixes like K, M, and G. The -r reverses the sort to match ncdu, but you can drop it to put the big stuff at the bottom. You can pipe to head or tail to narrow things down.
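For example, piping to head keeps just the five largest entries (this assumes the same ./test tree from earlier; the mkdir is only there so the sketch runs standalone):

```shell
mkdir -p ./test/bin                 # setup so the sketch stands alone
# Show only the five largest entries. Drop -r from sort and use
# tail instead for the same view flipped upside down.
du -ahd1 ./test | sort -hr | head -n 5
```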
There’s still something goofy though: it’s showing the size of the whole ./test directory, which we don’t really care about and which isn’t shown in ncdu. No worries, simply change the way we pass args to du and we’re good to go:
$ du -h ./test/* | sort -hr
264K ./test/bin
72K ./test/feed.xml
4.0K ./test/test
0 ./test/3
0 ./test/2
0 ./test/1
So, we ditched -a and -d1 because they would interfere with our glob and give us the whole recursive listing again. Basically, we’re passing “everything in ./test” instead of “./test and the directories within it”.
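An equivalent, arguably clearer spelling uses -s (summarize), which prints exactly one line per argument. Either way, keep one caveat in mind: the shell glob skips dotfiles, so hidden files and directories won’t show up (again, the mkdir is just setup so the sketch runs standalone):

```shell
mkdir -p ./test/bin                 # setup so the sketch stands alone
# -s summarizes each argument to a single line -- same result as the
# glob trick above. Caveat: * does not match dotfiles like .cache
du -sh ./test/* | sort -hr
```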
Conclusion
As you can see, both ncdu and du have their merits when it comes to finding large files on Linux to free up space.
- ncdu - casual, interactive use
- du - hardcore scripting, reporting
- sort -h - sort human readable output
If you’re not sure, try both! You’ll certainly run across use cases for both in the future.
Happy deleting!