Snapshots in ZFS (ZOL)
Posted on January 22, 2017 • 5 minutes • 1040 words
One of the impressive tools in ZOL (ZFS On Linux) is snapshots. A snapshot is a kind of 'instant backup' of the files on the system, although the word backup is not entirely correct: snapshots live on the same storage as the data, so if the system dies, the snapshots won't save you. On the plus side, they don't eat any unneeded extra space.
A snapshot captures the state of the pool/dataset at the moment you create it. In ZFS snapshots are cheap and fast, so it's smart to use the feature to your advantage. The snapshot remembers how the files looked when you created it, and only requires extra space when changes to those files are made.
How to use
**Create** a snapshot:

```
zfs snapshot pool@snapshot_name
zfs snapshot pool/dataset@snapshot_name
```
**Destroy** a snapshot:

```
zfs destroy pool@snapshot_name
zfs destroy pool/dataset@snapshot_name
```
**Rename** a snapshot:

```
zfs rename pool/dataset@snapshot_name pool/dataset@new_name
```
**List** snapshots:

```
zfs list -t snapshot
zfs list -t snapshot -r pool/dataset
```
**Rollback** from a snapshot:

```
zfs rollback pool/dataset@snapshot_name
```

Note: take care when doing a rollback.
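Snapshot names are free-form text after the `@`; a useful convention is to embed a date so snapshots are self-describing. A minimal sketch of building such a name (the dataset `pool/dataset` is just a placeholder, and the `22jan2017` style matches the naming used later in this article):

```shell
# Build a date-stamped snapshot name like pool/dataset@22jan2017.
# 'pool/dataset' is a placeholder; adjust to your own dataset.
snap="pool/dataset@$(date +%d%b%Y | tr '[:upper:]' '[:lower:]')"
echo "$snap"
# then take the snapshot with: zfs snapshot "$snap"
```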
Let's see this in a simple example. I first created a demo filesystem:

```
zfs create tank/demo
```
Let's create a first snapshot on this empty filesystem:

```
zfs snapshot tank/demo@initial
```
Getting a list of the current snapshots: `zfs list -t snapshot`. There is an option to show snapshots in the plain `zfs list` output, but that makes it even more cluttered to work with.

```
# zfs list -t snapshot
NAME                USED  AVAIL  REFER  MOUNTPOINT
tank/demo@initial      0      -  35.9K  -
```
Now to generate some data. On ZFS (as on most copy-on-write filesystems) `fallocate` won't work, so I use a fancy trick with `head`; in essence this just generates a file of approx. 10M.

```
head -c 10M </dev/urandom >data
head -c 10M </dev/urandom >data2
head -c 10M </dev/urandom >data3
```
Nothing special going on here:

```
# ls -lh
total 31M
-rw-r--r-- 1 root root 10M Jan 22 19:07 data
-rw-r--r-- 1 root root 10M Jan 22 19:07 data2
-rw-r--r-- 1 root root 10M Jan 22 19:07 data3
# zfs list
NAME        USED  AVAIL  REFER  MOUNTPOINT
tank       9.96T  11.1T  3.48G  /tank
tank/demo  30.1M  11.1T  30.0M  /tank/demo
```
As expected, `zfs list` shows about 30M in use. Since this data did not exist in our original (empty) dataset, the snapshot should not store any changes.
```
# zfs list -t snapshot
NAME                USED  AVAIL  REFER  MOUNTPOINT
tank/demo@initial  20.2K      -  35.9K  -
```

Note: the 20.2K are meta-data changes.
Now it's time to take a snapshot of the important data we just generated:

```
# zfs snapshot tank/demo@today
# zfs list -t snapshot
NAME                USED  AVAIL  REFER  MOUNTPOINT
tank/demo@initial  20.2K      -  35.9K  -
tank/demo@today        0      -  30.0M  -
```
Now let's put the snapshot to the test: I did a move, a deletion, an overwrite (update) and a new file creation.

```
mv data data_moved
rm -rf data2
cp data_moved data3
head -c 10M </dev/urandom >data4
# zfs list -t snapshot
NAME                USED  AVAIL  REFER  MOUNTPOINT
tank/demo@initial  20.2K      -  35.9K  -
tank/demo@today    20.0M      -  30.0M  -
```
Exactly what we expected: the snapshot kept a "backup" of data2 and data3, which were removed and overwritten. Since data was only renamed, not changed, it does not occupy extra space, and data4 is a new file that did not exist when the snapshot was created.
Now how do we get the data back? There are two options: for small issues like accidental removals you can use the hidden `.zfs` directory, while for larger issues you can mount the snapshot on a separate folder. At the root of every mounted ZFS filesystem there is a hidden directory `.zfs`. Note that even `ls -la` will not pick that directory up.
```
# ls -ls .zfs/snapshot/
total 1
0 dr-xr-xr-x 1 root root 0 Jan 22 19:25 initial
1 drwxr-xr-x 2 root root 5 Jan 22 19:07 today
# ls -ls .zfs/snapshot/today/
total 30729
10243 -rw-r--r-- 1 root root 10485760 Jan 22 19:07 data
10243 -rw-r--r-- 1 root root 10485760 Jan 22 19:07 data2
10243 -rw-r--r-- 1 root root 10485760 Jan 22 19:07 data3
```
Voilà, there are our data files, prior to removal/overwrite/rename. Note that with this method the snapshot is auto-mounted to /tank/demo/.zfs/snapshot/today. As you might have guessed, you can also manually mount snapshots to a location of your choice. This can be done in the 'normal' Linux way:
```
# mkdir today_snapshot
# mount -t zfs tank/demo@today today_snapshot
# ls -l today_snapshot/
total 30729
-rw-r--r-- 1 root root 10485760 Jan 22 19:07 data
-rw-r--r-- 1 root root 10485760 Jan 22 19:07 data2
-rw-r--r-- 1 root root 10485760 Jan 22 19:07 data3
```
To unmount, just use `umount`:

```
umount tank/demo@today
```
Our snapshot name was not chosen with the long term in mind; you can rename it to something more descriptive:

```
zfs rename tank/demo@today tank/demo@22jan2017
```
The last thing we want to do is remove a snapshot. Let's remove the initial snapshot, since it holds no data:

```
zfs destroy tank/demo@initial
```
There is one more option, which I don't trust much beyond this example, and which I would only advise using with extreme care: rollback. A rollback takes the dataset straight back to its state at the time of the snapshot creation, however only back to the latest snapshot. If you want to go back further, you need to destroy the snapshots in between.

```
zfs rollback -r tank/demo@22jan2017
```

Note: the `-r` will destroy any snapshots between the current state and the @22jan2017 snapshot.
Snapshots are so cheap that people tend to forget to make them. However, cheap is not free, so a cron job that generates a snapshot every month/week/day/hour will eventually fill your pool with snapshots. Therefore I suggest using a snapshot system that both creates and prunes snapshots. I chose sanoid, a good system; this article can help you install sanoid.
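To get a feel for what such a tool automates, here is a minimal dry-run sketch of a daily snapshot-and-prune job. Everything in it is a made-up example (the dataset name, the `daily-` prefix, the retention count), and the `zfs` calls are only echoed; sanoid does the real thing far more robustly:

```shell
#!/bin/sh
# Dry-run sketch of a daily snapshot + prune cron job.
# All names here are hypothetical placeholders.
DATASET="tank/demo"     # placeholder: adjust to your dataset
KEEP=7                  # keep the 7 newest daily snapshots
TODAY=$(date +%Y%m%d)

# Take today's snapshot (drop 'echo' for real use).
echo zfs snapshot "$DATASET@daily-$TODAY"

# Prune: list daily snapshots oldest-first, drop all but the newest $KEEP.
# 'head -n -N' is GNU coreutils syntax.
zfs list -H -t snapshot -o name -s creation -r "$DATASET" 2>/dev/null \
    | grep "@daily-" \
    | head -n -"$KEEP" \
    | while read -r old; do
          echo zfs destroy "$old"   # drop 'echo' for real use
      done
```

Hooked into cron (e.g. `0 3 * * * /usr/local/sbin/snap-daily.sh`) this gives you rolling daily snapshots, but it has none of sanoid's safety checks, so treat it as an illustration only.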
With multiple datasets, each having multiple snapshots, it is sometimes difficult to get a good overview of how many snapshots each dataset has. You can use this shell command to check:

```
zfs list -t snapshot | tail -n +2 | sed 's/@/\t/g' | cut -f 1 | uniq -c
```
Which generates something like:

```
# zfs list -t snapshot | tail -n +2 | sed 's/@/\t/g' | cut -f 1 | uniq -c
     30 huginn/complgen3
      2 jbod1/breva
      1 jbod1/users
      5 jbod1/users/flylab
      5 jbod1/users/proteo
      2 jbod1/users/servers
      5 jbod1/users/userA
```
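If you want to check the counting logic of that pipeline without a pool at hand, you can feed it canned `zfs list` output; the dataset names below are made up:

```shell
# Same pipeline as above, run on fake 'zfs list -t snapshot' output:
# drop the header, split dataset from snapshot name at the '@',
# keep the dataset column, and count consecutive duplicates.
printf 'NAME\ntank/demo@initial\ntank/demo@today\ntank/other@a\n' \
    | tail -n +2 \
    | sed 's/@/\t/g' \
    | cut -f 1 \
    | uniq -c
```

This prints a count of 2 for tank/demo and 1 for tank/other. Note that `uniq -c` only merges adjacent lines, which is fine here because `zfs list` groups snapshots by dataset.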