I’m going to make a backup of a 2TB SSD today. I will use Clonezilla, mainly because that’s all I know. But do you recommend any other ways, for any reason?
I want to keep the process simple and easy, and I will likely take a backup once a month or so. It doesn’t have to be ready all the time. If you need more clarification, ask away.
dd if=/dev/sda conv=sync,noerror bs=128K status=progress | gzip -c > file.gz
You can add an additional pipe in there to ssh it to another machine if you don’t have room on the original.
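For example (a sketch; the user and host names are placeholders):
dd if=/dev/sda conv=sync,noerror bs=128K status=progress | gzip -c | ssh user@backuphost 'cat > /backups/file.gz'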
If zstd is available, it is a lot more efficient and performant than gzip.
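The same pipeline with zstd would look something like this (a sketch; -T0 uses all cores):
dd if=/dev/sda conv=sync,noerror bs=128K status=progress | zstd -T0 -c > file.zst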
True. I’ve done that command for so long that I’ve kinda gotten gzip hardwired into my fingers.
The added info from pv is also nice ^^
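Something like this (a sketch; -s 2T just tells pv the total size so it can show a percentage):
dd if=/dev/sda conv=sync,noerror bs=128K | pv -s 2T | gzip -c > file.gz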
https://www.fsarchiver.org/quickstart/ It’s faster and more efficient than just dd :)

You can use Rescuezilla, which is basically a GUI for Clonezilla, but easier to use 😋
Thanks guys. I went with Rescuezilla in the end. So far so good.
Does the data change a lot? Does it need to be a block-based backup (e.g. bootable)? Otherwise, you could go with rsync or restic or borg to only refresh your backup copy with the changed files. This should be far quicker than taking a complete backup of the whole SSD.
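With restic, for instance, a monthly refresh could be as simple as this (a sketch; the repository path is an example):
restic -r /mnt/backup init          # once, to create the repository
restic -r /mnt/backup backup /home  # repeat runs only store changed files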
btrfs or zfs send/receive. Harder to do if already established, but by far the most elegant, especially with atomic snapshots to allow versioning without duplicate data (a sketch follows below).

Veeam Endpoint’s free version is nice because it doesn’t require a reboot.
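A minimal send/receive sketch for btrfs (paths and snapshot names are examples; assumes / is a btrfs subvolume and a btrfs-formatted backup drive mounted at /mnt/backup):
btrfs subvolume snapshot -r / /snapshots/2024-01   # atomic, read-only snapshot
btrfs send /snapshots/2024-01 | btrfs receive /mnt/backup
# next month, send only the delta against the previous snapshot:
btrfs subvolume snapshot -r / /snapshots/2024-02
btrfs send -p /snapshots/2024-01 /snapshots/2024-02 | btrfs receive /mnt/backup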
HDClone X, or MSP360 (CloudBerry) Standalone Backup. Both cost money, but not a lot, and they are very reliable.
My method requires that the drives be plugged in at all times, but it’s completely automatic.
I use rsync from a central ‘backups’ container that pulls folders from other containers and machines. These are organized in /BACKUPS/(machine/container)_hostname/...

The /BACKUPS/ folder is then pushed to an offsite container I have sitting at a friend’s place across town.

For example, I back up my home folder on my desktop, which looks like this on the backup container: /BACKUPS/Machine_Apollo/home/dork/
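In pull form that’s roughly (a sketch; the hostnames and script name are placeholders):
rsync -a --delete apollo:/home/dork/ /BACKUPS/Machine_Apollo/home/dork/
rsync -a --delete /BACKUPS/ offsite:/BACKUPS/
# automated via the backup container's crontab, e.g. nightly at 03:00:
# 0 3 * * * /usr/local/bin/pull-backups.sh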
This setup is not impervious to bit flips as far as I’m aware (it has never happened to me). If a bit flip happens upstream, it will be pushed to backups and become irrecoverable.
I see. This is more of a filesystem backup, right? Do you recommend it over a full-disk backup for any reason? I can think of saving space.
I recommend it over a full-disk backup because I can automate it. I can’t automate full-disk backups, as I can’t reliably run dd on a system that is itself already running.
It’s mostly just to ensure that config files and other stuff I’ve spent years building are available in the case of a total collapse, so I don’t have to rebuild from scratch. In the case of containers, those have snapshots. Any time I’m working on one, I drop a snapshot first so I can revert if it breaks. That’s essentially a full-disk backup, but it’s exclusive to containers.
edit: if your goal is to minimize downtime in case of disk failure, you could just use RAID