I’m going to make a backup of a 2 TB SSD today. I will use Clonezilla, mainly because that’s all I know. But do you recommend any other ways, for any reason?

I want to keep the process simple and easy, and I will likely take a backup once a month or so. It doesn’t have to be ready all the time. If you need more clarification, ask away.

  • ikidd@lemmy.world · 15 days ago

    dd if=/dev/sda conv=sync,noerror bs=128K status=progress | gzip -c > backup.img.gz

    You can add an additional pipe in there to ssh it to another machine, if you don’t have room on the original drive.
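    As a sketch of what that extra pipe might look like (hostname, user, and paths here are placeholders, not from the original comment):

```shell
# Hypothetical sketch: image the disk, compress the stream, and send it
# over ssh to a machine with more space. "backup-host" and the paths
# are placeholders.
dd if=/dev/sda conv=sync,noerror bs=128K status=progress \
  | gzip -c \
  | ssh user@backup-host 'cat > /backups/sda.img.gz'
```

    On the receiving end, `gunzip -t /backups/sda.img.gz` is a cheap way to verify the compressed image arrived intact.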

    • tiz@lemmy.ml (OP) · 15 days ago

      Thanks guys. I went with Rescuezilla in the end. So far so good.

  • mbirth@lemmy.ml · 15 days ago

    Does the data change a lot? Does it need to be a block-based backup (e.g. bootable)? Otherwise, you could go with rsync or restic or borg to only refresh your backup copy with the changed files. This should be far quicker than taking a complete backup of the whole SSD.

  • taiidan@slrpnk.net · 15 days ago

    btrfs or zfs send/receive. Harder to adopt if your filesystem is already established, but by far the most elegant, especially with atomic snapshots that allow versioning without duplicating data.
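    A rough sketch of the btrfs variant, assuming the source filesystem is already btrfs and the backup drive is mounted at /mnt/backup (all names and dates here are placeholders):

```shell
# Hypothetical sketch: take a read-only (atomic) snapshot, then send it
# to the backup drive. Requires root; paths/names are placeholders.
btrfs subvolume snapshot -r /home /home/.snapshots/home-month1
btrfs send /home/.snapshots/home-month1 | btrfs receive /mnt/backup/

# On later runs, send only the difference from the previous snapshot
# (-p names the parent), which is what makes versioning cheap.
btrfs subvolume snapshot -r /home /home/.snapshots/home-month2
btrfs send -p /home/.snapshots/home-month1 /home/.snapshots/home-month2 \
  | btrfs receive /mnt/backup/
```

    The zfs equivalent is `zfs snapshot` plus `zfs send -i old new | zfs receive`, with the same incremental-parent idea.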

  • zorflieg@lemmy.world · 14 days ago

    HDClone X, or MSP360 (formerly CloudBerry) Standalone Backup.

    Both cost money but not a lot and are very reliable.

  • drkt@scribe.disroot.org · 15 days ago

    My method requires that the drives be plugged in at all times, but it’s completely automatic.

    I use rsync from a central ‘backups’ container that pulls folders from other containers and machines. These are organized in

    /BACKUPS/(machine/container)_hostname/...

    The /BACKUPS/ folder is then pushed to an offsite container I have sitting at a friend’s place across town.

    For example, I back up my home folder from my desktop, which looks like this on the backup container:

    /BACKUPS/Machine_Apollo/home/dork/

    This setup is not impervious to bit flips, as far as I’m aware (it has never happened). If a bit flip happens upstream, it will be pushed to the backups and become irrecoverable.

    • tiz@lemmy.ml (OP) · 15 days ago

      I see. This is more of a filesystem backup, right? Do you recommend it over a full-disk backup for any reason? I can think of saving space.

      • drkt@scribe.disroot.org · 15 days ago

        I recommend it over a full-disk backup because I can automate it. I can’t automate full-disk backups, as I can’t reliably run dd from a system that is itself already running.

        It’s mostly just to ensure that the config files and other stuff I’ve spent years building are available in the case of a total collapse, so I don’t have to rebuild from scratch. In the case of containers, those have snapshots. Any time I’m working on one, I drop a snapshot first so I can revert if it breaks. That’s essentially a full-disk backup, but it’s exclusive to containers.

        edit: if your goal is to minimize downtime in case of disk failure, you could just use RAID