My Digital Nomad's Toolkit: Portable Homelab Overview
#HomeLab, #Self-Hosting, #Gadgets
I've finally built my portable homelab, and I can hardly hold back my excitement about it.
My homelab use case is pretty basic:
- Media storage, including my macOS Photos app library
- Time Machine backups
- Archives from old computers
- A playground for all kinds of self-hosted things
That's basically it. I would also need multi-user support for family members and probably some sort of advanced [[3-2-1 Backup]] features. See also: Why Backups Matter
Homelab Alternatives
Strange as it sounds, if you have a few TBs of data, care about privacy and safety, and take price into account, there are not that many elegant and streamlined storage alternatives: [[Why not just use a cloud storage]].
The closest alternative is probably buying a NAS, but it's not even close in terms of flexibility, including flexibility of budget. NAS portability is limited, and even if you decide to take only the hard drives with you, the only place you can move them is another NAS: [[NAS as a Homelab Alternative]]
The thing that neither a NAS nor cloud storage can offer is a playground: a space for experiments and projects that only a homelab provides.
My Portable Homelab Setup
My homelab is ridiculously portable. Still, it gives a certain level of redundancy and nearly infinite flexibility with simple consumer-level components.
Hardware
Choosing between a Raspberry Pi and everything else, I picked a mini PC: a Minisforum GK41 with a 4-core Intel Celeron J4125, 8GB of RAM, and a 256GB SSD.
For the price of a Raspberry Pi, it gives you a nice case with active cooling, SATA and M.2 interfaces, and a bunch of ports: USB, HDMI, LAN, etc.
The x86 silicon also gives a bit more flexibility than ARM in terms of software compatibility. It's a bit less energy efficient than a Raspberry Pi, but also a bit more powerful.
For storage purposes, I picked a pair of 4TB external USB HDDs from Seagate.
Host OS, Virtualization, LXC Containers
My pick is Proxmox, a virtualization host system running Debian under the hood. For the homelab instance itself, I picked one of the available TurnKey LXC containers, which is also Debian-based.
For many people including me, the home lab is a playground for experiments. Virtualization creates an extra abstraction layer between the hardware and the server instance itself giving a lot of room for trials and errors.
It lets me magically create instances, make snapshots, and restore everything, just like on DigitalOcean. I can safely play with different configurations, OSes, and anything else without any risk of breaking everything.
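Here's a rough sketch of what that looks like with Proxmox's `pct` tool (the container ID 101 and the snapshot name are just examples, not my exact setup):

```sh
# Snapshot the LXC container before an experiment
pct snapshot 101 pre-experiment

# ...break things freely...

# Roll the whole instance back if the experiment goes wrong
pct rollback 101 pre-experiment
```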
The Proxmox host OS itself has almost no custom configuration; almost everything is done at the homelab-instance level. So if I ever need to move to another host machine, I can reinstall everything very quickly and restore my homelab instance entirely from a backup.
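Something like this, sketched with hypothetical storage and archive names:

```sh
# Back up the homelab container ("backups" is a placeholder storage name)
vzdump 101 --storage backups --mode snapshot --compress zstd

# On a freshly installed Proxmox host, restore the instance from that archive
# (the filename below is an example of what vzdump produces)
pct restore 101 /var/lib/vz/dump/vzdump-lxc-101-2024_06_01-23_30_00.tar.zst
```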
Proxmox is incredibly powerful. At the same time, the virtualization overhead with LXC containers is almost unnoticeable, and everything runs smoothly on my minimal hardware.
Filesystem
Sometimes ZFS seems a little overhyped. However, I consider it such an interesting piece of engineering that I couldn't resist picking it for my file storage. ZFS is a first-class option on the Proxmox host, so I picked it there too.
My external HDDs make up a ZFS storage pool (a two-disk mirror, ZFS's take on RAID1) attached to the host as an external "tank". The datasets are then passed through the host and mounted right into the homelab guest instance.
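Roughly like this (the device IDs and the container ID are placeholders for illustration):

```sh
# Create a mirrored pool ("tank") from the two external USB drives.
# /dev/disk/by-id paths are safer for USB drives, whose /dev/sdX letters can change.
zpool create tank mirror \
  /dev/disk/by-id/usb-Seagate_drive1 \
  /dev/disk/by-id/usb-Seagate_drive2

# Bind-mount a dataset from the host into the LXC guest
pct set 101 -mp0 /tank/media,mp=/mnt/media
```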
ZFS has a list of downsides that deserves a separate post. Nevertheless, in my opinion, the advantages far outweigh the disadvantages.
ZFS takes the best from the traditional volume-management and filesystem layers. It uses a copy-on-write transactional mechanism, which makes it different from traditional filesystems and RAID arrays.
Snapshots
Its copy-on-write magic allows it to make filesystem-level snapshots, with checksumming and data-corruption detection out of the box.
In practice, it's a time machine at the filesystem level.
It's handy to make incremental backups and send them, with built-in tools, to an external drive or a remote server.
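A sketch of that workflow, with made-up dataset, snapshot, and host names:

```sh
# Take a point-in-time snapshot of the media dataset
zfs snapshot tank/media@2024-06-01

# Later, send only the changes since the previous snapshot to a backup machine
zfs send -i tank/media@2024-05-01 tank/media@2024-06-01 | \
  ssh backup-host zfs receive backup/media
```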
Software RAID
ZFS provides redundancy through various software RAID configurations. There is a self-healing toolset that allows it to find and fix the consequences of bit rot.
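For example, you can ask the pool to verify itself (pool name `tank` as above):

```sh
# Walk the whole pool, verify checksums, and repair bad blocks from the mirror copy
zpool scrub tank

# Check progress and any errors found
zpool status -v tank
```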
Datasets
Another cool feature is ZFS datasets. On the one hand, datasets are simply directories. On the other hand, they behave like nested filesystems that you can configure separately for your own needs: set quotas, encryption, compression, etc.
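For instance, something like this (dataset names and values are just examples):

```sh
# Each dataset looks like a directory but carries its own settings
zfs create -o compression=lz4 -o quota=500G tank/photos
zfs create -o compression=off tank/archives

# Properties can be inspected (and changed) at any time
zfs get compression,quota tank/photos
```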
Encryption
ZFS native encryption is considered slower than peers like LUKS. Nevertheless, I think it's nice to have native encryption at the filesystem level. ZFS is flexible here and allows you to encrypt only specific datasets, with either a passphrase or an encryption key.
ZFS also lets you take encrypted snapshots and send them to an untrusted remote ZFS storage without ever loading the key on the remote server.
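A rough sketch, with hypothetical dataset and host names:

```sh
# Create a dataset encrypted with a passphrase (prompted interactively)
zfs create -o encryption=on -o keyformat=passphrase tank/private

# Snapshot it, then send it in raw (still-encrypted) form;
# the remote machine never sees the key
zfs snapshot tank/private@today
zfs send --raw tank/private@today | ssh remote-host zfs receive backup/private
```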
Software
Unlike many homelab enthusiasts, I don't have dozens of services running on my server.
Host Config
As I've already mentioned, I don't need it running 24/7. I make it run on a schedule, typically in the evening when I need to make a backup.
To wake the host up, I use power-on by RTC alarm. To shut it down, I use a scheduled cron job that gracefully suspends containers and VMs and then shuts down the host itself.
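A minimal sketch of how such a routine could look (the path, container ID, and times are all examples, not my exact setup):

```sh
#!/bin/sh
# /usr/local/sbin/homelab-shutdown.sh (hypothetical path), triggered by a
# cron entry such as:  30 23 * * * root /usr/local/sbin/homelab-shutdown.sh

# Gracefully stop the guest first
pct shutdown 101

# Arm the RTC alarm to power the host back on tomorrow at 19:00 local time
rtcwake -m no -l -t "$(date -d 'tomorrow 19:00' +%s)"

# Power off the host itself
shutdown -h now
```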
Storage is encrypted, with encryption keys automatically mounted from a USB stick. If I suddenly need to lock things down, I just pull out the stick.
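Roughly like this, assuming the pool's encryption root was created with a key file (the paths are examples):

```sh
# Point the dataset's key at a file on the USB stick
zfs set keylocation=file:///mnt/usbkey/tank.key tank

# On boot, once the stick is mounted, load the keys and mount everything
zfs load-key -a
zfs mount -a
```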
Homelab Instance Config
Samba
I only have Samba, which publishes the appropriate directories as network drives for each user on my home network. Some of those drives are simply marked as Time Machine compatible.
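A minimal sketch of such a share (the share name, user, and paths are made up); the macOS-specific bits come from Samba's `vfs_fruit` module:

```sh
# Append a Time Machine-capable share to smb.conf
cat >> /etc/samba/smb.conf <<'EOF'
[timemachine-anna]
  path = /mnt/tank/timemachine/anna
  valid users = anna
  read only = no
  # vfs_fruit provides the macOS compatibility bits, including Time Machine support
  vfs objects = catia fruit streams_xattr
  fruit:time machine = yes
EOF

systemctl restart smbd
```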
Avahi
I use Avahi, which makes the instance available via .local domains.
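A typical Avahi service definition looks roughly like the following; this one just advertises SMB over mDNS so the box shows up in Finder (Time Machine shares can additionally be advertised the same way):

```sh
cat > /etc/avahi/services/samba.service <<'EOF'
<?xml version="1.0" standalone='no'?>
<!DOCTYPE service-group SYSTEM "avahi-service.dtd">
<service-group>
  <!-- %h expands to the machine's hostname, e.g. homelab.local -->
  <name replace-wildcards="yes">%h</name>
  <service>
    <type>_smb._tcp</type>
    <port>445</port>
  </service>
</service-group>
EOF

systemctl restart avahi-daemon
```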
Remote Access
Currently, my homelab is accessible only within my home network. I just don't need remote access for my use case.
Experience
Initially, I had a list of concerns.
I wasn't sure that the Photos library would work on a slow external HDD over the network.
It did! Not blazingly fast, but it's ok and usable.
I wasn't sure that macOS Time Machine would work with a network-attached drive with ZFS under the hood.
It works! My biggest surprise was that I didn't notice any slowdowns compared to Time Machine backups to an external HDD over USB.
My review: 10 out of 10.
It works exactly how I wanted it to:
- Starts up in the evening
- MacBook connects to it and makes a backup in the background
- Shuts down
I don't even need to trigger Time Machine manually; it connects to the homelab whenever it's available and does all the work.
Perfecto!
Further Plans
My long-term plan is to set up a proper off-site backup to get one step closer to a full 3-2-1 backup strategy.
I will probably run a similar node in a different location or use third-party cloud storage.