Publishing and ZFS Tip
By Austin Lane
I think I found a satisfactory solution for moving markdown docs from my writing tool to the hosting server. It's roundabout and complicated, considering Flatnotes and Hugo sit on the same Raspberry Pi, but at the end of the day it works.
I configured a workflow in n8n to access the Pi over SSH and execute the publishing script, the same way I would do it manually. The workflow has a Webhook trigger that kicks off that process; I kept it simple, so it just depends on someone accessing the URL. Since it's only reachable on my internal network (and ultimately just runs this script), I'm not too concerned about locking it down with auth headers or anything. For the final step, I found that I can use macOS/iOS Shortcuts to call a URL. I made one that hits that webhook and returns the response, and now it sits in my toolbar and phone menu. Once I'm happy with an entry, I hit the shortcut and it just pops up on the site. From a user perspective, dead simple!
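For anyone wanting to copy this, the moving parts are small. The script the SSH node runs is only a few lines; the sketch below uses placeholder paths rather than my exact layout, and the Shortcut itself is just a "Get Contents of URL" action pointed at the webhook.

```sh
#!/usr/bin/env bash
# publish.sh -- roughly what the n8n SSH step executes on the Pi.
# Paths here are placeholders; adjust to wherever Flatnotes and Hugo actually live.
set -euo pipefail

# Pull the markdown out of Flatnotes and into Hugo's content directory.
cp /srv/flatnotes/*.md /srv/hugo/content/posts/

# Rebuild the static site.
hugo --source /srv/hugo --minify
```

Hitting the webhook from any machine on the LAN does the same thing as the Shortcut, e.g. `curl -s http://pi.local:5678/webhook/publish` (URL is a placeholder too).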
Next on my to-do list is to start figuring out a homelab NVR for security cameras, so I'm not dependent on Ring or some other external service and my footage isn't being accessed by law enforcement or anyone else without my consent. I'm currently testing out a deployment of Scrypted, as it was recommended for easy connection to a lot of cameras and integration with HomeKit. I'd still like to get notifications for motion on my front door camera.
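So far it's just the stock container; something along these lines gets it up for testing. The image name and the volume path are my best recollection of the documented setup, so double-check against the Scrypted docs. Host networking matters because HomeKit discovery relies on mDNS.

```sh
# Minimal Scrypted container; host networking so HomeKit/mDNS discovery works.
# Image name and the /server/volume path are assumptions; verify against the docs.
docker run -d \
  --name scrypted \
  --restart unless-stopped \
  --network host \
  -v ~/.scrypted/volume:/server/volume \
  koush/scrypted
```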
One thing I wasn't sure about was how to actually provide storage for the NVR. I thought about just pointing it at my NAS, but it looks like it would fill up all the available space with recordings, and I'd rather not have that. Instead, I grabbed some drives and decided to go with local storage. Since I'm running the software in containers hosted on Proxmox, the natural solution was to set up a new ZFS pool with RAIDZ to make one big drive.
What I discovered: there are dozens of recommendations for various configurations, such as installing OMV (OpenMediaVault) or TrueNAS to do the actual storage management and simply exposing it over NFS/SMB. Others say to get a hardware controller and do passthrough. I don't know if some of this was recommended for older versions of Proxmox, but I found a much simpler solution.
- Configure the ZFS Pool with the drives you want to use, directly in Proxmox.
- Set up your VM (in my case Ubuntu) and then shut it down.
- Under the new VM in Proxmox, select Hardware and do Add -> Hard Disk.
- Select whatever configuration makes the most sense, but make sure it is using your ZFS pool, and max out the disk size (GiB). The CLI equivalents of these steps are sketched below.
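If you'd rather do the same thing from the Proxmox shell, the rough CLI equivalent follows. The pool name, device names, VM ID, and disk size are all placeholders for my setup.

```sh
# Create the RAIDZ1 pool from the four drives
# (Datacenter -> node -> Disks -> ZFS in the GUI does the same thing).
zpool create nvr-pool raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Register it as Proxmox storage so it shows up in the Add -> Hard Disk dialog.
pvesm add zfspool nvr-pool --pool nvr-pool --content images,rootdir

# Check how much space is actually available before picking a disk size.
zfs list nvr-pool

# Attach a big virtual disk on that pool to the (stopped) VM, here VM 101, ~11500 GiB.
qm set 101 --scsi1 nvr-pool:11500
```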
It will take some trial and error. I have 4x4 TB drives in RAIDZ1, so roughly a quarter of the raw capacity goes to parity. Even so, I couldn't put in 12000 GiB; it kept erroring about not enough space. That's partly because drives are sold in TB (10^12 bytes) while Proxmox asks for GiB, so 12 TB is only about 11,176 GiB before ZFS overhead. Just tick the number down 100 or so at a time until it accepts it. Now your VM has a storage drive directly attached; you'll still need to format it with a filesystem (mkfs.ext4 or similar) and mount it somewhere (be sure to add it to /etc/fstab), and you're all set.
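Inside the VM, that last bit amounts to something like this; the device name and mount point are examples, so check `lsblk` for what the new disk actually shows up as.

```sh
# Find the new disk; it shows up as an extra unpartitioned device, e.g. /dev/sdb.
lsblk

# Put a filesystem on it and mount it.
sudo mkfs.ext4 /dev/sdb
sudo mkdir -p /mnt/recordings
sudo mount /dev/sdb /mnt/recordings

# Make the mount survive reboots; the UUID is safer than the device name.
echo "UUID=$(sudo blkid -s UUID -o value /dev/sdb) /mnt/recordings ext4 defaults 0 2" | sudo tee -a /etc/fstab
```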
I don’t know what all the weird roundabout solutions accomplish, but at least from what I can see ZFS storage can be added straight to a VM. You heard it here first.