• 0 Posts
  • 18 Comments
Joined 10 months ago
Cake day: March 5th, 2025

  • I agree that you’ll want to figure out inter-pod networking.

    In Docker, you can create an “external” network (“external” meaning it’s created and managed outside the compose file, not outside the container), then attach each compose stack to that network and let the containers talk to each other using their hostnames.
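
    For example, a minimal sketch (the network name “backend” and the nginx image are just placeholders):

    ```sh
    # Create the network once, outside of any compose stack:
    docker network create backend
    ```

    ```yaml
    # docker-compose.yml for each stack that should join the shared network
    services:
      app:
        image: nginx:alpine        # placeholder service
        networks:
          - backend
    networks:
      backend:
        external: true             # use the pre-existing "backend" network instead of creating one
    ```

    Other stacks attached to “backend” can then reach this container by the hostname “app”.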

    Personally, I would avoid host network mode, since the container’s services listen directly on the host and can end up exposed to the world (good if you want that, bad if you don’t). Binding to your instance’s public IP address has much the same effect.

    You could alternatively bind the ports to 127.0.0.1, which keeps them from being exposed to the internet while still being reachable from the host itself (see above).
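
    For instance (the port numbers are placeholders):

    ```yaml
    services:
      app:
        image: nginx:alpine
        ports:
          - "127.0.0.1:8080:80"   # reachable only from the host itself, not the internet
    ```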

    So it just depends on how you want to approach it.


    I am running AdGuard Home DNS rather than Pi-hole, but it’s the same idea. I have AGH running in two LXC containers on Proxmox. All my DHCP scopes are configured to hand out both instances as DNS servers, and I never reboot both at the same time. I also check that the service is actually running on one instance before I reboot the other.
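
    Something like this sketch captures that check (the addresses are placeholders for the two instances):

    ```sh
    # Only reboot the second AGH instance if the first is still answering queries.
    if dig @192.168.1.10 example.com +short +time=2 > /dev/null; then
        ssh root@192.168.1.11 reboot
    else
        echo "Primary DNS not answering; not rebooting the secondary." >&2
    fi
    ```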

    Outside of that, there’s really no other approach.

    You would still need at least two DNS servers, but you could set up some sort of virtual IP or load-balanced IP (with something like keepalived) and point DHCP at that IP, so when one instance goes down it fails over to the other.
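
    A minimal sketch of that idea with keepalived (the interface, router ID, and VIP are placeholders):

    ```
    # /etc/keepalived/keepalived.conf on the primary;
    # the secondary uses "state BACKUP" and a lower priority.
    vrrp_instance DNS_VIP {
        state MASTER
        interface eth0
        virtual_router_id 51
        priority 150
        advert_int 1
        virtual_ipaddress {
            192.168.1.53/24    # hand this IP out via DHCP as the DNS server
        }
    }
    ```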




  • You are looking for a disaster recovery plan. I believe you are going down the right path, but it’s something that will take time.

    I back up important files to my local NAS, or store them on the NAS directly.

    The NAS then backs up off-site to Backblaze B2 cloud storage.

    Finally, I have a virtual machine that has all the same directories mounted and backs up to a different cloud provider.

    It’s not quite 3-2-1… but it works.
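
    The NAS-to-B2 leg can be as simple as a scheduled rclone job (the remote, bucket, and paths are placeholders; the “b2” remote is configured beforehand with rclone config):

    ```sh
    # Mirror the important directories to Backblaze B2 (run from cron or a scheduled task).
    rclone sync /volume1/important b2:my-backup-bucket/important --transfers 8
    ```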

    I only back up important files; I don’t do full system backups of my Windows clients. I do technically back up full Linux VMs from within Proxmox to my NAS… but that’s because I’m lazy and didn’t write a backup script to grab specific files and such. The idea that you’ll be able to pull a full system image down from a cloud provider quickly will bite you in the ass.
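
    That Proxmox-to-NAS piece is roughly a vzdump job like this (the VMID and storage ID are placeholders):

    ```sh
    # Snapshot-mode backup of VM 101 to NAS-backed storage, zstd-compressed.
    vzdump 101 --storage nas-backups --mode snapshot --compress zstd
    ```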

    In theory, when backing up containers, you want to back up the configuration, the data, and the databases, but you shouldn’t worry about backing up the container image itself; it can usually be pulled again when needed. I don’t store any of my Docker container data in named volumes. Instead, I map host directories into the containers, so I can just back up directories on the host rather than figuring out the best way to back up a randomly named Docker volume. This way I know exactly what I’m backing up.
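
    In compose terms, that means bind mounts instead of named volumes (the paths and image are placeholders):

    ```yaml
    services:
      app:
        image: nginx:alpine
        volumes:
          - /srv/app/config:/etc/app     # plain host directories are easy to find
          - /srv/app/data:/var/lib/app   # and easy to back up
    ```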

    Any questions, just ask!



  • I’ve just started to delve into Wazuh… but I’m super new to vulnerability management at the home-lab level. I don’t do it for work, so 🤷🏼‍♂️

    Anyway, my best suggestion is to keep all your containers, VMs, and hosts as up to date as you can, so that vulnerabilities others discover get patched. A rough routine is sketched below.
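
    On Debian-based hosts that routine can be as simple as this sketch (the stack path is a placeholder):

    ```sh
    sudo apt update && sudo apt full-upgrade -y    # host packages
    cd /srv/stacks/myapp                           # one compose stack as an example
    docker compose pull && docker compose up -d    # refresh container images
    docker image prune -f                          # drop superseded images
    ```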

    Otherwise, Wazuh is a good place to start, but there’s a learning curve for sure.