• 0 Posts
  • 18 Comments
Joined 2 years ago
Cake day: August 3rd, 2023

  • I just built a NAS not too long ago, so I’ll just share what I would have changed in my build. Maybe it will help you.

    1. Get a server you can manage out of band over something like IPMI. Look at ASRock Rack or Supermicro.
    2. Try to get something with plenty of PCIe lanes, or at least bifurcation support. That way you can expand and use more NVMe drives.
    3. Go with NVMe drives first if you can. The filesystem you choose will largely determine how many you should start with.
    4. If you go for 10GbE, avoid copper 10GBASE-T unless it’s onboard; go with an SFP+ card and switch instead. They run cooler.
    5. Try to find something that takes RDIMMs. Registered ECC memory is cheaper than unbuffered ECC DIMMs and easier to find.
    6. Don’t forget a UPS. Protect your investment.

    One thing to watch out for: some of these server motherboards expect the smaller, higher-RPM fans. That means you may have to fiddle with the fan curve to get it right with normal fans.

    If using something like ZFS, you may want to start with a bigger RAID-Z vdev. Otherwise you can do mirrored vdevs and combine them, but that can get costly, since you need two drives every time you want to expand, and you won’t get as much usable space as something like RAID-Z2 (two parity drives).
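    To make the space trade-off concrete, here is a rough sketch (pool name, drive names, and drive sizes are made up) comparing mirrored vdevs with RAID-Z2 for six 4TB drives:

    ```shell
    # Mirrored vdevs: expand two drives at a time, e.g.
    #   zpool create tank mirror sda sdb
    #   zpool add tank mirror sdc sdd
    # RAID-Z2: two parity drives regardless of width, e.g.
    #   zpool create tank raidz2 sda sdb sdc sdd sde sdf
    # Rough usable space for six 4TB drives (ignoring ZFS overhead):
    drives=6; size_tb=4
    echo "mirrors: $(( drives / 2 * size_tb ))TB usable"
    echo "raidz2:  $(( (drives - 2) * size_tb ))TB usable"
    ```

    At four drives the two layouts come out about even; from six drives up, RAID-Z2 pulls ahead on usable space, while mirrors keep the cheaper two-at-a-time expansion.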

    NVMe drives run cooler, last longer, and are faster. They also take up way less space.

    Make sure you have backups. You could probably use your Synology for this, or some old computer parts you have lying around. If you use something like TrueNAS, it makes backups very easy. For this backup server you can use spinning platter drives.

    16GB of memory is not enough. If you want to run workloads on there, and especially if you use something like TrueNAS, you want as much memory as you can get for caching.

    Also, just buy used. You can find cheap servers online, or just the parts, on eBay or even AliExpress.

    Good luck!




  • You will get different answers. Some people like Proxmox with ZFS; you can run VMs and LXC containers pretty easily. Some people like running everything in containers using Podman or Docker. Some people like to raw dog it and install everything on bare metal (I don’t recommend that approach though).

    My current setup is three servers: one for compute, where I run all my services; one for storage; and one for backup storage.

    The compute server is set up with an NFS share that connects to the storage server. They all have 10GbE NICs on a 10GbE switch.
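    As a sketch of that NFS link (the hostname, paths, and options here are placeholders, not my actual config), the compute server’s /etc/fstab entry might look like:

    ```
    # /etc/fstab on the compute server (example values)
    storage.lan:/mnt/tank/share  /mnt/storage  nfs  rw,hard,vers=4.2,rsize=1048576,wsize=1048576  0  0
    ```

    The large rsize/wsize values matter once you have a 10GbE link; the defaults can leave throughput on the table.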

    If I could go back and redo this setup, I would make a few changes. Right now I have a few NVMe drives in my storage server for the NFS share. The compute server keeps user home directories there, as well as the persistent files for containers with volumes, which makes it easy to back that data up to the other server as well.

    With that said, I kind of wish I had gone with less platter storage and built out a server using mostly NVMe drives. My motherboard doesn’t do bifurcation on its x16 slots, so I can only get one NVMe drive per slot, which is a waste. NVMe drives can run somewhat hot, but they are smaller and easier to cool than platters. Rebuilds are also faster if something happens, so you could probably get away with a single parity drive.

    I would still need a few big drives for my media, but that data is not as critical to me in the event I lost something there.

    What I would look for in a storage system are the following:

    • a motherboard that takes RDIMM memory
    • PCIe slots with bifurcation support so you can add adapter cards for NVMe drives, or lots of NVMe slots on the motherboard
    • if doing 10GbE, SFP+ NICs and an SFP+ switch (runs cooler); then you would just get SFP+ DAC cables instead of Cat6/6a
    • a management port (IPMI)
    • as much memory as you can afford

    With those requirements in mind, something like an ASRock Rack server motherboard with an AMD EPYC would normally fit the bill. I have seen bundles go for about $600–700 on AliExpress.

    As far as the OS goes, I treat the storage server as an appliance and run TrueNAS on it. This is also the reason I have a separate compute server: it makes it easier for me to manage services the way I want without hacking on the TrueNAS box. It also makes it easy to replicate to my backup server, since that runs TrueNAS too. I take snapshots every hour and those get backed up. I also have cloud backup for critical data every hour.

    Last, but not least, I have a VPS so I can access my services from the internet. It uses a WireGuard tunnel and forwards traffic from the VPS to the compute server.
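    A minimal sketch of that tunnel, with made-up keys, addresses, and port: the VPS terminates WireGuard and DNATs incoming traffic to the compute server across the tunnel.

    ```
    # /etc/wireguard/wg0.conf on the VPS (placeholder keys and addresses)
    [Interface]
    Address = 10.0.0.1/24
    ListenPort = 51820
    PrivateKey = <vps-private-key>
    # Forward port 443 hitting the VPS to the compute server over the tunnel
    PostUp = iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to-destination 10.0.0.2:443
    PostUp = iptables -t nat -A POSTROUTING -j MASQUERADE

    [Peer]
    # Compute server at home
    PublicKey = <compute-public-key>
    AllowedIPs = 10.0.0.2/32
    ```

    The home side just needs a matching peer with a PersistentKeepalive so the tunnel survives NAT.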

    For the compute server, I am managing mostly everything with Saltbox, which uses Ansible and Docker containers for most services.

    No matter what you choose, I highly recommend ZFS for your data. Good luck!


  • I decided instead to use ZFS. It’s better protection than just letting something sit there. Your backups are only as good as your restores, so if you are not testing your restores, those backups may be useless anyway.

    ZFS with snapshots, replicated to another ZFS box. The replica keeps the snapshots too, and they are read-only. I have snapshots running every hour.
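    The hourly snapshot-and-replicate loop can be sketched like this (the dataset, snapshot names, and backup host are placeholders; tools like sanoid or TrueNAS’s built-in replication tasks do the same thing more robustly). The commands are echoed here as a dry run; drop the echo to actually run them:

    ```shell
    # Take an hourly snapshot and send it incrementally to the backup box.
    dataset="tank/data"
    now="auto-$(date +%Y%m%d-%H00)"
    prev="auto-previous"   # placeholder for the last snapshot already replicated
    echo "zfs snapshot ${dataset}@${now}"
    echo "zfs send -i @${prev} ${dataset}@${now} | ssh backup zfs recv -u backup/data"
    ```

    The incremental send only ships the blocks changed since the previous snapshot, which is why hourly replication stays cheap.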

    I have full confidence that my data is safe and recoverable.

    With that said, you could always use M-DISC.


  • Any reason why that board? I’m not 100% sure what you are trying to do, but it seems like an expensive board for a home NAS; I feel like you could get more value from other hardware. Again, you don’t need a RAID controller these days. They are a pain to deal with and provide less protection compared to modern software RAID. It looks like the x16 slot on that board can be split 8/8, so if needed you can add an adapter card for two NVMe drives.

    You can also just get an HBA card and hang a bunch of drives off it if you need more drive ports.

    I would recommend doing a bit more research on hardware and trying to figure out what you need ahead of time. Something like an ASRock Rack motherboard might be better in this case. The EPYC CPU is fine, but maybe get something with RDIMM memory. I would just make sure it has a management port like the IPMI on the Supermicro.


    1. You don’t need a ZFS cache drive (L2ARC). Stay away from it; it isn’t going to help with what you want to do anyway. Just have enough RAM.

    2. You need to back up your stuff. Follow the 3-2-1 rule. RAID is not a backup.

    3. Don’t use hardware RAID; software RAID has many benefits these days.

    With that said, let’s dig into it. You don’t really need NVMe drives, tbh; SATA is probably going to be sufficient here. Mirrored drives will also be enough as long as you are backing up your data. It also depends on how much space you will need.

    I just finished building out my backup and storage solution and ended up wanting NVMe drives for certain services. I grabbed a few 1TB drives and mirrored them. Works great, and I do get better performance even with other bottlenecks. This is then replicated to another server for backup and also to cloud backup.

    You also haven’t said what hardware you are currently using or whether you are using any software for the RAID. Are you currently using ZFS? Unraid? What hardware do you have? You might be able to use a PCIe slot to install multiple NVMe drives in the same slot, but that requires bifurcation.



  • Was this in the Bible or something? Why is it immoral?

    Let me ask this. Imagine 1 person owned many farms of food. They sell their food and they own a huge house on top of the hill. There is more than enough food to feed every person in town. The only way for anyone to get food is to buy it from this one person since he owns all of the farm land and if anyone tries to farm their own food, he uses his money to push them around and makes them stop.

    A family is struggling to find work. The father asks the farm owner if he could get some food to eat. The farm owner obviously says no. Pay or no food, he says. The family ends up starving to death.

    Would it be wrong for the family to steal food in this case so they can survive? Or is that immoral? Is the farm owner immoral for not helping them? He has plenty of money to last him 100 lifetimes, his belly is full, but he keeps eating. Who is wrong here?



  • OK, but containers generally have far fewer dependencies. If you are making your own images, then you know exactly how to rebuild them. In the event something happens, it makes it much easier to get up and running again, and also to remember what you did to get the service running. The only thing better would be Nix.

    If you use an image that someone else maintains, this makes it even easier, and there are services out there that will keep your containers up to date when a new image is available. You can also just automate your image builds to run nightly and keep things up to date.
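    As a sketch, the nightly rebuild can be as simple as a cron entry (the image name and build path are made up for the example):

    ```
    # crontab entry: rebuild nightly at 03:00, pulling fresh base layers
    0 3 * * * docker build --pull -t myapp:latest /srv/myapp && docker image prune -f
    ```

    The --pull flag is what makes this pick up updated base images rather than reusing the cached ones.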






  • Had issues downloading for offline listening. Recommendations are meh. Sometimes I can’t search. Sometimes the app won’t load on cell data.

    I never had issues like these before, and then all of a sudden it’s barely usable. I get having bad cell coverage somewhere, but I would have a strong signal and it would still do it. I had to uninstall and reinstall the app multiple times for it to work.

    Tidal is now cheaper and it has everything I would listen to. Before, they were missing some bands that Deezer had. That doesn’t seem to be the case anymore.



  • Not sure where you got the idea that it’s not advisable to mount the box via NFS. You can totally do this. I would make some adjustments, though.

    I would use mergerfs to union multiple mounts into one. You would download to the local mount, which is the drive connected directly to your seedbox, and have a second, remote mount pointing at the NFS share. Merge these into one so that when you hook up Jellyfin, it won’t know the difference and you can just stream like normal.

    You’ll need to move files from the local drive to the remote one. You can roll your own solution with rclone, or use something like Cloudplow, which solves this problem too. Cloudplow uses rclone under the hood but watches for changes automatically.
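    The union described above can be sketched as an fstab entry (all paths and options here are placeholders): local downloads land in /mnt/local, the NFS share is mounted at /mnt/remote, and Jellyfin points at the merged view.

    ```
    # /etc/fstab on the seedbox (example paths)
    # mergerfs union: new files are created on the first branch (/mnt/local),
    # reads fall through to the NFS branch as well.
    /mnt/local:/mnt/remote  /mnt/merged  fuse.mergerfs  defaults,category.create=ff,cache.files=partial,dropcacheonclose=true  0  0
    ```

    A scheduled `rclone move /mnt/local remote:/path` (or Cloudplow doing the same) then empties the local branch without Jellyfin ever noticing the files changed branches.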

    As for copying files, why are you using sync anyway? It’s pretty dangerous. Just use move or copy instead; that way you don’t need to keep copies on both your computer and the server.

    As for streaming from the NFS mount, you may need to tweak the cache settings and make sure they are set correctly.

    With a setup like that, you should have no problems though.