

From the above post:
The most common question I got was “but why?” and I had a hard time answering it at first. Not because I didn’t think Fetcharr needed to exist, but because I couldn’t adequately explain why it needed to exist. After a lot of back-and-forth, some helpful folks came in with the answer. So, allow me to break down how these *arr apps work for a moment.
When you use, say, Radarr to grab a movie using the automatic search (the magnifying glass icon), it searches all of your configured indexers and picks the highest-quality release of that movie based on your quality profiles (you are using configarr with the TRaSH guides, right?).
After a movie is downloaded, Radarr continues to watch for newly-released versions of that movie via your indexers’ RSS feeds, which is much faster than running the automated search. The issue with this system is that not all indexers support RSS feeds, the feeds don’t surface older releases of the same movie, and RSS matching is pretty simplistic compared to a “full” search, so it may not catch everything. Additionally, if your quality profiles change, RSS alone likely won’t find an upgrade. The solution is to run the automatic search on every movie periodically. That’s doable by hand, but projects like Upgradinatorr and Huntarr automated it while keeping the number of searches and the search interval reasonably low, so as to avoid overloading the *arr instance, the attached indexers, and the download client. Fetcharr follows that same idea.
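The throttled periodic-search idea can be sketched in a few lines. To be clear, this is a hedged illustration, not the actual code of any of these projects: the endpoints used (`GET /api/v3/movie` and the `MoviesSearch` command) are Radarr’s documented v3 API, but the URL, API key placeholder, batch size, and the cursor bookkeeping are illustrative assumptions.

```python
import json
import urllib.request

RADARR_URL = "http://localhost:7878"  # assumption: default Radarr port
API_KEY = "your-api-key-here"         # found under Settings -> General in Radarr
BATCH_SIZE = 5                        # small batch per run, to be kind to indexers


def pick_batch(movie_ids, cursor, batch_size):
    """Return the next slice of movie IDs plus the advanced cursor,
    wrapping around so every movie eventually gets re-searched."""
    if not movie_ids:
        return [], 0
    cursor %= len(movie_ids)
    # Double the list so a slice near the end wraps to the front,
    # then cap the batch at the library size.
    batch = (movie_ids + movie_ids)[cursor:cursor + batch_size]
    return batch[:len(movie_ids)], (cursor + batch_size) % len(movie_ids)


def radarr(path, payload=None):
    """Tiny helper around Radarr's v3 API (X-Api-Key header auth)."""
    req = urllib.request.Request(
        f"{RADARR_URL}/api/v3/{path}",
        data=json.dumps(payload).encode() if payload is not None else None,
        headers={"X-Api-Key": API_KEY, "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def run_once(cursor=0):
    """One scheduled pass: search the next few monitored movies."""
    movies = radarr("movie")  # GET /api/v3/movie
    ids = [m["id"] for m in movies if m.get("monitored")]
    batch, cursor = pick_batch(ids, cursor, BATCH_SIZE)
    if batch:
        # POST /api/v3/command queues a full automatic search for these IDs
        radarr("command", {"name": "MoviesSearch", "movieIds": batch})
    return cursor  # persist this between runs (file, env var, etc.)
```

Run `run_once` from cron or a systemd timer every few hours and the whole library gets a full search over time, a handful of movies per pass, without hammering anything.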
So, if the RSS systems work just fine for you, then that’s great! This is a tool made for the people who have found the RSS searches have failed them for one reason or another.

It’s all about tradeoffs and maximizing the useful qualities of each.
NVMe storage is extremely fast, but it’s expensive and wears quickly. In a homelab, those drives are usually not easily accessible or replaceable without powering the system off. Internal SSDs are similar, with the caveat that they’re more likely to be hot-swappable on more server-grade equipment (even older equipment, which many homelabs will have). HDDs are obviously slower but have higher capacity and wear less quickly. SAS drives will have a higher DWPD (drive writes per day) rating and more speed for roughly the same (used) cost, but you need to make sure the backplane you’re using supports them.
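To put DWPD ratings in perspective, here’s the standard endurance arithmetic: total writes a drive is rated for is just DWPD × capacity × warranty days. The drive sizes and ratings below are made-up examples, not recommendations.

```python
def endurance_tbw(dwpd, capacity_tb, warranty_years):
    """Rated total terabytes written: DWPD x capacity x days of warranty."""
    return dwpd * capacity_tb * 365 * warranty_years


# Hypothetical consumer NVMe: 2 TB at ~0.3 DWPD over a 5-year warranty
consumer = endurance_tbw(0.3, 2.0, 5)      # 1095 TBW

# Hypothetical used enterprise SAS SSD: 1.92 TB at 3 DWPD over 5 years
enterprise = endurance_tbw(3.0, 1.92, 5)   # 10512 TBW, ~10x the consumer drive
```

Same ballpark capacity, an order of magnitude more rated write endurance, which is why used enterprise SAS drives are such a good deal for write-heavy homelab workloads.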
External USB drives are much cheaper and higher capacity, depending on what you get, but are usually limited to USB speeds (USB 3.x at best, even over a USB-C connector). Additionally, they can be disconnected, either physically or via software.
A SAN or vSAN requires either special equipment and cables or a dedicated high-speed (10 Gbit+) network to function well. There’s various free software that can build a vSAN-like setup for you, such as Ceph. A “proper” vSAN will be marginally slower than an internal drive array but usually still plenty fast for “big data”, which is what it’s good for: big chunks of data that don’t require the world’s fastest drive access speeds. Note that, while unlikely if set up properly, this storage can also be disconnected both physically and via software. That’s usually recoverable more quickly than with USB, since common vSAN software will work around the outage.
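A quick back-of-envelope shows why the dedicated fast network matters. Assuming a Ceph-style pool with 3× replication (the node count, drive sizes, and link speed below are made-up examples):

```python
def usable_tb(raw_tb, replicas):
    """Usable capacity of a replicated pool: raw capacity / replica count."""
    return raw_tb / replicas


def rebalance_hours(data_tb, link_gbit):
    """Naive time to re-replicate data over a single link, ignoring overhead."""
    bytes_to_move = data_tb * 1e12
    bytes_per_sec = link_gbit * 1e9 / 8  # Gbit/s -> bytes/s
    return bytes_to_move / bytes_per_sec / 3600


# Example: 4 nodes x 8 TB raw = 32 TB raw, 3x replication
capacity = usable_tb(32, 3)          # ~10.7 TB actually usable

# One 8 TB node dies and its data must be re-replicated:
on_10g = rebalance_hours(8, 10)      # ~1.8 hours on 10 Gbit
on_1g = rebalance_hours(8, 1)        # ~17.8 hours on 1 Gbit
```

On gigabit, the cluster spends most of a day degraded after a single node failure; on 10 Gbit it’s back to full redundancy before lunch. That, plus the 3× raw-to-usable overhead, is the real cost of a replicated vSAN.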
For my homelab, I use NAS storage for data that’s large, “infinitely” growing, and doesn’t need extremely fast access the way a database would, and vSAN for most other workloads. I should keep local storage or use an actual SAN fabric of some kind, but homelabs aren’t professional datacenters.