  • It’s all about tradeoffs and maximizing the useful qualities of each.

    NVMe storage is extremely fast, but expensive and wears quickly. In a homelab, those drives are usually not accessible or replaceable without powering the system off. Internal SSDs are similar, with the caveat that they're more likely to be hot-swappable on server-grade equipment (even older equipment, which many homelabs will have). HDDs are obviously slower but offer higher capacity and wear more slowly. SAS drives will have a higher DWPD rating and more speed for roughly the same (used) cost, but you need to make sure the backplane you're using supports them.

    External USB drives are much cheaper and higher capacity, depending on what you get, but are usually limited to USB-C or even USB 3 speeds. Additionally, they can be disconnected, either physically or via software.

    A SAN or vSAN requires either special equipment and cables or a dedicated high-speed (10Gbit+) network to function well. There is various free software, such as Ceph, that can build a vSAN-like layer for you. A “proper” vSAN will be marginally slower than an internal drive array but usually still plenty fast for “big data”, which is what it’s good for: big chunks of data that don’t require the world’s fastest drive access speeds. Note that, while unlikely if set up properly, this storage can also be disconnected, both physically and via software. That’s usually quicker to recover from than a USB disconnect, since common vSAN software will work around the failure.
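    For concreteness, here’s roughly what carving out replicated block storage looks like if you build that vSAN-like layer with Rook-Ceph on Kubernetes. This is a sketch, not a working config: the names are placeholders, and a real RBD StorageClass also needs the CSI secret parameters, which I’ve trimmed for brevity.

    ```yaml
    # Sketch: a 3-way replicated Ceph block pool via Rook (names are placeholders).
    apiVersion: ceph.rook.io/v1
    kind: CephBlockPool
    metadata:
      name: homelab-pool
      namespace: rook-ceph
    spec:
      failureDomain: host        # spread replicas across physical nodes
      replicated:
        size: 3                  # survive one node "disconnecting"
    ---
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: ceph-block
    provisioner: rook-ceph.rbd.csi.ceph.com
    parameters:
      clusterID: rook-ceph
      pool: homelab-pool
      # real configs also need the csi.storage.k8s.io/*-secret-* parameters
    reclaimPolicy: Delete
    ```

    With three replicas spread across hosts, losing one box (or yanking the wrong cable) degrades the pool instead of taking the data offline, which is the recoverability point above.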

    For my homelab, I use NAS storage for data that’s large, “infinitely” growing, and doesn’t need the extremely fast access a database would require, and vSAN for most other workloads. I should keep local storage or use an actual SAN fabric of some kind, but homelabs aren’t professional datacenters.


  • From the above post:

    The most common question I got was “but why?” and I had a hard time initially answering that. Not because I didn’t think Fetcharr needed to exist, but because I couldn’t adequately explain why it needed to exist. After a lot of back-and-forth, some helpful folks came in with the answer. So, allow me to break down how these *arr apps work for a moment.

    When you use, say, Radarr to get a movie using the automatic search (the magnifying glass icon), it will search all of your configured indexers and find the highest-quality version of that movie based on your profiles (you are using configarr with the TRaSH guides, right?).

    After a movie is downloaded, Radarr will continue to periodically watch for newly released versions of that movie via RSS feeds, which is much faster than running the automatic search. The issue with this system is that not all indexers support RSS feeds, the feeds don’t include older releases of that same movie, and RSS matching is pretty simplistic compared to a “full” search, so it may not catch everything. Additionally, if your quality profiles change, it likely won’t find an upgrade. The solution is to run the auto-search on every movie periodically, which is doable by hand, but projects like Upgradinatorr and Huntarr automated it while keeping the number of searches and their frequency reasonably low so as to avoid overloading the *arr and the attached indexers and download client. Fetcharr follows that same idea.

    So, if the RSS systems work just fine for you, then that’s great! This is a tool made for the people who have found that RSS searches fail them for one reason or another.
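    For anyone curious what the “auto-search on a schedule” idea looks like with no extra app at all: it’s just a periodic POST to the *arr’s command endpoint. Here’s a rough Kubernetes CronJob sketch against Radarr’s v3 API; the movie IDs, Secret name, and schedule are placeholders (pull real IDs from GET /api/v3/movie), and the nightly schedule is deliberately gentle on indexers.

    ```yaml
    # Sketch: nightly CronJob asking Radarr to re-search two movies by ID.
    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: radarr-periodic-search
    spec:
      schedule: "0 3 * * *"            # once a night; be kind to your indexers
      jobTemplate:
        spec:
          template:
            spec:
              restartPolicy: Never
              containers:
                - name: search
                  image: curlimages/curl:latest
                  env:
                    - name: RADARR_API_KEY
                      valueFrom:
                        secretKeyRef:
                          name: radarr       # hypothetical Secret
                          key: api-key
                  args:
                    - -fsS
                    - -X
                    - POST
                    - http://radarr:7878/api/v3/command
                    - -H
                    - "Content-Type: application/json"
                    - -H
                    - "X-Api-Key: $(RADARR_API_KEY)"
                    - -d
                    - '{"name": "MoviesSearch", "movieIds": [123, 456]}'
    ```

    Fetcharr and friends are essentially this loop with the important parts added: picking which IDs to search, pacing the requests, and remembering what’s already been tried.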




  • For anyone interested in the configarr config I use, here you go. It’s somewhat customized to my taste (especially dubs > subs for anime) and there’s likely an issue or inconsistency or two in it that someone more familiar might be able to spot, but it works pretty well and I’d say it’s a good starting point if you just want to get going.

    Note that it’s a Kubernetes ConfigMap, but it’s not hard to pull the relevant info into Docker for your own needs.
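    If you’d rather see the shape of it before clicking through, configarr uses a recyclarr-style YAML schema. A stripped-down sketch of what the ConfigMap looks like; the template names here are illustrative, so grab real ones from the TRaSH guide template lists and check configarr’s docs for the exact field names:

    ```yaml
    # Sketch of a configarr ConfigMap (recyclarr-style schema; verify field
    # and template names against the configarr docs).
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: configarr-config
    data:
      config.yml: |
        sonarr:
          instance1:
            base_url: http://sonarr:8989
            api_key: !secret SONARR_API_KEY
            include:
              - template: sonarr-quality-definition-series
              - template: sonarr-v4-quality-profile-web-1080p
        radarr:
          instance1:
            base_url: http://radarr:7878
            api_key: !secret RADARR_API_KEY
            include:
              - template: radarr-quality-definition-movie
    ```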


    As always, the answer is “it depends”: everyone has their own unique flavor of *arr stack with different components. Breaking it down, everything revolves around the core apps:

    • Radarr, for movies
    • Sonarr, for TV shows / anime
    • Lidarr, for music
    • Readarr (now Bookshelf), for books/audiobooks
    • Whisparr, for porn

    These apps do the majority of the hard work of going from, e.g., “I want this movie” to “this movie file is now downloaded and placed into a subdirectory on my NAS or storage somewhere”.

    Realistically, all you need to get started is a download client (usenet, torrent, whatever; the most popular choice is qbittorrent-nox or an equivalent Docker container), your *arr app(s) of choice, and a way to consume and share the media you’ve downloaded to your NAS or server (Plex, Jellyfin, Stash, Audiobookshelf, VLC, etc.).
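    If it helps, here’s a rough docker-compose sketch of that minimal trio, using linuxserver images. The host paths are placeholders; the one real tip baked in is keeping downloads and your library under a single mount so the *arr can hardlink imports instead of copying them.

    ```yaml
    # Sketch: one download client, one *arr, one media server. Paths and
    # ports are placeholders; adjust to taste.
    services:
      qbittorrent:
        image: lscr.io/linuxserver/qbittorrent:latest
        ports:
          - "8080:8080"                  # web UI
        volumes:
          - ./config/qbittorrent:/config
          - /mnt/media:/data             # downloads under /data/downloads
      radarr:
        image: lscr.io/linuxserver/radarr:latest
        ports:
          - "7878:7878"
        volumes:
          - ./config/radarr:/config
          - /mnt/media:/data             # same mount, so imports can hardlink
      jellyfin:
        image: lscr.io/linuxserver/jellyfin:latest
        ports:
          - "8096:8096"
        volumes:
          - ./config/jellyfin:/config
          - /mnt/media/movies:/media/movies:ro
    ```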

    For consuming media, here’s a non-comprehensive list that most people will recommend at least one thing from:

    • Plex or Jellyfin for audiovisual media: TV shows, anime, movies, porn, audiobooks, and music
    • Stash for porn-specific media, if you prefer. Significantly better metadata handling and management designed specifically and only for porn
    • Audiobookshelf specifically for books and audiobooks. Again, better metadata handling and management designed specifically for books/audiobooks
    • VLC or an equivalent if you prefer mounting your media share to your PC and just playing the raw files

    The rest of the *arr ecosystem serves as a way to automate this core idea or fix issues with that automation. An example from my own homelab:

    • I have every *arr app listed as the core for finding/downloading whatever media
      • I have two instances each of Sonarr and Bookshelf: one Sonarr for TV shows and one for anime, and similarly one Bookshelf for regular books and one for audiobooks. Given the way data management is handled in these apps, it’s significantly easier to set up two instances of each than to force everything into one app
    • I use Prowlarr as an indexer manager. You can add indexers to each app but it’s easier to set up Prowlarr and let it do the handling and search caching
    • I use qBittorrent for the actual downloading and Plex for sharing. I’ve found that friends and family have a much easier time both finding and using Plex, so I stuck with that over Jellyfin
    • I set up Unpackerr because imports for the *arr apps often fail when the downloads are compressed in some way. It automates finding and decompressing those files so they can import successfully without me needing to step in myself (there’s a minimal sketch of it after this list)
    • I use configarr to automate the application of the TRaSH guides to each *arr which significantly increases the odds of getting a good quality version of whatever it is you’re looking for when doing an automatic search
    • I have Seerr set up so friends and family can request movies, TV, or anime on their own without needing to message me all the time
    • The *arr apps do an okay-ish job of constantly looking for upgrades to existing media, but they fail in a lot of unexpected ways, so I used to run Huntarr. After that imploded, I created and now run Fetcharr. If a better version of something I have is ever released, it’ll nab it automatically
    • Since I’m a filthy dub watcher (I just can’t do subtitles, sorry) I have Taggarr to tag anime series as “not the dubbed version” which works well enough
    • I just set up dispatcharr for live TV, which was a fun little side-project and might be useful later. This was one of those “ooh pretty” set-it-up-and-see-how-it-goes things.
    • Because automated requests from Seerr and Fetcharr can clog up your queues with failed downloads pretty quickly (stalled, bad releases or naming, etc), I set up Cleanuparr to deal with that whole mess. Works pretty well, no need to check and clear things myself any more
    • My wife can’t do any media without subtitles so I also have Bazarr running to download those for any media that’s missing them
    • I also set up Maintainerr because I’ve realized my friends and family have a habit of requesting stuff and then never watching it, so this prevents media from completely filling up the NAS. It deletes media based on rulesets. Mine is customized to delete unwatched stuff after X days
    • I also have Mixarr set up, which I have mixed (hah) feelings about. It just takes the music I listen to and grabs artists I don’t already have. It’s very obviously vibe-coded, which makes me nervous given the type of people who vibe-code popular apps and the thick skin required to publish popular apps to the internet. So far I haven’t found anything better
    • I also recently set up Audiobookshelf for books and audiobooks. The metadata handling and management is ehh, so I may look into LazyLibrarian to clean up and properly tag downloaded media before Audiobookshelf pulls it, so it can actually get the correct books and authors
    • I also have Stash running for an interface to Whisparr, since adding porn to Plex would be a terrible idea. My friends have kids and they watch a lot on the Plex. It would be super unfortunate to have porn as a recommended video
    • Finally, I run Tautulli for stats upon stats upon stats, and because Maintainerr can make use of it
    • FileFlows and Tdarr are also popular for compression, health checks, etc. of existing media. I ran them previously but don’t any longer
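    As promised above, the Unpackerr piece is tiny. A rough compose sketch is below; the UN_-prefixed variables follow Unpackerr’s environment-variable config scheme as I remember it, so double-check the names against its docs, and the keys and paths are placeholders.

    ```yaml
    # Sketch: Unpackerr watching the same paths the *arrs use (verify the
    # UN_* variable names against Unpackerr's docs).
    services:
      unpackerr:
        image: golift/unpackerr:latest
        volumes:
          - /mnt/media:/data                      # must match the *arrs' view
        environment:
          - UN_SONARR_0_URL=http://sonarr:8989
          - UN_SONARR_0_API_KEY=your-sonarr-key   # placeholder
          - UN_RADARR_0_URL=http://radarr:7878
          - UN_RADARR_0_API_KEY=your-radarr-key   # placeholder
    ```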

    Not all of these will be useful to you, and you’ll likely find others that are more useful for your situation. Like I mentioned, everyone’s *arr stack is different and unique.

    My recommendation: start with an *arr or two; configarr (optional but really recommended; it’s hard to set up, but once you do, you’re good forever); Prowlarr (optional, but you’ll thank yourself later if you get into this and end up with more *arrs); and Unpackerr (really do recommend this one). Then go from there.



    Not sure what you mean by that. I occasionally use the web UI as the tool that it is, and I’ve played around with opencode, cursor, etc. on other home projects to get a sense of where these tools are and what their limits are. That said, I take pride in my own work, and this project is no exception. Is there something in this project that makes you think I threw a prompt into cursor and am passing the result off as my own? Or are you against the idea of using an LLM at all and consider any person or project that touches one to be vibe-coded?

    As a quick edit, I’ll note that, since I documented any use of ChatGPT reasonably well in this project, you can see the number of times it was used and what it provided. I feel the contributions were largely inconsequential and really just time-saving on my end. I also vetted (and understood!) the output and modified it according to what I wanted. Personally, I don’t consider that to be “vibe-coding” but I suppose everyone has their own definition.

    Edit again: ugh, it’s far too easy to focus on negative feedback and let it consume you. I am not going to defend my use of ChatGPT, but I personally think that someone seeing the word ChatGPT and saying “oh, so this is vibe-coded” is disingenuous to the project and to my skills as a developer. I spent years learning and mastering Java, and this project represents a lot of that experience and several weekends of my free time. Look, if you feel that the four uses of ChatGPT, much of which was modified by my own hand and all of which was inconsequential, constitute a vibe-coded system, then that’s your take, but I don’t think it’s a fair one. There is a lot to be said about the ethics of modern LLMs and over-reliance on them, but personally I think understanding and effectively using the tools at your disposal is a skill. If you want something completely free of LLMs these days, you may very well have to invent the universe.

    Phew. Okay, I’m off my soapbox. Consider me got. I’ll try not to think about this too hard, but it definitely feels bad pouring your time and skills into a thing and seeing that one comment saying “nah, this isn’t worth anything”.