Yup; hopefully there are some advances in the training space, but I’d guess that having large quantities of VRAM is always going to be necessary in some capacity for training specifically.
also at beehaw
So I’m no expert at running local LLMs, but I did download one (the 7B vicuña model recommended by the LocalLLM subreddit wiki) and try my hand at training a LoRA on some structured data I have.
Based on my experience, the VRAM available to you is going to be way more of a bottleneck than PCIe speeds.
I could barely hold a 7B model in 10 GB of VRAM on my 3080, so 8 GB might be impossible or very tight. IMO to get good results with local models you really need large quantities of VRAM and to be using 13B-or-above models.
Additionally, when you’re training a LoRA the model + training data gets loaded into VRAM. My training dataset wasn’t very large, and even so, I kept running into VRAM constraints with training.
In the end I concluded that in its current state, running a local LLM is an interesting exercise, but it’s only really great on enthusiast-level hardware with loads of VRAM (4090s etc).
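If anyone wants to poke at the same thing, here’s a minimal sketch of the kind of setup I mean, assuming the Hugging Face transformers + peft stack with 4-bit quantization (the model repo and hyperparameters are illustrative placeholders, not exactly what I ran):

```python
# Rough sketch: load a 7B model in 4-bit and attach a LoRA adapter.
# Model repo and hyperparameters are illustrative placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_id = "lmsys/vicuna-7b-v1.5"  # placeholder; any 7B causal LM works the same way

# 4-bit quantization is what makes a 7B base model fit in ~8-10 GB of VRAM at all.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA freezes the base weights and only trains small adapter matrices,
# so the trainable parameter count stays tiny compared to the full model.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # usually well under 1% of total params
```

Even with all of that, the activations and optimizer state during training are what kept blowing past my 10 GB, especially once the training examples got longer.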
Yeah after some googling I’m kinda thinking this is a fake screenshot, idk
Damn, never seen that before. Is it a windows 11 thing? It’s looking more and more like I’ll have to move to linux on my desktop, I guess.
Edit: hard to find a source for the image; I assume if it was real there’d be a lot more reports of this online but I’m not seeing those.
Probably stating the obvious, but keep the obscure stuff around! You might not get upload immediately, but the longer you seed it, the better the chance someone who wants it comes along and you get some of the upload. The most real upload I’ve ever gotten on TL (talking 1.2/1.5/1.9 ratio, absolutely insane ratio to have on a home network for a TL torrent imo) was from submitting a reseed request for several super obscure boxsets that had other leechers and no seeders.
But do watch out for downloading any more non-freeleech stuff from TL if your ratio is already poor, as that’ll dig you into a bigger hole than just letting what you’ve already grabbed seed.
Glad I could help! Most of my TL ratio comes from the seeding time point system, but all of the torrents I have real ratio on are freeleech TV boxsets (which I find TL is pretty generous about providing). Seeding large-file-size torrents gives a bonus to the points generated, so if you have the space and find yourself gaining points too slowly to bring your ratio where you want it, I’d recommend nabbing a few of those boxsets to seed long term as well.
Are you getting upload on public torrents with a large number of leechers, like YTS or similar? I’d test that first.
If you are getting upload on a large public torrent, then it’s because TorrentLeech is really hard to get real upload on; your best bet is to seed your TL torrents as long as possible (ideally forever) and build your ratio by buying upload from the points store.
If you’re not able to upload to public torrents then yeah it’s probably a setup issue.
Another vote here for ProtonVPN, though it doesn’t support port forwarding via a GUI on Linux, only OpenVPN and WireGuard configs. I set it up with gluetun, qBittorrent, and qBittorrent-natmap and it just works.
Unfortunately migration isn’t built into Lemmy yet, but I’d guess it’s on their feature wishlist.
Reminds me of this neocities site that has a collection of Gameboy Camera photos collected from the internet.
There are some bug reports around the Hot view not refreshing; this should hopefully be fixed once 0.18 is released and installed on your instance.
For now, I’d recommend using a different sort view; “New” is good if you want to see really fresh stuff and be a first contributor to discussion, and “Top Day” is good to see posts with more discussion. (also, there’s a web UI bug where a lot of posts get added all at once whenever anyone on your instance subscribes to a new community; I refresh the page whenever that happens.)
Interesting, thanks for this! I’ve got a reasonably sized wiki I exported from TiddlyWiki into Obsidian and it works alright; but now I’m curious if Logseq would be a better fit. All my daily and review entries in TiddlyWiki were bullet-pointed, so it should feel natural in that respect.
When I was in my early teens I got my hands on a copy of Photoshop 7 from my granddad and spent so much time on tutorial websites and Worth1000, messing around with the tools and making fake digital post-its and stuff like that. I think Photoshop is definitely up there in terms of complex UIs, so having that hands-on experience was crucial in learning how to learn other UIs.
It also helped that a lot of the tutorials by that point were for CS3, which had warp features that 7 didn’t have, and I had to experiment to find workarounds for the missing tools.
Glad to help! If you end up referencing my PDF and have any questions, feel free to shoot me a message.
Re: port forwarding, if you don’t have it, it’s kinda like a one-way mirror? Your torrent client can look out through the mirror, but no one can look in, and you’ll only be able to connect with other torrent clients that have a clear window - because your client can see them through the glass and send them a request to connect, and their connection is transparent so they can accept the message. So if there’s a lot of other people out there with one-way mirrors also, you can’t connect to them b/c you can’t see them and vice versa.
Port forwarding is basically setting your client up with a clear window instead of a mirror - it’ll be able to accept incoming requests as well as make outgoing ones, increasing the number of peers you can connect to. More connections means you’re more likely to find peers on torrents with only a handful of seeders, and I think it improves download speeds too.
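If the mirror analogy doesn’t quite land, here’s a toy sketch of the same idea in Python (made-up addresses and port numbers, just to show incoming vs. outgoing connections):

```python
# Toy illustration of what port forwarding changes. Addresses/ports are made up.
import socket

def connect_to_peer(peer_ip: str, peer_port: int) -> None:
    # Outgoing connection: works with or without port forwarding, because you
    # initiate it -- this is your client "looking out through the mirror".
    with socket.create_connection((peer_ip, peer_port), timeout=10) as s:
        s.sendall(b"hello fellow peer")

def listen_for_peers(my_port: int = 51413) -> None:
    # Incoming connection: only possible if your router/VPN actually forwards
    # this port to your machine -- i.e. your window is clear, not mirrored.
    with socket.create_server(("0.0.0.0", my_port)) as server:
        conn, addr = server.accept()  # blocks until someone can actually reach you
        with conn:
            print(f"incoming connection from {addr}")

# Two clients that can each only dial out (both behind one-way mirrors)
# have no way to reach each other at all.
```

It’s obviously nothing like a real BitTorrent client, but it’s the same incoming-vs-outgoing distinction under the hood.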
I run a fairly comprehensive suite of *Arr apps off a Synology DS923+ and it was somewhat straightforward to set up. Note that the DS923+’s AMD CPU has no integrated GPU for hardware transcoding, which is why people prefer the 920+ for Plex - I run Plex off an Nvidia Shield so that didn’t matter to me.
I wrote up a step-by-step installation guide for myself, mostly for reference / any future times I might need to make changes. Tossed the PDF here in case you’d like to reference it, though it’s a little out of date atm because I need to switch from Mullvad to ProtonVPN for port forwarding (which you can ignore if you don’t care about port forwarding): https://mega.nz/file/9pISRSqB#w_I-6gI8Ga2u_rvTGqTezmIk_-fxnmHfAr1FapFNpEM
Most of the guide was built using these two folks’ articles as reference:
Oh, interesting. I didn’t know about this TPM requirement; looks like my CPU does support it, but it’s not turned on in BIOS. Hope you’re right though and W10 does get its support lifetime extended.
Well, looks like it still doesn’t let you move the taskbar to the top natively so I will continue my boycott of Windows 11.
On that note, it’s ridiculous how quickly they’re stopping long-term support for Windows 10 compared to say XP or 7.
I don’t know anything about GPU design but expandable VRAM is a really interesting idea. Feels too consumer friendly for Nvidia and maybe even AMD though.