I’ve long since switched to foundry, but I used to use
https://www.rptools.net/toolbox/maptool/
It’s not web-based, but clients are available for Linux, Mac, and Windows.


That’s true, but I think it’s in the phrasing: they describe it as a shortage of human-made content. The bigger issue is the lack of ability to identify human-made content. IE you give it Reddit and our e-mails, and there’s plenty of human-made content on there… but nobody knows what percentage of it was actually bots or AIs.


Honestly I don’t get how AI isn’t rolling backwards already. Image sites are buried in AI slop. Social media posts are buried in AI slop, and now e-mails that were probably written by AIs. How is AI even remotely improving right now, when obviously 90% of any new training data it’s getting was generated by the last generation of AI?


Maybe Pichai is just really smart and trying to get the brakes slammed on the AI bubble before it pops.
“Listen, if this works the way you think it does, half the country is going to be out of work.” CEO: “But our company will make a lot of money, right? Someone else is going to do it eventually anyway.”
“OK, look, if this works the way it’s supposed to, we won’t need you anymore!”
“Oh shit, hit the brakes, no more AI, it’s all a bubble anyway.”


Flying pig was too on the nose?
Not sure that really works for git though… at least with regards to its primary use.
git isn’t just a backup… it’s about version control.
IE the point is, if you know what you’re doing and realize this function isn’t working in some edge case, you can search through and find out when this part of this file changed, and what it was before, and git will find exactly that.
If you encrypted it so that git couldn’t actually read the contents, then you’d basically reduce a crazy powerful tool into a glorified Dropbox. (IE yeah, you could revert to previous versions… but you’d basically be counting on your memory for what you changed when, if the git server can’t read the files.)
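To make that concrete, here’s a small sketch using a throwaway repo (the file name, contents, and commit messages are made up) of git’s “pickaxe” search finding exactly the commit where a string changed. With encrypted blobs, git would only see ciphertext and this kind of search stops working:

```shell
# Throwaway demo repo; everything here is illustrative.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

echo 'def parse(x): return int(x)' > util.py
git add util.py
git commit -qm 'add parse'

echo 'def parse(x): return int(x, 10)' > util.py
git commit -qam 'make base explicit'

# -S lists exactly the commits where this string appeared or disappeared.
git log -S 'int(x, 10)' --oneline
```

That prints only the `make base explicit` commit, i.e. the moment that part of the file changed, which is the power you give up if the contents are opaque to git.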
I guess for me it kind of depends on your definition of “self host”, as 90% of what I host is a Hetzner server running out of Finland, because, well, that’s off-site backups lol.
My setup is:
Local: Frigate (CCTV manager), Home Assistant (home automation), Matrix (chat).
Remote: Mealie (recipe collection), Vaultwarden (works with Bitwarden clients), Nextcloud (files and documents), FreshRSS, Gitea (GitHub alternative).
Now in terms of wanting an offsite backup, you are probably right: assuming you don’t have something offsite you can synchronize with, and no major privacy fears about what’s hosted, those things are probably best left to the cloud, assuming you are more worried about losing everything in a house fire than about the data being spied on by a third party or grabbed by hackers.
So yeah, personally, for things I like to have self-hosted on site: a local messenger is good if you’d like reasonably private communication for friends/family etc… and for niche things like RSS readers or recipe books, really anything strange and niche, you can probably find some program to self-host it.


Just imagining future ads once data collection gets to even more insane points. Just imagining AI scanning your Facebook page, then pushing clothing ads with pictures of you in the outfits, or the new TV with pictures of it in your house, etc…
Honestly maybe we should get ahead of it… like intentionally make our Facebook accounts show some caveman living in a cave, and in 2 years when the advertisers start going stupid crazy… watch ads show up depicting a flat-screen TV in a cave.


While I’m far from an expert on it… at best the dream simulations are still extremely rudimentary. To the point that that’s usually how you can tell it isn’t real, by doing something like reading a book. IE it’s largely believable, but only because you’re put in a gullible state, like watching 2-year-old AI videos while stoned.


IMO the learning curve for Caddy is almost nonexistent, and just about anything you might want to self-host almost certainly has a quick, simple Caddy configuration you can copy-paste, updating just the relevant domain. Personally, the learning curve for Caddy was way lower than figuring out the edge cases of the Apache setup I was using before.
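For a sense of scale, a typical Caddyfile for a self-hosted app is just a reverse-proxy stanza like this (domain and port are placeholders; Caddy obtains and renews the TLS certificate automatically):

```caddyfile
recipes.example.com {
	reverse_proxy localhost:9000
}
```

That’s the entire config for serving the app over HTTPS, which is most of why the copy-paste story works so well.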


Don’t have a Tesla, and not sure if there’s any way to test, but it looks to me like the user at least claims she ensured NSFW was not turned on, and that Grok was set to just “lazy male”.
I don’t dabble with it, so I can’t say whether there’s a way that could be turned on by mistake in Grok’s settings elsewhere and carry over to the car, or similar.


I do acknowledge that’s always going to be the problem when we have human + AI driver combinations.
The safest hypothetical is 100% AIs that always follow the same rules… next safest is humans that break the rules, but in a context-aware way (IE everyone going 70 in a 55 is safer than 1 car going 55 while all the other cars are going 70).
The real danger, though, is if the AI doesn’t make good judgment calls when doing so. IE rather than deciding based on how fast the other cars are going, its primary input is whether the user says they are in a hurry, leading it to sometimes be the one car going 55, but, if the person is in a hurry, the only car going 70 on a road where everyone else is going 55.


Well, do you want something that has an 80% chance of finding it in 2 seconds… or something that has a 99% chance of finding it in 38 hours? (And yeah, duh, the obviously rational thing to do would be to try one or 2 layers of the quick methods, then ask “did this find it, or do you want me to look deeper?”)
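The “quick pass first, deep search only on a miss” strategy wins by a lot even with the hypothetical numbers above (80%/2s quick, 38h deep); a quick expected-time calculation:

```shell
# Expected search time, using the comment's made-up numbers.
awk 'BEGIN {
  quick_hit  = 0.80          # quick method finds it 80% of the time
  quick_secs = 2
  deep_secs  = 38 * 3600     # 38 hours, in seconds

  # Always running the deep search:
  printf "deep only:  %.0f s\n", deep_secs

  # Quick pass first, deep search only when the quick pass misses:
  e = quick_hit * quick_secs + (1 - quick_hit) * (quick_secs + deep_secs)
  printf "quick+deep: %.0f s\n", e
}'
```

Roughly 27,400 seconds expected versus 136,800, i.e. about a 5x win on average, while still reaching the deep search’s hit rate when it falls through.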


Can we, like, maybe normalize adding descriptions of the product to these announcements? On the apps’ own blogs that’s one thing, but on selfhosted forums etc… where we are drawing attention to a product many aren’t familiar with, a 2-sentence summary of what the product is, before going into the long changelog, would make everyone so happy.


Anyone know what the claim would even have to be to count as defamation? That to me seems like it should be the crux.
IE if the claim was “X’s voice clearly shows he’s dying of cancer,” I could see that as defamation. On the other hand, “X is summoning demons” would clearly be fan-fiction.


Cases where you want something googled quickly to get an answer, and it’s low consequence when the answer is wrong.
IE, say, a bar argument over whether that guy was in that movie. Or you need a customer service agent but don’t actually care about your customers and don’t want to pay someone, or you’re coding a feature for Windows.
It’s OK, the secretary of education was only promoting steak sauce in the classrooms.


I believe in Mullvad there’s a setting to “allow local traffic”.
Navigate to the Preferences or Settings menu, find the option for “Local network sharing”, and enable it by turning the switch on.
Or, if you are using the CLI:
mullvad lan set allow
In short, most VPN clients have an option to not override local traffic, while still directing everything outside of your LAN through the VPN.


I mean, if you are talking pre-Google YouTube, I don’t believe it had a subscription model… or any real plan to profit or to pay its creators. It was both hemorrhaging money itself and giving its creators nothing.
Post-Google YouTube, I guess yeah it is, but worth noting it mostly isn’t exactly a high-production-values system, with the exception of the likes of MrBeast etc… which make a boatload of money by still following the same traps as regular TV: catering to the lowest common denominator and micro-analyzing every aspect for maximum views.


IMO I don’t see why you’d get a second human involved. Store the database in an encrypted form and save a copy to some cloud service. Why count on another human for it?