Alt account of @Badabinski

Just a sweaty nerd interested in software, home automation, emotional issues, and polite discourse about all of the above.

  • 0 Posts
  • 64 Comments
Joined 2 years ago
Cake day: June 9th, 2024



  • Arch is a pretty good one if you want control and the freedom to tinker. I have personally found it to be very reliable over the years, and the AUR is exceptionally powerful (although you NEED to review your PKGBUILDs; there’s nothing stopping someone from putting malware on the AUR again). The packaging format is so simple and easy that I actually build a few performance-critical packages locally so I can tweak compiler flags (gimme that -march=native; rough sketch at the end of this comment).

    Nix is cool and kinda crazy, but honestly? I’d hold off until you’re comfortable with Arch. Same with Gentoo.
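
    Here’s roughly what that tweaking looks like, for the curious. These are the standard makepkg.conf knobs; whether -march=native actually buys you anything depends on the package, so treat this as a sketch rather than a recommendation:

        # /etc/makepkg.conf (system-wide compiler flags for makepkg)
        CFLAGS="-march=native -mtune=native -O2 -pipe -fno-plt"
        CXXFLAGS="$CFLAGS"
        MAKEFLAGS="-j$(nproc)"

    Rebuilding an AUR package with those flags is then just:

        git clone https://aur.archlinux.org/some-package.git   # package name is a placeholder
        cd some-package
        less PKGBUILD   # seriously, read it first
        makepkg -sirc   # sync deps, build, install, clean up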


  • Yep, this is why we use GPL! Using a permissive license is like lending money to a friend: you should never, ever expect to get it back. “Good” companies aren’t altruistic, they’re ruthlessly self-interested. They’re not going to give back to your project unless there’s a damn good reason for them to do so. There are times when permissive licenses are totally fine (like when writing some kinds of libraries), but if you care about an application’s freedom then you should stay the fuck away from MIT, Apache, BSD, or any other permissive license. Just use the GPL, folks.

    edit: Using GPL from the get-go would have prevented this atrocity from occurring: https://github.com/coredevices/libpebble3/commit/35853d45cd0ec51cb732be866f6f72467653a613

    They couldn’t have relicensed the project without community approval if it had been using a copyleft license in the first place.

    Also, fuck off with your fucking AGPL-plus-copyright-transfer-CLA bullshit. I’d love to see a new version of the AGPL that expressly prohibits copyright transfers. Never let a company take your rights away from you. A copyright transfer makes even the GPL effectively meaningless if the company wants to rug-pull at a later date.







  • We’ve had the template for this for decades. Put the solar panels in space where the thick soupy gunky spunky atmosphere doesn’t stop the little energy things from the sun. Collect the power in orbit. You just do that up there in orbit, okay? And then you fucking beam the power down to the surface you numpty fucks. Use a maser to send the power down to the surface and you can pick a frequency that isn’t affected by the gunky spunky, and then the receivers on the ground can pick it up and send the power through these things called wires to a building that uses the power, and the building can use this neat little thing called CONVECTION to more efficiently remove the heat from the things using the electricity wow.

    Or just, y’know, use less power and make use of ground-based solar. We don’t need fucking AI data centers in space. Don’t get me wrong, I think it might be useful to, say, have some compute up in geostationary orbit that other satellites could punt data to for computation. You could have an evenly spaced ring of the fuckers so the users up there can get some data crunching done with an RTT of like 50ms instead of 700ms (back-of-the-envelope after this paragraph). That’s a hard sell, but it at least seems a bit tenable if you needed to preprocess the data you’re sending back to Earth down to a more manageable amount. That is still not fuckass gigawatt AI data centers. Fuck.
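
    Back-of-the-envelope on that RTT claim (my own numbers; N is a constellation size I made up for the example): the nearest neighbor in an evenly spaced GEO ring of N satellites sits at chord distance

        \[ d = 2r\sin(\pi/N), \qquad r_{\mathrm{GEO}} \approx 42{,}164\ \mathrm{km} \]
        \[ N = 32:\quad d \approx 2(42{,}164)\sin(5.625^{\circ}) \approx 8{,}300\ \mathrm{km} \;\Rightarrow\; d/c \approx 28\ \mathrm{ms\ one\ way} \approx 55\ \mathrm{ms\ RTT} \]

    Compare that to roughly 240 ms of pure light-time for a single GEO-to-ground-to-GEO bounce, before any terrestrial backhaul even gets involved. More satellites in the ring means shorter chords and lower RTT.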







  • Do you have any sources for the 10x memory thing? I’ve seen people who have made memory usage claims, but I haven’t seen benchmarks demonstrating this.

    EDIT: glibc-based images wouldn’t be using service managers either. PID 1 is your application.

    EDIT: In response to this:

        “There’s a reason a huge portion of docker images are alpine-based.”

    After months of research, my company pushed thousands and thousands of containers away from Alpine for operational and performance reasons. You can get small images from glibc-based distros; just look at Chainguard if you want an example. We saved money (many, many dollars a month) and had fewer tickets once we finished banning Alpine containers. I haven’t seen a compelling reason to switch back, and I just don’t see much to recommend Alpine outside of embedded systems where disk space is actually a problem. I’m not going to tell you that you’re wrong for using it, but my experience has basically been a series of events telling me to avoid it. Also, I fucking hate whoever decided musl wasn’t going to do search domains properly or DNS over TCP.
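
    For anyone who wants a concrete picture, the swap is usually this boring. The image name and app path here are illustrative, not what we actually run, but Chainguard’s images are glibc-based and in the same size ballpark as Alpine:

        # Before (musl): FROM python:3.12-alpine
        # After (glibc; your app is still PID 1, no service manager):
        FROM cgr.dev/chainguard/python:latest
        WORKDIR /app
        COPY app.py .
        ENTRYPOINT ["python", "/app/app.py"]

    Same deployment story, similar image size, none of the musl DNS weirdness.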


  • Debian is superior for server tasks. musl is designed to optimize for smaller binaries on disk. Memory is a secondary goal, and CPU time is a non-goal. musl isn’t meant to be fast; it’s meant to be small and easily embedded. Those are great things if you need to run in a network- or disk-constrained environment, but for a server? Why waste CPU cycles on a libc that is, by design, less time-efficient?

    EDIT: I had to fight this fight at my job. We had hundreds of thousands of Alpine containers running, and switching them to glibc-based containers resulted in quantifiable cloud-spend savings. I’m not saying musl (or Alpine) is bad, just that it’s horses for courses.


  • Is it? I thought the thing musl optimizes for is disk usage, not memory usage or CPU time. It’s been my experience that Alpine containers are worse than their glibc counterparts because glibc is damn good. It’s definitely faster in many cases. I think this is fixed now, but I remember when musl made the Python interpreter run like 50-100x slower.

    EDIT: musl is good at what it tries to be good at. It’s not trying to be the fastest, it’s trying to be small on disk or over the network.