

For some, human pride and dignity have literally no value, or are something they hold on to only to trade away whenever convenient, especially in a world that puts a price on them. It's treated just like a commodity.
Says the company that literally crawled the Internet without anyone’s permission to train their damn model.
Rules for thee, not for me.
This. Any time someone tries to tell me that AGI will come in the next 5 years given what we’ve seen, I roll my eyes. I don’t see a pathway where LLMs become what’s needed for AGI. They may be a part of it, but a non-critical one at best. If you can’t reduce hallucinations to the point where they’re virtually indistinguishable from misunderstanding a sentence due to vagueness, LLMs are useless for AGI.
Our distance from true AGI (not some goalpost moved by corporate interests) has not meaningfully shrunk from before LLMs became a thing, in my very harsh opinion, bar the knowledge and research gained by those who are actually working towards AGI. Just like how we’d always thought AI would come one day, maybe soon, before 2020, it’s no different now. LLMs alone barely close that gap. They give us that illusion at best.
Somewhat unfair judgement against email IMO, especially since the “trust list” is in the control of a few, with no open way to add more people to it. The protocol isn’t at fault for failing to prevent problems. The fault lies in corporations being allowed to gain significant market share unchecked, and then being allowed to put up barriers that disallow or discourage interaction between those inside and outside, forcing those within to stay, while those outside must give up on reaching them in order to gain usability.
Idk about pre-orders but I’d imagine it’s a combo of many things, from Xiaomi already having the finances, to tax breaks and subsidies from the CCP, and subsidies on the domestic consumer side to encourage adoption to further stabilize the industry, which further encourages investments.
Definitely not within reach physically, but good to see what’s available out there. Thanks for replying!
It did not occur to me that they’d do this with ebikes but now I’m concerned. Would be nice to know what you found for the day when I decide to get one.
As someone who was working really hard to get my company to a point where it could use some classical ML (with very limited amounts of data), with some knowledge of how AI works, and a general desire to do some cool math stuff at work, being asked incessantly to shove AI into any problem our execs think is a “good sell”, and being pressured to think about how we can “use AI”, felt terrible. They now think my work is insufficient and have been tightening the noose on my team.
Imagine the amount of bandwidth and energy saved, if they didn’t do any of this bullshit.
They are essentially using someone else’s money to get themselves more money. Fuck these people!
It’s not possible for everyone to just tell if it’s supposed to be sarcasm. ADHD makes it hard. A bad day makes it hard. A tiring day makes it hard.
The downside of the misunderstanding isn’t just downvotes. It’s possibly a proliferation of misinformation and an impression that there are people who DO think that way.
Being not serious while saying something grim is not a globally understood culture either. It’s more common and acceptable in the Western world as a joke.
So… call it accessibility, but it’s simply more approachable for everyone if you just put an “/s”.
Many of these meanings seem to be captured in some modern solutions already:
- We plan to provide a value, but memory for this value hasn’t been allocated yet.
- The memory has been allocated, but we haven’t attempted to compute/retrieve the proper value yet
- We are in the process of computing/retrieving the value
Futures?
- There was a code-level problem computing/retrieving the value
Exceptions? Result monads? (Okay, yea, we try to avoid the m word, but bear with me here)
- We successfully got the value, and the value is “the abstract concept of nothingness”
An Option or Maybe monad?
- or the value is “please use the default”
- or the value is “please try again”
An enumeration of return types would seem to solve this problem. I can picture doing this in Rust.
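Here’s what that could look like as a minimal Rust sketch. The variant names are my own inventions mapping one-to-one onto the list above, not from any standard library:

```rust
// A hypothetical enum giving each "meaning of null" from the list above
// its own explicit variant, so callers must handle each case in a match.
#[derive(Debug)]
enum Value<T> {
    Unallocated,      // we plan to provide a value; no memory allocated yet
    Uninitialized,    // allocated, but not computed/retrieved yet
    Computing,        // in progress (a Future would carry the actual handle)
    Failed(String),   // code-level problem computing/retrieving the value
    Nothing,          // successfully got "the abstract concept of nothingness"
    UseDefault,       // the value is "please use the default"
    Retry,            // the value is "please try again"
    Some(T),          // an actual value
}

fn describe(v: &Value<i32>) -> &'static str {
    // The compiler forces this match to cover every variant, which is
    // exactly the appeal over a single catch-all "null".
    match v {
        Value::Unallocated => "not allocated yet",
        Value::Uninitialized => "allocated, not computed",
        Value::Computing => "in flight",
        Value::Failed(_) => "errored",
        Value::Nothing => "intentionally empty",
        Value::UseDefault => "use the default",
        Value::Retry => "try again",
        Value::Some(_) => "ready",
    }
}

fn main() {
    println!("{}", describe(&Value::Some(42)));  // prints "ready"
    println!("{}", describe(&Value::Nothing));   // prints "intentionally empty"
}
```

A real design would probably split this up (e.g. `Failed` belongs in a `Result`, `Computing` in a Future), but the point is that each meaning gets its own name instead of all collapsing into one null.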
2 things I like about golang are 1) the ease of getting someone started, and 2) goroutines. I have no complaints about goroutines because I’ve barely used them, and when I have, they’ve been fine. On the first point though, I’d say the simplicity of the language is a double-edged sword: it’s easy to learn with little surface to cover, but it forces you to implement by yourself a lot of the basic machinery you’d find in other languages, so your codebase can get clunky to read really quickly, especially as your project grows.
Not trying to dissuade you from learning golang tho. I think it’s a good language to learn and use, especially for small simple programs, but it’s not the great language many try to say it is. It’s… fine. There are many reasons why it grinds my gears, but I’m still fine with using it and maintaining it for prod.
Ehhh, golang’s pretty far down there for me too. Sure, you have types, but the way you “implement” an interface is the sussiest thing I’ve seen in any well-known programming language. Not to mention all the footguns you end up building into your programs (pointers for nullables is a common one; and if you forget that a function returns an error and call it only for its effects, you’ve just built a possibly very silent bomb). I use it in prod, and I get scared.
Kinda don’t like how my handwavy idea is just taken in the most naive direction. I’m not even trying to give precise solutions. I’ve never worked on software at scale, and I expect the playing field to be pretty different, but I think you’re exaggerating.
Look buddy, all I want to say is that I don’t think your method against Reddit would work. It’s basically a gamble though, so I’m definitely not against attempting it. I just want to point out the possibility of it not working. I don’t think there’s a surefire way to stop them from restoring content.
It’s hard to say that without knowing what their infrastructure’s like, even if we think it’s expensive. And if they built their stack with OLAP being an important part of it, I don’t see why they wouldn’t have our comment edit histories stored somewhere that’s not a backup, and maybe they just toss dated database partitions into some cheap cold storage that allows for occasional, slow reads. They’re not gonna make a backup of their entire fleet of databases for every change that happens. That would be literally insane.
Also, tracking individual edit and delete rates over time isn’t expensive at all, especially if they just keep incremental day-by-day (maybe more or less frequent) changes over time. Or just slap a counter for edits and deletes in a cache, reset it every day, and if either one goes above some threshold, look into it. There are probably many cheap ways to achieve something similar.
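The counter-in-a-cache idea is simple enough to sketch in a few lines of Rust. Everything here is my own assumption: the names, the threshold value, and a plain `HashMap` standing in for whatever cache (Redis or the like) a real system would use:

```rust
use std::collections::HashMap;

// Toy sketch: bump per-user edit/delete counts in an in-memory map and
// flag a user once either count crosses a threshold within the window.
const THRESHOLD: u32 = 50; // arbitrary; a real system would tune this

#[derive(Default)]
struct Window {
    edits: u32,
    deletes: u32,
}

struct Tracker {
    counts: HashMap<String, Window>,
}

impl Tracker {
    fn new() -> Self {
        Tracker { counts: HashMap::new() }
    }

    // Returns true once this event pushes the user over the threshold,
    // i.e. the point where you'd "look into it".
    fn record_edit(&mut self, user: &str) -> bool {
        let w = self.counts.entry(user.to_string()).or_default();
        w.edits += 1;
        w.edits > THRESHOLD
    }

    fn record_delete(&mut self, user: &str) -> bool {
        let w = self.counts.entry(user.to_string()).or_default();
        w.deletes += 1;
        w.deletes > THRESHOLD
    }

    // Called by a daily job to reset the window.
    fn reset(&mut self) {
        self.counts.clear();
    }
}

fn main() {
    let mut t = Tracker::new();
    let mut flagged = false;
    for _ in 0..60 {
        flagged = t.record_edit("user_a");
    }
    println!("flagged: {}", flagged); // 60 edits > 50, so: flagged: true
    t.reset();
    println!("after reset: {}", t.record_edit("user_a")); // false
}
```

The storage cost is a couple of integers per active user per day, which is the point: this kind of anomaly flagging is cheap compared to actually restoring content.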
And ChatGPT is just an example. I’m sure there already are other out-of-fashion-but-totally-usable language models or heuristics that are cheap to run and easy to use. Anything that can give a decent amount of confidence is probably good enough.
At the end of the day, the actual impact on their business from the API fiasco falls on just a subset of power users and tech enthusiasts, which is vanishingly small. I know many people who still use Reddit, some begrudgingly, despite knowing the news pretty well. Why? Cause the content is already there. Restoring valuable content is important for Reddit, so I don’t see why they wouldn’t want to sink some money into keeping what makes em future money. It’s basically an investment. There are some risks, but the chances of earning it back, with returns on top of the cost, are high.
You misunderstood my comment. Reddit probably has every version of your edits, so all they need to do is put all your past comment versions through ChatGPT or something, in descending order of time. The first sensible one gets accepted. In some sense, that’s just how a person would do it. This way, they don’t have to deal with individual approaches to obfuscating or messing with their data.
I was gonna just wait till this whole fiasco dies down, let it sit for a couple of months to a year, and then slowly remove my comments over time. It’s easy to build triggers for individual users that detect attempts at mass editing or mass deletion of comments, which may kick off some process in their systems. Doing it the low-profile way is likely the best way to go.
Not too hard to defeat this solution though: put your comments through something like ChatGPT and if it can understand what you wrote, it’s probably good enough for em to restore it.
Maybe the answer is to write some nonsensical answer that’s understood by human readers as utter nonsense, but still recognized by LLMs as a “good comment”.
Not sure why artists are brought up here but I guess that’s one of the highly affected groups.
Just to address that particular consequence, though, I don’t agree with your take. There are AIs trained on the works of specific artists, and the end result is an AI that’s really good at producing work similar to that artist’s, effectively creating an alternative to that artist, even if it’s of slightly lesser quality and lacks the depth of the original. While this would likely not affect the artist in the short term, in the long term, new prospects who don’t yet know the artist well enough would likely be unable to tell the difference in quality, and may even go straight to the AI model, since that’s distributed cheaply or even for free. It may also reflect negatively on the original artist among people who don’t know them: the works from the AI would likely be more abundant, and people not in the know may think the original artist was in fact just producing their works through AI. It’s highly discouraging for artists who have worked hard to hone their craft, only to have people think their works are little different, or even a mimicry (don’t underestimate misinformation).
There have been many instances where such training was done without the knowledge of the artist. Imagine waking up one day and finding that there’s someone, or something, that can very closely reproduce your works, ones that took you many years of practice to produce, with a quality that was almost unique to you. There’s a blatant lack of respect for the hard work people put into their craft, one that seemingly belittles their blood and tears, and could even be a mockery of their existence. Some artists don’t have other jobs; their art and craft is their job, and some may even have sacrificed learning the skills needed for other jobs to pursue their passion.
Saying that AI is not intended to replace artists, but to improve accessibility, is like saying ATMs weren’t meant to replace bank tellers. True, bank telling requires much less skill, and getting cash out of banks is an important process that should be swift and almost error-free, so replacing tellers with ATMs was a general good, except for the tellers, whom banks could at least retrain for other jobs. Since then, the job has virtually gone extinct; almost nobody wants to become a bank teller, and anyone who does would need to perform better than an ATM. Artists, on the other hand, require great skill and creativity, much of which is not easily trained or obtained. Seeing an automated system produce works that most people find acceptable will either greatly discourage new artists or perhaps remove the idea of becoming one from most people’s minds entirely. It raises the barrier to becoming an artist: not only do you need to stand out, you also need to be good enough that people can’t just train an AI model on your work to produce results nearly indistinguishable from yours. How many more years do people need to train to be that good? For those with a job who wish to become an artist, abandoning that job to focus on their craft will likely become a much harder choice. I also don’t doubt this would further raise the prices of commissions, given how much more work artists would have to put in, and that would only get worse, at a rate much faster than in a scenario without AI.
So a line should be drawn somewhere. AI trained on public-domain works or artist-approved works is definitely okay. All other options will likely need further discussion and scrutiny. We’re talking about the possibility of ruining an already perilous career path, whose works are coveted.
$60 can buy you a lifetime license for Affinity Designer 2, a fantastic alternative to Adobe Illustrator, which some people can’t live without. AFAIK, Serif isn’t backed by venture capital either. So, are you still happy paying $20 more for a social media app?
Like, look, I get that we should support devs for what they do, especially if they don’t take venture capital money to sell their products cheap and gain market share. But this seems really overpriced. What are you getting with an $80 social media app?