

Did you respond to the wrong person? Neither the article nor my comment was about one specific AI model.
This is the inevitable end game of some groups of people trying to make AI usage taboo through anger and intimidation, without room for reasonable disagreement. The ones devoid of morals and ethics will use it to their heart's content and would never engage with your objections anyway, and when the general public is ignorant of what it is and what it can really do, people get taken advantage of.
Support open source and ethical usage of AI, where artists, creatives, and those with good intentions are not caught in your legitimate grievances with corporate greed, totalitarians, and the like. We can’t reasonably make it go away, but we can reduce harmful use of it.
While there are spaces that are luckily still looking at it neutrally and objectively, there are definitely leftist spaces where AI hatred has snuck in, even to a reality-denying degree where lies about what AI is or isn't have taken hold, and where providing facts to refute such things is rejected and met with hate and shunning purely because it goes against the norm.
And I can't help but agree that they are being played, so that the only AI technology that will eventually be feasible will not be open source, and will be controlled by the very companies left-leaning folks dislike or even hate.
The absolute irony
I never claimed anything besides that breakthroughs did happen since then, which is objectively true. You claimed very concretely that AI has been the same for over a decade, i.e. that it was the same in at least 2015 if I'm being charitable. All of these things were researched in the last 7-8 years and only became the products as we know them in the last 5 years (i.e. since 2020).
Breakthroughs are not a myth. They still happen even when the process is iterative; that page even explains it. Take the advent of the GAN (2014-2018), which was overtaken by the transformer around 2017, the architecture that GPTs and later diffusion models were built on. More hardware is what allowed those technologies to work better and scale bigger, but without those breakthroughs you still wouldn't have the AI boom of today.
ML technology has existed for a while, but it’s wild to claim that the technology pre-2020 is the same. A breakthrough happened.
Oh I agree money talks in the US justice system, but as the page shows, these laws also exist elsewhere, such as in the EU. And even if I or you don’t agree with them, they are still the case law that determines the legality of these things. For me that aligns with my ethical stance as well, but probably not yours.
I never claimed that in this case. As I said in my response: There have been won lawsuits that machines are allowed to index and analyze copyrighted material without infringing on such rights, so long as they only extract objective information, such as what AI typically extracts. I’m not a lawyer, and your jurisdiction may differ, but this page has a good overview: https://blog.apify.com/is-web-scraping-legal/
EDIT: In its US section, that page mentions the case I referred to: Authors Guild v. Google.
Outside of the marketing labels of “artificial intelligence” and “machine learning”, it’s nothing like real intelligence or learning at all.
Generative AI uses artificial neural networks, which are based on how we understand brains to connect information (Biological neural networks). You’re right that they have no self generated input like humans do, but their sense of making connections between information is very similar to that of humans. It doesn’t really matter that they don’t have their own experiences, because they are not trying to be humans, they are trying to be as flexible of a ‘mind’ as possible.
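To make the analogy concrete, here's a toy sketch of a single artificial "neuron" (all numbers are made up for illustration, this is not any real model): it combines incoming signals, each scaled by a learned connection strength, and "fires" more or less strongly, loosely like a biological neuron integrating its inputs.

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of incoming signals, like synaptic strengths
    # scaling the signals arriving at a biological neuron.
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    # Squash to (0, 1): the neuron "fires" strongly or weakly.
    return 1 / (1 + math.exp(-activation))

print(round(neuron([1.0, 0.5], [0.8, -0.4], 0.1), 3))  # -> 0.668
```

A real network just stacks millions of these and adjusts the weights from data; the "connections between information" live entirely in those learned weights.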
Are you an artist or a creative person?
I see anti-AI people say this stuff all the time too, because it's a convenient excuse to disregard an opposing opinion as "doesn't know art", failing to realize or respect that most people have some kind of creative spark and outlet. And I know it wasn't aimed at me, but before you think I'm dodging the question: I'm a creative working professionally with artists and designers.
Professional creative people and artists use AI too. A lot. Probably more than laypeople, because using it well and combining it with other interesting ideas requires a creative and inventive mind. There's a reason AI is making its way all over media: into movies, into games, into books. And I don't mean as AI slop, but as well-implemented, guided AI usage.
I could ask you as well if you’ve ever studied programming, or studied psychology, as those things would all make you more able to understand the similarities between artificial neural networks and biological neural networks. But I don’t need a box to disregard you, the substance of your argument fails to convince me.
At the end of the day, it does matter that humans have their own experiences to mix in. But AI can also store many, many more influences than a human brain can. That effectively means that for everything it makes, there is less in there from any one specific artist.
For example, the potential market effects of generating an automated system which uses people’s artwork to directly compete against them.
Fair use considerations do not apply to works that are so substantially different from any influence, only when copyrighted material is directly re-used. If you read Harry Potter and write your own novel about wizards, you do not have to credit nor pay royalties to JK Rowling, so long as it isn’t substantially similar. Without any additional laws prohibiting such, AI is no different. To sue someone over fair use, you typically do have to prove that it infringes on your work, and so far there have not been any successful cases with that argument.
Most negative externalities from AI come from capitalism: Greedy bosses thinking they can replace true human talent with a machine, plagiarists that use it as a convenient tool to harass specific artists, scammers that use it to scam people. But around that exists an entire ecosystem of people just using it for what it should be used for: More and more creativity.
You picked the wrong thread for a nuanced question on a controversial topic.
But it seems the UK indeed has laws for this already, if the article is to be believed, as they don't currently allow AI companies to train on copyrighted material. As far as I know, in some other jurisdictions a normal person would absolutely be allowed to pull a bunch of publicly available information, learn from it, and decide to make something new based on the objective information found within. And generally, that's the rationale AI companies used as well, seeing as there have been widely accepted landmark rulings in the past that computers analyzing copyrighted information is not copyright infringement, such as against Google for indexing copyrighted material in its search results. But perhaps an adjacent ruling was never accepted in the UK (which does seem strange, as Google does operate there). Laws are messy, though, perhaps there is an exception somewhere, and I'm certainly not an expert on UK law.
But people sadly don’t really come into this thread to discuss the actual details, they just see a headline that invokes a feeling of “AI Bad”, and so you coming in here with a reasonable question makes you a target. I wholly expect to be downvoted as well.
I like to just take the opportunity to roast the fuck out of old me for my terrible code and then sit content knowing that I am now making much more stupid but better hidden mistakes 😌
Stocks for what? AI? I can't have stocks in a technology. I could get stocks in companies that use AI, but the only ones on the stock market I'd rather die than give a single penny to, since they abuse the technology (and technology in general). But they are not the only ones using it. I'm not really a fan of stocks to begin with; profit-focused companies are a plague in my opinion.
Seems a bit strange to blame AI for this. Meta has always been garbage and has always used technology to its worst effects.
Well, countries with higher birthrates have a third outcome that is essentially negligible in those with lower birthrates: not making it to adulthood at all. Effectively, still fewer children end up becoming productive members of society. And on top of that, due to fewer available social services, a common goal of having children is that they survive to take care of the parents when they're older.
As soon as infant mortality becomes a non-factor, birthrates decline drastically as well. And since children are no longer largely seen as "life insurance" for when parents are older, and society's demands on productive members are higher as well, the focus really does shift to quality of life, and the two types of reasons to have kids become harder to compare. But even among developed nations you can see differences in fertility rates.
If you think that’s depressing, wait until you find out that it’s basically nothing in the grand scheme of things.
Most sources agree that we use about 4 trillion cubic meters of water every year worldwide (although that stat is most likely from 2015, so it will be bigger now). In 2022, using the stats here, Microsoft used 1.7 billion gallons per year and Google 5.56 billion gallons per year. In cubic meters that's only about 27.5 million combined, which is only about 0.00069% of worldwide water usage. Meanwhile, agriculture uses on average 70% of a country's daily fresh water.
Even if we just look at the US, since that's where Google and Microsoft are based: the US uses 322 billion gallons of water every day, which comes to about 445 billion cubic meters per year. Against that, the two companies' combined usage is still only about 0.00618%. So you could have roughly 160 more Googles and Microsofts before you even top a single percent.
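For anyone who wants to check the arithmetic, here's the conversion using the gallon figures quoted above (assuming US gallons):

```python
GALLON_M3 = 0.003785411784              # one US gallon in cubic meters

# Combined yearly usage of Google + Microsoft (2022 figures, in gallons)
google_microsoft_gal = 1.7e9 + 5.56e9
combined_m3 = google_microsoft_gal * GALLON_M3

world_m3 = 4e12                          # global yearly water use, ~2015
us_m3 = 322e9 * 365 * GALLON_M3          # 322 billion gallons per day

print(f"combined: {combined_m3 / 1e6:.1f} million m^3")   # 27.5 million m^3
print(f"share of world: {combined_m3 / world_m3:.5%}")    # 0.00069%
print(f"share of US:    {combined_m3 / us_m3:.5%}")       # 0.00618%
```

So even granting that the 4 trillion figure is a decade old, the two companies together sit several orders of magnitude below a single percent of either global or US usage.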
And as others have pointed out the water isn’t gone, there’s some cyclicality in how the water is used.
There is so much wrong with this…
AI is a range of technologies. So yes, you can make surveillance with it, just like you can with a computer program like a virus. But obviously not all computer programs are viruses nor exist for surveillance. What a weird generalization. AI is used extensively in medical research, so your life might literally be saved by it one day.
You're most likely talking about "Chat Control", a controversial EU proposal to scan, either on people's devices or on the providers' end, for dangerous and illegal content like CSAM. This is obviously a dystopian way to achieve that, as it sacrifices literally everyone's privacy, and there is plenty to be said about that without randomly dragging AI into it. You can do this scanning without AI as well, and that doesn't change anything about how dystopian it would be.
You should be using end-to-end encryption regardless, and a VPN is a good investment for making your traffic harder to discern, but if Chat Control is passed to operate on the device level, you are kind of boned without circumventing the software, which could itself be outlawed or made very difficult. It's clear on its own that Chat Control is a bad thing; you don't need some kind of conspiracy theory about "the true purpose of AI" to see that.
That's because you're using AI for the right things. As others have pointed out, if AI usage is enforced (like in the article), chances are it's not being used correctly. It's not a miracle cure for everything and should only be used where it's actually useful. It's great for brainstorming, and game development (especially on the indie side of things) really benefits from being able to produce more with less. Or are you using it for DnD?
Depends on what kind of AI enhancement. If it's just more stuff nobody needs that solves no problem, rejecting it is a no-brainer. But for computer graphics, for example, DLSS is a feature people do appreciate, because it makes sense to apply AI there. Who doesn't want faster and perhaps better graphics from AI rather than brute force, which also saves on electricity costs?
But that isn't the kind of thing most people on a survey would even think of, since the benefit is readily apparent and doesn't even need to be explicitly sold as "AI". They're most likely thinking of the kind of product where the manufacturer put an "AI powered" sticker on it because their stakeholders told them it would increase sales, or because it allowed them to overstate the product's value.
Of course people are going to reject white-collar scams if they think that's what "AI enhanced" means. If legitimate use cases with clear advantages are produced, it will speak for itself, and I don't think people would be opposed. But obviously there are a lot more companies that want to ride the AI wave than there are legitimate use cases, so there will be quite some snake oil being sold.
Can I add 4. the integrated video downloader actually downloads videos, in whatever format you want, with no internet connection required to watch them. This to me is still the biggest scam "feature" of Youtube Premium. You can '''download''' videos, but not as e.g. an mp4, only as an encrypted file playable inside the Youtube app, and only if you've connected to the internet in the last couple of days.
That’s not downloading, that’s just jacking my disk space to avoid buffering the video from Youtube’s servers. That’s not a feature, that’s me paying for Youtube’s benefit.
I cancelled and haven’t paid for Premium in years because of it. When someone scams me out of features I paid for, I don’t reward that shit until they either stop lying in their feature list, or actually start delivering.