• 0 Posts
  • 44 Comments
Joined 2 years ago
Cake day: June 15th, 2023

  • Can I add 4. The integrated video downloader actually downloads videos, in whatever format you want, with no internet connection required to watch them. This to me is still the biggest scam ‘feature’ of YouTube Premium. You can “download” videos, but not as e.g. an mp4 — only as an encrypted file playable inside the YouTube app, and only if you’ve connected to the internet in the last couple of days.

    That’s not downloading, that’s just jacking my disk space to avoid buffering the video from Youtube’s servers. That’s not a feature, that’s me paying for Youtube’s benefit.

    I cancelled and haven’t paid for Premium in years because of it. When someone scams me out of features I paid for, I don’t reward that shit until they either stop lying in their feature list, or actually start delivering.



  • This is the inevitable end game of some groups of people trying to make AI usage taboo through anger and intimidation, without room for reasonable disagreement. The ones devoid of morals and ethics will use it to their heart’s content and would never engage with your objections anyway, and when the general public is ignorant of what it is and what it can really do, people get taken advantage of.

    Support open source and ethical usage of AI, where artists, creatives, and those with good intentions are not caught up in your legitimate grievances with corporate greed, totalitarians, and the like. We can’t reasonably make it go away, but we can reduce harmful use of it.


  • While there are spaces that are luckily still looking at it neutrally and objectively, there are definitely leftist spaces where AI hatred has snuck in, even to a reality-denying degree where lies about what AI is or isn’t have taken hold, and where providing facts to refute such things is rejected and met with hate and shunning, purely because it goes against the norm.

    And I can’t help but agree that they are being played, so that the only AI technology that will eventually be feasible will not be open source, and will be in the control of the very companies left-leaning folks dislike or hate.








  • Outside of the marketing labels of “artificial intelligence” and “machine learning”, it’s nothing like real intelligence or learning at all.

    Generative AI uses artificial neural networks, which are modeled on how we understand brains to connect information (biological neural networks). You’re right that they have no self-generated input like humans do, but the way they make connections between pieces of information is very similar. It doesn’t really matter that they don’t have their own experiences, because they are not trying to be humans; they are trying to be as flexible a ‘mind’ as possible.

    Are you an artist or a creative person?

    I see anti-AI people say this stuff all the time too. Because it’s a convenient excuse to disregard an opposing opinion as ‘doesn’t know art’, failing to realize or respect that most people have some kind of creative spark and outlet. And I know it wasn’t aimed at me, but before you think I’m dodging the question, I’m a creative working professionally with artists and designers.

    Professional creative people and artists use AI too. A lot. Probably more than laypeople, because using it well and combining it with other interesting ideas requires a creative and inventive mind. There’s a reason AI is making its way all over media, into movies, into games, into books. And I don’t mean as AI slop, but as well-implemented, guided AI usage.

    I could ask you as well whether you’ve ever studied programming or psychology, as those would help you understand the similarities between artificial neural networks and biological neural networks. But I don’t need a box to disregard you; the substance of your argument simply fails to convince me.

    At the end of the day, it does matter that humans have their own experiences to mix in. But AI can also store many, many more influences than a human brain can. That effectively means that for everything it makes, there is less of any one specific artist’s work in it.

    Take, for example, the potential market effect of an automated system that uses people’s artwork to compete directly against them.

    Fair use considerations don’t apply to works that are substantially different from any of their influences; they only come into play when copyrighted material is directly re-used. If you read Harry Potter and write your own novel about wizards, you do not have to credit or pay royalties to JK Rowling, so long as it isn’t substantially similar. Without additional laws prohibiting it, AI is no different. To sue someone over fair use, you typically have to prove that it infringes on your work, and so far there have not been any successful cases with that argument.

    Most negative externalities from AI come from capitalism: Greedy bosses thinking they can replace true human talent with a machine, plagiarists that use it as a convenient tool to harass specific artists, scammers that use it to scam people. But around that exists an entire ecosystem of people just using it for what it should be used for: More and more creativity.
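For readers unfamiliar with the analogy a few paragraphs up: an artificial ‘neuron’ is just a weighted sum of its inputs pushed through a squashing function, loosely mirroring how biological neurons accumulate signals and fire. Here is a toy sketch in Python — the weights are made up purely for illustration; real networks learn theirs from data, and generative models have billions of them:

```python
import math

def sigmoid(x):
    # Squash any input into (0, 1), loosely analogous to a neuron "firing".
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    # A single artificial neuron: weighted sum of inputs, then activation.
    return sigmoid(sum(i * w for i, w in zip(inputs, weights)) + bias)

def tiny_network(inputs):
    # Two hidden neurons feeding one output neuron (hypothetical weights).
    h1 = neuron(inputs, [0.5, -0.6], 0.1)
    h2 = neuron(inputs, [-0.3, 0.8], 0.0)
    return neuron([h1, h2], [1.2, -0.7], 0.2)

print(tiny_network([1.0, 0.0]))  # some value between 0 and 1
```

The “connections between information” are exactly these learned weights between units — vastly scaled up in a real model, but the same basic idea.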


  • You picked the wrong thread for a nuanced question on a controversial topic.

    But it seems the UK indeed already has laws for this, if the article is to be believed, as they don’t currently allow AI companies to train on copyrighted material. As far as I know, in some other jurisdictions a normal person would absolutely be allowed to pull a bunch of publicly available information, learn from it, and make something new based on the objective information found within. That’s generally the rationale AI companies used as well: there have been widely accepted landmark rulings that computers analyzing copyrighted information is not copyright infringement, such as the case against Google for indexing copyrighted material in its search results. Perhaps an adjacent ruling was never accepted in the UK (which does seem strange, as Google operates there). But laws are messy, perhaps there is an exception somewhere, and I’m certainly not an expert on UK law.

    But people sadly don’t really come into this thread to discuss the actual details, they just see a headline that invokes a feeling of “AI Bad”, and so you coming in here with a reasonable question makes you a target. I wholly expect to be downvoted as well.







  • There is so much wrong with this…

    AI is a range of technologies. So yes, you can make surveillance with it, just like you can with a computer program like a virus. But obviously not all computer programs are viruses nor exist for surveillance. What a weird generalization. AI is used extensively in medical research, so your life might literally be saved by it one day.

    You’re most likely talking about “Chat Control”, a controversial EU proposal to scan for dangerous and illegal content like CSAM, either on people’s devices or on providers’ ends. This is obviously a dystopian way to achieve that, as it sacrifices literally everyone’s privacy, and there is plenty to be said about it without randomly dragging AI in. You can do this scanning without AI as well, and that doesn’t change anything about how dystopian it would be.

    You should be using end-to-end encryption regardless, and a VPN is a good investment for making your traffic harder to discern, but if Chat Control passes at the device level you are kind of boned without circumventing the software, which would potentially be outlawed or made very difficult. It’s clear on its own that Chat Control is a bad thing; you don’t need some conspiracy theory about ‘the true purpose of AI’ to see that.



  • Depends on what kind of AI enhancement. If it’s just more features nobody needs that solve no problem, it’s an easy no. But for computer graphics, for example, DLSS is a feature people do appreciate, because it makes sense to apply AI there. Who doesn’t want faster and perhaps better graphics from AI rather than brute force, which also saves on electricity costs?

    But that isn’t the kind of things most people on a survey would even think of since the benefit is readily apparent and doesn’t even need to be explicitly sold as “AI”. They’re most likely thinking of the kind of products where the manufacturer put an “AI powered” sticker on it because their stakeholders told them it would increase their sales, or it allowed them to overstate the value of a product.

    Of course people are going to reject white-collar scams if they think that’s what “AI enhanced” means. If legitimate use cases with clear advantages are delivered, they will speak for themselves, and I don’t think people would be opposed. But obviously, there are a lot more companies that want to ride the AI wave than there are legitimate use cases, so there will be quite some snake oil being sold.