  • https://openai.com/index/how-openai-is-approaching-2024-worldwide-elections/

    Here is a direct quote from openai:

    “In addition to our efforts to direct people to reliable sources of information, we also worked to ensure ChatGPT did not express political preferences or recommend candidates even when asked explicitly.”

    It’s not a conspiracy. It was explicitly their policy not to have the AI discuss these subjects in meaningful detail leading up to the election, even when the facts were not up for debate. Anyone using GPT during that period was unlikely to receive meaningful information on anything Trump-related, such as the legitimacy of Biden’s election. I know because I tried.

    This is ostensibly there to protect voters from fake news. I’m sure it does in some cases, but I’m sure China would say the same thing.

    I’m not pro China, I’m suggesting that every country engages in these shenanigans.

    Edit: it is obvious that a 100-billion-dollar company like OpenAI, with its multitude of partnerships with news companies, could have made GPT communicate accurate and genuinely critical news regarding Trump, but that would be bad for business.

  • Something which clarified Zuck’s behavior in my mind was an interview where he said something along the lines of, “I could sell meta for x amount of dollars, but then I’d just start another company anyways, so I might as well not.”

    The guy isn’t doing what makes financial sense. He’s uber-rich and working on whatever projects he thinks are cool. I wish Zuck would stop sucking in all his other ways, but he just doesn’t care whether his ideas succeed or not.

  • It’s worth mentioning that in this instance the guy did send porn to a minor. This isn’t exactly a cut-and-dried “guy used Stable Diffusion wrong” case. He was distributing it and grooming a kid.

    The major concern to me is that there isn’t really any guidance from the FBI on what you can and can’t do, which may lead to some big issues.

    For example, websites like NovelAI make a business out of providing pornographic, anime-style image generation. The models they use are deliberately tuned to provide abstract, “artistic” styles, but they can generate semi-realistic images.

    Now, let’s say a criminal group uses NovelAI to produce CSAM of real people via the inpainting tools. Let’s say the FBI casts a wide net and begins surveillance of NovelAI’s userbase.

    Is every person who goes on there and types “loli” or “Anya from Spy x Family, realistic, NSFW” (that’s an underage character) going to get a letter in the mail from the FBI? I feel like it’s within the realm of possibility. What about “teen girls gone wild, NSFW”? Or “young man, no facial or body hair, naked, NSFW”?

    This is NOT a good scenario, imo. The systems used to produce harmful images are the same systems used to produce benign or borderline images. It’s a dangerous mix, and it throws the whole enterprise into question.