Perhaps now it is, but leading up to the election, I found GPT would outright refuse to discuss Trump in voice mode. Meta AI too. It was very frustrating. It would start, and then respond with something like, “I’m not able to talk about that, yet.”
https://www.wired.com/story/google-and-microsofts-chatbots-refuse-election-questions/
There are plenty of examples of AI either refusing to discuss subjects around the elections (I remember Meta AI basically just saying, “I’m learning how to respond to these questions”) or, as in the above case, just hand-waving away clear issues of wrongdoing.
ChatGPT’s Advanced Voice Mode would constantly trip its guardrails when asked about Trump or “politically charged” topics.
Incidentally, no Western AI would make a statement on Donald Trump’s crimes leading up to the election. AI propaganda is a serious issue. In China the government enforces it; in America, billionaires do.
I frequently forget that Chrome is installed on my phone. The only time I’m forced to use it is about once a year when I order Papa John’s Pizza takeout. Their checkout page doesn’t seem to work in any other browser.
Something which clarified Zuck’s behavior in my mind was an interview where he said something along the lines of, “I could sell Meta for X amount of dollars, but then I’d just start another company anyway, so I might as well not.”
The guy isn’t doing what makes financial sense. He’s uber-rich and working on whatever projects he thinks are cool. I wish Zuck would stop sucking in all his other ways, but he just doesn’t care whether his ideas are going to succeed or not.
Congratulations! … [Cries in Florida man tears]
I actually don’t think this is shocking or something that needs to be “investigated.” Other than the sketchy website that doesn’t secure users’ data, that is.
Actual child abuse / grooming happens on social media, chat services, and local churches, not in a one-on-one between a user and an LLM.
Why is everyone so mad about this? I mean, it’s a salty article, but yeah, it kinda sucks when publications don’t give notice before closing down. I think providing the public, including previous contributors, time to archive content is a good practice.
I have to say after watching the videos, boo to the corpo for the weird exploitative lies, but kudos to the two women for staying in character! They legit put effort into moving like the real robots around them, and all while in what were probably uncomfortable costumes. I hope they get positive social media attention!
Grok 2 uses the image model “Flux,” made by Black Forest Labs. You too can download the model and run it locally on a moderately expensive gaming PC, or use it for free at https://huggingface.co/spaces/black-forest-labs/FLUX.1-dev, among other places.
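For the run-it-locally route, here’s a minimal sketch using Hugging Face’s diffusers library (this assumes you’ve installed diffusers and torch and accepted the FLUX.1-dev license on Hugging Face; exact VRAM needs vary by setup):

```python
# Minimal sketch: generating an image with FLUX.1-dev via diffusers.
# Assumes diffusers + torch are installed and the model license is accepted.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # offload layers to CPU so it fits on a consumer GPU

image = pipe(
    "a photo of a forest at dawn",
    guidance_scale=3.5,
    num_inference_steps=50,
).images[0]
image.save("flux_out.png")
```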
I find it funny that the most newsworthy component of this product is made and distributed for free by a completely unrelated company. This is manufactured outrage by Musk as a ploy to seem relevant in the AI space. All he did was put a free thing behind a paywall.
Please don’t. Children’s media is already flooded with AI-generated fluff. You won’t make any money on it.
Can you be in a Steam Family group with a dead person?
It’s worth mentioning that in this instance the guy did send porn to a minor. This isn’t exactly a cut-and-dried “guy used Stable Diffusion wrong” case. He was distributing it and grooming a kid.
The major concern to me is that there isn’t really any guidance from the FBI on what you can and can’t do, which may lead to some big issues.
For example, websites like NovelAI make a business out of providing pornographic, anime-style image generation. The models they use are deliberately tuned to provide abstract, “artistic” styles, but they can generate semi-realistic images.
Now, let’s say a criminal group uses NovelAI to produce CSAM of real people via the inpainting tools. Let’s say the FBI casts a wide net and begins surveillance of NovelAI’s user base.
Is every person who goes on there and types “loli” or “Anya from Spy x Family, realistic, NSFW” (that’s an underage character) going to get a letter in the mail from the FBI? I feel like it’s within the realm of possibility. What about “teen girls gone wild, NSFW”? Or “young man, no facial or body hair, naked, NSFW”?
This is NOT a good scenario, imo. The systems used to produce harmful images are the same systems used to produce benign or borderline images. It’s a dangerous mix, and it throws the whole enterprise into question.
I believe LibreChat would achieve your goals, but you’d need a PC or server to host it on. It supports all the major APIs.
Kobold Lite might work as well, and doesn’t need to be hosted locally, but I don’t think it supports Claude Haiku specifically, for unknown reasons.
Additionally, the official Claude API Workbench is pretty good on desktop, but it only supports Claude.
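If you want to sanity-check Claude access outside any of these frontends, the raw API call is simple. A sketch using Anthropic’s official Python SDK (assumes `pip install anthropic` and an API key in your environment; the model ID is an example, so check Anthropic’s docs for current ones):

```python
# Sketch: calling Claude directly with Anthropic's official Python SDK.
# Assumes ANTHROPIC_API_KEY is set in the environment.
import anthropic

client = anthropic.Anthropic()  # picks up ANTHROPIC_API_KEY automatically

message = client.messages.create(
    model="claude-3-haiku-20240307",  # example model ID
    max_tokens=256,
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(message.content[0].text)
```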
I mean, be careful. These LLMs can be honeypots for data. Like, if you’re using one for cover letters or work, you’re sending tons of personal info to random websites.
I would recommend sticking to actual, reputable vendors for LLMs, or running your own. I have a GTX 1070 and can run some pretty decent models locally these days using KoboldAI.
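Once KoboldAI (or KoboldCpp) is running locally, nothing you type leaves your machine. A sketch of querying the default local endpoint (port 5001 is the usual default, but adjust to your own setup):

```python
# Sketch: querying a locally running KoboldAI/KoboldCpp instance.
# Assumes the default local endpoint; adjust the port to your setup.
import requests

payload = {
    "prompt": "Write a haiku about running models locally.",
    "max_length": 80,      # number of tokens to generate
    "temperature": 0.7,
}
resp = requests.post("http://localhost:5001/api/v1/generate", json=payload)
resp.raise_for_status()
print(resp.json()["results"][0]["text"])
```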
Bing is probably the only way to use GPT-4 without paying for it, and Microsoft probably won’t steal your bank account info.
There is almost no chance that it is truthfully based on GPT-4. If you want a free, open-source LLM with 32k context and generous limits, I recommend using huggingface.co/chat/
The Nous-Hermes model (you can select different models) is uncensored and performs really well for an open-source model. Plus, they have data controls so you can turn off data gathering per model. Hugging Face is a reputable vendor, and doesn’t claim to be something it isn’t.
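If you’d rather hit the same kind of model programmatically instead of through the chat UI, Hugging Face’s client library can do it. A sketch (the model ID is just an example; availability changes over time and you may need a free HF token):

```python
# Sketch: chatting with an open model via Hugging Face's inference client.
# Assumes `pip install huggingface_hub`; the model ID is an example and
# may require a free HF token or change in availability.
from huggingface_hub import InferenceClient

client = InferenceClient("NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO")

response = client.chat_completion(
    messages=[{"role": "user", "content": "What does 32k context mean?"}],
    max_tokens=200,
)
print(response.choices[0].message.content)
```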
This feels… scammy? Not to be accusatory, but GPT-4 is expensive to run. There’s no way it can be offered to people for free.
What LLM is actually providing the response here? Either someone is footing the bill for an API and acting as a proxy, a situation which raises many red flags, or the model you’re talking to is something far cheaper to run, like a Mistral model.
Even the second case is sketchy. 😅
https://openai.com/index/how-openai-is-approaching-2024-worldwide-elections/
Here is a direct quote from OpenAI:
“In addition to our efforts to direct people to reliable sources of information, we also worked to ensure ChatGPT did not express political preferences or recommend candidates even when asked explicitly.”
It’s not a conspiracy. It was explicitly their policy not to have the AI discuss these subjects in meaningful detail leading up to the election, even when the facts were not up for debate. Everyone using GPT during that period was unlikely to receive meaningful information on anything Trump-related, such as the legitimacy of Biden’s election. I know because I tried.
This is ostensibly there to protect voters from fake news. I’m sure it does in some cases, but I’m sure China would say the same thing.
I’m not pro-China; I’m suggesting that every country engages in these shenanigans.
Edit: it is obvious that a $100 billion company like OpenAI, with its multitude of partnerships with news companies, could have made GPT communicate accurate and genuinely critical news regarding Trump, but that would be bad for business.