AI one-percenters seizing power forever is the real doomsday scenario, warns AI godfather::The real risk of AI isn’t that it’ll kill you. It’s that a small group of billionaires will control the tech forever.
Business Insider warning about late-stage capitalism feels more than a little ironic.
As does being warned of technological oligarchs monopolizing AI by someone who works for fucking Meta.
And he’s the reason we can all fuck around with Llama models despite that fact. Props to Yann and the other Meta AI researchers. Also eager to see future JEPA stuff.
Today on PBS, we got an insider warning from a lifelong Republican that the fascism got out of hand and is going for full autocracy, even though he’d been pushing through pro-fash policies for the last thirty years.
Everyone thinks The One Ring will be theirs to control.
And in other news, the Leopards Eating Faces Party continues to eat faces, confusing Leopards Eating Faces voters…
Was that the Adam Kinzinger one? It’s a low bar, but I’ll give him a modicum of credit for saying his vote against the first impeachment was cowardice and that he’d vote for Biden in 2024 if Trump is the Republican nominee. Doesn’t totally feel like a lesson learnt that he still considers himself a Republican, though.
They should rename themselves to Business Balls Deep Insider.
Business Insider is run by college students making minimum wage.
That’s how they got inside.
This is why we need large-scale open-source AI efforts, even if it scares the everliving shit out of me.
I’ve been thinking about how to do that. The code for most AI is pretty basic and uninteresting; it’s mostly transforming input data into something usable. Companies could open source their entire code base without letting anything important out.
The dataset is the real problem. Say you want to classify fruit to check if it’s ripe enough for harvesting. You’ll need a whole lot of pictures of your preferred fruit where it’s both ripe and not ripe. You’ll want people who know the fruit to classify those images, and then you can feed it into a model. It’s a lot of work, and it needs to attract a bunch of people to volunteer their time, largely the sort of people who haven’t traditionally been part of open source software.
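To make the point concrete, here’s a minimal sketch of what the model side could look like once volunteers have labeled the images. It assumes a hypothetical folder layout of `data/ripe/` and `data/unripe/` and fine-tunes a pretrained ResNet with PyTorch; none of this comes from the article, it’s just an illustration:

```python
# Minimal sketch: fine-tune a pretrained ResNet to classify ripe vs. unripe fruit.
# Assumes volunteer-labeled images laid out as (hypothetical paths):
#   data/ripe/*.jpg
#   data/unripe/*.jpg
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# ImageFolder turns each subdirectory name into a class label.
dataset = datasets.ImageFolder("data", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))  # 2 classes

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```

Which is sort of the point: the script fits on a napkin, while the thousands of honestly labeled photos behind `data/` are the actual investment.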
If we set up some kind of blockchain to just pay people to honestly differentiate between pictures, it could be done.
There is no problem in this world so serious that someone will not suggest blockchain as a potential solution.
You’re being hyperbolic and silly. Find me a solution to mass shootings or racism using blockchain.
Nah, using reCAPTCHA is the way to get free labor for that training.
Fair
AI safety experts are worried that capitalists will be too eager to get AGI first and will discard caution (friendly AI principles) for mad science.
And I, for one, welcome our new robot overlords!
Any AI safety expert who believes these oligarchs are going to get AGI and not some monkey’s paw is also drinking the Kool-Aid.
Actually, AI safety experts are worried that corporations are just interested in getting technology that achieves specific ends, and don’t care that it’s dangerous or insufficiently tested. Our rate of industrial disasters kinda demonstrates their views regarding risk.
For now, we are careening towards giving smart drones autonomy to detect, identify, target and shoot weapons at enemies long before they’re smart enough to build flat-packed furniture from the IKEA visual instructions.
If we have to choose between corporations or the government ruling us with AI I think I’m gonna just take a bullet.
Anarchy will never exist as anything but the exception to the rule. Governments are a form of power that the population can at least influence; a weaker government will always mean either stronger nobility or stronger corporations.
We’re failing at influencing now.
You may think you’re choosing the best yoke, but I’d prefer none.
Maybe in the future we can go back to smaller tribes/groups of people that take care of each other, but in the world as it exists today? An entity will come by sooner or later to conquer said group. We influence our government FAR better than we influence a corporation or dictator. Right now we need an equalizing big power, and at least with democratic governments, these big powers at least have to pretend to work for their people. Which, again, corporations and dictators do not
Might be one of the key democratizing forces us plebs will have…I do suggest people try out some of the open solutions out there already just to have that skill in their back pockets (e.g. GPT4All).
Yep. As dangerous as that could be, it’s better than centralizing it. There are already systems like GPT4All that come with good models that are slower than things like ChatGPT but work similarly well.
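If anyone wants to try it, here’s roughly what local inference looks like with the `gpt4all` Python bindings. This is a sketch, and the model filename is just an example (the library downloads whichever model you name on first use):

```python
# Sketch of local inference with the gpt4all Python package (pip install gpt4all).
# The model filename below is an example; it's downloaded on first use.
from gpt4all import GPT4All

model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")  # example model name

with model.chat_session():
    reply = model.generate(
        "Explain why locally run LLMs matter, in one paragraph.",
        max_tokens=200,
    )
    print(reply)
```

Everything runs on your own CPU or GPU; nothing leaves your machine.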
God, I can’t stand these people who are basically only worried about AI’s effect on the stock market. No normal person would even notice. We have more realistic issues with AI.
Sure AI is going to kill us all, but what about the Dow?!
Raytheon is going to make a killing selling terminators!!! BUY!BUY!BUY!
Yann LeCun the Godfather of AI? He feels more like a Fredo to me.
This is the best summary I could come up with:
He named OpenAI’s Sam Altman, Google DeepMind’s Demis Hassabis, and Anthropic’s Dario Amodei in a lengthy weekend post on X.
“Altman, Hassabis, and Amodei are the ones doing massive corporate lobbying at the moment,” LeCun wrote, referring to these founders’ role in shaping regulatory conversations about AI safety.
That’s significant since, as almost everyone who matters in tech agrees, AI is the biggest development in technology since the microchip or the internet.
Altman, Hassabis, and Amodei did not immediately respond to Insider’s request for comment.
Thanks to @RishiSunak & @vonderleyen for realizing that AI xrisk arguments from Turing, Hinton, Bengio, Russell, Altman, Hassabis & Amodei can’t be refuted with snark and corporate lobbying alone.
In March, more than 1,000 tech leaders, including Elon Musk, Altman, Hassabis, and Amodei, signed a letter calling for a minimum six-month pause on AI development.
Those risks include worker exploitation and data theft that generates profit for “a handful of entities,” according to the Distributed AI Research Institute (DAIR).
The original article contains 768 words, the summary contains 163 words. Saved 79%. I’m a bot and I’m open source!
Well we know that, but anybody who does anything less than clap and sing about it gets treated like trash by the huge wave of people who immediately trusted the crazy thing with their lives. It’s the fucking iPhone all over again. So hooray for AI.
Yeah, my own Dad calls me an “activist” now (in a derogatory manner). I never leave my house most days… But okay. I’m an activist because I think AI is a tangible threat to the working class. I’ve said only a few sentences to my Dad about it. But yeah… I guess I’m the problem for not finding some creative way to profit off LLMs yet.
Me running various models that outperform GPT or Bard just fine on a 4080: 👌👍
That’s great. Now try training that model on a 4080 and you’ll see it takes significantly longer. Try amassing the data needed for training on your home PC and see how much longer still you’ll need. There’s a reason the current race is down to just a few companies: it costs pennies to run queries on an existing model, but millions to build and train that model in the first place.
No one can fucking run it locally right now; only people who have 1%er money can run it.
Uhh what? You can totally run LLMs locally.
Inference, yes. Training, no. Derived models don’t count.
I have Llama 2 running on localhost, you need a fairly powerful GPU but it can totally be done.
I’ve run one of the smaller models on my i7-3770 with no GPU acceleration. It is painfully slow but not unusably slow.
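For the curious, CPU-only inference like that is only a few lines with something like `llama-cpp-python`. A sketch, assuming you’ve already downloaded a quantized GGUF model file (the path below is hypothetical):

```python
# Sketch: CPU-only inference with llama-cpp-python (pip install llama-cpp-python).
# model_path points at a quantized GGUF file you've downloaded yourself
# (hypothetical path); smaller quantizations run, slowly, even on an old i7.
from llama_cpp import Llama

llm = Llama(model_path="./models/llama-2-7b-chat.Q4_K_M.gguf", n_ctx=2048)

out = llm("Q: Can an old CPU run a 7B model? A:", max_tokens=64)
print(out["choices"][0]["text"])
```

The heavier the quantization, the faster (and dumber) it gets; that’s the trade-off on old hardware.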
To get the same level as something like ChatGPT?