To be fair, it is useful in some regards.
I’m not a huge fan of Amazon, but last time I had an issue with a parcel it was sorted out insanely fast by the AI assistant on the website.
Within literally 2 minutes I’d had a refund confirmed. No waiting for people to eventually pick up the phone after 40 minutes. No misunderstanding or annoying questions. The moment I pressed send on my message it instantly started formulating a reply.
The truncated version went:
“Hey I meant to get [x] delivery, but it hasn’t arrived. Can I get a refund?”
“Sure, your money will go back into [y] account in a few days. If the parcel turns up in the meantime, you can send it back by dropping it off at [z].”
Done. Absolutely painless.
How is a chatbot here better, faster, or more accurate than just a “return this” button on a web page? Chatbots like that take 10x the programming effort and actively make the user experience worse.
Presumably there could be nuance to the situation that the chat bot is able to convey?
That has nothing to do with AI and is strictly a return policy matter. You can get a return in less than 2 minutes by speaking to a human at Home Depot.
Businesses choose to either prioritize customer experience, or not.
There’s a big claim from Klarna, one that I’m not aware has been independently verified, that customers prefer their bot.
The cynic might say they were probably running a skeleton crew of undertrained, underpaid support reps. More optimistically, perhaps so many support inquiries are simple enough that answering them with a technology that can type a million words per minute would plausibly increase customer satisfaction.
Personally, I’m happy with environmentally acceptable and efficient technologies that respect consumers… assuming they are deployed in a world with robust social safety nets like universal basic income. Heh
You can just go to the order and click like 2 buttons. Chat is for when a situation is abnormal, and I promise you their bot doesn’t know how to address anything like that.
I like using it to assist me when I am coding.
Do you feel like elaborating any? I’d love to find more uses. So far I’ve mostly found it useful in areas where I’m very unfamiliar. Like I do very little web front end, so when I need to, the option paralysis is gnarly. I’ve found things like Perplexity helpful to allow me to select an approach and get moving quickly. I can spend hours agonizing over those kinds of decisions otherwise, and it’s really poorly spent time.
I’ve also found it useful when trying to answer questions about best practices or comparing approaches. It sorta does the reading and summarizes the points (with links to source material), pretty perfect use case.
So both of those are essentially “interactive text summarization” use cases - my third is as a syntax helper, again in things I don’t work with often. If I’m having a brain fart and just can’t quite remember the ternary operator syntax in that one language I never use…etc. That one’s a bit less impactful but can still be faster than manually inspecting docs, especially if the docs are bad or hard to use.
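To illustrate that syntax-helper case, here’s the kind of detail I mean: Python’s conditional expression puts the result first, unlike the C-style `cond ? a : b`, which is exactly the sort of thing that’s easy to half-remember.

```python
# Python's "ternary": <value_if_true> if <condition> else <value_if_false>
# (C-family languages write the same thing as: condition ? a : b)
n = 10
label = "even" if n % 2 == 0 else "odd"
print(label)  # prints "even"
```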
With that said I use these things less than once a week on average. Possible that’s just down to my own pre-existing habits more than anything else though.
So how “intelligent” do you think the amazon returns bot is? As smart as a choose-your-own-adventure book, or a gerbil, or a human or beyond? Has it given you any useful life advice or anything?
Is it me or is there something very facile and dull about Gartner charts? Thinking especially about the “””magic””” quadrants one (wow, you ranked competitors in some area along TWO axes!), but even this chart feels like such a mundane observation that it seems like frankly undeserved advertising for Gartner, again, given how little it actually says.
We should be using AI to pump the web with nonsense content that later AI will be trained on as an act of sabotage. I understand this is happening organically; that’s great and will make it impossible to just filter out AI content and still get the amount of data they need.
Force the AI folks to dev accurate AI detection tools to screen their input
LLMs need to get better at saying “I don’t know.” I would rather an LLM admit that it doesn’t know the answer instead of making up a bunch of bullshit and trying to convince me that it knows what it’s talking about.
I work on LLMs for a big tech company. The misinformation on Lemmy is at best slightly disingenuous, and at worst people parroting falsehoods without knowing the facts. For that reason, take everything (even what I say) with a huge pinch of salt.
LLMs do NOT just parrot back falsehoods; otherwise the “best” model would simply be whichever one had the “best” data and the best fit. The best way to think about an LLM is as a huge conductor of data AND guiding expert services. The content is derived from trained data, but it will also hit hundreds of different services to get context, find real-time info, disambiguate, etc. A huge part of LLM work is getting your models to basically say “this feels right, but I need to find out more to be correct”.
With that said, I think you’re 100% right. Sadly, and I think I can speak for many companies here, knowing when you’re right is hard to get right, and LLMs are probably right in a lot of instances where their confidence in an answer is low. I would rather an LLM say “I can’t verify this, but here is my best guess” or “here’s a possible answer, let me go away and check”.
I hate to break this to everyone who thinks that “AI” (LLM) is some sort of actual approximation of intelligence, but in reality, it’s just a fucking fancy ass parrot.
Our current “AI” doesn’t understand anything or have context, it’s just really good at guessing how to say what we want it to say… essentially in the same way that a parrot says “Polly wanna cracker.”
A parrot “talking to you” doesn’t know that Polly refers to itself or that a cracker is a specific type of food you are describing to it. If you were to ask it, “which hand was holding the cracker…?” it wouldn’t be able to answer the question… because it doesn’t fucking know what a hand is… or even the concept of playing a game or what a “question” even is.
It just knows that if it makes its mouth go “blah blah blah” in a very specific way, a human is more likely to give it a tasty treat… so it mushes its mouth parts around until its squawk becomes a sound that will elicit such a reward from the human in front of it… which is similar to how LLM “training models” work.
Oversimplification, but that’s basically it… a trillion-dollar power-grid-straining parrot.
And just like a parrot - the concept of “I don’t know” isn’t a thing it comprehends… because it’s a dumb fucking parrot.
The only thing the tech is good at… is mimicking.
It can “trace the lines” of any existing artist in history, and even blend their works, which is indeed how artists learn initially… but an LLM has nothing that can “inspire” it to create the art… because it’s just tracing the lines like a child would their favorite comic book character. That’s not art. It’s mimicry.
It can be used to transform your own voice to make you sound like most celebrities almost perfectly… it can make the mouth noises, but has no idea what it’s actually saying… like the parrot.
You get it?
Third, we see a strong focus on providing AI literacy training and educating the workforce on how AI works, its potentials and limitations, and best practices for ethical AI use. We are likely to have to learn (and re-learn) how to use different AI technologies for years to come.
Useful?!? This is a total waste of time, energy, and resources for worthless chatbots.
Useful in the way that it increases emissions and hopefully leads to our demise because that’s what we deserve for this stupid technology.
Surely this is better than the crypto/NFT tech fad. At least there is some output from the generative AI that could be beneficial to the whole of humankind rather than lining a few people’s pockets?
Unfortunately crypto is still somehow a thing. There is a couple-year-old Bitcoin mining facility in my small town that brags about consuming 400 MW of power to operate, and it is solely owned by a Chinese company.
It takes living with a broken system to understand the fix for it. There are millions of people who have been saved by Bitcoin and the freedom that it brings, they are just mainly in the 2nd and 3rd worlds, so to many people they basically don’t exist.
I recently noticed a number of bitcoin ATMs that have cropped up where I live - mostly at gas stations and the like. I am a little concerned by it.
Do you really think that paper money covered in colonizers and other slavemasters is going to last forever?
lol
Forever? No, of course not.
But paper currency is backed by a nation state, so I’m betting it’ll be around a bit longer than a purely digital asset without the backing of a nation, driven entirely by speculation.
I’m not even anti-crypto. It was novel idea when it was actually used entirely as a currency, but that hasn’t been true for quite some time.
Found the diamond hands.
Cryptocurrencies are still backed by and dependent on those same currencies. And their value is incredibly unstable, making them largely useless except as a speculative investment for day traders. Bitcoin may as well be Dogecoin or Bored Ape NFTs as far as the common person is concerned.
I hope your coins haven’t seen a 90%+ drop in value in the past 4 years like the vast majority have.
Cryptos obviously have serious issues, but so do fiat currencies. In fact all implementations of money have one problem or another. It’s almost like it’s a difficult thing to get right and that maybe it was a bad idea in the first place.
I hope this is sarcasm.
I’m crypto neutral.
But it’s really strange how anti-crypto ideologues don’t understand that the system of states printing money is literally destroying the planet. They can’t see the value of a free, fair, decentralized, automatable accounting system?
Somehow delusional chatbots wasting energy and resources are more worthwhile?
I’m fine doing away with physical dollars printed on paper and coins, but crypto seems to solve none of the problems that we have with a fiat currency. Instead it continues to consume unnecessary amounts of energy while being driven by rich investors who would love nothing more than to spend and earn money in an untraceable way.
Printing currency isn’t destroying the planet…the current economic system is doing that, which is the same economic system that birthed crypto.
Governments issuing currency goes back to a time long before our current consumption at all cost economic system was a thing.
You are right, crypto has nothing to do with currency printing. And yes, the environmental side is a problem too (unless it runs on renewable energy). But governments issuing currency is a relatively recent phenomenon. Historically, people traded de facto currencies and IOUs amongst themselves.
Bitcoin was conceived out of the 2008 financial crisis, as a direct response to big banks being bailed out. It’s literally written in Bitcoin’s Genesis block. The point of Bitcoin has always been to free people from the tyranny of big government AND big capital.
Crypto isn’t that popular in developed countries with functioning monetary systems… until, of course, those big institutions fail. I am still quite surprised this side of Bitcoin is rarely discussed on Lemmy, given how anticapitalist it is.
I get it: libertarian, bad. And to some degree, there are a lot of problems there. But the extreme opposite ain’t that rosy either.
Are you really using all of human history as a timeframe to say that currency is a relatively recent phenomenon?
Again, I’m not anti-cryptocurrency, but it’s not really a currency any more than any other commodity in a commodity exchange, or a barter market.
And I don’t care if it’s livestock, or Bitcoin, I’m not accepting either as payment if I sell my home, or car. Not because of principles, but because I don’t know how to convert livestock into cash, and I can’t risk the Bitcoin payment halving in value before I can convert it to cash.
And who was talking extremes? I’m just pointing out the absurdity of the claims that crypto is the replacement for, or salvation from, our current economic system, or the delusion that currency backed by a nation is somehow just as ephemeral as Bitcoin, or ERC20 rug pulls.
You said Bitcoin was designed to free us from the tyranny of big capital, but it’s been entirely co-opted by the same boogeyman. So regardless of the intentionality behind the project, it’s now just another speculative asset.
Except, unlike gold or futures contracts, there’s no tangible real world asset, but there is a hell of a real cost.
I think it’s because of what crypto turned into and the inherent flaws in the system. Crypto currencies are still backed by and dependent on traditional currency, and their value is too unstable for the average person. The largest proponents of crypto have been corporations - big capital, as you put it - and there’s a reason for that (though they’re more on the speculative market of NFTs looking to make a profit off of Ponzi schemes).
In the end, crypto hasn’t solved any problems that weren’t already solved by less energy intensive means.
While the energy consumption of AI training can be large, there are arguments to be made for its net effect in the long run.
The article’s last section gives a few examples that are interesting to me from an environmental perspective. Using smaller, problem-specific models can have a large effect in reducing AI emissions, since emissions do not scale linearly with model size. AI assistance can indeed increase worker productivity, which does not necessarily decrease emissions, but we have to keep in mind that our bodies are pretty inefficient meat bags. Last but not least, AI literacy can lead to better legislation and regulation.
The argument that our bodies are inefficient meat bags doesn’t make sense. AI isn’t replacing the inefficient meat bag, unless I’m unaware of an AI killing people off, and so far I’ve yet to see AI make any meaningful dent in overall emissions or research. A ChatGPT query can use 10x more power than a regular Google search, and there is no chance the result is 10x more useful. AI feels more like it’s adding to the enshittification of the internet, and because of its energy use, the enshittification of our planet. IMO if these companies can’t afford to build renewables to support their use then they can fuck off.
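For a rough sense of scale, this is the arithmetic behind that 10x claim. The per-query figures below are commonly cited public estimates (roughly 0.3 Wh per search, ~3 Wh per ChatGPT query), not measured values:

```python
# Back-of-envelope energy comparison; both figures are rough public
# estimates, not measurements.
search_wh = 0.3    # estimated energy per Google search, in watt-hours
chatgpt_wh = 3.0   # estimated energy per ChatGPT query, in watt-hours

ratio = chatgpt_wh / search_wh
print(f"ChatGPT query uses roughly {ratio:.0f}x the energy of a search")
```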
Theoretically we could slow down training and coast on fine-tuning existing models. Once an AI is trained, it doesn’t take that much energy to run.
Everyone was racing towards “bigger is better” because it worked up to GPT-4, but word on the street is that raw training is giving diminishing returns, so the massive spending on compute is just a waste now.
It’s a bit more complicated than that.
New models are sometimes targeting architecture improvements instead of pure size increases. Any truly new model still needs training time, it’s just that the training time isn’t going up as much as it used to. This means that open weights and open source models can start to catch up to large proprietary models like ChatGPT.
From my understanding, GPT-4 is still a huge model and the best performing. The other models are starting to get close, though, and can already exceed GPT-3.5 Turbo, which was the previous standard to beat and is still what a lot of free chatbots are using. Some of these models are still absolutely huge, even if not quite as big as GPT-4. For example, Goliath is 120 billion parameters. Still pretty chonky and intensive to run, even if it’s not quite GPT-4 sized. Not that anyone actually knows how big GPT-4 is. Word on the street is it’s a MoE model like Mixtral, which runs faster for its size than a dense model, but again, no one outside OpenAI can say with certainty.
You generally find that OpenAI models are larger and slower, whereas the other models focus more on giving the best performance at a given size, as training and using huge models is much more demanding. So far the larger OpenAI models have done better, but this could change as open source models see faster improvement in the techniques they use. You could say open weights models rely on cunning architectures and fine tuning, versus OpenAI’s brute strength.
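To make the MoE point concrete, here’s a rough sketch of why a mixture-of-experts model is cheaper to run than a dense model of the same total size: only a couple of experts fire per token. The parameter figures below are approximations based on Mixtral 8x7B’s published design (8 experts, 2 active per token), not exact numbers:

```python
# Sketch: a Mixture-of-Experts model only activates a few experts per
# token, so its per-token compute is much smaller than its total size.
# Numbers approximate Mixtral 8x7B (8 experts, 2 active per token).
total_experts = 8
active_experts = 2
params_per_expert_b = 5.6   # approximate, in billions
shared_params_b = 2.0       # approximate shared (attention etc.) params

total_params = shared_params_b + total_experts * params_per_expert_b
active_params = shared_params_b + active_experts * params_per_expert_b
print(f"total ~{total_params:.0f}B params, but only ~{active_params:.0f}B active per token")
```

So a model can advertise a dense-model-sized parameter count while running with the per-token cost of something much smaller, which is why MoE models punch above their weight on speed.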
Useful for scammers and spam
So if I were to get this straight, the entire logic is that due to big hype, it fits the pattern of other techs becoming useful… that’s sooo not a guarantee; plenty of big hype stuff has died.
Meanwhile, in the real world, generative A.I. continues to improve at an exponential rate.
The improvement is not exponential. It’s just slowly getting better.
It’s following a breakthrough then slow refinement curve. Not an exponential one. Although there will certainly be breakthroughs in the future, followed by more “refinement” periods.
Current iterations of ChatGPT for example, aren’t otherworldly better than what we had 1-2 years ago.