• Someplaceunknown@fedia.io · 8 months ago

    “LLMs such as they are, will become a commodity; price wars will keep revenue low. Given the cost of chips, profits will be elusive,” Marcus predicts. “When everyone realizes this, the financial bubble may burst quickly.”

    Please let this happen

  • Boomer Humor Doomergod@lemmy.world · 8 months ago

    I wish just once we could have some kind of tech innovation without a bunch of douchebag techbros thinking it’s going to solve all the world’s problems with no side effects while they get super rich off it.

    • ohwhatfollyisman@lemmy.world · 8 months ago

      … bunch of douchebag techbros thinking it’s going to solve all the world’s problems with no side effects…

      one doesn’t imagine any of them even remotely thinks a technological panacea is feasible.

      … while they get super rich off it.

      because they’re only focusing on this.

      • azertyfun@sh.itjust.works · 8 months ago

        Oh they definitely exist. At a high level the bullshit is driven by malicious greed, but there are also people who are naive and ignorant and hopeful enough to hear that drivel and truly believe in it.

        Like when Microsoft shoves GPT4 into notepad.exe. Obviously a terrible terrible product from a UX/CX perspective. But also, extremely expensive for Microsoft right? They don’t gain anything by stuffing their products with useless annoying features that eat expensive cloud compute like a kid eats candy. That only happens because their management people truly believe, honest to god, that this is a sound business strategy, which would only be the case if they are completely misunderstanding what GPT4 is and could be and actually think that future improvements would be so great that there is a path to mass monetization somehow.

        • Voroxpete@sh.itjust.works · 8 months ago

          That’s not what’s happening here. Microsoft management are well aware that AI isn’t making them any money, but the company made a multi-billion-dollar bet on the idea that it would, and now they have to convince shareholders that they didn’t epically fuck up. Shoving AI into stuff like notepad is basically about artificially inflating “consumer uptake” numbers that they can then show to credulous investors to suggest that any day now this whole thing is going to explode into an absolute tidal wave of growth, so you’d better buy more stock right now, better not miss out.

        • peopleproblems@lemmy.world · 8 months ago

          Yeah my management was all gungho about exploiting AI to do all sorts of stuff.

          Like read. Not generative AI crap, but read. They came to us and said quite literally: “how can we use something like ChatGPT and make it read.”

          I don’t know who convinced them, or how, to use something that wasn’t generative AI, but it did convince me that managers think someone who is convincing and confident must be correct all the time.

          • anomnom@sh.itjust.works · 8 months ago

            Being convincing and confident without actually knowing is how 9/10s of them make it to the C suite.

            That’s probably why they don’t worry about confidently incorrect AI.

            • Aceticon@lemmy.world · 8 months ago

              Salesmanship is the essence of management at those levels.

              Which brings us back around to the original subject of this thread - tech bros. In my own experience in tech, recently and back in the ’90s boom, this generation of founders and “influencers” aren’t techies; they’re people from areas heavy on salesmanship, not on actually creating complex things that objectively work.

              The total dominance of sales types in both domains is why LLMs are being pushed the way they are, as if they were some kind of emerging AGI, and lots of corporates believe it and are trying to hammer those square pegs into round holes, even though the most basic technical analysis would tell them it doesn’t work like that.

              Ultimately, since the current societal structures we have massively benefit that kind of personality, we’re going to keep having these barely-useful-stuff-insanely-hyped-up cycles wasting tons of resources, because salesmanship is hardly a synonym for efficiency or wisdom.

  • Greg Clarke@lemmy.ca · 8 months ago

    largely based on the notion that LLMs will, with continued scaling, become artificial general intelligence

    Who said that LLMs were going to become AGI? LLMs as part of an AGI system makes sense but not LLMs alone becoming AGI. Only articles and blog posts from people who didn’t understand the technology were making those claims. Which helped feed the hype.

    I 100% agree that we’re going to see an AI market correction. It’s going to take a lot of hard human work to achieve the real value of LLMs. The hype is distracting from the real valuable and interesting work.

  • Blackmist@feddit.uk · 8 months ago

    Thank fuck. Can we have cheaper graphics cards again please?

    I’m sure an RTX 4090 is very impressive, but it’s not £1800 impressive.

    • bountygiver [any]@lemmy.ml · 8 months ago

      Nope. If normal gamers are already willing to pay that price, there’s no reason for Nvidia to reduce it.

      There are more 4090s on Steam than any AMD dedicated GPU; there’s no competition.

      • Blackmist@feddit.uk · 8 months ago

        I just don’t get why they’re so desperate to cripple the low-end cards.

        Like I’m sure the low RAM and speed is fine at 1080p, but my brother in Christ it is 2024. 4K displays have been standard for a decade. I’m not sure when PC gamers went from “behold thine might from thou potato boxes” to “I guess I’ll play at 1080p with upscaling if I can have a nice reflection”.

        • Tywèle [she|her]@lemmy.dbzer0.com · 8 months ago

          4K displays are not at all standard, and certainly haven’t been for a decade. 1440p is. And it hasn’t been that long since the market share of 1440p overtook that of 1080p, according to the Steam Hardware Survey IIRC.

          • Blackmist@feddit.uk · 8 months ago

            Maybe not monitors, but certainly they are standard for TVs (which are now just monitors with Android TV and a tuner built in).

  • CerealKiller01@lemmy.world · 8 months ago

    Huh?

    Smartphone improvements hit a rubber wall a few years ago (disregarding folding screens, which make up a small market share, the rate of improvement slowed down drastically), and the industry is doing fine. It’s not growing like it used to, but that just means people are keeping their smartphones for longer, not that they’ve stopped using them.

    Even if AI were to completely freeze right now, people will continue using it.

    Why are people reacting like AI is going to get dropped?

    • finitebanjo@lemmy.world · 8 months ago

      People are dumping billions of dollars into it, mostly power, but it cannot turn a profit.

      So the companies who, for example, revived a nuclear power facility in order to feed their machine with ever diminishing returns of quality output are going to shut everything down at massive losses and countless hours of human work and lifespan thrown down the drain.

      This will have quite a large economic impact as many newly created jobs go up in smoke and businesses that structured themselves around the assumption of continued availability of high-end AI need to reorganize or go out of business.

      Look up the dot-com bubble.

    • Ultraviolet@lemmy.world · 8 months ago

      Because novelty is all it has. As soon as it stops improving in a way that makes people say “oh that’s neat”, it has to stand on the practical merits of its capabilities, which is, well, not much.

      • theherk@lemmy.world · 8 months ago

        I’m so baffled by this take. “Create a terraform module that implements two S3 buckets with cross-region bidirectional replication. Include standard module files like linting rules and enable precommit.” Could I write that? Yes. But does this provide an outstanding stub to start from? Also yes.

        And beyond programming, it is having a positive impact on science and medicine too. I mean, anybody who doesn’t see any merit has their head in the sand. That of course must be balanced with not falling for the hype, but the merits are very real.

        • Eccitaze@yiffit.net · 8 months ago

          There’s a pretty big difference between chatGPT and the science/medicine AIs.

          And keep in mind that for LLMs and other chatbots, it’s not that they aren’t useful at all but that they aren’t useful enough to justify their costs. Microsoft is struggling to get significant uptake for Copilot addons in Microsoft 365, and this is when AI companies are still in their “sell below cost and light VC money on fire to survive long enough to gain market share” phase. What happens when the VC money dries up and AI companies have to double their prices (or more) in order to make enough revenue to cover their costs?

  • halcyoncmdr@lemmy.world · 8 months ago

    No shit. This was obvious from day one. This was never AGI, and was never going to be AGI.

    Institutional investors saw an opportunity to make a shit ton of money and pumped it up as if it was world changing. They’ll dump it like they always do, it will crash, and they’ll make billions in the process with absolutely no negative repercussions.

    • metaStatic@kbin.earth · 8 months ago

      Turns out AI isn’t real and has no fidelity.

      Machine learning could be the basis of AI but is anyone even working on that when all the money is in LLMs?

      • Joeffect@lemmy.world · 8 months ago

        I’m not an expert, but the whole premise of LLMs not actually understanding words, just the likelihood of which word comes next, seems like it isn’t going to get them to the next level… To be an artificial general intelligence, shouldn’t it know what words are?

        I feel like this path is taking a brick and trying to fit it into a keyhole…
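The “likelihood of what word comes next” mechanism described above can be sketched with a toy counting model (a bigram table; the corpus and names here are illustrative, and real LLMs use neural networks over tokens rather than raw word counts):

```python
from collections import Counter, defaultdict

# Toy "next word" predictor: tally which word follows which in a tiny
# corpus, then predict by picking the most frequent successor.
# It manipulates word statistics without any notion of meaning.
corpus = "the cat sat on the mat the cat ate the fish".split()

successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict_next(word: str) -> str:
    # Most frequent continuation seen in the corpus.
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))  # → cat
```

The model “knows” that “cat” often follows “the” without any concept of what a cat is, which is the gap the comment is pointing at.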

        • metaStatic@kbin.earth · 8 months ago

          Learning is the basis of all known intelligence. LLMs have learned something very specific; AGI would need to be built by generalising the core functionality of learning, not as an outgrowth of fully formed LLMs.

          And yes, the current approach is very much using a brick to open a lock, which is why it has… ahem… hit a brick wall.

          • Joeffect@lemmy.world · 8 months ago

            Yeah, 20-something years ago, when I was trying to learn PHP of all things, I really wanted to make a chat bot that could learn what words are… I barely got anywhere, but I was trying to program an understanding of sentence structure and feed it a dictionary of words… My goal was to have it output something on its own…

            I’d like to see these things become less resource-intensive and hopefully not running on some random server…

            I found the files… It was closer to 15 years ago…

  • acargitz@lemmy.ca · 8 months ago

    It’s so funny how all this is only a problem within a capitalist frame of reference.

    • masquenox@lemmy.world · 8 months ago

      What they call “AI” is only “intelligent” within a capitalist frame of reference, too.

      • Hazor@lemmy.world · 8 months ago

        I don’t understand why you’re being downvoted. Current “AI” based on LLMs has no capacity for understanding the knowledge it contains (hence all the “hallucinations”), and thus possesses no meaningful intelligence. To call it intelligent is purely marketing.

    • rottingleaf@lemmy.world · 8 months ago

      Someone in here once linked me a scientific article about how today’s “AI” is basically one level below what it would need to be to be anything like an AI. A bit like the difference between exponentiation and the Ackermann function, but I really forget what that was all about.

      • ContrarianTrail@lemm.ee · 8 months ago

        LLMs are AI. There’s a common misconception about what ‘AI’ actually means. Many people equate AI with the advanced, human-like intelligence depicted in sci-fi - like HAL 9000, JARVIS, Ava, Mother, Samantha, Skynet, and GERTY. These systems represent a type of AI called AGI (Artificial General Intelligence), designed to perform a wide range of tasks and demonstrate a form of general intelligence similar to humans.

        However, AI itself doesn’t imply general intelligence. Even something as simple as a chess-playing robot qualifies as AI. Although it’s a narrow AI, excelling in just one task, it still fits within the AI category. So, AI is a very broad term that covers everything from highly specialized systems to the type of advanced, adaptable intelligence that we often imagine. Think of it like the term ‘plants,’ which includes everything from grass to towering redwoods - each different, but all fitting within the same category.
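The “narrow AI” point is easy to make concrete: a few lines of exhaustive game-tree search play a toy game perfectly while knowing nothing outside it. A minimal sketch (the game and function names are my own, not from the comment):

```python
def best_move(stones: int) -> int:
    """Optimal move in a tiny Nim variant: players alternately take
    1 or 2 stones, and whoever takes the last stone wins. Exhaustive
    search makes it unbeatable in this one domain -- narrow AI in the
    classic sense, with no general intelligence whatsoever."""
    def wins(n: int) -> bool:
        # True if the player to move can force a win with n stones left.
        return any(not wins(n - take) for take in (1, 2) if take <= n)

    for take in (1, 2):
        if take <= stones and not wins(stones - take):
            return take
    return 1  # losing position: every move leaves the opponent winning

print(best_move(4))  # → 1 (leave the opponent a multiple of 3)
```

By the broad definition in the comment, this qualifies as AI just as much as a chess engine does; neither implies anything about general intelligence.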

    • UnderpantsWeevil@lemmy.world · 8 months ago

      I’ve been hearing about the imminent crash for the last two years. New money keeps getting injected into the system. The bubble can’t deflate while both the public and private sector have an unlimited lung capacity to keep puffing into it. FFS, bitcoin is on a tear right now, just because Trump won the election.

      This bullshit isn’t going away. It’s only going to get forced down our throats harder and harder, until we swallow or choke on it.

  • dejected_warp_core@lemmy.world · 8 months ago

    Welcome to the top of the sigmoid curve.

    If you were wondering what 1999 felt like WRT the internet, well, here we are. The Matrix was still fresh in everyone’s mind and a lot of online tech innovation kinda plateaued, followed by some “market adjustments.”

    • Hackworth@lemmy.world · 8 months ago

      I think it’s more likely a compound sigmoid (don’t Google that). LLMs are composed of distinct technologies working together. As we’ve reached the inflection point of the scaling for one, we’ve pivoted implementations to get back on track. Notably, context windows are no longer an issue. But the most recent pivot came just this week, allowing for a huge jump in performance. There are more promising stepping stones coming into view. Is the exponential curve just a series of sigmoids stacked too close together? In any case, the article’s correct - just adding more compute to the same exact implementation hasn’t enabled scaling exponentially.
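The “series of sigmoids stacked close together” idea can be illustrated numerically (a rough sketch; the midpoints are arbitrary and not a model of any real capability curve):

```python
import math

def sigmoid(x: float, midpoint: float) -> float:
    # One S-curve: slow start, fast middle, plateau.
    return 1.0 / (1.0 + math.exp(-(x - midpoint)))

def compound(x: float, midpoints: list[float]) -> float:
    # Each new technique contributes its own S-curve; while fresh
    # curves keep arriving, the sum can pass for exponential growth.
    return sum(sigmoid(x, m) for m in midpoints)

curve = [compound(x, [2.0, 5.0, 8.0]) for x in range(12)]
print([round(v, 2) for v in curve])
```

The sum keeps rising as long as a new S-curve kicks in before the previous one saturates; once the pipeline of new techniques dries up, the whole thing plateaus, which is the scaling-wall scenario the article describes.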

  • KeenFlame@feddit.nu · 8 months ago

    I am so tired of the AI hype and hate. Please give me my gen-art interest back; please just make it obscure again to program art, I beg of you.

    • barsoap@lemm.ee · 8 months ago

      It’s still quite obscure to actually mess with AI art instead of just throwing prompts at it, resulting in slop of varying quality levels. And I don’t mean controlnet, but github repos with comfyui plugins with little explanation but a link to a paper, or “this is absolutely mathematically unsound but fun to mess with”. Messing with stuff other than conditioning or mere model selection.

  • jpablo68@infosec.pub · 8 months ago

    I just want a portable self hosted LLM for specific tasks like programming or language learning.