• enumerator4829@sh.itjust.works

    Apparently AMD couldn’t make the signal integrity work out with socketed RAM. (source: LTT video with Framework CEO)

    IMHO: Up until now, using soldered RAM was lazy and cheap bullshit. But I do think we’re at the limit of what’s reasonable to do over socketed RAM. In high-performance datacenter applications, socketed RAM is on its way out (see: MI300A, Grace-{Hopper,Blackwell}, Xeon Max), with on-board memory gaining ground. I think we’ll see the same trend in consumer stuff as well. Requirements on memory bandwidth and latency are going up with recent trends like powerful integrated graphics and AI-slop, and socketed RAM simply won’t work.
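    A back-of-envelope sketch of why (illustrative figures I’m plugging in myself, not numbers from the video): peak bandwidth is just channels × transfer rate × bus width, and on-package parts simply get to run far more channels than a couple of DIMM slots can.

    ```c
    /* Rough memory-bandwidth arithmetic: channels * MT/s * bus width.
     * Figures are illustrative, not from the linked video. */
    #include <stdio.h>

    static double gb_per_s(int channels, double mtps, int bus_bits) {
        return channels * mtps * 1e6 * (bus_bits / 8.0) / 1e9;
    }

    int main(void) {
        /* typical socketed desktop: two 64-bit DDR5-5600 channels */
        printf("dual-channel DDR5-5600: %.1f GB/s\n", gb_per_s(2, 5600, 64));
        /* on-package LPDDR5X, eight 64-bit channels (M-class SoC territory) */
        printf("8x LPDDR5X-6400:        %.1f GB/s\n", gb_per_s(8, 6400, 64));
        return 0;
    }
    ```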

    It’s sad, but in a few generations I think only the lower-end consumer CPUs will be usable with socketed RAM. I’m betting the high-performance consumer CPUs will require not only soldered, but on-board RAM.

    Finally, some Grace Hopper to make everyone happy: https://youtube.com/watch?v=gYqF6-h9Cvg

    • barsoap@lemm.ee

      I definitely wouldn’t mind soldered RAM if there’s still an expansion socket. Solder in at least a reasonable minimum (16G?), and not the cheap stuff but memory that can actually use the signal-integrity advantage. I may want more RAM, but it’s fine if it’s a bit slower. You can leave out the DIMM slot, but then have at least one free PCIe x16 expansion slot, one in addition to the GPU slot. PCIe latency isn’t stellar, but on the upside, expansion boards would come with their own memory controllers, and if push comes to shove you can configure the faster RAM as cache and the expansion RAM as swap.
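      There’s already a software story for exactly that split: on Linux, CXL-style expansion memory typically shows up as a CPU-less NUMA node, so you can steer cold data onto it explicitly. A minimal libnuma sketch, assuming the soldered RAM is node 0 and the expansion card is node 1 (the node numbers are my assumption):

      ```c
      /* Sketch: pin hot data to fast soldered RAM, bulk data to expansion RAM.
       * Assumes the expansion memory appears as NUMA node 1. Build: gcc ... -lnuma */
      #include <numa.h>
      #include <stdio.h>

      int main(void) {
          if (numa_available() < 0 || numa_max_node() < 1) {
              fprintf(stderr, "need NUMA support and a second memory node\n");
              return 1;
          }
          size_t sz = 1UL << 30;                 /* 1 GiB each */
          void *hot  = numa_alloc_onnode(sz, 0); /* fast on-board memory */
          void *cold = numa_alloc_onnode(sz, 1); /* slower expansion memory */
          if (!hot || !cold) return 1;
          /* ... latency-sensitive structures in hot, streaming bulk in cold ... */
          numa_free(cold, sz);
          numa_free(hot, sz);
          return 0;
      }
      ```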

      Heck, throw the memory into the CPU package. It’s not like there’s ever a situation where you don’t need RAM.

      • enumerator4829@sh.itjust.works

        All your RAM needs to be the same speed unless you want to open up the rabbit hole of tiered memory. All attempts at that thus far have kinda flopped. You can make very good use of such systems, but I’ve only seen it succeed with software specifically tailored for that use case (say, databases or simulations).

        The way I see it, RAM in the future will be on package and non-expandable. CXL might get some traction, but naah.

        • God's hairiest twink@lemmy.dbzer0.com

          Couldn’t you just treat the socketed RAM as another layer of memory, so that L1-L3 are on the CPU, “L4” would be soldered RAM, and “L5” would be extra socketed RAM? Alternatively, couldn’t you just treat it like really fast swap?

          • enumerator4829@sh.itjust.works

            Wrote a longer reply to someone else, but briefly, yes, you are correct. Kinda.

            Caches won’t help with bandwidth-bound compute (read: “AI”) if the streamed dataset is significantly larger than the cache. A cache only speeds up repeated access to a limited set of data.
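            To make that concrete, here’s the shape of kernel I mean (a generic sketch, not from any particular framework): every element is read exactly once, so an “L4”/“L5” tier never gets a second shot at it, and runtime is set by the bandwidth of whichever memory the data actually lives in.

            ```c
            #include <stddef.h>

            /* One pass over n floats: 4 bytes loaded per addition, zero reuse.
             * Once n >> cache size, caches are irrelevant and the loop runs
             * at the speed of the backing memory. */
            float sum_stream(const float *x, size_t n) {
                float acc = 0.0f;
                for (size_t i = 0; i < n; i++)
                    acc += x[i];
                return acc;
            }
            ```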

        • barsoap@lemm.ee

          The cache hierarchy has flopped? People aren’t using swap?

          NUMA also hasn’t flopped, it’s just that most systems aren’t multi-socket or clusters. Different memory speeds connected to the same CPU aren’t ideal and you wouldn’t build a system like that from scratch, but among upgraded systems that’s not rare at all, and software-wise the worst that’ll happen is you get the lower memory speed. Which you’d get anyway if you only had socketed RAM.
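          The kernel even tells software how “far” each memory node is, so mixed speeds are visible rather than surprising. A quick libnuma sketch (node numbering is whatever the firmware reports):

          ```c
          /* Print the kernel-reported distance between every pair of NUMA nodes.
           * 10 means local; larger values mean slower access. Build: gcc ... -lnuma */
          #include <numa.h>
          #include <stdio.h>

          int main(void) {
              if (numa_available() < 0) return 1;
              int max = numa_max_node();
              for (int i = 0; i <= max; i++)
                  for (int j = 0; j <= max; j++)
                      printf("distance(node%d, node%d) = %d\n",
                             i, j, numa_distance(i, j));
              return 0;
          }
          ```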

          • Jyek@sh.itjust.works

            In systems where memory speeds are mismatched, the whole system runs at the slowest module’s speed, so you’re literally making the soldered, faster memory slower. Why even have soldered memory at that point?

            • barsoap@lemm.ee

              I’d assume the soldered memory would have a dedicated memory controller. There’s also no hard requirement that a single controller can’t drive different channels at different speeds; the only hard requirement is that one channel needs to run at one speed.

              …and the whole thing becomes completely irrelevant when we’re talking about PCIe expansion cards: the memory controller doesn’t care.

    • unphazed@lemmy.world

      Honestly, I upgrade every few years and usually have to purchase a new mobo anyhow. I do think this could lead to fewer options for mobos, though.

      • enumerator4829@sh.itjust.works

        I don’t think you are wrong, but I don’t think you go far enough. In a few generations, the only option for top performance will be a SoC. You’ll get to pick which SoC you want and what box you want to put it in.

        • GamingChairModel@lemmy.world

          the only option for top performance will be a SoC

          System in a Package (SiP) at least. It might not be efficient to etch the logic and that much memory onto the same silicon die, as the latest and greatest TSMC node will likely be much more expensive per square mm than the cutting-edge memory node from Samsung or whichever foundry is making the memory.

          But with advanced packaging going the way it has over the last decade or so, it’s going to be hard to compete with the latency/throughput of an in-package interposer. You can only do so much with the vias/pathways on a printed circuit board.

            • GamingChairModel@lemmy.world

              No, I don’t think you owe an apology. It’s super common terminology, almost to the point where I wouldn’t even consider it outright wrong to describe it as a SoC. It’s just that the distinction between a single chip and multiple chiplets packaged together is so blurred that it’s almost impossible for an outsider to tell without really getting into the published spec sheets for a product (and sometimes it may not even be known then).

              It’s just more technically precise to describe them as SiP, even if SoC functionally means something quite similar (and the language may evolve to the point where the terms are interchangeable in practice).

      • confusedbytheBasics@lemm.ee

        I get it, but imagine the GPU-style markup once all mobos come with a set amount of RAM. You’ll have two otherwise identical boards, differing by $30 worth of memory, with a price spread of $200+. Not fun.

    • wabafee@lemmy.world

      Sounds like a downgrade to me. I’d rather have more RAM than a soldered, limited amount, especially for consumer stuff.