• 0 Posts
  • 273 Comments
Joined 11 months ago
Cake day: February 10th, 2025


  • The AI bubble is certainly going to burst at some point. Assuming manufacturers are ramping up production to profit off of the higher prices, the bubble will result in a glut of supply after demand collapses. So we’ll likely see a year or two of depressed electronics prices.

    On top of that, DDR5 is worth more than gold only until DDR6 comes along; at that point, companies that own a significant share of 2025’s global RAM production will want to buy newer hardware. I doubt all of that RAM is going to be shredded, so we may see a thriving secondary market when that happens.

    It’ll suck for the next year or two, so get used to your current PC and pray that you don’t have a RAM failure.



  • I think people are too caught up in the current situation centered on LLMs: the massive capital bubble and the secondary effects of the expansion of datacenter space (power, water, etc.).

    You’re right that they do allow for the disruption of labor markets in fields that weren’t expecting computers to be able to do their jobs (to be fair to those fields, humanity spent hundreds of millions of dollars on hand-engineered language-processing software and was never able to make it work effectively).

    I think that usually when people say ‘AI’ they mean ChatGPT, or LLMs in general. The reason LLMs are the big thing is that neural networks require a huge amount of data to train, and the largest data repository we have (the Internet) is mostly text, images and video… so it makes sense that the first impressive models were trained on text and images/video.

    The field of robotics hasn’t had access to a large public dataset to train big models on, so we don’t see large robotics models yet, but they’re coming. You can already see it: compare robotic motion from 4 years ago, driven by a human-engineered feedback control loop… the motions are accurate, but they’re jerky and mechanical. Now look at the same company’s robot that uses a neural network trained on human kinematic data; that motion looks so natural it breaks through the uncanny valley for me.

    This is just one company generating data using human models (which is very expensive), but it’s the kind of thing that will become ubiquitous and cheap given enough time.
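
    To make the contrast concrete, here’s a minimal toy sketch in Python. Everything in it is my own assumption for illustration (a single joint, hand-picked PD gains, a polynomial fit standing in for the neural network); it is not any company’s actual controller. It compares a hand-tuned feedback loop against a model fitted to a minimum-jerk “human-like” demonstration, using peak acceleration as a rough proxy for jerkiness.

    ```python
    # Toy sketch (illustration only): hand-tuned feedback loop vs. a model
    # fitted to human-like motion data, for one joint moving to a target angle.
    import numpy as np

    dt, steps, target = 0.02, 100, 1.0            # assumed timestep, horizon, goal (rad)
    t = np.arange(steps) * dt
    s = t / t[-1]                                  # normalised time in [0, 1]

    # 1) Classic engineered approach: a PD feedback control loop.
    #    It reaches the target, but the commanded motion starts abruptly.
    def pd_rollout(kp=80.0, kd=8.0):
        q, qd, traj = 0.0, 0.0, []
        for _ in range(steps):
            qdd = kp * (target - q) - kd * qd      # acceleration from error feedback
            qd += qdd * dt
            q += qd * dt
            traj.append(q)
        return np.array(traj)

    # 2) "Learned" approach: fit a small regression model to a demonstration that
    #    follows a minimum-jerk profile (a standard model of human reaching).
    #    A real system would use a neural network and lots of motion-capture data;
    #    least squares on polynomial features keeps the idea visible in a few lines.
    demo = target * (10*s**3 - 15*s**4 + 6*s**5)              # smooth human-like path
    features = np.vander(s, 6, increasing=True)               # [1, s, ..., s^5]
    weights, *_ = np.linalg.lstsq(features, demo, rcond=None)
    imitated = features @ weights                             # motion the model reproduces

    # Peak acceleration is a rough proxy for how "jerky" the motion feels.
    def peak_accel(q):
        return np.abs(np.diff(q, 2)).max() / dt**2

    print("PD control peak accel :", round(peak_accel(pd_rollout()), 2))
    print("Imitation peak accel  :", round(peak_accel(imitated), 2))
    ```

    The numbers only illustrate the qualitative difference: feedback on error alone produces abrupt accelerations, while a model fitted to human-style trajectories ramps its motion up and down smoothly.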

    That’s not even mentioning AlphaFold, the AI that learned to predict protein structures better than anything human-engineered. Then, using a diffusion model (the same kind used to make pictures of shrimp Jesus), another group was able to generate novel proteins that fit a specific receptor, along with the RNA that would manufacture them. Proteins are important because essentially every kind of medication we use has to interact with a protein-based receptor, and the ability to create, visualize and test custom proteins, plus the ability to write arbitrary mRNA (see the mRNA COVID vaccines), is huge for computational protein design (the field behind several experimental HIV vaccine candidates).
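
    On the “write the RNA for it” step specifically: once you have a designed protein sequence, the final hop to an mRNA coding sequence is just the genetic code run in reverse. Here’s a minimal sketch; the peptide is made up, the codon table is only a fragment, and real pipelines also optimize codon usage, UTRs and stability.

    ```python
    # Back-translate a (hypothetical) designed peptide into an mRNA coding sequence.
    # Partial codon table: one representative mRNA codon per amino acid.
    CODON = {
        "M": "AUG",  # Met (start)
        "G": "GGC",  # Gly
        "K": "AAG",  # Lys
        "D": "GAC",  # Asp
        "E": "GAG",  # Glu
        "V": "GUG",  # Val
        "*": "UAA",  # stop
    }

    def to_mrna(protein: str) -> str:
        """Map each amino acid (one-letter code) to a codon and append a stop codon."""
        return "".join(CODON[aa] for aa in protein + "*")

    designed_peptide = "MGKDEV"       # made-up sequence standing in for a designed binder
    print(to_mrna(designed_peptide))  # AUGGGCAAGGACGAGGUGUAA
    ```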

    LLMs and the capitalist bubble surrounding them are certainly an important topic, but framing the issue as being ‘against AI’ creates the impression that AI technology has nothing positive to offer. That reduces the number of people who study the topic or major in it in college, so in 10 years we’ll have fewer machine learning specialists than countries that aren’t drowning in this ‘AI bad’ meme.





  • One of the first videos I watched about LLMs was of a journalist who didn’t know anything about programming using ChatGPT to build a JavaScript game in the browser. He’d just copy-paste the code, then paste the errors back and ask for help debugging. It even had to walk him through setting up VS Code and a git repo.

    He said it took him about 4 hours to get a playable platformer.

    I think that’s an example of a unique capability of AI: it can let a non-programmer kinda program, let a non-Chinese speaker kinda speak Chinese, and let a non-artist kinda produce art.

    I don’t doubt that it’ll get better, but even now it’s very useful in some cases (nowhere near enough to justify the trillions of dollars being spent though).





  • Exactly.

    This isn’t a decision being made to cut costs; it’s a strategic move, because the EU just assessed how badly they’d be screwed if Trump throws a tantrum and forces American tech companies to disrupt services to their governments.

    In addition, the EU has strong data privacy laws, and US tech companies are resisting compliance (Elon was recently fined 150 million, for example).

    This has led to several hearings where tech executives said they could not guarantee that the data would stay in the EU, nor that it would not be provided to another country.

    Digital privacy laws don’t mean anything if they don’t apply to the major tech companies, and those companies have said they won’t comply.



    The person is using heroin as a metaphor for a destructive product that harms its users in order to set up an article about digital privacy. When people use metaphors, we all understand that they’re a rhetorical technique and not an attempt at describing reality.

    If someone says that their grandchildren are perfect little angels, you don’t say “well, actually, angels are divine beings who don’t dwell upon this earth Grandma, so your grandchildren are not angels and also you’re so dumb for literally thinking that.” In this scenario, it isn’t the grandmother that is dumb.

    You’re getting caught up in the fact that he said to imagine a scenario. You think that the fake scenario he imagined, where US corporations are selling recreational heroin, is not as bad as the current opioid epidemic. That is a completely irrelevant detail because, once again, the article isn’t about drugs.

    It’s like you’re saying “this guy is stupid, you can’t put social media in a spoon and melt it over a candle in order to inject it into your arm!”. Sure, I guess you’d be correct, but it would be completely irrelevant and make it look like you can’t navigate basic conversations without pointless digressions about irrelevant details.