Developer and refugee from Reddit

  • 1 Post
  • 182 Comments
Joined 3 years ago
Cake day: July 2nd, 2023


  • Venture capital drying up.

    Here’s the thing… No LLM provider is turning a profit. None of them. Not OpenAI. Not Anthropic. Not even Google (profitable in other areas, obviously, but not on this). OpenAI optimistically believes it might start being profitable in 2029.

    What’s keeping them afloat? Venture capital. And what happens when those investors decide to stop throwing good money after bad?

    BOOM.


  • There are tricks to getting better output from it, especially if you’re using Copilot in VS Code and your employer is paying for access to models, but it’s still asking for trouble if you’re not extremely careful, extremely detailed, and extremely precise with your prompts.

    And even then it absolutely will fuck up. If it actually succeeds at building something that technically works, you’ll spend considerable time afterwards going through its output and removing unnecessary crap it added, fixing duplications, securing insecure garbage, removing mocks (God… So many fucking mocks), and so on.

    I think about what my employer is spending on it a lot. It can’t possibly be worth it.



  • After working on a team that uses LLMs in agentic mode for almost a year, I’d say this is probably accurate.

    Most of the work at this point for a big chunk of the team is trying to figure out prompts that will make it do what they want, without producing any user-facing results at all. The rest of us will use it to generate small bits of code, such as one-off scripts to accomplish a specific task - the only area where it’s actually useful.

    The shine wears off quickly after the fourth or fifth time it “finishes” a feature by mocking the data — so many of the public repos it trained on contain mock data that it assumes that’s what you wanted.



  • There’s also the fact that what we are currently calling AI isn’t, that there are better options that aren’t environmental catastrophes (I’m hopeful about small language models), and that no one seems to want all the “AI” being jammed into every goddamn thing.

    No, I don’t want Gemini in my email or messaging, I want to read messages from people myself. No, I don’t want Copilot summaries of my meetings in Teams, half the folks I work with have accents it can’t parse. Get the hell out of my way when I’m trying to interact with actual human beings.

    And I say that as someone whose job literally involves working with LLMs every day. Ugh.