I write about technology at theluddite.org

  • 1 Post
  • 121 Comments
Joined 3 years ago
Cake day: June 7th, 2023


  • I don’t like this way of thinking about technology, which philosophers of tech call the “instrumental” theory. Instead, I think that technology and society make each other together. Obviously, technology choices like mass transit vs cars shape our lives in ways that simpler tools, like a hammer or whatever, don’t help us explain. Similarly, society shapes the way that we make technology.

    In making technology, engineers and designers are constrained by the rules of the physical world, but that is an underconstraint. There are lots of equally valid ways to solve the same problem, but someone still has to decide among them. How those decisions get made is the process through which we embed social values into the technology, and those values accumulate over time. To return to the example of mass transit vs cars, these obviously have different values embedded within them, which then go on to shape the world that we make around them. We wouldn’t even be fighting about self-driving cars had we made different technological choices a while back.

    That said, on the other side, just because technology is more than a tool and has values embedded within it doesn’t mean that its use is deterministic. People find subversive ways to use technologies against the values built into them.

    If this topic interests you, Andrew Feenberg’s book Transforming Technology argues this at great length. His work is generally great and mostly on this topic or related ones.




  • I’d say that’s mostly right, but it’s less about opportunities and more about design. To return to the example of the factory: Let’s say that there was a communist revolution and the workers now own the factory. The machine still has them facing away from each other. If they want to face each other, they’ll have to rebuild it. The values of the old system are literally, physically present in the machine.

    So it’s not that you can do different things with a technology based on your values; it’s that different values produce different technology in the first place. That actually limits future possibilities. Those workers physically cannot face each other at that machine, even if they want to use it that way. The past’s values are frozen in that machine.


  • No problem!

    Technology is constrained by the rules of the physical world, but that is an underconstraint.

    Example: Let’s say that there’s a factory, and the factory has a machine that makes whatever. The machine takes two people to operate. The thing needs to get made, so that limits the number of possible designs, but there are still many open questions: for example, should the workers face each other or face away from each other? The boss might make them face away from each other so that they don’t chat and get distracted. If the workers get to choose, they’d prefer to face each other to make the work more pleasant. In this way, the values of society are embedded in the design of the machine itself.

    I struggle with the idea that a tool (like a computer) is bad because it is too general purpose. Society, hence the people and their values, defines how the tool is used. Would you elaborate on that? I’d like to understand the idea.

    I love computers! It’s not that they’re bad, but that, because they’re so general purpose, more cultural values get embedded in them. As in the example above, there are decisions that aren’t determined by the goal you’re trying to accomplish, but because computers are so much more open-ended than physical robots, there are more decisions like that, and even more leeway in how they get decided.

    I agree with you that good/evil is not a productive way to think about it, just like I don’t think neutrality is right either. Instead, I think that our technology contains within it a reflection of who got to make those many design decisions, like which direction the workers should face. These decisions accumulate. I personally think that capitalism sucks, so technology under capitalism, after a few hundred years, also sucks, since that technology contains within it hundreds of years of capitalist decision-making.


  • I didn’t find the article particularly insightful, but I don’t like your way of thinking about tech either. Technology and society make each other together. Obviously, technology choices like mass transit vs cars shape our lives in ways that the pens example doesn’t help us explain. Similarly, society shapes the way that we make technology. Technology is constrained by the rules of the physical world, but that is an underconstraint. Filling the leftover space (i.e. the vast majority of the decisions) is the process through which we embed social values into the technology. To return to the example of mass transit vs cars, these obviously have different values embedded within them, which then go on to shape the world that we make around them.

    This way of thinking helps explain why computer technology specifically is so awful: Computers are shockingly general purpose in a way that has no parallel in physical products. This means that the underconstraint is more pronounced, so social values have an even more outsized say in how software gets made. This is why every other software product is just the pure manifestation of capitalism in a way that a robotic arm could never be.


  • I live in Vermont. These rosy articles about Front Porch Forum come out every so often, and, as someone who writes about the intersection of tech and capitalism, I find them frustrating.

    First things first, it’s a moderated mailing list with some ads. I don’t know if it even makes sense to call it a social network, honestly. It’s a great service because moderated mailing lists are great. Here’s the problem:

    To maintain this level of moderation, the founder does not want to expand Front Porch Forum beyond Vermont’s borders. He highlighted Nextdoor, another locally-focused social media platform that has expanded internationally, which has often been accused of inflaming tensions within communities due to its more relaxed moderation policy. However, Sabathier believes that local social media similar to Front Porch Forum could work elsewhere in the US, including in less progressive states – Vermont, the home of socialist Senator Bernie Sanders, was the state that cast the fewest votes for Trump in the November 2024 election. “It’s not so much a political platform as a tool for communities to organize themselves and be more cohesive,” said the researcher. “And that would be beneficial everywhere.”

    Capitalism makes this world impossible. Front Porch Forum is a private business owned by a guy (technically, it’s a public benefit corporation, but those are toothless designations). Like so many beloved services, it’ll be great until it’s not. Eventually, cofounders, as lovely and well-meaning as they might be, leave, move, die, whatever, and someone shitty will end up in control. Without a corporate restructuring into, say, a user cooperative, it is just as doomed as every other internet thing that we’ve all loved. These puff pieces always act like Vermont is a magical place, and, frankly, it is, but not like this. We live under capitalism too. Sometimes, due to being a rural, freezing, mountainous backwater, we get short reprieves from the worst of it, but the problem with social media is systemic.

    AMA I guess.




  • This is an article about a tweet with a screenshot of an LLM prompt and response. This is rock fucking bottom content generation. Look, I can do this too:

    ![ChatGPT screenshot. Prompt: “Is ChatGPT spreading misinformation?” ChatGPT: “ChatGPT, like other AI models, can occasionally spread misinformation due to limitations in training data and interpretive algorithms. The model learns from a vast dataset but doesn’t actually ‘know’ facts in the way humans do; instead, it generates responses based on patterns it has seen. If there are inaccuracies in the training data, those can sometimes be reflected in the responses. Additionally, for topics that require up-to-date information, an AI without real-time access to new data might give outdated answers. OpenAI, the organization behind ChatGPT, has worked to reduce misinformation by employing rigorous training processes, but no system is perfect. Fact-checking and user awareness remain essential when using AI for information.”](https://lemmy.ml/pictrs/image/9559a382-6421-4116-8d88-9d7c13b7f044.png)


  • Totally agreed. I didn’t mean to say that it’s a failure if it doesn’t properly encapsulate all complexity, but that the inability to do so has implications for design. In this specific case (as in many cases), the error they’re making is not realizing that the root of the problem they’re trying to solve lies in that tension.

    The platform and environment are something you can shape even without an established or physical community.

    Again, couldn’t agree more! The platform is extremely powerful and can easily change user behavior in undesirable ways, which is actually the core thesis of that longer write-up I linked. That’s a big part of where ghosting comes from in the first place. My concern is that thinking you can just bolt a new thing onto the existing model is to repeat the original error.


  • This app fundamentally misunderstands the problem. Your friend sets you up on a date. Are you going to treat that person horribly? Of course not. Why? First and foremost, because you’re not a dick. Your date is a human being who, like you, is worthy and deserving of basic respect and decency. Second, because your mutual friendship holds you accountable. Relationships in communities have an overlapping structure in which they mutually impact each other. Accountability is an emergent property of that structure, not something that can be implemented by an app. When you meet people via an app, you strip away both the humanity and the community, and with them go the individual and community accountability.

    I’ve written about this tension before: As we use computers more and more to mediate human relationships, we’ll increasingly find that being human and doing human things is actually too complicated to be legible to computers, which need everything spelled out in mathematically precise detail. Human relationships, like dating, are particularly complicated, so to make them legible to computers, you necessarily lose some of the humanity.

    Companies that try to whack-a-mole patch the problems with that will find that their patches suffer from the same problem: Their accountability structure is a flat, shallow version of genuine human accountability, and will itself result in pathological behavior. The problem is recursive.


  • Investment giant Goldman Sachs published a research paper

    Goldman Sachs researchers also say that

    It’s not a research paper; it’s a report. They’re not researchers; they’re analysts at a bank. This may seem like a nitpick, but journalists need to (re-)learn to carefully distinguish between the thing that scientists do and corporate R&D, even though we sometimes use the word “research” for both. The AI hype in particular has been absolutely terrible for this. Companies have learned that putting out AI “research” that’s just them poking at their own product, dressed up in a science-lookin’ paper, leads to an avalanche of free press from lazy, credulous morons gorging themselves on the hype. I’ve written about this problem a lot. For example, in this post, which is about how Google wrote a so-called paper about how their LLM compares to doctors, only for the press to uncritically repeat (and embellish) the results all over the internet. Had anyone in the press actually fucking bothered to read the paper critically, they would’ve noticed that it’s junk science.



  • I know that this kind of actually critical perspective isn’t the point of this article, but software always reflects the ideology of the power structure in which it was built. I actually covered something very similar in my most recent post, where I applied Philip Agre’s analysis of the so-called Internet Revolution to the AI hype, but you can find many similar analyses all over the STS literature, or throughout Agre’s own work, which really ought to be required reading for anyone in software.

    edit to add some recommendations: If you think of yourself as a tech person, and don’t necessarily get or enjoy the humanities (for lack of a better word), I recommend starting here, where Agre discusses his own “critical awakening.”

    As an AI practitioner already well immersed in the literature, I had incorporated the field’s taste for technical formalization so thoroughly into my own cognitive style that I literally could not read the literatures of nontechnical fields at anything beyond a popular level. The problem was not exactly that I could not understand the vocabulary, but that I insisted on trying to read everything as a narration of the workings of a mechanism. By that time much philosophy and psychology had adopted intellectual styles similar to that of AI, and so it was possible to read much that was congenial – except that it reproduced the same technical schemata as the AI literature. I believe that this problem was not simply my own – that it is characteristic of AI in general (and, no doubt, other technical fields as well).





  • I completely and totally agree with the article that the attention economy in its current manifestation is in crisis, but I’m much less sanguine about the outcomes. The problem with the theory presented here, to me, is that it’s missing a theory of power. The attention economy isn’t an accident, but the result of the inherently political nature of society. Humans, being social animals, gain power by convincing other people of things. From David Graeber (who I’m always quoting lol):

    Politics, after all, is the art of persuasion; the political is that dimension of social life in which things really do become true if enough people believe them. The problem is that in order to play the game effectively, one can never acknowledge this: it may be true that, if I could convince everyone in the world that I was the King of France, I would in fact become the King of France; but it would never work if I were to admit that this was the only basis of my claim.

    In other words, just because algorithmic social media becomes uninteresting doesn’t mean the death of the attention economy as such, because the attention economy, in some form, is innate to humanity. Today it’s algorithmic feeds, but 500 years ago it was royal ownership of printing presses.

    I think we already see the beginnings of the next round. As an example, the YouTuber Veritasium has been doing educational videos about science for over a decade, and he’s by and large good and reliable. Recently, he did a video about self-driving cars, sponsored by Waymo, which was full of (what I’ll charitably call) problematic claims that were clearly written by Waymo, as fellow YouTuber Tom Nicholas pointed out. Veritasium is a human who makes good videos. People follow him directly, bypassing algorithmic shenanigans, but Waymo was able to leverage its resources to get into that trusted, no-algorithm space. We live in a society that commodifies everything, and as human-made content becomes rarer, more people like Veritasium will be presented with more and increasingly lucrative opportunities to sell bits and pieces of their authenticity for manufactured content (be it by AI or a marketing team), while new people who could be the next Veritasium will be drowned out by the heaps of bullshit clogging up the web.

    This has an analogy in our physical world. As more and more of the physical world looks the same as a result of the homogenizing forces of capital (office parks, suburbia, generic blocky buildings, etc.), the few remaining places that are special, like, say, Venice, become too valuable for their own survival. They become “touristy,” which is itself a sort of ironically homogenized, commodified authenticity.

    edit: oops I got Tom’s name wrong lol fixed