AmbitiousProcess (they/them)

  • 0 Posts
  • 51 Comments
Joined 7 months ago
Cake day: June 6th, 2025


  • The problem is, it’s not unobtrusive.

    When I right-click and instantly get a silently added option that sends data to an AI model hosted somewhere, an option I’ve accidentally clicked due to muscle memory, the fact that there’s also an option to disable it doesn’t make it good. When I start up my browser after an update and am immediately presented with an open sidebar asking me to pick an AI model to use, that’s obtrusive and annoying to have to close and disable.

    Mozilla has indicated they do not want to make these features opt-in, but opt-out. The majority of Mozilla users do not want these features by default, so the logical option is to make them solely opt-in. But Mozilla isn’t doing that. Mozilla is enabling features by default, without consent, then only taking them away when you tell them to stop.

    The approach Mozilla is taking is like telling a guy you’re not interested in dating him, and instead of taking that as a “no,” he takes it as “try again with a different pickup line in two weeks” and never, ever stops no matter what you try. It doesn’t matter that you can tell him to go away now if he just keeps coming back.

    Mozilla does not understand consent, and they violate the consent of their users every time they push an update with AI features that are enabled by default.


  • Because Google only pays Mozilla for two reasons:

    • Maintaining search dominance
    • Preventing anti-monopoly scrutiny

    They don’t want Mozilla to compete in the AI space. Given how much money gets thrown around, that space is already full of competition, so funding an AI-focused Mozilla would neither deflect anti-monopoly scrutiny the way funding a rival search engine does, nor buy search dominance, since there are already so many models. They’d much rather have Mozilla stay a non-AI browser while they implement AI features themselves and show shareholders that they’re “the most advanced” of them all, or that “nobody else is doing it like we do”.



  • Videos, images, and text can absolutely compel action and cause credible harm.

    For example, Facebook was aware that Instagram was giving teen girls depression and body-image issues, and subsequently made sure its algorithm would continue showing teen girls content of other girls and women who were more fit or attractive than they were.

    the teens who reported the most negative feelings about themselves saw more provocative content more broadly, content Meta classifies as “mature themes,” “Risky behavior,” “Harm & Cruelty” and “Suffering.” Cumulatively, such content accounted for 27% of what those teens saw on the platform, compared with 13.6% among their peers who hadn’t reported negative feelings.

    https://www.congress.gov/117/meeting/house/114054/documents/HHRG-117-IF02-20210922-SD003.pdf

    https://www.reuters.com/business/instagram-shows-more-eating-disorder-adjacent-content-vulnerable-teens-internal-2025-10-20/

    Many girls have committed suicide or engaged in self-harm, at least partly inspired by body-image issues stemming from Instagram’s algorithmic choices, even if that content is “just videos, and images.”

    They also continued recommending dangerous content that they claimed was blocked by their filters, including sexual and violent content, to children under 13. This type of content is known to have a lasting effect on kids’ wellbeing.

    The researchers found that Instagram was still recommending sexual content, violent content, and self-harm and body-image content to teens, even though those types of posts were supposed to be blocked by Meta’s sensitive-content filters.

    https://time.com/7324544/instagram-teen-accounts-flawed/

    In the instance you’re specifically highlighting, that was when Meta would recommend teen girls’ accounts to men exhibiting behaviors that could very easily lead to predation. For example, if a man specifically liked sexual content, and content of teen girls, it would recommend him content from underage girls attempting to make up for their newly created body-image issues by posting sexualized photos.

    They then waited 2 years before implementing a private-by-default policy, under which teen girls’ accounts would no longer be recommended to strangers unless the girls explicitly opted back into being publicly visible. Most didn’t. Meta waited that long because internal research showed the change would decrease engagement.

    By 2020, the growth team had determined that a private-by-default setting would result in a loss of 1.5 million monthly active teens a year on Instagram, which became the underlying reason for not protecting minors.

    https://techoversight.org/2025/11/22/meta-unsealed-docs/

    If I filled your social media feed with endless posts algorithmically chosen to make you spend more time on the app while feeling steadily worse about yourself, then exploited every weakness the algorithm could identify about you, I don’t think you’d look at that and call it “catastrophizing over videos, images, text on a screen that can’t compel action or credible harm” when you developed depression, or worse.


  • This whole article is just a condescending mess.

    “Why does everyone who has been repeatedly burned by AI, time and time again, whether that be through usable software becoming crammed full of useless AI features, AI making all the information they get less reliable, or just having to hear people evangelize about AI all day, not want to use my AI-based app that takes all the fun out of deciding where you go on your vacation???”

    (yes, that is actually the entire proposed app. A thing where you say where you’re going, and it generates an itinerary. Its only selling point over just using ChatGPT directly is that it makes sure the coordinates of each stop are within realistic travel distances of one another. That’s it.)


  • And it’s nearly as expensive as the most expensive US mobile plans, which would have faster speeds. Trump Mobile’s speeds drop off after a certain amount of data usage (a cap lower than T-Mobile’s own plans’), since they’re solely using T-Mobile’s network as an MVNO, and their data is also deprioritized during periods of network congestion.

    A plan like that would also get you the ability to switch underlying network providers if you’re in a dead zone, international calling and data in more locations, better customer support (judging by the experiences reviewers have reported), and unlimited hotspot data, plus better bundle deals for families or for people with smartwatches that need separate data.

    Hell, even T-Mobile’s own plans are usually substantially more expensive than those of the companies T-Mobile hosts as an MVNO, like Mint Mobile (which is actually owned by T-Mobile now), where a $30/mo plan, just $15/mo for new users for up to 12 months, gets you the same value as T-Mobile’s $50/mo plan.

    Trump Mobile, at $47.45/mo, is just $2.55 cheaper than T-Mobile’s $50 plan.





  • It runs autonomously to a degree, but a lot of these sites operate by posting a wide variety of content on the same domains after those domains have already gained standing in search engines.

    So for example, you’ll have a site like epiccoolcarnews[.]info hosting stuff like “How to get FREE GEMS in Clash of Clans” just because it previously posted an article about cars that Google thought was good, which boosted the whole domain in Google’s ranking algorithm.

    Permanently downrank the domain, and eventually they have to start over with a new domain that, and this is the key part, has no prior reputation, and thus has to work to actually get ranked up in search again.
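    To make the mechanism concrete, here’s a minimal sketch of what domain-level (rather than page-level) downranking could look like. All names and numbers here are mine, not Kagi’s actual ranking code:

    ```python
    # Toy domain-reputation penalty (illustrative only, not Kagi's code).
    # Once a domain is reported as slop, every page it hosts inherits the
    # penalty, so new articles can't ride the domain's old reputation.
    from urllib.parse import urlparse

    SLOP_PENALTY = 0.1                           # hypothetical multiplier
    reported_domains = {"epiccoolcarnews.info"}  # community-reported database

    def adjusted_score(url: str, base_score: float) -> float:
        domain = (urlparse(url).hostname or "").lower()
        if domain in reported_domains:
            return base_score * SLOP_PENALTY  # punish the domain, not just the page
        return base_score

    # A brand-new article on a reported domain is downranked immediately:
    print(adjusted_score("https://epiccoolcarnews.info/free-gems", 0.9))  # 0.09
    ```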

    They’re also going to make this a public database, and they’ve said they’ll use it to train AI-generated-content detection tools. Those would likely be better at detecting “AI-generated articles meant to appear legitimate by using common keywords and phrases” than at flagging “any text of any form that has been generated by AI,” which is what other AI detection tools attempt, and that would let them automate the process a bit, at least where search engines are concerned.


  • They also literally just released SlopStop as a community-based filtering mechanism that’ll downrank AI slop, with the CEO saying “We believe AI slop is an existential threat to an internet that should belong to humans. This is the first step towards our ultimate goal: to kill AI slop so you never see it again.”

    Apparently they’ll use this database of user-reported sites to train something that can identify AI slop more accurately, and they’ll be making the database open.

    Their AI integration philosophy feels incredibly reasonable to me: it stays out of the way, it properly cites its sources and shows how much each one influenced the answer, and the search results are often so good that the AI model hardly feels necessary. This just sweetens the deal.

    I can understand having issues with Kagi (they’re a company, after all), but their stance and actions so far feel very good.


  • There are a lot of issues with that analysis.

    Oh and they own a t-shirt factory

    The linked article literally states that they partnered with a small print shop, not that they own it. It says they bought warehouse space to store and fulfill orders. Now granted, spending that much money on T-shirts can be a bad idea financially, but the shirts do act as marketing because they get people talking, even if the brand name isn’t on the shirt. That recoups the cost over time.

    Kagi also heavily relies on organic marketing, so it makes total sense.

    First of all, as a project, Kagi stretches itself way too thin. “Kagi” isn’t just Kagi Search, it’s also a whole slew of AI tools, a Mac-only web browser called Orion, and right now they are planning on launching an email service as well.

    The AI tools are easily deployed and based on standard open-source tooling, so they’re not that hard to maintain, yet their AI integrations are genuinely much better than the competition’s, which draws in plenty of heavy AI users who pay for the higher-priced plan just for them.

    Orion is a fork, with minimal additional bloat. Again, not terribly hard to maintain.

    None of these projects are particularly profitable, so it’s not a case of one subsidizing the other

    Their entire business model is based around a subscription. No individual service is “profitable”; each is just part of what you get for your subscription.

    and when they announced Kagi Email even their most dedicated userbase (aka the types who hang around in a discord for a search engine) seemed largely disinterested.

    Granted, though the hardest part of an email service is the frontend, which they’ve already built; there are many free and open-source backends for hosting email. They haven’t promoted it heavily, and my assumption is that they’re keeping it on the down-low until they fix bugs, build out more features, and are sure it’s something they can advertise more widely.

    Kagi was not paying sales tax for two years and they finally have to pay up. They just…didn’t do it. Didn’t think it was important? I have no idea why. Their reactions made it sound like they owed previous taxes, not that they just now had to pay them. They genuinely made it sound like they only just now realized they needed to figure out sales tax. It’s a baffling thing to me and it meant a change in prices for users that some people were not thrilled with.

    And they later explained why: there’s a threshold of sales you have to pass before you owe sales tax, they didn’t know if they would ever pass that mark, and new user growth then forced them to scramble when they did.

    Like most search now Kagi has chosen to include Instant Answers that are AI generated, which means they’re often wrong

    The vast majority of my answers from Kagi’s AI were right, even when every other search engine was wrong (yes, I actually checked real sources to confirm). This is just a strawman of reality. Kagi even shows you what percentage of the LLM’s response was derived from which source, whereas others leave you in the dark.

    But the developers of Kagi fully believe that this is what search engines should be, a bunch of AI tools so that you don’t even need to read primary sources anymore.

    Oh, is that why Kagi said, in the post also linked by the author: “Large language models (LLMs) should not be blindly trusted to provide factual information accurately. They have a significant risk of generating incorrect information or fabricating details”, “AI should be used to enhance the search experience, not to create it or replace it”, and “AI should be used to the extent that it enhances our humanity, not diminish it (AI should be used to support users, not replace them)”?

    I’m not gonna keep going through every single thing point by point, since that’d take forever, but a lot of this is basically taking minor issues, like the CEO posting about hopeful uses of AI, or completely normal expectations of privacy when you trust a company with your information, then blowing them out of proportion and acting as though each one is a death blow for the service.

    The author of the post quite literally talks about how “Kagi’s dedication to privacy falls apart for me”, saying they don’t seem to actually care about user privacy… when just a few months later, Kagi released Privacy Pass, which lets you cryptographically prove you have a subscription without revealing your identity, and keep using Kagi that way. Not really something a company that doesn’t care about privacy would do.
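    For a sense of how that works: Privacy Pass-style tokens are typically built on blind signatures. Here’s a toy RSA blind-signature sketch of the idea (my own illustration; Kagi’s actual protocol and APIs differ). The server signs a token without ever seeing it, so redeeming the token later can’t be linked back to the account it was issued to:

    ```python
    # Toy RSA blind signature (illustrative only, not Kagi's implementation).
    import hashlib, secrets
    from cryptography.hazmat.primitives.asymmetric import rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    n = key.public_key().public_numbers().n
    e = key.public_key().public_numbers().e
    d = key.private_numbers().d

    # Client: pick a random token and blind its hash with factor r.
    token = secrets.token_bytes(32)
    m = int.from_bytes(hashlib.sha256(token).digest(), "big")
    r = secrets.randbelow(n - 2) + 2  # toy code: skips the gcd(r, n) == 1 check
    blinded = (m * pow(r, e, n)) % n

    # Server: signs while the user is logged in, but only sees the blinded value.
    blind_sig = pow(blinded, d, n)

    # Client: unblind, yielding a valid signature on m itself.
    sig = (blind_sig * pow(r, -1, n)) % n

    # Later, anonymous redemption: the signature verifies, but the server
    # can't tell which signing session (i.e., which account) produced it.
    assert pow(sig, e, n) == m
    ```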

    Overall, this just reads to me as:

    1. They could be doing badly financially because of these decisions I didn’t like
    2. Okay, so they said they’re currently profitable even after all that, but now they’re doing too many things (which could all bring in new paying users)
    3. Okay, so people are paying for and using those things, but there’s no way they could possibly use AI in any good way
    4. I’ve now ignored anybody saying the tools actually work better than others’, but just in case you’re not convinced: they don’t care about privacy!
    5. I know they explained how companies are going to get data on you, and that using a service that requires things like payment information involves a degree of trust, but I still think they don’t actually care about privacy!

    I’m not saying all the points are completely false or meaningless, but a lot of this really does feel like taking something relatively small (giving out a bunch of T-shirts while the company is primarily trying to grow its user count through organic marketing), treating it as both the current and permanent position of the entire company, assuming it will lead to the worst possible outcome, then moving on to the next thing, and repeating that until there’s nothing left to complain about.

    Kagi can have its own problems, but a lot of these just aren’t it.

    As a person using Kagi myself:

    1. The search results are the best I’ve ever had. Period, full stop.
    2. The AI models are usually correct, good at citing sources, out of the way until you ask for them, and feel secondary to the search experience
    3. The cost is more than reasonable
    4. Regular small updates with new tools have been incredibly nice to have (such as the Kagi news feed, which is great at sourcing good news from a variety of sources, or the Universal Summarizer, which provides alternative, more natural-sounding and accurate translations compared to Google Translate or DeepL)

    I haven’t really had any complaints, and contrasted with that, this guy’s post just reads like someone complaining about something they’ve never even used. Yes, you can criticize something you haven’t used yourself, but the entire post is just “here’s anything even minor that I think could become an issue if taken to the extreme.”





  • The study claims that they analyzed participants’ labor market outcomes, namely earnings and propensity to move jobs, “among other things.”

    Fun fact: did you know white men tend to get paid more than black men for the same job, with the same experience and education?

    Following that logic, if we took a dataset of both black and white men and used their labor market outcomes to judge who would be a good fit, white men would have higher recorded earnings and be recommended for jobs more often than black men.

    Black workers are also more likely to switch jobs, likely in part because salary growth tends to be higher when you move jobs every 2-3 years than when you stay with one company, which matters all the more if you’re already being paid lower wages than your white counterparts.

    By this study’s methodology, that person could be deemed “unreliable” because they often switch jobs, and would then not be considered.
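    To make that concrete, here’s a toy scorer (entirely my own invention, not the study’s model) that only looks at “objective” labor-market features and still reproduces the bias baked into them:

    ```python
    # Toy illustration: a "good fit" heuristic built on labor-market outcomes.
    # The features look neutral, but they encode the wage gap and the
    # job-switching strategy described above.
    def candidate_score(earnings: float, job_switches: int) -> float:
        return earnings / 10_000 - 2.0 * job_switches  # reward pay, punish moves

    # Two equally qualified candidates whose histories differ only in ways
    # that pay gaps and switching-for-raises produce:
    print(candidate_score(earnings=90_000, job_switches=1))  # 7.0
    print(candidate_score(earnings=72_000, job_switches=3))  # 1.2
    # The second candidate ranks far lower for reasons that track race,
    # not ability; the discrimination is laundered through the features.
    ```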

    Essentially, this is a black box that gets to excuse management saying “fuck all black people, we only want to hire whites” while sounding all smart and fancy.



  • Oh, of course the legislation is to blame for a lot of this in the end. I’m just saying that Discord could have partnered with any number of identity verification services that already have this infrastructure up and running, with standardized, documented APIs both to verify a user and to check a user’s verification status.

    At the end of the day, Discord chose to implement a convoluted process of having users email Discord and upload IDs, then having Discord pull the IDs back down from Zendesk and verify them, rather than a system where users simply go to a third-party verification site, complete every step there with their data handled far more securely, and then have that site send Discord a message saying “they’re cool, let 'em in”
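    That last message is essentially just a signed webhook. A minimal sketch of what receiving it could look like (all names, endpoints, and payloads here are hypothetical, not Discord’s or any vendor’s real API):

    ```python
    # Toy verification callback handler. The platform never touches the ID
    # document; it only receives the identity provider's signed verdict.
    import hashlib, hmac, json

    SHARED_SECRET = b"webhook-signing-secret"  # provisioned with the vendor

    def handle_verification_callback(body: bytes, signature_header: str) -> dict:
        expected = hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, signature_header):
            raise PermissionError("callback not signed by the identity provider")
        verdict = json.loads(body)  # e.g. {"session_id": "...", "verified": true}
        # Unlock age-gated features for that session; no ID was ever stored here.
        return {"session": verdict["session_id"], "age_verified": verdict["verified"]}
    ```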


  • In my opinion, they’re still somewhat at fault, because this was them failing to find and configure their software to work with a third-party identity provider whose infrastructure was built to handle the security of sensitive information, and instead choosing email through Zendesk because it was easier in the meantime: a platform, I should note, that has been accessed by attackers again and again, not just at Discord but at all sorts of other companies.

    The main problem is that legislation like the Online Safety Act requires some privacy protections, like not collecting or storing certain data unless necessary, but doesn’t require any particular security measures to be in place. This means that, in theory, nothing stops a company from passing your ID to its servers in cleartext, for example.

    Now compare this to industries like the credit card industry, which created PCI DSS, a standard that mandates specific security practices. This is why you rarely see breaches of the card networks or issuers themselves, and why most fraud happens outside the systems that actually process card payments (e.g., phishing attacks that capture your card info, or a breach of a store that already has it).

    This is a HUGE oversight, and one that will lead to incidents like this happening over and over unless it becomes unprofitable for companies not to care.