

I have read some analysis suggesting that right-wing propaganda gets the most engagement when there are liberals in the community to provide the “liberal tears”. Yes, there is a core group happy to be in an echo chamber with only imagined liberal tears, but the majority find substitutes unsatisfying. Potentially the diminishing volume of non-right content will also diminish the right content by making the comments less interesting.
I hope the AI-chat companies really get a handle on this. They are making helpful sounding noises, but it’s hard to know how much they are prioritizing it.
OpenAI has acknowledged that its existing guardrails work well in shorter conversations, but that they may become unreliable in lengthy interactions… The company also announced on Tuesday that it will try to improve the way ChatGPT responds to users exhibiting signs of “acute distress” by routing conversations showing such moments to its reasoning models, which the company says follow and apply safety guidelines more consistently.
I’ve occasionally been part of training hourly workers on software new to them. Having really, really detailed work instructions and walking through all the steps with them the first time has helped me win over people who were initially really opposed to the products.
My experience with salaried workers has been that they are more likely to try new software on their own, but if they don’t have much flexible time they usually choose to keep the established, less efficient routine rather than invest the one-time learning and setup effort needed to start a new, more efficient one. Myself included: I have for many years been aware of software my employer provides that would reduce the time I spend on regular tasks, but I know the learning curve and setup run to dozens of hours, and I haven’t carved out time to do that.
So to answer the question, neither. The problem may be neither the software nor the users, but something else about the work environment.
The skills of both writing useful minutes and reliably prioritizing actually sending them out are frustratingly rare. An average meeting with five or six people has even odds of not including someone with both of those skills. I can see where reliably having a mediocre AI summary might be an advantage over sometimes having superb human-written minutes and sometimes having nothing.
Different people and relationships can have different solutions that work for them. That’s OK!