Imagine if LLMs had remained mostly an academic interest for just a few years longer than they did before going commercial. How many issues could’ve been worked out by researchers and engineers with an eye toward scientific advancement rather than monetization?
Imagine if AI models were trained exclusively on peer-reviewed datasets, each one specialized in a single discipline, and maybe others specialized in interdisciplinary studies.
They might not be able to synthesize new ideas due to their fundamental architecture, but they could at least streamline certain tasks like literature reviews and metadata collation. They could provide sanity checks before submission for review. Other machine learning models could even perform more complex data analysis tasks than LLMs are capable of.
But no, instead we have Artificial Idiocy injected into everything, deepfakes and disinformation proliferating, and people going crazy from using chatbots to replace therapy…