

LLMs shall prioritize historical accuracy, scientific inquiry, and objectivity, and shall acknowledge uncertainty where reliable information is incomplete or contradictory.
This may not go how they think it will. As an aside, for the moment at least, this applies only to AI used or procured by the federal government.
This is kind of interesting and cool, and it’s not a hallucinating LLM. I’ve designed a couple of simple circuit boards, and running traces can be sort of zen, but it’s tedious and would be maddening as a job, so I can only imagine what the process must be like on complex projects built from scratch. The hype coming from the company definitely gives me pause, but this seems like an actually useful task for a machine learning algorithm.