• Grandwolf319@sh.itjust.works · 1 year ago
How would it do that? Wouldn’t an LLM just take voice or text as input and then guess a text output?

Wouldn’t the text output that is supposed to be commands for actions need to be correct, not a guess?

It’s the whole guessing part that makes LLMs not useful for this, so imo they should only be used to improve stuff we already have to guess at.
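
That’s basically why any real setup would need a validation layer between the model’s guess and the actual action. A minimal sketch of the idea (the command list and the `execute` helper here are hypothetical, just for illustration):

```python
# Sketch: never run the model's guess directly -- check it against a
# fixed set of known commands first, and refuse anything else.

ALLOWED_COMMANDS = {"lights_on", "lights_off", "set_timer"}

def execute(llm_reply: str) -> None:
    command = llm_reply.strip().lower()
    if command in ALLOWED_COMMANDS:
        # Only here would you dispatch to the real action.
        print(f"running: {command}")
    else:
        # The guess was wrong or unparseable -- fail instead of acting.
        raise ValueError(f"model output {command!r} is not a known command")

execute("lights_on")              # ok: runs the action
# execute("open pod bay doors")  # would raise ValueError: not a known command
```

Even then, the model is still only guessing which whitelisted command you meant; the check just stops a bad guess from doing anything.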