As I understand it, no. Even the smallest machines with 90% Retrieval Augmented Generation will still produce inaccuracies. Generation is by definition probabilistic, which means error. The difference between calculators and language machines is the difference between certainty and most likely. That is baked in. Humans will always have to verify, because the substrate is really just sophisticated guesswork. Still, learners can be taught to recognize the incredible amplification LMs can provide for thought.
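To make the calculator vs. language-machine contrast concrete, here is a minimal sketch (toy code with hypothetical token probabilities, not any real model's output): a deterministic function always returns the same value, while a sampler over a distribution is only ever "most likely" right.

```python
import random

# A calculator is deterministic: the same input always yields the same output.
def calculate(a: float, b: float) -> float:
    return a + b  # 2 + 2 is always 4

# A language model samples its next token from a probability distribution,
# so even a retrieval-grounded answer is only "most likely" correct.
def sample_next_token(token_probs: dict) -> str:
    tokens = list(token_probs.keys())
    weights = list(token_probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

print(calculate(2, 2))  # always 4.0

# Hypothetical post-retrieval distribution over the next token:
probs = {"1889": 0.94, "1887": 0.04, "1899": 0.02}
print(sample_next_token(probs))  # usually "1889", but not with certainty
```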
Quite insightful, as usual, Terry!
The question is: are there formal systems that map LLMs' ways of compressing and decompressing information against human prompts?
Are open source LLMs making it easier to formalise prompts such that hallucinations are eliminated, rather than just minimised?
Programming languages are quite strict in their grammar compared to natural languages. Could there be a middle ground between the two that would make such a mapping possible?
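One way to picture such a middle ground is a prompt "contract": the request is still natural language, but the reply must fit a formal, machine-checkable shape and is rejected otherwise. The template and checks below are purely a sketch, not an existing tool or API, and as the comments note it can only minimise, not eliminate, hallucination.

```python
import json

# Hypothetical prompt contract: natural-language question, but the reply must
# be JSON that quotes its source verbatim from the supplied context.
PROMPT_TEMPLATE = (
    "Answer ONLY with JSON of the form "
    '{"answer": <string>, "source": <exact quote from the context>}.\n'
    "Context: {context}\nQuestion: {question}"
)

def validate_response(raw: str, context: str):
    """Accept the model's reply only if it parses and its 'source' field is a
    literal substring of the retrieved context (a crude grounding check).
    This can reject fabricated citations, but it cannot prove the answer is
    true, so verification still falls to the human."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if not isinstance(data, dict) or set(data) != {"answer", "source"}:
        return None
    if data["source"] not in context:
        return None  # the claimed quote is not in the context: reject
    return data

ctx = "The Eiffel Tower was completed in 1889."
print(validate_response('{"answer": "1889", "source": "completed in 1889"}', ctx))
print(validate_response('{"answer": "1887", "source": "completed in 1887"}', ctx))  # None
```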