That has sorta been my experience so far. LLMs are great at producing output as long as the quality of the output doesn’t really matter. Maybe there are a lot more tasks than I realize where this is the case - in my work there are not many.
This is the entire point of an LLM: it creates something that has the right 'shape,' statistically, of what you ask for, but the content is not guaranteed to be accurate, true, appropriate, or up to date.
So, if a random person asks for a legal document and receives something that "looks right," it is very impressive to them because they can't see the flaws that a professional or expert would see. And for some applications that's Good Enough, but it's nowhere near being PhD-level smart.