The student isn't an idiot; they'd use what the teacher says as their ground truth, and ChatGPT would supplement their understanding. If it's wrong, they didn't understand it anyway, and reasoning/logic would allow them to suss out any incorrect information along the way. The teaching model can account for this by providing them with checks to ensure their explanation/understanding is correct. (This is what tests are for: to check your understanding.)
How is someone who is learning something supposed to figure out if what chatgpt is saying is bullshit or not? I don't understand this.
It's a kind of Gell-Mann amnesia effect. When I ask it a question I know the answer to (or at least know enough to tell if the answer is wrong), it fails miserably. Then I turn around and ask it something I don't know anything about and... I'm supposed to take it at its word?
You have what the teacher has told you as your primary correct reference point (your ground truth). The LLM's answer should align with that; if it doesn't, the LLM is wrong.
Obviously the gaps in between are where the issues would be, but as I say, the student can think this through (most lessons build on previous foundations, so they should understand the fundamentals and won't be flying blind).