Hold up. To the best of our knowledge, ChatGPT isn't trained on the behavior of HR departments - or really, on much real-world behavioral data at all. It's trained on books, Wikipedia, Reddit, and so on.
Even if your assertion that "hiring departments are well known for discriminating" is true, ChatGPT's bias is independent of that; it comes from casual human behavior on social media, not corporate malevolence.
We really have no idea what the training data is, or how the black box of training integrated that data. Perhaps a subreddit or other forum where hiring managers encouraged each other's biases ended up being weighted heavily. The problem is we don't know. But whatever the input, the output is less useful; that much is clear.