The unlimited downside comes from how people use the tool and what they do with the results, rather than from how the tool itself works.
If a hiring manager uses a tool poorly and trusts the bad information that comes out the other end, that's on them, not the tool.
That said, I am very concerned with how quickly and blindly we're trying to develop and use AI. LLMs used for narrow tasks like filtering results carry at least somewhat limited risk, but what isn't limited is all the dangerous things we can do as we learn how well or poorly these algorithms handle those tasks.