I'm always torn when it comes to LLMs and analytical tasks. When you perform an analytical task, whether it's something simple like assessing the potential risk and impact of a vulnerability or something complex like analyzing an obfuscated malware sample to determine its capabilities, you have to thoroughly go over the data points available to you and corroborate the evidence you're using to reach conclusions. LLMs can help with a lot of this, but you still have to review their reasoning (which is mostly a black box) or retrace their work before you can accept their conclusions.
In other words, even with humans, skills and experience are never enough. They have to show the reasoning behind their conclusions and then show that the reasoning is backed by an independent source of fact. Short of that, you can still perform analysis, but you must clearly state that it is weak and requires more follow-up work and validation.
So with LLMs, I'm torn: they do seem to make your life a lot easier, but does it just feel that way, or are they adding more work and uncertainty in a domain where uncertainty is intolerable?