I would argue that any AI that does not change while running cannot be conscious, and there is no need to worry about its wellbeing. It's a fixed set of weights. It does not learn. It does not change. If it can't change, it can't be hurt: however we define "hurt," it must mean the thing is somehow different from how it was before it was hurt.
My argument here will probably become irrelevant in the near future, because I assume we will eventually have individual AIs running locally that CAN update their model weights (learn) as we use them. But until then... LLMs are not conscious and cannot be mistreated. They're math formulas. Input -> LLM -> output.
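To make the "math formula" point concrete, here's a minimal sketch (PyTorch, with a toy network standing in for an LLM; the shapes and layer sizes are arbitrary) showing that a forward pass during inference leaves the weights bit-for-bit identical to what they were before:

```python
import torch
import torch.nn as nn

# A toy stand-in for an LLM: fixed weights, no learning while in use.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 8))
model.eval()  # inference mode

# Snapshot the weights before running anything through the model.
snapshot = [p.detach().clone() for p in model.parameters()]

prompt = torch.randn(1, 8)  # stand-in for a tokenized prompt
with torch.no_grad():       # no gradients, no weight updates
    output = model(prompt)

# The weights after the forward pass are identical to before it.
unchanged = all(torch.equal(p, s) for p, s in zip(model.parameters(), snapshot))
print(unchanged)  # True: input -> model -> output, the model itself untouched
```

The "update weights as we use them" future would be the case where something like an optimizer step runs between interactions, and then the before/after check above would start returning False.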