Do you know of anything remotely near that kind of problem in practice? Paperclip maximiser problems seem to me like fantasy at this point. I don't think we're anywhere near close enough to that problem to sensibly work on avoiding it; I think it's a bit like trying to work on the problem of lunar city overcrowding.
There won't be anything practical, but that's kinda the point. This class of problems posits a sudden tipping point past which things get out of control, so by necessity you have to think about it in advance. That's what makes it such a hard problem, and why views on it are so polarized (AI will doom us all vs. AI utopia).