I'm not sure there's an obvious way to (ab)use RT cores for something not directly RT-related. I don't know about DXR, but the Vulkan RT API is rather high-level. You never see RT cores directly; you can only produce opaque Acceleration Structure objects, handles to which can be used as input to some shader functions. The way these ASes are built is rather rigid, and you never see what's inside (i.e. there's no actual guarantee that they're implemented as a BVH of sorts). Shaders are also directly RT-specific, i.e. you can only ask questions like "what will I hit if a ray is cast from this point in that direction?". There's no API to e.g. traverse an AS manually, although you can get your hands on non-nearest hits if you want (e.g. for transparency).
You can sensibly use RT APIs together with raymarching. There is a notion of custom geometry that's specified using bounding boxes. When a ray hits one, it's your responsibility to compute the exact intersection manually in the shader. This can definitely help with complex scenes containing a lot of very different SDF objects that don't intersect much.
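To make the custom-geometry flow concrete, here's a rough CPU-side sketch in Python (all names hypothetical) of what the hardware and your intersection shader split between them: the AS gives you cheap ray-vs-bounding-box hits, and you then sphere-trace the SDF only inside the box that was hit. On a real GPU the second half would live in a Vulkan intersection shader; this is just the idea, not the API.

```python
import math

def ray_aabb_hit(origin, direction, box_min, box_max):
    """Slab test: does the ray enter the AABB, and at what distance?
    This is roughly the part the RT hardware does for you."""
    t_near, t_far = 0.0, math.inf
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-9:
            if o < lo or o > hi:
                return None  # ray parallel to the slab and outside it
        else:
            t0, t1 = (lo - o) / d, (hi - o) / d
            if t0 > t1:
                t0, t1 = t1, t0
            t_near, t_far = max(t_near, t0), min(t_far, t1)
            if t_near > t_far:
                return None
    return t_near  # parametric distance where the ray enters the box

def sphere_sdf(p, center=(0.0, 0.0, 5.0), radius=1.0):
    """Example SDF: a sphere. Any SDF would do here."""
    return math.dist(p, center) - radius

def intersect_sdf(origin, direction, sdf, t_start, max_steps=128, eps=1e-4):
    """Sphere-trace the SDF from the box entry point onward.
    This is the part you'd write in the intersection shader."""
    t = t_start
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = sdf(p)
        if d < eps:
            return t  # close enough: report the hit distance
        t += d
    return None  # no surface within the step budget

# Ray down +z; box wraps the sphere, so marching starts only at z = 4.
origin, direction = (0.0, 0.0, 0.0), (0.0, 0.0, 1.0)
box_min, box_max = (-1.0, -1.0, 4.0), (1.0, 1.0, 6.0)
t_enter = ray_aabb_hit(origin, direction, box_min, box_max)
t_hit = None
if t_enter is not None:
    t_hit = intersect_sdf(origin, direction, sphere_sdf, t_enter)
```

The payoff is in `t_start`: marching begins at the box boundary rather than at the camera, so a scene full of small, scattered SDFs skips the empty space between them entirely.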
But yeah, in that case you'd still have to pay the Vulkan price first. I might experiment with what the size-cheapest way to get vk rendering could be.