I saw this in a few of my friends' companies. I've asked everyone how storing data in Azure is better than using the Azure ChatGPT API, and so far I haven't gotten a good answer.
Still, it makes sense: if your company is serious about data protection in the first place, it isn't just "using Azure", it has a contract with guarantees. Guarantees that don't carry over to randos using Azure just because it's Azure.
This. If you don't trust your provider when it says it won't use the data you embed in your LLM API calls, why do you trust it with any of its other services?
Because those other services are built with clear expectations of tenant isolation, and cross-tenant data leakage would be a near-fatal event.
But the models behind these AI tools have a single-tenant core, with tenant isolation bolted on as a heroic effort to fake what the technology does not support by default.