AWS provides a lot of features that are exclusive to their platform and can't be drop-in replaced on other providers like Azure or GCE: ELB, EFS, S3, ASGs, etc. They'd need to be replaced at the application level on other platforms. That could be a huge commitment for a decent-sized system.
- S3 has a public protocol and many third-party providers support it (OpenIO, Scality, Ceph, Minio, etc.).
- EFS could be replaced with something like DRBD or GlusterFS, or with DigitalOcean's block storage or Google Cloud's persistent disks.
- ELB could be easily replaced with similar services from other providers [1] if you use Kubernetes (though I don't know if they all support the LoadBalancer Service type).
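For context, the LoadBalancer Service type is just a declarative request: the cluster asks its cloud provider for a load balancer, so the ELB-specific details stay out of your application. A generic sketch of such a manifest (names and ports are illustrative):

```yaml
# Illustrative Kubernetes Service: on AWS this provisions an ELB, on
# other providers their equivalent -- the manifest itself is portable.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```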
I would be more concerned about firewall/VPC rules, because I have no idea how those could be migrated without the risk of forgetting some. Lock-in doesn't seem that high in the end though, and even less so if you use an open source container orchestration stack, because they abstract most of these things away.
One good way is to have automated tests which make sure that those rules actually work. What if all your AWS rules suddenly got deleted? How would you know you hadn't forgotten any?
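A minimal sketch of such a check in Python: diff the rules you expect against what the provider reports. The rule format here is an assumption; in practice `actual` would come from the cloud API (e.g. boto3's `describe_security_groups` on AWS).

```python
# Sketch: detect firewall rules that exist in your spec but not in the
# live environment. Rule dicts below are a simplified, assumed format.
def missing_rules(expected, actual):
    """Return expected rules that are absent from the live environment."""
    return [rule for rule in expected if rule not in actual]

expected = [
    {"port": 443, "cidr": "0.0.0.0/0"},   # public HTTPS
    {"port": 22, "cidr": "10.0.0.0/8"},   # SSH from internal network only
]
actual = [{"port": 443, "cidr": "0.0.0.0/0"}]  # what the API returned

assert missing_rules(expected, actual) == [{"port": 22, "cidr": "10.0.0.0/8"}]
```

Run from CI, a check like this turns "did we forget a rule?" into a failing test instead of a production surprise.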
Terraform (https://www.terraform.io), which we use, is a neat way to abstract this configuration data. We keep the configuration files in git, so we can open GitHub pull requests against our whole infrastructure and apply the configuration with confidence (to change the existing setup or re-create it from scratch). It works with multiple cloud providers and is great for security (all changes are auditable, and there's no configuration drift).
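As a sketch of what those version-controlled files look like, here's a firewall rule expressed as a Terraform resource (the resource name and rule values are illustrative):

```hcl
# Illustrative Terraform resource: a firewall rule as reviewable code.
resource "aws_security_group" "web" {
  name        = "web"
  description = "Allow inbound HTTPS"

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```

`terraform plan` shows the exact diff before `terraform apply` changes anything, which is what makes pull-request review of infrastructure practical.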
Essentially, a declarative configuration for infra is what you're getting at. You can take this further with containers and orchestration tech, abstracting your application behind a declarative configuration and making it infra-agnostic (as it should be, IMO). Obviously, not getting locked into any cloud provider's exclusive services is a prerequisite for this. Check out such an implementation here [1] (full disclosure: I work here).