And CORBA/IDL was doing exactly the same thing 20 years prior to that.
We get tired of things because they accumulate cruft, or are deemed "ugly" by younger developers. So we replace them with newer alternatives that are lighter and easier to reason about for newbies entering the profession. But then we eventually find that we needed more features after all, so we gradually re-implement them until the cycle repeats. The industry wheel just keeps on spinning...
I think this is a bit of an oversimplification. The modern approach to RPC is very different from CORBA or even SOAP/XML.
CORBA was designed around the idea of distributed objects. The core idea was that you have a reference to an object but you don't know (or care) if the object lives in your address space or on a remote computer somewhere. When you make a "remote procedure call", CORBA tries to make it behave as if it were just a regular function call. The call would block the thread until it completed, and any communication errors would be marshaled into some kind of language exception.
It turns out that RPCs are different from regular function calls in a lot of ways. Trying to make them the same just makes things overall more complicated and less flexible. Also, making "remote objects" stateful creates a lot of problems for little benefit.
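To make that concrete, here is a toy sketch (plain Python, not real CORBA) of the stub pattern an IDL compiler would generate: the caller sees an ordinary object, but every method call blocks on the wire, and transport failures come back as an ordinary language exception. All the names, the endpoint, and the wire format are invented for illustration.

```python
import pickle
import socket


class RemoteCallError(Exception):
    """Transport failure surfaced as an ordinary language exception."""


class RemoteStub:
    """Proxy that makes remote calls look like local method calls."""

    def __init__(self, host: str, port: int):
        self._addr = (host, port)

    def __getattr__(self, method: str):
        def call(*args):
            try:
                with socket.create_connection(self._addr, timeout=5) as sock:
                    sock.sendall(pickle.dumps((method, args)))  # marshal the request
                    return pickle.loads(sock.recv(65536))       # block for the reply
            except OSError as exc:
                raise RemoteCallError(str(exc)) from exc
        return call


# The call site is indistinguishable from a local call, which is exactly
# the problem: latency, timeouts, and partial failure are all hidden
# behind what looks like an innocent method call.
# account = RemoteStub("bank.example.com", 9000)   # hypothetical endpoint
# balance = account.get_balance("acct-42")
```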
So XML/SOAP did away with these ideas. Instead of being designed around remote object references, it was designed around request/reply to a network endpoint. No statefulness was designed into the protocol, though of course it could be layered on top by enclosing your own identifiers in the request data.
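Here's a minimal sketch of that request/reply shape (a hand-rolled SOAP 1.1-style envelope over plain HTTP; the endpoint and operation names are made up): one POST to a fixed endpoint, no server-side session, and the only "state" is the OrderId the caller encloses in the body.

```python
import urllib.request

ENVELOPE = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetOrderStatus xmlns="http://example.com/orders">
      <OrderId>%s</OrderId>
    </GetOrderStatus>
  </soap:Body>
</soap:Envelope>"""


def get_order_status(order_id: str) -> bytes:
    """One stateless request/reply; the identifier travels in the body."""
    req = urllib.request.Request(
        "https://api.example.com/soap",  # hypothetical endpoint
        data=(ENVELOPE % order_id).encode("utf-8"),
        headers={
            "Content-Type": "text/xml; charset=utf-8",
            "SOAPAction": "GetOrderStatus",
        },
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.read()  # the XML reply; no session to tear down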
But SOAP was based around XML, which was never really designed to be an object serialization format. Compared to alternatives like Protocol Buffers, XML is big, slow, and not a clean mapping to the kinds of data structures you use in programming languages. Protocol Buffers are a much better match to this problem. (More at: https://developers.google.com/protocol-buffers/docs/overview...)
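For comparison, here is what such a record looks like in Protocol Buffers IDL (field names and numbers invented for illustration). The compiler maps this directly onto a native class or struct in each target language, and the binary wire format carries only field numbers and values, rather than repeating tag names the way XML does.

```proto
syntax = "proto3";

// One message definition generates a native data type in each
// supported language; on the wire, only the field numbers (1, 2, 3)
// and their values are sent.
message Order {
  string order_id    = 1;
  int64  total_cents = 2;
  bool   paid        = 3;
}
```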
My point is that these new technologies aren't just repeats; there are real improvements that justify inventing something new.
Well, I would guess the difference between SOAP and gRPC is that SOAP was developed as a standard, while gRPC became one (or is becoming one).
Also, the biggest difference is that SOAP had like a trillion implementations which all worked kinda differently: code generation, etc.
gRPC mostly does not have this problem because there is basically only one client implementation, managed by Google (now the CNCF).
Also, in SOAP you basically built your server first, because writing a WSDL from scratch is like... awkward. The IDL of gRPC is simple enough that you can actually start without any implementation at all. And as a bonus it works way better if you need to add/change fields.
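For instance, a complete gRPC IDL sketch (all names invented for illustration) is small enough to write before a single line of server code exists, and field evolution is built in:

```proto
syntax = "proto3";

// Enough to generate client and server stubs in any supported
// language, with no server written yet.
service OrderService {
  rpc GetOrderStatus (StatusRequest) returns (StatusReply);
}

message StatusRequest {
  string order_id = 1;
}

message StatusReply {
  string status = 1;
  // Added later under a fresh field number: old readers skip fields
  // they don't know about, and new readers see the default ("") when
  // the field is absent, so neither side breaks.
  string tracking_url = 2;
}
```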
> But then we eventually find that we needed more features after all, so we gradually re-implement them again until the cycle repeats.
If the protocols and standards were designed in lock-step with concrete implementations, I'd agree with you.
But too much of SOAP, CORBA, yada-yada was designed _before_ any implementation occurred. So they are nasty and cruft-filled long before even version 1.0.
Protocol Buffers ain't perfect, but they've been widely deployed and heavily battle-tested, so their cruft-to-usefulness ratio remains tolerably low.
SOAP was overdesigned and yet somehow still underspecified at the same time. You could write two implementations that both followed the specs religiously and still could not interop at all (the rpc/encoded vs. document/literal binding styles were a classic culprit).
It's hard to overstate how crappy working with SOAP really was. I think as the industry matures we really will see serialization formats and protocols stabilize; we've already seen a bit of that with JSON.
I actually think the design of gRPC must have taken a great deal of effort. The project offers a scalable solution with interfaces simple enough that smaller teams have been able to adopt it quickly. I admire that very much!
No love for Open Network Computing (ONC) Remote Procedure Call (RPC)/XDR? (The RPC in rpcd for NFS.)
In all seriousness, gRPC and protobuf aren't bad. Not sure I'm sold on the HTTP/2 transport, but at least it has somewhat reasonable support for crypto.
I was bummed out waiting for the actual RPC part to become usable, and now I think I'd rather build on Cap'n Proto. But really, if we can just get some standardization that's better than JSON/SOAP, I'm willing to have another look.
If I never have to base64-encode an image or other binary blob to fit it into an API request again, it'll be too soon. Or debug another invalid-deserialization error.
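The overhead is easy to demonstrate (a quick sketch; the random blob stands in for a real image): base64 inflates the payload by roughly a third before JSON quoting even starts.

```python
import base64
import json
import os

image = os.urandom(300_000)  # stand-in for a ~300 kB image

encoded = base64.b64encode(image).decode("ascii")
payload = json.dumps({"filename": "cat.png", "data": encoded})

print(len(image))    # 300000 bytes of actual data
print(len(payload))  # ~400000 bytes on the wire: roughly 33% larger,
                     # plus an encode/decode pass on both ends
```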