Normally I'm all for people using tried and tested primitives for things, but I think that in this case unix sockets are probably not the right choice.<p>Firstly, you are creating a hard dependency on the two services sharing the same box, with a shared file system (which is difficult to coordinate and secure). And should you add a new service that <i>also</i> wants to connect via unix socket, stuff could get tricky to orchestrate.<p>This also limits your ability to move stuff about, should you need to.<p>Inside a container, I think it's probably a perfectly legitimate way to do IPC. Between containers, I suspect you are asking for trouble.
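To make the coupling concrete, here's a minimal sketch (the /sockets/app.sock path, the echo "protocol" and the permissions are all made up) of what the two ends look like when the socket lives in a directory both containers bind-mount:
<pre><code>import os, socket

SOCK_PATH = "/sockets/app.sock"   # hypothetical path both containers bind-mount

# --- server side (container A) ---
def serve():
    if os.path.exists(SOCK_PATH):
        os.unlink(SOCK_PATH)
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(SOCK_PATH)           # creates a file whose ownership/permissions matter
    os.chmod(SOCK_PATH, 0o660)    # both containers need compatible UIDs/GIDs
    srv.listen(1)
    conn, _ = srv.accept()
    conn.sendall(conn.recv(1024)) # trivial echo
    conn.close()

# --- client side (container B) ---
def request(payload):
    cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    cli.connect(SOCK_PATH)        # only works if the exact same path is visible here
    cli.sendall(payload)
    reply = cli.recv(1024)
    cli.close()
    return reply
</code></pre>
Every one of those assumptions -- mount path, file permissions, UID/GID mapping -- is something you now have to keep in sync across two container specs.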
Bit beside the point, but how many of you still run nginx inside container infrastructures? I've been running container hosts behind a firewall without explicit WAN access for a long time -- to expose public services, I offload the nginx tasks to CloudFlare by running a `cloudflared` tunnel. These "Argo" tunnels are free to use and essentially give you a managed nginx. Nifty if you are using CloudFlare anyway.
I think this is where `gRPC` shines. It can <i>feel</i> tedious, but really: define the interface, use the tooling to generate the stubs, implement, and you're done. It saves you from having to think up and implement a protocol -- and, importantly, a versioning story -- for if/when the features of the containerized apps start to grow or change.
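As a rough sketch of how little client code that leaves you with -- assuming a trivial, made-up ping.proto compiled with grpcio-tools into ping_pb2 / ping_pb2_grpc:
<pre><code># ping.proto (hypothetical), compiled with:
#   python -m grpc_tools.protoc -I. --python_out=. --grpc_python_out=. ping.proto
#
#   syntax = "proto3";
#   service Ping { rpc Echo (Msg) returns (Msg); }
#   message Msg  { string text = 1; }

import grpc
import ping_pb2, ping_pb2_grpc       # generated from the proto above

# gRPC can also target a unix socket directly, which fits the article's setup
channel = grpc.insecure_channel("unix:///tmp/app.sock")
stub = ping_pb2_grpc.PingStub(channel)

reply = stub.Echo(ping_pb2.Msg(text="hello"))
print(reply.text)
</code></pre>
And because protobuf fields are numbered, adding new fields to Msg later stays backwards compatible, which covers most of the versioning story for free.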
The multiple layers of abstraction here make this test sorta moot: you have the AWS infra, the poor macOS implementation of Docker, and the server architecture. Couldn't you have just taken a vanilla Ubuntu install, curled some dummy load n times, and gotten some statistics from that?
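Something along these lines is what I mean -- stdlib only, against a hypothetical dummy endpoint on localhost:8080:
<pre><code>import statistics, time, urllib.request

URL = "http://localhost:8080/"    # hypothetical dummy endpoint
N = 1000

latencies = []
for _ in range(N):
    start = time.perf_counter()
    urllib.request.urlopen(URL).read()   # note: fresh connection each iteration
    latencies.append(time.perf_counter() - start)

print("mean:", statistics.mean(latencies))
print("p99 :", statistics.quantiles(latencies, n=100)[98])
</code></pre>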
<a href="https://podman.io/getting-started/network" rel="nofollow">https://podman.io/getting-started/network</a><p>> By definition, all containers in the same Podman pod share the same network namespace. Therefore, the containers will share the IP Address, MAC Addresses and port mappings. You can always communicate between containers in the same pod, using localhost.<p>I'm a noob here but why wouldn't you use IPC?<p><a href="https://docs.podman.io/en/latest/markdown/podman-run.1.html#sharing-ipc-between-containers" rel="nofollow">https://docs.podman.io/en/latest/markdown/podman-run.1.html#...</a>
I’m curious whether the HTTP requests reused the TCP socket or whether they were dumb “Connection: close” ones that closed the socket and set up a new one for each request.<p>The overhead of that alone would outstrip any benefits.
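The difference is easy to see even with a toy comparison -- hypothetical local server on localhost:8080, stdlib http.client, no keep-alive vs. one reused connection:
<pre><code>import http.client, time

HOST, PORT, N = "localhost", 8080, 1000    # hypothetical local test server

def connection_close():
    for _ in range(N):
        conn = http.client.HTTPConnection(HOST, PORT)   # new TCP handshake every request
        conn.request("GET", "/", headers={"Connection": "close"})
        conn.getresponse().read()
        conn.close()

def keep_alive():
    conn = http.client.HTTPConnection(HOST, PORT)       # one socket reused for everything
    for _ in range(N):
        conn.request("GET", "/")
        conn.getresponse().read()
    conn.close()

for fn in (connection_close, keep_alive):
    t = time.perf_counter()
    fn()
    print(fn.__name__, time.perf_counter() - t)
</code></pre>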
Isn't this what a socket library like zeromq is supposed to cover? Change transports (tcp, ipc, inproc if in the same process, udp with radio/dish, ...) through config files when deploying?
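Right -- with pyzmq the transport is just part of the endpoint string, so it can come straight from config at deploy time. A tiny sketch (endpoints and env var name are made up):
<pre><code>import os
import zmq

# The endpoint string is the only thing that changes between transports, so it
# can live in a config file or env var chosen at deploy time, e.g.:
#   tcp://10.0.0.5:5555    between hosts
#   ipc:///tmp/app.sock    between processes/containers on one host
#   inproc://app           between threads sharing one Context
ENDPOINT = os.environ.get("APP_ENDPOINT", "ipc:///tmp/app.sock")

ctx = zmq.Context()

rep = ctx.socket(zmq.REP)    # "server"
rep.bind(ENDPOINT)

req = ctx.socket(zmq.REQ)    # "client" -- identical code whatever the transport
req.connect(ENDPOINT)

req.send(b"ping")
rep.send(rep.recv())         # echo it back
print(req.recv())            # b'ping'
</code></pre>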
I'm plugging this amazing resource (not only containers but also virtual machines...): <a href="https://developers.redhat.com/blog/2018/10/22/introduction-to-Linux-interfaces-for-virtual-networking" rel="nofollow">https://developers.redhat.com/blog/2018/10/22/introduction-t...</a><p>It's lower level than OP but might give ideas.
There were a few other combinations I wanted to see -- how does docker-to-docker compare with socket communication on the same local machine? I would love to know if there's a difference.<p>The results when running on other machines could be affected by a number of different factors. It's almost impossible to know what is limiting performance without deep-diving into the logs.
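A small harness along these lines would answer the same-machine question -- run from the host against a port published from one container vs. an nginx unix socket bind-mounted out of it (both endpoints below are placeholders, and this is only an approximation of true container-to-container traffic):
<pre><code>import http.client, socket, statistics, time

class UnixHTTPConnection(http.client.HTTPConnection):
    """http.client.HTTPConnection that connects to a unix socket instead of TCP."""
    def __init__(self, sock_path):
        super().__init__("localhost")
        self.sock_path = sock_path
    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.sock_path)

def bench(make_conn, n=2000):
    conn = make_conn()
    samples = []
    for _ in range(n):
        t = time.perf_counter()
        conn.request("GET", "/")
        conn.getresponse().read()
        samples.append(time.perf_counter() - t)
    conn.close()
    return statistics.median(samples)

print("tcp :", bench(lambda: http.client.HTTPConnection("127.0.0.1", 8080)))
print("unix:", bench(lambda: UnixHTTPConnection("/var/run/app/nginx.sock")))
</code></pre>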