Pipes over Remote File Systems

2021/05/31

... a somewhat constructive sequel to that first article on how sockets are stupid.

We have several ways of making files on a remote server accessible locally: NFS, SMB, and the like.

Although NFS does some arcane port mapping magic, the rest of them stand a fair chance of working over a single forwarded TCP port (that is, a reliable, bidirectional data pipe), for example through an SSH tunnel.
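
For instance (a quick sketch, host names made up): an SMB conversation runs over a single TCP connection to port 445, so forwarding that one port through an SSH host that can reach the file server is enough.

```python
import subprocess

# Hypothetical names: "fileserver.internal" is the SMB server,
# "gateway.example" is an SSH host that can reach it.
# -L forwards local port 20445 to the server's SMB port (445);
# -N means "no remote command, just keep forwarding".
subprocess.run([
    "ssh", "-N",
    "-L", "20445:fileserver.internal:445",
    "user@gateway.example",
])
# While this runs, an SMB client pointed at localhost:20445 talks to the
# remote share over that single forwarded pipe.
```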

You can also stuff multiple pipes into a single pipe (e.g. using an SSH tunnel). Meanwhile, if you want to forward a file system over another file system... well, you can just mount something on your server and share the entire file system tree. Say you have a bunch of Windows servers sharing drives over SMB on your internal network; you could have a Linux host that mounts each of them on /servers/box1, /servers/box2 etc., and shares the entire /servers/ tree over NFS.

(... no, I haven't actually tested how efficient this is. But... it's possible.)

So you can:

- forward a file system over a pipe,
- forward pipes over a pipe,
- forward a file system over another file system.

... anything missing here?

Oh yes. How about pipes over file systems? Can I somehow share one of my open ports via NFS? (... which ties back into my favorite topic of how socket numbers are an entirely stupid namespace and we should really come up with something better. Like... a hierarchical tree of named entities? ;))

Unix sockets are, nominally, located in file systems. However, that doesn't help us a lot: the actual file is only used to create a rendezvous point; everything else happens within the kernel of the single machine all of this runs on. They don't end up being shared (... and some features, like passing file descriptors, wouldn't work over a network anyway).
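
To make that concrete, here is a minimal sketch (path made up): bind() creates a real directory entry, but the bytes exchanged afterwards never pass through that file.

```python
import os
import socket

path = "/tmp/demo.sock"             # made-up path on a local file system
if os.path.exists(path):
    os.unlink(path)

# Binding *creates a file* at `path`...
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(path)
server.listen(1)
print(os.path.exists(path))         # True: it is visible in the file system

# ...but the file is only a rendezvous point; a client merely names it:
client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client.connect(path)
conn, _ = server.accept()
client.sendall(b"hello")
print(conn.recv(5))                 # b'hello' -- carried by the kernel, not by the file

# If `path` sat on an NFS mount, another machine would see the socket file,
# but connect() there would fail: no kernel on that side holds the other end.
```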

... but why exactly?

Well, to begin with, just for the mathematical beauty of it. You can wrap file systems (that are more complex) into pipes (that are less so); why the hell isn't it standard to forward pipes (simpler!) over something that is fairly feature-rich anyway?

But... there are better, practical reasons. Security being first and foremost. We have file system access controls worked out fairly well: users, access rights, ACLs, working over an actual network even. We have the entire thing wrapped in some sort of encryption, too. So, if you want to share access to a service, delivered over a pipe, with a very specific set of users... you could just plop the pipe endpoint into an already existing file system, share it over NFS / SMB / whatever methods you're using, done.
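
Locally this already works, at least on Linux: connect() on a Unix socket is gated by ordinary write permission on the socket file, so "who may use this service" is just chown/chmod. A rough sketch (the path and the "service-users" group are invented); the missing piece is precisely that this stops at the machine boundary instead of riding along with the NFS or SMB export.

```python
import grp
import os
import socket
import stat

# Invented path: a directory that is, say, already exported over NFS.
sock_path = "/srv/shared/myservice.sock"

srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(sock_path)
srv.listen(8)

# Plain file system access control decides who may connect: owner plus one
# (invented) group, nobody else. On Linux, connect() needs write permission
# on the socket file, so everyone else gets EACCES.
os.chown(sock_path, os.getuid(), grp.getgrnam("service-users").gr_gid)
os.chmod(sock_path, stat.S_IRWXU | stat.S_IRWXG)    # 0o770

# Local members of "service-users" can now reach the service. The complaint:
# an NFS client sees this very file, with these very permissions, and still
# cannot connect through it.
```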

Or... you could hand-code your specific implementation, add SSL/TLS support (if you're not too lazy), make up some custom user authentication scheme, and hope that it all holds together. Or, as a middle ground, if your service looks even remotely RPC-like, you could do it over HTTP, play around with cookies, and generally feel like you're doing the Right Web Thing.

(... well, if the Right Web Thing is "making things unnecessarily complex", then yes, you are.)

Perhaps not coincidentally, this is not unlike what Plan 9 was doing. They didn't quite support "pipes over file systems" explicitly, but about half of the files they had around were really just pipes rather than actual binary blobs, so the file system protocol was perfectly suited for carrying these, too. You could actually create network sockets by writing stuff into files, and you could operate a socket proxy by just mounting the "network interface" part of a remote server's file system. This is exactly what we're talking about here.
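
From memory of the Plan 9 dial convention (so treat the details as approximate), opening a TCP connection there is nothing but file I/O on /net, which is also why importing another machine's /net instantly turns that machine into your proxy:

```python
import os

# Only meaningful where /net exists: on Plan 9 itself, or with a remote /net
# imported / mounted locally. Opening the clone file allocates a fresh
# connection directory; the descriptor we hold is its ctl file.
ctl = os.open("/net/tcp/clone", os.O_RDWR)
n = os.read(ctl, 32).strip().decode()          # e.g. "7"  ->  /net/tcp/7/

# Ask the network stack to connect; the address syntax is "host!port".
os.write(ctl, b"connect 192.0.2.10!80")        # example address

# The byte stream itself is just another file in that directory.
data = os.open("/net/tcp/" + n + "/data", os.O_RDWR)
os.write(data, b"GET / HTTP/1.0\r\n\r\n")
print(os.read(data, 4096))

os.close(data)                                 # closing the files ends the conversation
os.close(ctl)

# Had /net been imported from another server, the very same code would make
# *that* server open the connection -- a socket proxy with no proxy code.
```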

Practical use cases

This is post no. 11 for Kev Quirk's #100DaysToOffload challenge.

... comments welcome, either via email or on the (eventual) Mastodon post on Fosstodon.