Title says it all. EDIT to clarify: I'm just curious what kinds of setups everyone has for their own situations vs. looking for answers on what to do in my own.

I'm currently using a barebones file server to move code between a couple of local machines (one Windows, one Linux).
> Title says it all.

The title doesn't really say it all. It depends.

How much disk space are we talking about? More, or less, than will fit on Dropbox?

Do you have one person with accounts on both machines? Or is this sharing between different people?

You mention code, and moving it. Do you really mean "move", or do you mean something closer to copy, or sync? rsync, maybe? Or stick the code in git, put it in a GitHub project, clone onto each machine, and then use GitHub to keep them in sync?

Maybe explain the specifics more and you might get some better answers.

Personally I lean heavily on GitHub and Dropbox (for different use cases). But I also sometimes use scp for copying files, and would use rsync if I had to sync files.
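For the sync case, here's a minimal rsync sketch, assuming SSH access between the machines (the hostname and paths are placeholders):

    # Mirror a local project directory to the file server over SSH.
    # -a preserves permissions/timestamps, -v is verbose, -z compresses in transit.
    # --delete makes the destination an exact mirror (removes files deleted locally).
    # The trailing slash on the source copies the directory's contents,
    # not the directory itself.
    rsync -avz --delete ~/projects/myapp/ user@fileserver:/srv/code/myapp/

Run it again after changes and only the deltas get transferred, which is the main advantage over scp for repeated copies.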
I use Syncthing [1]; it is open source and, in my opinion, a great way to keep your files synced across multiple computers. However, I don't think it is the best option for keeping code synced. I generally use private git repositories on Bitbucket or GitHub (on Bitbucket you don't have to pay for private repos, whereas on GitHub you do unless you are a student [2]).

[1] https://syncthing.net

[2] https://education.github.com/pack
For code between machines, Git. Private repositories on Bitbucket, because way back in 2013 GitHub did not provide free private repositories; public repositories on GitHub. I've got a back-burner project to run Gogs on an RPi, but it hasn't happened yet.

For ordinary files between Windows and Linux, FAT32 partitions/disks. This means I can drag and drop graphically or use command-line tools. Often I will use an SD card or thumbdrive when the transfer is tens of GB, because breaking it into asynchronous steps means not having to babysit an end-to-end pipeline, and there are fewer steps to redo if something goes wrong.
In that scenario I'd use git, and have a bare repo (`git init --bare`) on the file server for each of the client machines to use as a remote. Strictly speaking you could use git and just push code from one machine to the other, but having a central server is quite handy if you want to do things like CI or deployments.
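A minimal sketch of that setup, with placeholder hostname and paths:

    # On the file server: create a bare repository (no working tree).
    git init --bare /srv/git/myproject.git

    # On the first client: point an existing repo at the server and push.
    git remote add origin user@fileserver:/srv/git/myproject.git
    git push -u origin master

    # On the other client: clone from the same remote.
    git clone user@fileserver:/srv/git/myproject.git

This works over plain SSH with no server-side git daemon; each machine then pushes and pulls through the file server as an ordinary remote.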