The author gives very important advice, but takes it to an unhealthy extreme.<p>I fully agree that you should always try to contribute the changes you make to a Free Software project. Not only does this reduce your maintenance work in the long run; it also means your changes will be reviewed by people who know the code you're modifying very well. Contributing thus gets you good quality assurance as well.<p>However, it still makes a lot of sense to keep a local fork <i>in addition to contributing</i>. Here the author argues too one-sidedly when he recommends doing that only for security fixes. There are many other scenarios in which this makes sense:<p>1) The change may be important for you (e.g. to make the code compile on some obscure OS), but not be accepted by upstream (e.g. they don't want to support that OS in the long run).<p>2) The review process, as well as the next release, may take some time. And you certainly don't want to make your own release schedule totally dependent on other projects' release schedules.<p>In the first case, you have no choice but to keep a local fork as long as the project maintainers don't change their mind or provide a better solution.<p>But even in the second case you'll have a long-term fork, at least if you are contributing regularly. That's not a bad thing, because every time the upstream project releases a new version, you can remove some of your (already contributed) local changes from your fork. So yes, you'll have a long-lived fork, but it will only differ from upstream by the last few patches they haven't yet accepted.
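The "contributed patches drop out of the fork on the next release" workflow can be sketched with git, since rebase skips commits whose patch is already upstream. The repository paths and branch names below are hypothetical stand-ins:

```shell
# Sketch: a local patch disappears from the fork once upstream merges
# an equivalent change. Paths under /tmp are hypothetical examples.
set -e
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
cd /tmp && rm -rf fork-demo && mkdir fork-demo && cd fork-demo

# The upstream project with one commit
git init -q -b main upstream
git -C upstream commit -q --allow-empty -m "initial release"

# Your fork carries one local patch on top of upstream
git clone -q upstream fork
cd fork
echo "local fix" > fix.txt
git add fix.txt && git commit -q -m "local fix"

# Upstream later accepts an identical patch (your contribution)
cd ../upstream
echo "local fix" > fix.txt
git add fix.txt && git commit -q -m "merge contributed fix"

# On the next rebase, git detects the patch is already upstream
# (by comparing patch content, not commit messages) and drops it:
cd ../fork
git fetch -q origin
git rebase -q origin/main
git log --oneline | wc -l   # -> 2: same as upstream, no duplicate patch
```

After the rebase the fork's HEAD is identical to upstream, so the fork only ever carries the not-yet-accepted patches.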
Not according to some:<p>"The freedom to run the program means the freedom for any kind of person or organization to use it on any kind of computer system, for any kind of overall job and purpose, without being required to communicate about it with the developer or any other specific entity. In this freedom, it is the user's purpose that matters, not the developer's purpose; you as a user are free to run the program for your purposes, and if you distribute it to someone else, she is then free to run it for her purposes, but you are not entitled to impose your purposes on her."<p>The Free Software Definition
<a href="http://www.gnu.org/philosophy/free-sw.html" rel="nofollow">http://www.gnu.org/philosophy/free-sw.html</a>
As an upstream maintainer, I prefer that people contribute rather than maintain their own forks. It just makes it easier for other users; what if they want a feature that's in user A's repository <i>and</i> a feature that's in user B's repository? It's not their job to figure out how to merge them: I do that so they can focus on adding features instead.<p>But with that in mind, there are plenty of cases where you do need to maintain a proprietary fork: testing new ideas, integrating with internal infrastructure, and so on. This is certainly more difficult than letting someone else maintain the project, but less difficult than being the maintainer yourself. You basically miss out on big refactorings, but it's no different than being a regular user of a library that makes incompatible API changes.
I've managed this with a couple of projects. With one, it was C source with a pretty discrete set of modifications, so I used git to do the leg work - one remote was our own repository, the other was upstream. I can't recall ever seeing a conflict when rebasing from upstream branches.<p>The other project explicitly allows for code modifications and accommodates them in a separate directory you can keep in revision control. Again, I have no memory of major conflicts <i>in stuff we did correctly</i> - i.e. using the provided overlays or callbacks.<p>So, when is it a bad idea? When you step outside certain constraints (like the callbacks example above) and override, replace, or directly modify the project's own code in some significant fashion.<p>So, modifying open source for your own needs is a major plus if the project accommodates it sanely or the scale and nature of the changes are controllable.<p><i>edit</i> Removed "and only if" in the last paragraph; it attempted to preclude people from providing better ideas.
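The two-remote arrangement described above can be sketched like this; the local paths stand in for the real company and upstream servers, and the remote names are arbitrary:

```shell
# Sketch of a two-remote fork: one remote for our repository, one for
# upstream. All paths and names are hypothetical examples.
set -e
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
cd /tmp && rm -rf two-remote-demo && mkdir two-remote-demo && cd two-remote-demo

# A stand-in for the upstream project
git init -q -b main upstream
git -C upstream commit -q --allow-empty -m "upstream release"

# Our working copy: "ours" is the company repository, "upstream" is theirs
git clone -q upstream work
cd work
git remote rename origin ours
git remote add upstream ../upstream

# Periodically pull upstream changes in underneath our patches
git fetch -q upstream
git rebase -q upstream/main
git remote   # lists both remotes: ours, upstream
```

With this setup, local patches stay stacked on top of upstream history, which is what keeps rebases small and largely conflict-free.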
This brings up a good point. The two companies I've worked for have used a ton of open source software and have contributed nothing back. It kind of disgusts me. Am I wrong to feel that way?
I agree that, in general, you don't want to maintain a fork because that's a lot of work (or at least more work than contributing the changes so they're included and maintained upstream). But I don't like the idea of losing the customizability benefits of open source just because upstream doesn't want your changes (it happens, for various reasons).<p>So yes: avoid forking when possible, but don't be afraid of maintaining a fork if the benefits are worth the effort.
I don't find it <i>that</i> hard to maintain forks with git, even ones in which thousands of lines of code have been changed locally.<p>My usual reason for having a fork is Solaris support, or removing code which I don't need for performance reasons, or replacing the CMake build system with something sane. These aren't the kinds of changes which many maintainers are willing to accept.<p>Another frustratingly common case is that the original maintainer has gone AWOL and I need to fix a few bugs and maybe add a couple of features, but I do <i>not</i> want to become the de facto maintainer of a public fork.
This is precisely one of the areas in which Git shines. It <i>used</i> to be a pain to maintain your own patches on top of an open source project. But with Git it's easy.
It is well known that distributed version control systems make it easier for developers outside of a project's core team to contribute to the project. But it seems to me that package managers are counteracting this, because they encourage a sharp division between user and developer. I'm talking about both OS-level package managers like APT, and programming language/VM-level package managers like Maven, NuGet, RubyGems, npm, and the various Python package management tools.<p>One solution might be for applications to pull in all of their dependencies as subrepositories in their version control systems. But then where would we stop? At the implementation of the application's main programming language or managed runtime? At the C library? At the operating system itself (assuming the OS is open source)? This would also seem to encourage a single dominant version control system.<p>So I have no definite answers.
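The "dependencies as subrepositories" idea might look like this with git submodules; the dependency here is a local stand-in for a real project URL:

```shell
# Sketch: vendoring a dependency as a git submodule, pinned to an exact
# commit in the application's own history. Paths are hypothetical.
set -e
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
cd /tmp && rm -rf subrepo-demo && mkdir subrepo-demo && cd subrepo-demo

# A stand-in for some open source dependency
git init -q -b main libdep
git -C libdep commit -q --allow-empty -m "libdep 1.0"

# The application pins the dependency as a submodule
git init -q -b main app
cd app
git commit -q --allow-empty -m "app skeleton"
# (protocol.file.allow is only needed because this demo clones from a
# local path; real submodules would use a normal URL)
git -c protocol.file.allow=always submodule add -q ../libdep vendor/libdep
git commit -q -m "vendor libdep as a submodule"
git submodule status   # shows the exact libdep commit being tracked
```

This keeps the user/developer boundary porous: anyone who clones the application gets the dependency's full history and can patch it in place, at the cost of managing updates yourself instead of letting a package manager do it.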
Well, that's why a good framework should be built in a modular and extensible way. For example, the way we build our Q framework (the previous open source version is here: <a href="http://phponpie.com" rel="nofollow">http://phponpie.com</a>) is that you can override the core with plugins, and plugins with your apps. There is a cascading file system, an idea I learned from Kohana. Very useful!
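The cascading file system mentioned above can be sketched as a lookup that searches the app layer first, then plugins, then the core, with the first match winning. Directory and file names are hypothetical, not taken from any real framework:

```shell
# Sketch of a cascading file system: app overrides plugins, which
# override core. All paths and file names are made-up examples.
set -e
cd /tmp && rm -rf cascade-demo && mkdir cascade-demo && cd cascade-demo
mkdir -p app/views plugins/views core/views
echo "core header"   > core/views/header.php
echo "plugin header" > plugins/views/header.php
echo "core footer"   > core/views/footer.php

# First match wins across the layer stack
find_file() {
  for layer in app plugins core; do
    if [ -f "$layer/views/$1" ]; then
      echo "$layer/views/$1"
      return 0
    fi
  done
  return 1
}

find_file header.php   # -> plugins/views/header.php (plugin overrides core)
find_file footer.php   # -> core/views/footer.php (no override, core wins)
```

The point of the design is that core files are never edited directly; higher layers shadow them, so upgrading the core is just a file replacement.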