In the 90s we had cvs and perforce... and then svn. Then there was BitKeeper. From what I can read of the history, from 2002 to 2005 Linux was under BitKeeper, and then in 2005 that relationship soured and Linus went off to write Git. At the same time there was a large growth of other version control systems. The graphic shows bazaar, darcs, hg, plastic, and then Fossil is also in there.
Even in 2006, when Fossil was released, there wasn't significant mindshare on any of those platforms yet. Why use git? It wasn't even a year old when Fossil was released.
The migration of SQLite to Fossil (from CVS) was done in 2009.
That's several years after the software was written that SQLite switched to it. I wouldn't exactly call that dogfooding. It kind of is, but Fossil was already a mature project when SQLite shifted its codebase.
SQLite shifted to Fossil in stages. Documentation moved in 2007. The TH3 and sqllogictest test suites started out in Fossil in 2008.
I wrote Fossil specifically to support SQLite development. If Fossil does nothing else other than support SQLite, then it is a success. Any other use of Fossil is just gravy. That we were conservative in moving the main SQLite source code into Fossil does not negate that fact.
Your repo being eaten by some arcane git 'feature' that you had no idea even existed, and then when you ask for help you get a 50/50 split between "that shouldn't have happened" and "well of course, you have to run 'git --unfubar repo' every 431 checkins or it'll corrupt your repo, seriously who doesn't know that?"
That's hilarious given that Fossil was notorious for actually corrupting data, which is one of the few things people hardly ever bitch about with git. The fact that you are running into this frequently enough to rant about it suggests maybe you aren't familiar enough with the tool?
If a version control tool, when used correctly, ever barfs on your data, that's too frequently. And I wasn't praising Fossil, just raising my gripe with git.
I don't know if you're intentionally trolling, but I have been using version control for almost 20 years now and have never used the term check-in. All version control software that I've used in that time used the term commit.
Uh, where? I said "check in" is a generic term (which it is - it's listed as the first synonym for 'commit' here: https://en.wikipedia.org/wiki/Version_control#Common_vocabul... ). I never mentioned the term "commit" (which is obviously also a common term for the same action). Then you suggested I was trolling, for saying it was a generic term, which it is. Now you're putting words in my mouth and calling them nonsense.
> Or that I used the generic term rather than the git-specific term?
In the context of the discussion, generic term refers to "check in", and git-specific term refers to "commit" (which is the only possible alternative term to be using in that context). The obvious reading is that you're calling "commit" a git-specific term.
So I wouldn't say I'm putting words into your mouth, unless you were unaware that "commit" is the correct alternative term, which is of course a possibility that I didn't consider.
Git is much more complicated and harder to grasp; most people I know just remember common operations and commands to issue without understanding what these really do.
Also, unless your organization's project is open source, you don't really need a DVCS. In my organization we use git nearly exclusively, but in the end everything relies on the central repo. We would actually do much better with SVN instead of git and have fewer issues. For example, we have already had significant mistakes such as someone deleting the main branch or performing a force push (yes, you can restrict it, but with git you need to know what to expect before blocking it). We also have a repo for a CMS, where we would greatly benefit from the ability to merge by directory. I also see people trying to check out the latest version of just a specific subfolder; with git that is difficult, but it's trivial and extremely lightweight on SVN.
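For what it's worth, newer git versions can approximate SVN's per-directory checkout with sparse checkout. A sketch (repo URL and directory names are hypothetical, and this assumes git 2.25 or later):

```shell
# Clone without populating the working tree, then materialize one subtree.
git clone --no-checkout https://example.com/big-repo.git
cd big-repo
git sparse-checkout set cms/templates   # cone mode: only this directory appears
git checkout main
# Add --filter=blob:none to the clone to also skip downloading unrelated blobs.
```

It's still heavier than SVN's per-directory checkout, since the full history is fetched unless you also use a partial clone.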
Yes, git still has very valuable tools for the local developer: stash, staging changes, bisect, local history (great if you work without internet access). And you can actually use git alongside another SCM and get the best of both worlds.
Unfortunately, when I mention that we could have our main repo under SVN, people look at me like some kind of dinosaur who only proposes it because he's confused by git.
Just because git is a great tool for Linux kernel development doesn't mean that it is the best SCM for your organization. And if your organization uses something other than git, it doesn't mean you can't use git; in fact the tool of my choice is still git. I just think that most companies don't need a DVCS for their main repo.
I've built two source management systems (NSElite, internal to Sun, and bitkeeper, now open source at bitkeeper.org).
Calling Git sane just makes it clear that you haven't used a sane source management system.
Git has no file object; it versions the repo, not files. There is one graph for all files, the repo graph. So the common ancestor for a 3-way diff is the repo GCA (greatest common ancestor), which could very well be miles away from the file GCA if you have a graph per file (like BitKeeper does).
No file object means no create event recorded, no rename event recorded, no delete event recorded. If you ask for the diffs on "src/foo.c", all Git can do is look at each commit and see if "src/foo.c" was modified in that commit. That's insanely slow in a big repo. And it completely ignores the fact that src/foo.c got moved to src/libc/foo.c years ago and there is a different src/foo.c that is completely unrelated. There is an option to try to intuit the renames when you are spitting out diffs, but no one uses that because it's even more insanely slow.
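For reference, the rename-intuiting option being alluded to here is presumably `git log --follow` (with `-M`/`--find-renames` on the diff side). A sketch, reusing the hypothetical path from above:

```shell
# History of src/libc/foo.c stops at the rename boundary by default:
git log --oneline -- src/libc/foo.c

# --follow asks git to infer the rename by content similarity and keep
# walking past it (it only accepts a single path, and costs extra diffing):
git log --follow --oneline -- src/libc/foo.c
```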
Git is basically a tarball server. Calling that a source management system is an enormous stretch. Calling it a sane and powerful source control tool is just not supported by the facts, and calling it "the most ..." is laughable.
Yeah, I get it, Git won. You all lost out on "the most sane and powerful" as a result. Which sort of doesn't matter since everyone thinks GitHub is source management.
> Git has no file object, it versions the repo, not files.
As someone who has used (in a professional setting) version control systems ranging from RCS to SVN to the current crop (git, mercurial, even darcs), the fact that git versions the repository as a whole instead of individual files is a godsend.
You've not known hell until you've had to deal with an RCS/CVS repository with 15+ years of history and thousands of files, each of which maintains its own version history (and associated version number!).
I'd gladly take the comparative "slowness" of git when dealing with large repositories.
> Git is basically a tarball server. Calling that a source management system is an enormous stretch.
What, in your opinion, is the definition of a version control system then?
A version control system is an accurate audit trail of everything that has happened in the repository. Every create, delete, rename, every rwx mode change, every content change.
In BitKeeper files work like they do in Unix, there is a (globally) unique name for each file object. Where the object lives in the repository is an attribute of that object, as are the contents, the permissions, who has changed etc.
Here's a super common workflow that's easy in BitKeeper and miserable in Git. I'm debugging an assert. I want to see who added the assert. I pop into the gui that shows me the per file graph and contents, search for the assert, hover over the rev and see that it was done a long time ago. I look in the area above the assert and I see a recent change, hover over that, see the comments and go "hmm, maybe this". Double click that line and I pop into a different gui that shows me the whole commit that contains that suspect line.
Note that because I have a graph per file, I have checkin comments per file. More work for you poor committers but a godsend for us debuggers. More breadcrumbs are more better.
In Git, less breadcrumbs, single commit message. Git wants to go from the rev to the commit, it's miserable to look around in a file and then go backwards to the commit.
When I was supporting BitKeeper, our average response time to a bug report or crash report was 25 minutes, 24x7. The only reason it was that long was that we were all in North America, so there was a window where we were all asleep. Response time 6am-6pm PST was typically under 5 minutes. And I credit the fact that the tool accurately recorded everything and you could find the history really easily.
Oh, and it didn't slow down as the repo got big. Git is fine in little repos but it sucks pretty hard in big ones. Sucks even worse if you are on NFS. I can dig up benchmarks; we built up a synthetic 4M-file repo and ran a bunch of tests on it (it was a modified version of the Facebook repo builder; the Facebook one had some stuff in it that made Git look incredibly bad, we looked at that and decided that wasn't real world or fair, so we took that part out).
> Here's a super common workflow that's easy in BitKeeper and miserable in Git. I'm debugging an assert. I want to see who added the assert. I pop into the gui that shows me the per file graph and contents, search for the assert, hover over the rev and see that it was done a long time ago. I look in the area above the assert and I see a recent change, hover over that, see the comments and go "hmm, maybe this". Double click that line and I pop into a different gui that shows me the whole commit that contains that suspect line.
I may be misunderstanding that particular workflow (I'm sadly unfamiliar with bitkeeper), but this seems like a workflow that I accomplish relatively often with the use of tig[1].
On a separate note: thank you for the wonderfully detailed reply. It's such a pleasure to have an opposing view be so thoroughly explained.
Indeed, there are many tools to do that with git. I use vim with vim-fugitive (:Gblame, then o to open the commit I want to look at), but most IDEs do that too.
Heh, no worries dude (or dudette), I have dealt with the trolls. You can't post that the sky is blue without the trolls telling you you're doing it wrong.
Every day humans make me again realize that I love my dogs, and respect my dogs, more than humans. There are exceptions but they are few and far between.
> In Git, less breadcrumbs, single commit message. Git wants to go from the rev to the commit, it's miserable to look around in a file and then go backwards to the commit.
A thought not universally shared. Intel was our biggest customer and when they saw the quality of the breadcrumbs produced by our gui check in tool vs the command line checkins they were smart enough to push hard that everyone used the guis.
I get why, as a dev, you want git commit -m'Fixed bug', but as the debugger guy, the reviewer guy, anyone who reads the code, that's a horrible thing to do to those readers.
Who, if you wait long enough, will be you. And I'll laugh my ass off at all the lazy committers who really could use more breadcrumbs when they have to debug their code later.
Been there, done that, I haven't worked with people that lazy in decades.
edit: since HN won't let me extend the thread, let me reply to the comment below because BK does do something special.
The GUI for checkins presents you with a list of files, a place to type comments, and a big pane that shows the diffs. You type in comments for the first file, go to the next, type in comments (yes, there is a way to say use the previous comments). As you move from file to file, the bottom pane shows the diffs for that file so you can see what changed in that file.
The special sauce, that Git most definitely does not have, is when you get to the last file, which in BK is the ChangeSet file; this is where you would type the commit message. What are the diffs? There aren't any, so we stuff in all the comments you just typed on the individual files. What does that do? Well, on the files you are usually typing in details of how you did this or that; when you see all of those comments together, you naturally uplevel and type in why you were doing all that.
It dramatically increases the usefulness of commit messages. That's why Intel pretty much mandated the use of the gui checkin vs the command line checkin.
To anyone reading, bitkeeper does nothing special in this regard. Enforce commit message standards, plenty of platforms and systems built around git (and literally every other SCM) have support for this.
It's possibly the least interesting and least unique selling point an SCM can have. It's really funny that you keep bringing up this example around the thread.
EDIT: To respond to the above edit:
Again, he's Proving The Point. Commits should be atomic. If you have to individually comment on file changes, then the correct thing to do would be to put those in their own commits, no? I'm not really sure what's being described is a cool feature; rather, it's a way to avoid making sure your changes are truly related. I honestly don't see the point. This seems like a feature that was written because of decisions about how BitKeeper works internally, not because it's a fantastic idea. You can get the same thing with atomic commits in git. You comment per file because BK tracks changes per file. Git does not do this. You should be making your commits atomic because git is tracking the actual content. Atomic commits will accurately describe what's being changed, and then of course those all get lumped together in a patch/PR.
Git isn't lacking the feature you're describing, it just kind of is there without any extra data tracking required, because it's not making up for technical design decisions.
> As someone who has used (in a professional setting) version control systems ranging from RCA to SVN to the current crop (git, mercurial, even darcs), the fact that git versions the repository as a whole instead of individual files is a godsend.
This. A thousand times this. A change isn't a single file, it's likely a changeset of multiple, sometimes hundreds or thousands of files. Most often, you need to know the entire changeset, not just what happened with one file. This is especially true of large-scale refactoring where you're changing the public interface of something. Depending, that can have a far-ranging impact and you want to see that history all together.
You're talking to the guy that created that concept [1]. I get that changesets are cool :) You're right, you do want to see all that info together.
But lots of times you want to look at the file view, find the line of code that looks like the problem, and then zoom out to the changeset. BK makes that trivial, Git makes that miserable to impossible.
[1] Actually there was a little known system, Aide-de-camp, that one of my people told me about that had changesets so I didn't invent it, but I reinvented it. And made the world aware of the concept. Back when you could search Usenet via dejanews you could search for "changeset" and date limit it to before me talking about it. There were maybe 5 hits. A few years after BK came along there were 100's of thousands of hits. So I wasn't first but I am definitely the reason that you know what a changeset is.
Thanks for your contributions to better SCM design through BK. I agree with the potential value of tracking files and branches as well as commits and being able to easily navigate through that space -- as well as the emphasis on adding more metadata to make future understanding easier.
It's unfortunate we don't have a better funding model for FOSS development (or alternatively a basic income) because otherwise BitKeeper might have been open from the start -- and then we might have avoided the limitations of git as a hasty workaround for licensing issues.
> But lots of times you want to look at the file view, find the line of code that looks like the problem, and then zoom out to the changeset. BK makes that trivial, Git makes that miserable to impossible.
This criticism doesn't ring true for me. What are you saying is missing in the Git experience?
In Git, I would use `git blame` to determine which commit contributed the problematic line in question. It displays the file along with the commit that most recently modified each line. At that point, I know which commit last changed the line (and the commit is the changeset). Aren't we done?
If I need more history, I can use `git log` on the filename to see what commits have changed that file over time, and I can inspect how each commit individually changed the file if needed. There are editor-integrated tools to walk back through this history easily.
Looking at a file, and then looking the commit which last modified a specific line in that file is a trivial operation in git that I do on a regular basis.
On the largest repos I've worked with `git blame` takes a second or two: long enough to be annoying and make me wish it was a bit faster, but still fast enough for interactive use and well short of the threshold where I would be tempted to context switch after invoking the command. On most repos it's perceptively instant.
I suspect on a HDD `git blame` would be unbearably slow on anything but the smallest of repositories, but it has been many years since I last worked with source code stored on a HDD.
That's true. I'm not the OP, but changesets get in the way when using git for SCM where the typical workflow is to merge between two branches at the level of subdirectories.
In my company people simply use checkout to apply changes from another branch, but that doesn't handle things like file removals well. It also creates a completely different commit, so the branches diverge more and more, making things like rebase more time consuming.
It is a huge pain, which would not exist if we used SVN as a backend, for example.
I agree that that's not well supported, because it kind of goes against the grain of git thinking about project versions rather than file or directory versions.
What you should probably do in this case is use git merge nevertheless, but reset all changes outside of the directory of interest before committing the merge. This way, you get the history of the merge in the DAG, which will make git's merge resolution work.
Unfortunately, I'm not aware of a built-in way of doing this.
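Lacking a built-in, the reset-everything-outside-the-directory dance can be scripted by hand. A sketch (branch and directory names are hypothetical, and note that files the merge *adds* outside the directory need separate cleanup, since they don't exist in HEAD for checkout to restore):

```shell
# Merge, but stop before committing:
git merge --no-commit --no-ff other-branch

# Revert everything outside the directory we care about back to HEAD:
git checkout HEAD -- . ':(exclude)cms'

# Commit the now directory-scoped merge, keeping the merge in the DAG:
git commit -m "Merge other-branch, restricted to cms/"
```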
What do you mean when you say "look at the file view"? I read that as just looking at an annotate view and then finding the changelist that the change was part of.
However, this is something every single version control system can do, so surely you were referring to something else. Could you explain what operation you were referring to?
So BK is kinda weird in that the metadata that binds all the files together in a commit is just another version controlled file.
We built a GUI tool that lets you look at a versioned file, it shows you the graph in the top pane and either diffs between two versions or the contents of a particular version in the bottom pane.
It's the goto tool for figuring out stuff. It is not just a GUI version of "git blame". When you use it you can see the history of each line by hovering over that line, you get a popup that shows the checkin comments for that line. And it is fast, as in below human reaction time, so you use that feature.
And you can double click on any line and boom, you are looking at the changeset that introduced that line.
I'm tired so I'm probably not doing a good job explaining this, but we supported commercial customers for a couple of decades and we had just incredible response time on each issue, and I credit this workflow for that. Someone would call and say "I have this assert", and one of us would get into the gui, start looking, and we would know the cause of the problem in seconds or single-digit minutes. And I don't mean 5-10 minutes, I mean 1-2 minutes.
Maybe I'm clueless and there is a way to do this in git but I haven't found it. When I have to work with git repos I fast-export them into BK just so I can have a more sane way to look at the history. It's not great history but it's better than Git.
Edit: I didn't explain what that gui did on the ChangeSet file. That's what gitk is: it shows you the repo graph. You can click on a node and see the commit, you can left click and right click and see the diffs between those changesets.
So far as I know, BK is the only system that puts the metadata in the same system as the user data.
SVN does version the whole repository, so a revision can contain any number of changes to files, but it tracks each file's history too. So an SVN revision is a collection of changes to any number of files/directories.
When I make a change, I want to record that I changed 3 files simultaneously, not that I intended the system to work with only the first file changed.
I don't claim that Git is perfect. But most of the problems are solved by better UI (Git does track when files are renamed, created and deleted), not by wrecking what a commit is.
Git understands commits. The CVS-style 'every file has its own history' model is wrong. Git's not perfect, and I would not be too bothered if Mercurial beats it long term, but my god, if I had to go back to a CVS-style flow with no real concept of an atomic multi-file check-in, I would quit my job.
> Git does track when files are renamed, created and deleted
Actually, no, it doesn't. Git tracks just the before/after state: before the commit these files existed, after the commit those files existed. It infers creation/deletion/rename, when necessary, by comparing these two (or more) states.
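That inference is easy to see in practice: the commit stores only the two tree snapshots, and rename detection is (re)computed whenever you ask for a diff (file names here are made up):

```shell
git mv old.c new.c
git commit -m "rename old.c to new.c"

# Rename detection on (the default in modern git): one R(ename) entry.
git show -M --name-status HEAD

# Detection off: the very same commit reads as a delete plus an add.
git show --no-renames --name-status HEAD
```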
I think it's interesting; it checks off a lot of my boxes. We wanted to do a product that we called "software dev in a box" that was SCM, bug db, wiki, etc. Fossil comes closer to this than we did. So that part is really cool.
I'm not a fan of using a DB to store versions. It's just not the right tool. Before we open sourced, we jealously guarded "the weave" which is how the history data is stored. The weave gives us so much, bk blame is instant, bk grep is instant, there is a "bk grep -R" that will look in all versions of a file that is instant, or you can do "bk grep -R<revs>" and look in just those revs, all instant.
The weave is compact, fast, merges better; it's just a better storage format than a DB. Here's an example. In most version control systems, let's say there is a 100-line file. I clone that repo and I modify the first 51 lines of that file. You clone the same thing I cloned, and you modify the bottom 50 lines of that file. So we have 99 unique lines and 1 line that we both modified. Now Joe Merge clones my repo and merges your repo. He's the guy that closed the DAG. He had to manually merge the one line that we both changed, so when you do $SCM blame the correct answer is: the top 50 lines are me, the bottom 49 lines are you, and the manually merged line is Joe, right?
That's what happens in BK. It's not what happens pretty much anywhere else. Either the entire top chunk or the entire bottom chunk will look like it was done by Joe. Why? Because everyone else passes data by value, BK (the weave) passes data by reference. Everyone else copies the data across the merge point. BK does not, the only new data that will be in the merge node is the one line that joe had to merge by hand.
This can have some space savings implications, which can be a big deal for big files, but in my opinion the far bigger implication is blame. Joe merged in your stuff and now your stuff looks like he wrote it. Someone is tracking down a bug and they should be talking to you but they are talking to Joe.
> That's what happens in BK. It's not what happens pretty much anywhere else.
I'm not sure I understand your example. At least, the way I understand it, both git and mercurial deal with it the way you say BK does. They attribute the first 50 lines to you, the 51st line to Joe if it looks like neither what was in your version nor the other's, and the last 49 lines to the other.
Didn't intend for it to come across as personal, just pointing out that it would be very hard to make unbiased statements about git when you have a dog in the fight.
It would be like the Myspace founder/creator badmouthing Facebook and claiming Myspace was still superior and that Facebook's way of doing things is insane. Whether or not the claims are legitimate is irrelevant in light of the fact that your competitor squashed you and may have made you bitter.
>Git has no file object, it versions the repo, not files.
Which is the correct thing to do.
>if you have a graph per file (like BitKeeper does).
Which is insane to do because you're basically making things more complex than they need to be, which is why merging in git is not only SUBSTANTIALLY faster than nearly every SCM out there, it also works a good deal of the time without issue.
>No file object means no create event recorded, no rename event recorded, no delete event recorded.
Most people do not need these. If you're going to choose to use file GCA's for the reasons listed above then you better have a damn fucking good idea of how much these features are actually used, because the trade offs are ENORMOUS.
> That's insanely slow in a big repo.
And also not really done that often. So you know, good job choosing to optimize for things no one is going to use on an astoundingly consistent basis or even needs to be super duper speedy in the first place.
>You all lost out on "the most sane and powerful" as a result.
Your concerns are misplaced? Misdirected? Git does things a certain way, and people use it because they were tired of SCMs that decided to fix problems no one really cared about and then did the things that developers do care about very poorly. You actually demonstrated this pretty thoroughly in your own post while attempting to call, presumably, bitkeeper "sane."
>Calling it a sane and powerful source control tool is just not supported by the facts
I'm sorry, feel free to tell every other developer in the world, including the ones that are involved in far more collaborative efforts than your work requires, that the tool they're using is just not sane. I guess being one of the most used SCM's in the world, on one of the biggest OS projects in the world aren't really relevant facts into how "sane" an SCM is. I guess that's totally why bitkeeper used to be sold and now is open source.
Your claim about merging is false though, demonstrably. Using a much closer file GCA instead of the distant repo GCA is better. BK does that and automerges, correctly, more frequently, and is way way faster than Git.
I've written two source management systems. I'm confident in my knowledge. Arguing with some random dude who thinks he knows more than me is not really fun. So go enjoy Git. Lots of people are too busy/whatever to know what they are missing, maybe that's you. It's not me, I kinda like my audit trail to be accurate.
Even Linus admitted to me in my kitchen that Git's audit trail is lossy. But go enjoy Git, I'm glad it works for you. Knock yourself out.
That doesn't really explain why perforce and mercurial seem more popular, nor the lack of current buzz around the project. I could be wrong, but I can't say the mindshare is very high.
>BK does that and automerges, correctly, more frequently and is way way faster than Git.
Demonstrate it. You said it was demonstrable so presumably your Totally Sane SCM project should have evidence to back this up.
>Arguing with some random dude who thinks he knows more than me is not really fun.
Why because you basically made a complete fool of yourself?
>Even Linus admitted to me in my kitchen that Git's audit trail is lossy.
You might describe it as FUZZY, but not "lossy." Nothing is being "lost." My random internet dude, that is actually an insane assertion to make, especially the anecdote about Linus being in your god damn kitchen. Not that it really changes anything about the characteristics of any SCM and why people use it.
>But go enjoy Git, I'm glad it works for you.
Enjoy your dead project. Glad it could be surrounded by a graveyard of other Totally Sane (C) SCM's who are collectively responsible for an untold amount of wasted man hours and licensing fees.
Linus, in my kitchen, surrounded by impressed women who were asking how he lost weight. He had just answered the question "did you give up drinking?" with a "hell, no!"
Funny story, I invited Linus to the pig roast and he didn't RSVP so there wasn't anywhere for him to sleep. I ended up sticking him and his daughter in a VW popup van :)
You can kindly wander off now, random internet dude. Especially since the guy who wrote Git agrees that it is lossy, so take your FUZZY and go home.
People used git because it was a tool that solved some SCM headaches. People use it now because it solved some headaches and because of the network effect.
I think Git won because it was superior to CVS and SVN. I don't think it won because it was better than BitKeeper, Fossil or Mercurial.
Lastly, does anybody have experience with git merges and BitKeeper merges? You make it sound slow, but it also sounds like you've never used it.
You can go look at BK's merge alg, it's quite cool. And the basic implementation was done in about 20 minutes. And it is extremely fast.
I should write up a blog post about it, it's pretty complicated to understand because to get it, you need to understand SCCS's interleaved delta format. If you understand that format, then imagine that you put a line number in front of each data line in the weave. Check out the GCA, local, and remote versions of the file with those line numbers prefixed. Now run a 3 way merge on that.
All the complexity in smerge.c comes from dealing with the cases where that doesn't work, but man, it works great 99% of the time.
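A toy of the idea (nothing like BK's actual smerge.c): if each line carries a stable weave serial number, a plain 3-way merge over the serial-prefixed checkouts aligns unchanged lines exactly, and only truly conflicting serials need hand-merging. Here `diff3` stands in for the merge, with hand-made files standing in for the serial-prefixed GCA/local/remote checkouts:

```shell
# GCA, local, and remote checkouts, each line prefixed with its weave serial.
printf '1 alpha\n2 beta\n3 gamma\n'        > gca
printf '1 alpha\n2 beta-local\n3 gamma\n'  > local
printf '1 alpha\n2 beta-remote\n3 gamma\n' > remote

# Serials 1 and 3 are identical everywhere, so they merge cleanly;
# only serial 2 conflicts and gets conflict markers.
diff3 -m local gca remote || true
```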
I've never used BitKeeper so I can't say if the statements he made are true or not, but arguing that something is good just because a lot of people use it is most definitely not a valid argument.
The computing world is full of absolutely terrible technologies people keep using even though much better alternatives exist. At first I considered listing a few, and then I realised that it's likely most readers are actually users of one of those, but I'm sure you can think of a lot.
We took too long to open source it, people didn't like it being closed source and used for the Linux kernel. RMS was hugely butthurt that we had the best system and all open source had was CVS and SVN.
Whatever, it paid the bills nicely for almost 20 years.
>We took too long to open source it, people didn't like it being closed source and used for the Linux kernel.
And for things like this:
>I didn't want to do anything that even smelled of BK. Of course, part of
>my reason for that is that I didn't feel comfortable with a delta model at
>all (I wouldn't know where to start, and I hate how they always end up
>having different rules for "delta"ble and "non-delta"ble objects). [1]
Most sane? That’s a matter of perspective. I’m still a little shocked that git “won out” over mercurial. Even as Subversion was eating CVS, everybody knew distributed revision control was going to ultimately prevail. I was pretty sure Darcs wasn’t going to achieve popularity, but I’d have bet anything that Mercurial would be the successor to Subversion. It was far more natural / similar for anyone who's ever worked with CVS/SVN.
If you don't need a distributed system, I'd argue that Subversion is at least as sane as anything else, at least at the plumbing level, once you realize that it is not so much a VCS as essentially a remote file system with atomic multi-file operations, automatic sequentially numbered annotated snapshots after write operations, and near zero cost copy on write copying.
You establish a naming convention on top of that to use it as a VCS. For example, one common such convention is to have directories named "trunk", "tags", and "branches" at the top level, with your projects living in subdirectories under "trunk". Under this convention the way you represent a tag is by simply copying your project directory from trunk/my_project to a new directory named tags/my_project/tagname. Similar for branching...just copy to branches/my_project/branch_name.
Don't like that convention? Develop your own that fits your work better.
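As a concrete sketch of the convention described above (repository location and project names are illustrative, and a throwaway local repository is used so the commands are runnable offline):

```shell
# Create a throwaway local repository to demonstrate against.
REPO_DIR=$(mktemp -d)/repo
svnadmin create "$REPO_DIR"
REPO=file://$REPO_DIR

# The conventional top-level layout, created in one atomic commit.
svn mkdir -m "Create layout" "$REPO/trunk" "$REPO/tags" "$REPO/branches"
svn mkdir -m "Start project" "$REPO/trunk/my_project"

# "Tagging" is just a cheap server-side copy under tags/.
svn copy --parents -m "Tag 1.0" \
    "$REPO/trunk/my_project" "$REPO/tags/my_project/1.0"

# Branching is exactly the same operation with a different destination.
svn copy --parents -m "Feature branch" \
    "$REPO/trunk/my_project" "$REPO/branches/my_project/feature_x"

svn ls "$REPO/tags/my_project"   # lists: 1.0/
```

Both copies are constant-time metadata operations on the server, which is why the same primitive serves for tags and branches alike; only the naming convention distinguishes them.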
> If you don't need a distributed system, I'd argue that Subversion is at least as sane as anything else
Oh my yes. Binary diffs were lovely. Also, support for large assets.
For reproducible data science work, you’re going to need at least code and data. One of those things is a total PIA with git if the data aren’t trivially small.
Seriously, I find git very non-sane if not quite insane. I really liked darcs and still prefer Mercurial but sometimes the value attained by adopting standardisation overcomes the extra value in a "better" alternative, and so… here we are.
This is a great question. Where I work we use TFVC and the VP of Engineering is willing to green-light a switch to Git as long as we can make a data-driven, objective case for why Git is better. In other words: "the devs prefer it" or "using Git makes me happier" don't wash as valid reasons. Branching in Git is certainly much easier, but the counter that "you can branch with TFVC too" is true, even if it's slower and eats up your hard drive space faster...
That's not necessarily true. Everyone knows at least one dev who would prefer the world's stupidest workflow. And where it is true, it can directly contradict other priorities, like 'the devs prefer to check bug fixes directly into production because it saves them the work of staging'.
I think it's fair for someone to explain why he doesn't use Git when he probably gets that question a few thousand times a year. He's not the one who posted it to HN.
I said the article does, not the HN post. And yes, explain that YOU made Fossil in that case... it's probably one of the biggest reasons SQLite uses Fossil. Way above all others.
Now, the reasons why you created Fossil, those do interest me. But that's not what this is. It's an ad.
But then, on that page, explain that the author wasn't happy with Git and wrote their own. It's as if I posted an article on why LibreOffice is better than Microsoft Word and it turned out I'm the founder of LibreOffice or something. Of course I'd think it's better, and of course I'd know just the right things to mention to convince everyone, because I was the one who built them and checked them in.
> There is no significant way in which I found Pascal superior to C, but there are several places where it is a clear improvement over Ratfor. Most obvious by far is recursion: several programs are much cleaner when written recursively, notably the pattern-search, quicksort, and expression evaluation.
While the author goes about comparing Pascal to C (and Ratfor), he fails to disclose that he is the author of C.
"Why Pascal is Not My Favorite Programming Language" is an article by Brian Kernighan. Kernighan didn't create C. Dennis Ritchie did. And between Ken Thompson, Kernighan, and Ritchie, Kernighan had the least to do with the creation of C.
What Kernighan did do is write the book on C. (You might argue that it's a conflict of interest, too, but it's not. It makes a lot of sense that someone would like a programming language so much that they'd decide to write a book on it.)
It seems perfectly reasonable for someone who created an alternative to explain why they did it. Maybe the page should be titled "Why sqlite dev(s) created Fossil", but it certainly doesn't come across as disingenuous to me.
Because they don't want to come across as pushing their own product, I guess. I saw a recorded talk by the Fossil creator a couple of days ago where he talked specifically about git and how it could be improved. He was extremely reluctant to name Fossil explicitly, although he stated upfront that he has developed similar software.
There are reasons to dogfood that aren't 'does it pass integration tests'. Stuff like 'is it too hard to use this feature in a real way?' and 'are there performance bottlenecks you might not have thought of?'