So relatively simple that it took 12 years to even begin? Can you really blame people for using Git when it took them 9 years to even accept that Python performance was a problem?
It is easy to replace a "bad API". There are a million alternative CLIs and GUIs for git.
They haven't caught on because it turns out git's CLI isn't actually bad. It feels complex because the problem it solves is more complex than it first appears: a distributed database, with transactions and editable history. The CLI hides a lot of that, but it can't hide all of it, and you do need the flexibility.
Perforce is idiot proof. I can teach a non-programmer who has never even heard of revision control how to use Perforce in literally 10 minutes. They will never shoot themselves in the foot. They will never lose work. They will never, ever need to nuke and reclone their repo.
Perforce has other issues, of course. But Git has both a bad CLI and a bad model. Maybe its particular model is strictly required for the Linux kernel. However, for the 99% of developers who are centralized on GitHub, the model ranges from “mediocre fit” to “downright broken”.
>They will never shoot themselves in the foot. They will never lose work. They will never, ever need to nuke and reclone their repo.
You never need to do these things in git either. People only 'nuke and re-clone their repo' because they google something and get awful StackOverflow answers written by idiots that tell them to do that. It's not how you're meant to do things in git.
You're very unlikely to actually lose history in git unless you go out of your way to do so. I mean, you might not actually commit your changes, but I'd hardly call that 'losing work'. What's the alternative, autosaving into your history? No thanks. But once something's committed, it is hard to delete it. The reflog exists.
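The reflog point is concrete: even after a hard reset, a committed change can usually be recovered. A minimal sketch in a throwaway repo (file names and messages are made up for illustration):

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q
git config user.email you@example.com
git config user.name you

echo "v1" > notes.txt
git add notes.txt
git commit -qm "first commit"

echo "v2" > notes.txt
git commit -qam "important work"

# "lose" the commit by resetting the branch behind it
git reset -q --hard HEAD~1
cat notes.txt                      # back to v1 -- the work looks gone

# the reflog still records where HEAD used to be
lost=$(git rev-parse 'HEAD@{1}')   # the commit we just dropped
git reset -q --hard "$lost"
cat notes.txt                      # v2 again -- work recovered
```

The dropped commit stays in the object store until it is garbage-collected, which by default is a matter of months, not seconds.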
Another +1 for Perforce from me; it's just so much simpler for non-dev users. It's of course a no-go if you really need the distributed nature of git, but, as you say, it's an option for the majority of users on GitHub/GitLab.
The philosophical difference I've found using both is that Perforce is file-centric and git is commit-centric. In Perforce you have the file tree and then the history of each file; with git it's flipped. This is why I find it so hard in git to see how a particular file has evolved, while with Perforce it's second nature. Perforce is so good at telling you why a particular line of code is there; I miss that so much in git.
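For what it's worth, git can answer both questions, just less discoverably: `git log --follow` walks a file's history through renames, and `git blame` attributes each line to a commit. A small self-contained sketch (file names made up):

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q
git config user.email you@example.com
git config user.name you

echo "int x;" > parser.c
git add parser.c
git commit -qm "add parser"

git mv parser.c lexer.c
git commit -qm "rename parser to lexer"

# plain log shows only commits that touched the path under its current name
git log --oneline -- lexer.c

# --follow walks back through the rename to the file's full history
git log --follow --oneline -- lexer.c

# and blame answers "why is this line here", commit by commit
git blame -s lexer.c
```

`git log -S 'some_string' -- path` (the "pickaxe") is also handy for finding the commit that introduced or removed a particular line.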
Just curious about this because I’ve never had any direct experience. All of my uses of any version control (save very early work with RCS) have been to a central server.
But “distributed” must mean something other than that, especially how git is presented (i.e. technically there is no center).
So, do folks doing distributed development routinely push changes in a peer to peer fashion? Alice, Bob, and Charlene are collaborating with Alice and Bob working on one feature while Alice and Charlene work on another, pushing incremental changes to each other but only sending the completed feature/branch to their non-collaborating peers when they’re complete.
Does that happen often or is it just the “commit early, commit often to the local copy” that distributed devs are really using? “I can edit on a plane” scenarios.
>So, do folks doing distributed development routinely push changes in a peer to peer fashion? Alice, Bob, and Charlene are collaborating with Alice and Bob working on one feature while Alice and Charlene work on another, pushing incremental changes to each other but only sending the completed feature/branch to their non-collaborating peers when they’re complete.
Yes. The prototypical example is the Linux kernel, which is what git was originally created for. There are a large number of different trees: Linus's tree is 'standard' Linux, but there are also the various stable trees, trees for the various subsystems, the continuous integration tree 'linux-next', and others. Those trees' changes are all intended to eventually reach Linus's tree. But there are other trees which aren't; they host patchsets that sit on top of Linux "proper" but aren't intended to ever be upstreamed.
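Mechanically, the multiple-trees workflow is just multiple remotes: any clone can fetch from any other. A sketch using local paths to stand in for the peers (the repo names are made up):

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d); cd "$tmp"

# "linus" stands in for the canonical tree, "alice" for a subsystem tree
git init -q linus
(cd linus &&
 git config user.email l@example.com && git config user.name linus &&
 echo base > kernel.txt && git add . && git commit -qm "base")

git clone -q linus alice
(cd alice &&
 git config user.email a@example.com && git config user.name alice &&
 git checkout -qb fixes &&
 echo fix >> kernel.txt && git commit -qam "subsystem fix")

# the canonical tree fetches directly from the peer -- no central server
cd linus
git remote add alice ../alice
git fetch -q alice
git merge -q --no-edit alice/fixes
git log --oneline
```

Replace the local paths with `ssh://` or `https://` URLs and this is exactly how subsystem maintainers send Linus pull requests.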
>Does that happen often or is it just the “commit early, commit often to the local copy” that distributed devs are really using? “I can edit on a plane” scenarios.
In practice, not many open source projects are big enough and distributed enough that they need to do what Linux does. This aspect of it is very useful too: that you can code on a plane, that you can code in the bath, that you can code in a shack in the woods, etc.
> This aspect of it is very useful too: that you can code on a plane, that you can code in the bath, that you can code in a shack in the woods, etc.
Even turbo-centralized Perforce supports offline mode. Distributed systems enable offline mode, but offline mode does not require a hyper distributed system!
git simply has a handful of commands that require network access; none of them are part of "day-to-day" development work, nor necessary to fully utilize git if your canonical repository is on your own machine.
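Concretely, only the transport commands (`clone`, `fetch`, `pull`, `push`, `ls-remote`) touch the network; everything else operates on the local object store. An entire edit-branch-merge cycle works with the cable unplugged:

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q
git config user.email you@example.com
git config user.name you

# everything below hits only the local .git directory
echo hello > app.txt
git add app.txt && git commit -qm "start"

git checkout -qb feature
echo more >> app.txt && git commit -qam "feature work"

git checkout -q -            # back to the starting branch
git merge -q --no-edit feature
git log --oneline            # full history, no server anywhere
```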
My biggest problem with Perforce (it's been years since I used it) was its "checkout" model: it was necessary to do something before starting any code changes you might want to commit later. I found that quite problematic, and reminiscent in some way of older systems like CVS. git manages to retain the sense of "the codebase is just a bunch of files" all the time, and that works better for me.
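The difference shows up in the inner loop. In Perforce, files are read-only until you declare intent with `p4 edit`; git just watches the working tree. The `p4` commands below are shown as comments for contrast rather than run:

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q
git config user.email you@example.com
git config user.name you
echo v1 > main.c && git add main.c && git commit -qm "initial"

# Perforce equivalent (for contrast -- not run here):
#   p4 edit main.c        # required before touching the file
#   ...edit main.c...
#   p4 submit -d "msg"

# git: just edit; the tool notices on its own
echo v2 > main.c
git status --short          # shows main.c as modified
git commit -qam "msg"
```

(To be fair, the `p4 edit` step is what lets Perforce support exclusive file locking, which matters for unmergeable binary assets.)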
> For distributed development, it's a total non-starter.
Define distributed development. Do you mean like Linux with thousands of random contributors? Or do you mean a game team distributed across the globe? Or a AAA team with big, scattered offices?
Perforce is not a good fit for Linux! It’s effectively the only game in town for almost all game devs.
> You can't solve the issues with perforce without making it no longer idiot proof.
I think I’d take that bet. Becoming idiot proof isn’t hard. The trick is for all commits to be automatically backed up in the central hub. And for commits to be locked and stable once made.
Git’s ability to re-write history is, imho, a huge mistake, and I don’t think it’s actually necessary to support Linux. Flattening on PR merge doesn’t require a rewrite.
>I think I’d take that bet. Becoming idiot proof isn’t hard. The trick is for all commits to be automatically backed up in the central hub.
That's the last thing I want. Random WIP commits being sent off to some central hub? Fuck that, man. Fuck that.
>And for commits to be locked and stable once made. Git’s ability to re-write history is, imho, a huge mistake and I don’t think actually necessary to support Linux. Flattening on PR merge doesn’t require a rewrite.
Being able to re-write history is absolutely necessary. I seriously doubt you've ever looked at a patch series posted for any free software project if you say that rewriting history isn't necessary.
To put it quite simply: my data is under my control. I can do whatever I want with it. I commit frequently because it is useful to be able to go back in history through changes as I make them. For the purpose of publication, it is not useful to see the various stages I went through when thinking about how to solve a problem. That's not what git history is for. It's for presenting a logical series of changes in a way that is easy to understand and bisect. Flattening on 'PR merge' is abysmal. I don't want one massive commit. I want a series of logical commits.
> That's the last thing I want. Random WIP commits being sent off to some central hub? Fuck that, man. Fuck that.
This is where you’re objectively wrong. I have this feature available to me today. It’s a killer feature. It’s amazing. Having it has zero downsides. Not having it is a pain in the ass and makes life worse.
Imagine this. You’re working at a company with thousands of engineers. Everyone is making stacks and stacks of local commits. At various points in time people push their commit(s) to code review. If approved it gets merged into master.
Now imagine if anyone could check out any commit from any employee just by typing “git checkout #####”. That’s it. That’s the feature. If you browse the repo it is perfectly clean. There’s no dirt or noise. This includes letting you checkout your own commit on one of your five different machines/platforms/cloud servers without having to push or pull or any of that shit. Commit on one machine and checkout on another. It’s pure automagic.
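git doesn't do this out of the box, but the feature can be approximated by pushing every local commit to a per-user ref namespace on the shared remote, so any machine can fetch any commit without polluting the shared branches; Gerrit's `refs/users/...` namespaces work on this principle. A sketch using a local path as the "central hub" (the ref name `refs/users/alice/wip` is made up):

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q --bare hub.git

# machine 1: commit WIP and back it up under a per-user ref
git clone -q hub.git laptop
(cd laptop &&
 git config user.email you@example.com && git config user.name you &&
 echo wip > scratch.txt && git add . && git commit -qm "wip: experiment" &&
 git push -q origin HEAD:refs/users/alice/wip)

# machine 2: fetch that ref and check out the exact commit
git clone -q hub.git desktop
cd desktop
git fetch -q origin refs/users/alice/wip
git checkout -q FETCH_HEAD
cat scratch.txt
```

Because the WIP lives outside `refs/heads/*`, browsing the repo's branches stays perfectly clean, which is the property being described here.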
> Flattening on 'PR merge' is abysmal. I don't want one massive commit. I want a series of logical commits.
Sure fine. Shape the series of commits however you want. As few or as many as you want. With nice clean messages. The world is your oyster. But those are new commits. The initial commits should be, imho, unaltered (and unmerged). They can be GC’d months/years down the road if needed.
But in Perforce you can have your own private branch; in practice it feels not much different from having a local git one. You can then merge to main as you want.
Mistakes were made.