
I've worked as a pipeline supervisor at one of the biggest VFX studios. I very much understand these workflows. I've worked as an artist and a tech artist in Perforce and SVN workflows too. I've had to support the workflows of a 1000+-person workforce across multiple locations.

I don't think attributing my disagreement with you to not understanding workflows is a fair characterization.

I still think locking, while useful, is only mandatory for work cultures with poor communication. Many companies get by with very large workforces that don't hit these issues, without having locking.

Separation of code and art assets also doesn't need to be painful. It's very doable, but it does require some amount of architectural consideration.

And I very much acknowledge there are projects where code isn't the majority makeup, which is why I say that no VCS covers mixed projects well, or covers all the needs the other systems serve.



I think we'll just have to agree to disagree.

It sounds like we just come from different development cultures. Your solution to lack of locking sounds like a top-down hierarchy that wouldn't be flexible enough to support the teams I've worked with.

Having seen both approaches (and how they break down), I'll take a centralized locking solution over communication mistakes that lead to days of work being lost.
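For concreteness, both stacks support exclusive locks. A sketch (the file paths and patterns are just examples, and `git lfs lock` requires an LFS server that implements the locking API):

```shell
# Perforce: the +l filetype modifier enforces exclusive checkout server-side.
p4 edit -t binary+l art/hero.psd

# Git LFS: mark asset types lockable in .gitattributes...
echo '*.psd filter=lfs diff=lfs merge=lfs -text lockable' >> .gitattributes
# ...then take and release locks explicitly.
git lfs lock art/hero.psd
git lfs unlock art/hero.psd
```

With the `lockable` attribute, LFS also keeps unlocked copies read-only on disk, which is the closest git gets to Perforce's checkout model.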


That's fair to agree to disagree but I also again think it's unfair to characterize my pipelines as not flexible enough.

For context, I developed the publishing pipelines for the majority of departments in the studio. I have several hundred assets being published through my pipelines on a daily basis, if not more, from a variety of departments.

We hit collisions only rarely, and they were usually resolved in an hour or two in the worst case.

We've scaled this from very small teams to large ones, from very scrappy realtime productions to feature length offline rendered films.

I don't doubt that locking helps. I just argue that maybe it's not as critical as people make it out to be.


What happens if you're not around to drive the process? What if you don't have the organizational backing to drive the process? What if a team goes AWOL or isn't bought in to your process? I've seen variants of all of those happen in production in one form or another.

At the end of the day humans make mistakes, especially when involving communication. I'd rather have a physical system that prevents breaks instead of requiring cross-team/cross-discipline coordination.

Maybe gamedev is much more coupled than film (we regularly had design, animation, art and code touching the same common core packages). Look at Unreal or any other gamedev pipeline and you'll see a bias for locking source control solutions.


It's very rare that someone needs to be around to oversee the process. Tooling guides the vast majority of users into a workflow that works while still being flexible should they need it.

Lack of organizational backing: do you mean cultural, from the studio, or infrastructural? Both are a problem no matter which solution you pick.

If a team goes AWOL, that's on them. The tooling usually allows for some amount of arbitrary workflow, but they can't go completely off the rails. That's true of p4 too, so I think that scenario would have to be more specific.

And yes people make mistakes, and you need tooling to guide them. Locking is a tool, but it's not the only tool. I feel very much that many workflows use it to hide deeper issues. That's not to say it's not valid, it is, but it's not a panacea either.

Unreal heavily favors Perforce and SVN because that's what it was designed around. There's no fundamental reason it couldn't work with other versioning systems and their paradigms if it came to being necessary.

Unity on the other hand is quite happy to work with any version control system, and works quite well with git or perforce.

You again seem to be approaching this from the angle that only the system you're familiar with can work. But maybe try stepping outside the box and seeing whether your workflow isn't a byproduct of your tools.

After all, you were asking git users to look at perforce as the solution. I don't think it's fair to then go ahead and assume that p4 is the only workable solution.


Oh I've been working with git for ~9 years now, it's not a lack of familiarity.

Take AOSP: even Google had to overlay the repo[1] tool to scale past git. It's a hot pile of garbage that won't let you sync all repos to a specific point in time. Not to mention that cross-repo commits are not atomic. Good luck bisecting a breaking change across millions of lines of code and build files.

I've spent over a week chasing down how some homespun tool for storing binary assets side-by-side with git works so I could get a single file into a build.

The last company I was at, a leader in the Android space, just put the whole thing in P4, with a branch per device, and it worked without many major issues. Pulling source took 1/50th the time a repo sync took: literally an A/B comparison of one tech vs the other. That's before you even start to consider prebuilts.

Like I said, I think we're just going to have to agree to disagree and leave it at that.

[1] https://gerrit.googlesource.com/git-repo/


Hi, I work at Google and replied to you way up-thread. I built a CI system used by ChromeOS based on Repo and even contributed some changes to it. While I don't like it much, it is useful. You misunderstand or are misinformed about many aspects of it.

> Google had to overlay the repo[1] tool to scale past git

It was created to allow a forest of git repos to coexist in a world where git submodules weren't yet suitable, or where the repos spanned security domains. However, almost all of the shortcomings of submodules have since been addressed, so the team I lead, at least, is now considering migrating to them from Repo.

> cross-repo commits are not atomic

Yes, that is a feature. But I think you meant that there's no cross-repo coordinate in the timeline to sync to. However, there is: that's exactly what a Repo tool manifest snapshot is. Our CI system ensured that changes with deps across repos were committed together, and a Repo manifest snapshot was only taken once all inter-commit deps were satisfied.
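Concretely, the snapshot mechanism is built into the tool itself (command sketch; the snapshot filename is just an example):

```shell
# Pin every project in the forest to the exact SHA it is currently synced to.
# -r replaces each project's branch ref with its revision in the output.
repo manifest -r -o snapshot-1234.xml

# Later, after the snapshot has been committed to the manifest repo,
# re-create that forest-wide point in time:
repo init -m snapshot-1234.xml
repo sync
```

That pinned manifest is the cross-repo coordinate: syncing to it puts every repo at the revision it had when the snapshot was taken.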

> Good luck bisecting a breaking change across millions of lines of code and build files

The team that I led implemented this. We simply snapshotted the forest at regular time intervals. For bisection, we walked the snapshots. Once a specific manifest snapshot was identified as the culprit, we further bisected within the affected repos for the specific change.
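Sketching that two-level search (the snapshot indices and the cutoff are invented for illustration; in practice the check would `repo sync` the forest to that snapshot's pinned manifest and run the build):

```shell
# Bisect over periodic forest snapshots first, then within a single repo.
# Invariant: snapshot $lo is known good, snapshot $hi is known bad.
lo=0; hi=7

# Stand-in check: pretend the breakage landed in snapshot 5. In reality
# this would sync to snapshot $1's manifest and run the build/tests.
snapshot_is_good() { [ "$1" -lt 5 ]; }

while [ $((hi - lo)) -gt 1 ]; do
  mid=$(( (lo + hi) / 2 ))
  if snapshot_is_good "$mid"; then lo=$mid; else hi=$mid; fi
done
echo "first bad snapshot: $hi"

# Then bisect commits inside the repo(s) whose pinned revision changed
# between snapshot $lo and snapshot $hi, e.g. with `git bisect`.
```

Each level halves the search space, so even a large forest converges in a handful of build-and-test cycles.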

> Pulling source took 1/50th the time a repo sync took.

Yes, that's what partial clones (the article you replied to) and sparse checkouts solve. Once these two things are widely available, I don't see any remaining benefits to P4.
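As a concrete sketch of those two features working together, the following builds a throwaway local "server" repo standing in for a huge monorepo, then clones it with blobs filtered out and only one directory checked out (all names and paths are invented; requires a reasonably recent git):

```shell
set -e
# Throwaway "server" repo standing in for a huge monorepo.
git init -q server
cd server
mkdir -p src/engine art
echo 'int main(void) { return 0; }' > src/engine/main.c
echo 'fake binary asset' > art/big.bin
git add -A
git -c user.name=demo -c user.email=demo@example.com commit -qm 'initial'
git config uploadpack.allowfilter true   # let clients request blob filters
cd ..

# Partial clone: fetch commits and trees now, blobs lazily at checkout.
# --sparse starts with only top-level files materialized.
git clone -q --filter=blob:none --sparse "file://$PWD/server" client
cd client

# Sparse checkout: materialize only the directories you actually work in.
git sparse-checkout set src/engine
ls src/engine   # main.c is on disk; art/ was never fetched or written
```

The clone never downloads blobs outside the sparse paths, which is what makes the initial sync cheap on repos full of large assets.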


None of the things you're talking about exist within the repo tool itself; it sounds like these are all things you had to layer on top with a separate CI system.

It's been 12 years since the Dream was released and we're still not at the same level of perf/features as just stuffing the whole of AOSP into P4. I get that git has advantages, and it's an awesome tool when used appropriately, but the desire to use it to solve every SCM problem under the sun is a bit misguided.



