
> We believe that this is a substantial new direction for PyTorch – hence we call it 2.0. torch.compile is a fully additive (and optional) feature and hence 2.0 is 100% backward compatible by definition.

How about just calling it PyTorch 1.14 if it's backward compatible? Version numbering shouldn't be used as a marketing gimmick.



Dismissive comments like this make me not want to read HN anymore; what's more, it's against the HN guidelines:

It’s snarky. It’s incurious. It’s neither thoughtful nor substantive. It’s flame bait. It’s a shallow dismissal. It doesn’t teach anything. It’s the most provocative thing to complain about.

https://news.ycombinator.com/newsguidelines.html

I’m sorry I had to leave this comment, so let me also try to respond thoughtfully:

Assuming that PyTorch is using semantic versioning, semver requires only that the major version MUST change when making a backwards-incompatible API change:

> Major version X (X.y.z | X > 0) MUST be incremented if any backwards incompatible changes are introduced to the public API. It MAY also include minor and patch level changes. Patch and minor versions MUST be reset to 0 when major version is incremented.

This requirement does NOT preclude changing the major version when making backwards-compatible changes.

PyTorch has not violated semver here. It is absolutely compatible with semver to bump the major version for marketing reasons.

https://semver.org/


Personal attack aside, from your own link:

> Given a version number MAJOR.MINOR.PATCH, increment the:

> MAJOR version when you make incompatible API changes

> MINOR version when you add functionality in a backwards compatible manner

> PATCH version when you make backwards compatible bug fixes

> Additional labels for pre-release and build metadata are available as extensions to the MAJOR.MINOR.PATCH format.

You can point towards some other details, but that doesn't change the fact that, for the overwhelming majority of people, the quote above is what semver is. Besides, my original comment does not say "they broke semver"; it says they shouldn't bump the major version without a backwards-incompatible change, because afterwards the mental model of "Can I use version X.Y.Z?" is broken.
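The three quoted rules can be sketched as a tiny bump function (a hypothetical helper for illustration, not part of any real library). Under these rules alone, a backwards-compatible feature release after 1.13 lands on 1.14, which is the parent's point:

```python
# Minimal sketch of the semver bump rules quoted above.
# change: 'breaking' | 'feature' | 'fix'
def bump(version, change):
    """Return the next version tuple for a given kind of change."""
    major, minor, patch = version
    if change == "breaking":
        return (major + 1, 0, 0)      # MAJOR bump resets MINOR and PATCH
    if change == "feature":
        return (major, minor + 1, 0)  # MINOR bump resets PATCH
    if change == "fix":
        return (major, minor, patch + 1)
    raise ValueError(f"unknown change kind: {change}")

print(bump((1, 13, 0), "feature"))   # (1, 14, 0)
print(bump((1, 13, 0), "breaking"))  # (2, 0, 0)
```

Note that semver only mandates a major bump for breaking changes; as the sibling comment points out, nothing forbids bumping the major version for other reasons.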

When TensorFlow moved to 2.0, it was because they were changing from graph-and-session definitions to eager mode. That makes sense: the underlying API and how downstream users interact with it changed. These are just new features that, while very useful, have limited bearing on downstream users.


Frankly, they are not really showing benchmarks, and given my experience with hyped torch.jit I don't expect much.


They're saying it represents a change in direction and is a pretty big feature; traditionally that's been a good reason to increment the major version number.


Is this really the biggest problem that needs to be solved in AI?


Not sure I understand the question. Is versioning the biggest problem? No, but it costs nothing to keep semver, and doing so prevents production headaches later.

If you meant inference speed then yeah it's a very big problem so it's good that they are addressing it.


What exact production headaches are you expecting from bumping the number from 1.13 to 2.0, while all existing code keeps working as before?

And how would it be any different if they had named it 1.14 instead?


The soft kind. Major versions are deeply ingrained in most engineers' brains as "possible backwards-compatibility issues." If you handle model development, evaluation, and deployment yourself, then sure, you won't have any issues. But in a bigger organization you have to get people to switch, and that version number means everyone will ask the same "hang on, this is a major version change?!" question at every step of the way.
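One concrete form this headache takes: dependency pins written against the usual semver assumption exclude the new major version even though it is compatible. A sketch of the interval check behind a pip-style constraint like `torch>=1.13,<2.0` (versions simplified to tuples; the pin values are illustrative, not taken from any real project):

```python
# Why a major-version bump trips conservative dependency pins even when
# the release is backwards compatible.
def satisfies(version, lower, upper):
    """True if lower <= version < upper, with versions as (major, minor) tuples."""
    return lower <= version < upper

pin_lower, pin_upper = (1, 13), (2, 0)  # mimics ">=1.13,<2.0"

print(satisfies((1, 14), pin_lower, pin_upper))  # True: 1.14 installs fine
print(satisfies((2, 0), pin_lower, pin_upper))   # False: 2.0 is rejected
```

Every team carrying such a pin has to audit and relax it by hand before the "100% backward compatible" release can actually be installed.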


No? What would have given you that impression?

Oh, I see. You were trying to be dismissive.


Quite the non sequitur you have there.




