>"If someone has a way to make managing big|popular OSS projects simple and seamless ... LET ME KNOW!"
I've had ideas about this, but just to be upfront I've not tried them in the real world.
Basically, I think that if code is designed around specifications, it's possible to automate a lot of what goes into software maintenance.
Let's imagine a scenario. I'm designing a piece of software. The first thing I do is define specifications for how the system fits together (i.e. the function interfaces, the general architecture). I then put in place tests that can validate that these requirements have been met. If you're familiar with functional languages, essentially you're looking at a bunch of type signatures and a map of how they can interact.
Next, I put in place a CI system. This will run tests every time someone submits code to the main repository, as well as run linters, code style checks and performance checks. Commit access is completely open. If the code matches the expected style, doesn't cause the related tests to fail, and doesn't cause performance regressions, it's accepted into the codebase. If it doesn't, it's removed.
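The accept/reject rule the CI would apply boils down to a single predicate. A sketch (the name `gate`, the parameters and the threshold are all illustrative, not any real CI system's API):

```python
# Hypothetical merge gate: accept a commit only if every automated check passes.
def gate(tests_pass: bool, lint_clean: bool, perf_ratio: float,
         max_regression: float = 1.05) -> bool:
    """perf_ratio is new_runtime / baseline_runtime; anything above
    max_regression counts as a performance regression."""
    return tests_pass and lint_clean and perf_ratio <= max_regression

assert gate(True, True, 1.00)       # clean commit: merged
assert not gate(True, True, 1.20)   # 20% slower: rejected
assert not gate(False, True, 1.00)  # failing tests: rejected
```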
With this approach, development discussion between people can be focused on altering the specs to permit refactoring and/or new features to be added.
This is similar to the typical workflow I see in most GitHub projects I've worked on. Once you submit a PR, various tools will come along, lint your code, run the tests, check your test coverage, etc., and give you a report. Then a project maintainer comes along and manually reviews your code to check for style issues and things that can't be checked automatically (like whether they even want the feature in the first place), and if everything looks good they click merge.
> Commit access is completely open. [...] With this approach, development discussion between people can be focused on altering the specs to permit refactoring and/or new features to be added.
That's a misunderstanding. Most specs aren't fully fleshed out code, they're the bare bones of describing what something needs to do.
Think of it as if you're designing electronics out of integrated circuits. You can design a schematic just by knowing the inputs and outputs that the integrated circuits need to provide. The actual implementation of the integrated circuits is abstracted away.
It's the same with the relationship between specs and code. Specs are meant to be at a higher abstraction level than the code they describe. You can use code to write specs (and there are advantages to doing so), but the idea with specs is to define something which is universally true, rather than get into the detail of the work done to meet the specification.
Whilst code contracts aren't necessary in all languages (this is a good starting point for looking into why: http://blog.ploeh.dk/2016/02/10/types-properties-software-de... ), I believe they do offer benefits to software written in most imperative-style languages.
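As a rough illustration of what code contracts add in an imperative-style language, here's a hand-rolled pre/postcondition decorator in Python. This is a sketch of the design-by-contract idea, not any particular contracts library:

```python
from functools import wraps

# Minimal design-by-contract sketch: pre- and postconditions as a decorator.
def contract(pre=None, post=None):
    def decorate(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            if pre is not None:
                assert pre(*args, **kwargs), f"precondition failed for {fn.__name__}"
            result = fn(*args, **kwargs)
            if post is not None:
                assert post(result), f"postcondition failed for {fn.__name__}"
            return result
        return wrapper
    return decorate

# The contract states part of the spec: non-negative input, non-negative result.
@contract(pre=lambda x: x >= 0, post=lambda r: r >= 0)
def integer_sqrt(x: int) -> int:
    r = 0
    while (r + 1) * (r + 1) <= x:
        r += 1
    return r

assert integer_sqrt(9) == 3
```

The contract documents (and enforces at runtime) a slice of the spec that the type system alone can't express in most mainstream languages.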
> That's a misunderstanding. Most specs aren't fully fleshed out code, they're the bare bones of describing what something needs to do.
Right, the problem though is that you seemed to be suggesting that the spec would be so detailed and complete that a script checking that the code submitted in the PR matches the spec would be good enough to decide all on its own, with no human intervention, whether or not the code can be merged:
> Commit access is completely open. If the code matches the expected style, doesn't cause the related tests to fail, and doesn't cause performance regressions it's accepted into the codebase. If it doesn't it's removed.
A spec that detailed would basically have to _be_ the code itself.
> Commit access is completely open. If the code matches the expected style, doesn't cause the related tests to fail, and doesn't cause performance regressions it's accepted into the codebase. If it doesn't it's removed.
It seems to me that any spec. that went into sufficient detail to allow this would be more or less writing the project in another language, meaning there's no actual benefit. I can't imagine that a spec. that doesn't get into that level of detail would be sufficient to prevent all malicious/unwanted commits (say, subtly weakening crypto functions or leaking user data over side channels).
>"It seems to me that any spec. that went into sufficient detail to allow this would be more or less writing the project in another language, meaning there's no actual benefit."
The benefit is in designing code at a higher level of abstraction, one that can be easily reasoned about. It is possible to design code at a high enough level where the specs and the code are one and the same, but most languages haven't got the type system sophistication of something like Idris or Haskell, which is a key component of pulling off this feat. The vast majority of code is written in languages that do not lend themselves to code as specification. Code contracts and other complementary techniques (such as automated test generation) can go a long way to counteract those shortcomings.
Crypto functions are a special case. In this instance you won't save time by defining specifications, as the requirements on algorithmic correctness are much higher than average. However, in this case the ideal would be formal verification, and that still requires a specification; it's just likely to be more verbose than you'll want for day-to-day code checking.
Lastly, consider the alternative if you don't use specifications. At this point the burden of performing code checks is with humans. With a large, fast-moving codebase it's unreasonable to expect any one individual to understand all the parts that constitute the whole at a sufficient level to stop new bugs creeping into the code. It happens on every project, no known exceptions. With that in mind, why wouldn't you want to put in a framework to help catch bugs automatically? New bugs will still occur, but with a well-specced program this should be at a vastly reduced rate.
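A minimal version of such a framework is a property-based round-trip check: generate random inputs and verify a spec-level invariant automatically. This hand-rolled sketch (all names hypothetical; in practice you'd reach for a library such as Hypothesis) tests that decode-after-encode is the identity for a run-length encoder:

```python
import random

def run_length_encode(s: str) -> list[tuple[str, int]]:
    encoded: list[tuple[str, int]] = []
    for ch in s:
        if encoded and encoded[-1][0] == ch:
            encoded[-1] = (ch, encoded[-1][1] + 1)
        else:
            encoded.append((ch, 1))
    return encoded

def run_length_decode(pairs: list[tuple[str, int]]) -> str:
    return "".join(ch * n for ch, n in pairs)

# The spec, stated as a property: decoding an encoding returns the original.
def property_holds(trials: int = 200) -> bool:
    rng = random.Random(0)  # fixed seed so CI runs are deterministic
    for _ in range(trials):
        s = "".join(rng.choice("ab") for _ in range(rng.randint(0, 20)))
        if run_length_decode(run_length_encode(s)) != s:
            return False
    return True

assert property_holds()
```

A human wrote the one-line property; the machine does the hundreds of checks, which is exactly the division of labour being argued for.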
This doesn't deal with the design of code and the long term effects of choosing the wrong API.
It also doesn't handle new features, at all, because the tests have not been written. Even if you enforced a requirement for tests with coverage, the tests could (and usually would) still be wrong.
Trying to take conscious, adaptive, intelligent response out of the loop is a mistake.
>"This doesn't deal with the design of code and the long term effects of choosing the wrong API."
Yes it does. With this approach, any change to the design requires a change to the specs to be made first. I already indicated the specs could evolve over time to allow for refactoring and new features.
>"It also doesn't handle new features, at all, because the tests have not been written. Even if you enforced a requirement for tests with coverage, the tests could (and usually would) still be wrong.
>Trying to take conscious, adaptive, intelligent response out of the loop is a mistake."
You've misunderstood. As I stated before, discussions still happen; they're just focused on the specs.
I agree with Ajedi32 on this. Sufficiently detailed specs will be indistinguishable from code. So reviewing code and its behavior is more efficient than having another language to review.
It may depend on the domain, however. I find that a lot of misunderstandings arise in these discussions because people assume the problems are the same in all kinds of computing. That's not so.
I work in viz/UI (interactive charting) so there aren't great libraries for testing or writing specs. All the features interact in unpredictable ways.
Your process may work better where the API is purely functional, where inputs and outputs and side effects are better defined.
I'm happy for you if you've applied this successfully!