> Vertical integration stories like this, or Ford's Rouge plant[1], make me reconsider my general predisposition towards "buy" in the software space.
Schwinn was another great example of this back in its heyday. The Chicago factory could essentially produce every part of a bicycle from 1010 steel.
The issue with this model for a lot of companies is that a high level of vertical integration can make it very difficult to pivot. In the case of Schwinn, the shift in consumer taste towards lighter bicycles/frames was not something they could adjust to in a cost-effective manner; the lighter alloys other companies were beginning to use at the time were not usable in their production process. There were many other factors in their downturn, but that was a big one.
> For many years, I've argued against building software you can easily buy or download, but maybe reinventing the wheel is not so terrible if someone wants a company to be long-lived. Primary benefit is the software can be customized for the company's specific needs and not held back by other companies' software, of course. But even if using open source, it can help avoid fad software trends, or the whiplash-speed changing of standard tools/libraries (e.g. the open-source client-side Javascript world).
There are two sides to the 'buy vs. build' debate that seem oft ignored. The first is that 'build' is a huge cost unknown (i.e. risk) to the business. Maybe you'll build the right solution. Or maybe it becomes a terrifying project, put together so haphazardly that the team winds up taking the developer's machine and putting it in the server room, because they've run out of time and the code doesn't work anywhere else.
But the second... is that a lot of off-the-shelf software has a specific use case in mind. And the more off-the-beaten-path your business is, the higher the likelihood that you'll have to write customizations to actually meet the requirements. Those have their own set of risks, and I've seen projects where teams wound up tossing the 'tool purchased by the business' and writing their own thing, because that was cheaper than integrating with the muddled mess the company signed a fool's bargain contract for (an incomplete, poorly documented AGPL clone of a very popular Apache-licensed product).
It's like everything else, though: companies want to 'externalize' the cost/risk. And sometimes that works. Sometimes you have a good contract with the right carrot-and-stick SLAs, and you get the peace of mind that the vendor -will- respond within 24 hours instead of burning your own internal developers' time troubleshooting the in-house leetcode.
One of my favorite examples of this paradigm in practice is job scheduling and/or queueing. Every company I've worked at has had its own 'opinionated' way of doing jobs. At one, where the requirements were very well defined, the in-house library was hilariously bare code and ran on Windows scheduled tasks, directories-as-output, and at-first-glance terrifying Oracle sprocs. But... when you looked at the actual requirements? It did exactly what it had to and no more. The output had to go to FTP, so who cared if it used directories for output as the default? And quite frankly it worked, and was simple enough that any developer worth their salt could maintain the structure/paradigm.
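For what it's worth, the whole paradigm fits on one page. Here's a rough Python sketch of the shape of it (the real thing was .NET plus Oracle sprocs, and the host, paths, and credentials below are made-up placeholders): a scheduled task runs a script, the script dumps files into an outbox directory, and the outbox gets pushed to FTP.

    # Sketch of the "directories-as-output, then push to FTP" paradigm.
    # Not the original code; OUTBOX, FTP_HOST, and credentials are hypothetical.
    from ftplib import FTP
    from pathlib import Path

    OUTBOX = Path(r"C:\jobs\nightly-report\outbox")  # the job writes its files here
    FTP_HOST = "ftp.example.com"                     # assumed destination server

    def run_job() -> None:
        """Produce output files into the outbox (stand-in for the real sproc call)."""
        OUTBOX.mkdir(parents=True, exist_ok=True)
        (OUTBOX / "report.csv").write_text("id,amount\n1,42\n")

    def ship_outbox() -> None:
        """Upload everything in the outbox, then mark files as sent so reruns skip them."""
        with FTP(FTP_HOST) as ftp:
            ftp.login("user", "password")            # placeholder credentials
            for f in OUTBOX.glob("*.csv"):
                with f.open("rb") as fh:
                    ftp.storbinary(f"STOR {f.name}", fh)
                f.rename(f.with_name(f.name + ".sent"))

    if __name__ == "__main__":  # wired up to a Windows scheduled task in the real system
        run_job()
        ship_outbox()

That's basically the entire mental model; anything that breaks is visible as a file sitting in a directory, which is exactly why it was so easy to maintain.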
At another org that loved to buy/use things off the shelf, I lost many a Sunday to bizarre problems where the combination of Quartz.NET wrappers, MassTransit wrappers, RabbitMQ wrappers... secondary database queue tables... just didn't play nice together and would deadlock on the server.