I do realize you were intending to give examples of why you don't think annotations are very extensible, but it's an odd example, as all of those things can still be achieved via annotations, since annotations can accept values loaded from environment-specific properties.
Exactly this, it’s great fun to have a surface-level understanding of a topic and post derisively for internet points, rather than spend the time and effort to actually learn about the subject at hand!
I'm not digging for "internet points". Yes, superficial replacement values can be retrieved from the environment. But I guess we have to give you a more imaginative or sophisticated example, then, to make the point to you?
How about varying implementations of a service interface? Let's say I have a Scheduler interface and I want to have multiple implementations; maybe one is CronScheduler, another is RandomScheduler, another is BlueMoonScheduler. Each of these schedulers has its own properties and configuration values. I might want to choose, per environment or deployment context, which service implementation to use.
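To make that scenario concrete, here is a minimal sketch of what those types might look like. The Scheduler, CronScheduler, and RandomScheduler names come from the comment above; the `schedule()` method and the per-implementation configuration fields are hypothetical illustrations only.

```java
// Sketch only: method and config fields are made up for illustration.
interface Scheduler {
    void schedule(Runnable task);
}

class CronScheduler implements Scheduler {
    private final String cronExpression; // hypothetical per-implementation config

    CronScheduler(String cronExpression) {
        this.cronExpression = cronExpression;
    }

    @Override
    public void schedule(Runnable task) {
        // register the task to fire according to cronExpression
    }
}

class RandomScheduler implements Scheduler {
    private final long maxDelayMillis; // hypothetical per-implementation config

    RandomScheduler(long maxDelayMillis) {
        this.maxDelayMillis = maxDelayMillis;
    }

    @Override
    public void schedule(Runnable task) {
        // run the task after a random delay of up to maxDelayMillis
    }
}
```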
Annotation configuration makes it (near?) impossible to dynamically wire and configure these scenarios and tailor them to the environment or deployment scenario. Annotations are generally "static" and do not follow a configuration-as-code approach to application deployment.
An external configuration file, as offered by Spring's original (less "favored") XML style, allowed a more composable and sophisticated configuration-as-code approach. Maybe that's a negative, putting that much power into the hands of the service administrator. But as I stated originally, in my experience, statically defined, annotation-driven configuration and/or dependency injection has caused more problems than it has solved.
> Annotation configuration makes it (near?) impossible to dynamically wire and configure these scenarios and tailor them to the environment or deployment scenario. Annotations are generally "static" and do not follow a configuration-as-code approach to application deployment.
Off the top of my head, you could drop a `@Profile` or `@ConditionalOnProperty` annotation on each of your different schedulers and pick one at launch time simply by adding the profile you want to the arguments or the environment. That assumes you want to choose one for the whole app. If you want different ones in different places, you can dynamically load beans in code. Or, if you want them wired entirely with annotations, you could define differently named beans for each context and include `@Qualifier`s on them and at the injection points where they're used.
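For what it's worth, a rough sketch of the `@Profile` variant, assuming Spring Boot and building on the hypothetical Scheduler types sketched earlier; the profile names, bean names, and constructor arguments are all made up:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

@Configuration
class SchedulerConfig {

    // Only registered when the "cron" profile is active.
    @Bean
    @Profile("cron")
    Scheduler cronScheduler() {
        return new CronScheduler("0 0 * * * *"); // hypothetical cron expression
    }

    // Only registered when the "chaos" profile is active.
    @Bean
    @Profile("chaos")
    Scheduler randomScheduler() {
        return new RandomScheduler(60_000L); // hypothetical max delay
    }
}
```

Picking an implementation per environment then happens outside the code, e.g. `--spring.profiles.active=cron` on the command line or `SPRING_PROFILES_ACTIVE=chaos` in the environment. The `@Qualifier` route mentioned above would instead register both beans unconditionally under distinct names and inject the wanted one by name at each use site.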
Which isn't to say that annotations are perfect, but dynamic runtime configuration is sort of core to how Spring operates, and annotations are no exception to that.
> Annotation configuration makes it (near?) impossible to dynamically wire and configure these scenarios and tailor them to the environment or deployment scenario
All of your scenarios are trivial to implement with annotations.
Most web application servers work this way. It also works really well in practice using modern CD tools: update your configuration and perform a gradual rollout of your application servers to pick up the change.
People will argue crazy sh*t here on HN, like changing a schedule is a thing that needs instant gratification, and god forbid you have to bounce a service to read an updated configuration … :)
It is actually the opposite: it is currently considered good practice to run stateful workloads outside of Kubernetes and stateless workloads inside of Kubernetes.
> It is actually the opposite: it is currently considered good practice to run stateful workloads outside of Kubernetes and stateless workloads inside of Kubernetes.
Is that still true?
I wouldn't call the parent comment entirely fair, because there definitely can be reasons for running stateful workloads outside of containers altogether (familiarity included), but at the same time it feels like a lot of effort has been invested into making that a non-issue.
Honestly, as long as you have storage and config set up correctly, it's not like you even need an Operator; that's for more advanced setups. I've been running databases in containers (even without Kubernetes) for years and haven't had that many issues at small/medium scale.
I find it funny how sometimes there are two sides to the same coin, and articles like these rarely talk about engineering tradeoffs. Just one solution good, other solution bad. I think it is a mistake for a technical discussion to not talk in terms of tradeoffs.
Obviously it makes sense to not use complex tech when simple tech works, especially at companies with a lower traffic volume. That is just practical engineering.
The inverse, however, can also be true. At super high volumes you run into issues really quickly. Just got off a 3-hour site-wide outage due to the database being unable to keep up with an unprecedented queue load; the db system basically ground to a halt. The proposed solution is actually to move off of a dedicated db queue in favor of SQS.
This was a system that has run well for about 10 years. Granted, there was an unprecedented queue volume for this system, but sometimes a scaling ceiling is hit, and it is hit faster than you might expect from all these comments saying to always use a db, even with all the proper indexing and optimizations.
Same, we use EKS and a very similar setup, and our workload has some pretty high throughput and scaling requirements. Works amazingly well for our team; wouldn't change it for anything else at this point. Very low maintenance effort since AWS manages the K8s infra.
This is the video I recommend to others when working with DynamoDB. The video is by Rick Houlihan, about DynamoDB modeling. In my experience, most developers who complain about DynamoDB don't fully understand it.
All technologies have their pros and cons. They have use cases where they make sense and use cases where they don't. The job of an engineer is to decide which tool fits which use case. To dismiss a useful technology as "BS", especially one used by companies all over the world for over a decade, without any backing data seems a bit disingenuous.
> All technologies have their pros and cons. They have use cases where they make sense and use cases where they don't. The job of an engineer is to decide which tool fits which use case.
Exactly. But that's not how he paints it. I have seen him bashing RDBMSs as a thing of the past, and promoting his way of data modeling and "new" database technology as what companies should be starting with today or moving to.
I'm always surprised by all the vitriol on here against kubernetes.
My development experience must have been drastically different than the average hacker news startup developer.
It is definitely true you don't need kubernetes for all use cases, but I also sometimes question if people have worked on the sort of large-scale systems where k8s really shines.
My background is at larger enterprise-type tech companies, doing high-volume service traffic at significant scale, and I've found Kubernetes invaluable for managing our services. To the point that if someone suggested we remove it from the stack, I would question their experience operating these sorts of systems.
The thing is, I don't think it is bold at all. Housing being overvalued seems like a perfectly obvious consequence of negative or near-zero interest rates. One side effect of those policies is that mortgage rates were artificially lowered.
A lot of people buying houses unfortunately don't really look at the aggregate cost of home ownership and focus (incorrectly) on monthly payments. So, because they can afford the monthly payment, they think they can afford the house. This, of course, works until the music stops.