
> In many other ecosystems it's not uncommon to run into having issues where certain libraries can't be used together at all.

The same problem exists in Rust, but from the other side.

If I use serde for serialization I am effectively locked into using crates that implement serde traits (or resorting to newtype hacks to implement them myself).

If I want to use something more niche than serde, I essentially lose access to all the popular crates as they only implement serde traits.



Newtypes aren’t hacks, they’re perfectly acceptable in my opinion. Especially if you’re willing to also use a crate like `derive_more`.


In my experience using newtypes like this causes a constant shuffle between the original type and the newtype.

If a library exposes Foo and I wrap it in MyFoo implementing some trait, I need to convert to MyFoo everywhere the trait is needed and back to Foo everywhere the original type is expected.

In practice this means cluttering the code with as_foo and as_myfoo all over the place.

You could also impl From or Deref for one direction of the conversion, but it makes the code less clear in my opinion.
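To make the shuffle concrete, here is a minimal std-only sketch (all names hypothetical): `Foo` stands in for the library type, `Pretty` for a trait it doesn't implement, and the `From` impls are the boilerplate that conversions then flow through at every boundary.

```rust
// Pretend `Foo` comes from an external crate we can't modify.
pub struct Foo {
    pub value: u32,
}

// A trait from some other crate that `Foo` doesn't implement.
pub trait Pretty {
    fn pretty(&self) -> String;
}

// The newtype bridges the two worlds...
pub struct MyFoo(pub Foo);

impl Pretty for MyFoo {
    fn pretty(&self) -> String {
        format!("Foo({})", self.0.value)
    }
}

// ...but conversions pile up at every call site.
impl From<Foo> for MyFoo {
    fn from(f: Foo) -> Self {
        MyFoo(f)
    }
}

impl From<MyFoo> for Foo {
    fn from(m: MyFoo) -> Self {
        m.0
    }
}

fn main() {
    let foo = Foo { value: 7 }; // handed to us by the library
    let mine: MyFoo = foo.into(); // wrap to use the trait
    println!("{}", mine.pretty());
    let _back: Foo = mine.into(); // unwrap to call the library again
}
```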


One strategy I like is to declare “view” types for serialization and deserialization, since you’re going to be doing that anyway if your serialized format is meant to stay compatible across versions.

Serde also comes with a bunch of attributes and features to make it easy to short-circuit this stuff ad hoc.

I know this only solves the serialization use case, but that seems to be where most people run into this.


honestly in my experience it rarely matters (if you care about stable APIs), as most types you want at an API boundary are written (or auto-generated) by you

this leaves a few, often small, types like `DateTime<Utc>`, which you can handle with serde's attributes for overriding serialization functions, or with automatic conversions, without even needing newtypes (though some of these attributes could be better designed)

serde is not perfect but pretty decent; IMHO though the proc macros it provides need some love/a v2 rewrite. such a rewrite would only affect the implementation's code gen, so it would be fully backward compatible, could be mixed with old code, and could even come from a different author (i.e. it doesn't have the lock-in problem)

Anyway, that doesn't make the problem go away; serialization/serde is just both the best and the worst example. (Best because it's extremely widespread and "good enough" but not perfect, which is poison for ecosystem evolution; worst because serialization is enough of a special case that its best solution is potentially unusable for the generic problem (e.g. reflection).)


for me that is a completely different problem,

one you solve when initially writing code (so you can properly account for it and control it)

instead of a problem which can blow up when you update a package for a very pressing security fix

in the end it's a question of what is more important: stability, or the option to monkey patch functionality into your dependencies without changing them

and given that you can always non-monkey patch crates (rust makes vendoring dependencies relatively easy in case upstream doesn't fix things), I prefer the stability aspect (though if you do patch crates you re-introduce many of the issues in a different place, with the main difference that there is a chance to upstream your changes)
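The non-monkey-patch route the comment alludes to is Cargo's `[patch]` mechanism: a sketch of pointing a dependency at a local copy, with `some-crate` as a made-up crate name and path.

```toml
# Cargo.toml: redirect a dependency to a local (vendored/patched) copy
# without touching any of the code that depends on it.
[patch.crates-io]
some-crate = { path = "vendor/some-crate" }
```

Every crate in the dependency graph that names `some-crate` then resolves to the patched copy, which keeps the fix in one place until it can be upstreamed.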



