
One issue I see is that flipping back and forth between chapters reloads images from different URLs, which means they're uncacheable. I guess that's somehow related to the MangaDex@Home thing, but if the URLs were generated in a more deterministic manner (keyed on some client ID + the chapter being loaded), the browser could avoid redundant traffic.


That's very close to how MD@H works, but it also has a time component, and tokens are not generated by our main backends, so it'd require a separate internal HTTP call per chapter.


Another thing: for each page that's loaded, a report is sent. Instead, these could be aggregated on the client (e.g. flushed once a second) and processed as a batch on the server side, which should be faster.
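A minimal sketch of that client-side aggregation, with hypothetical names (the report payload and flush callback are placeholders): queue reports as pages load and send them in one request per interval instead of one per page.

```javascript
// Hypothetical report batcher: instead of one POST per page load,
// queue entries and flush them as a single batch on a timer.
class ReportBatcher {
  constructor(flushFn, intervalMs = 1000) {
    this.flushFn = flushFn; // e.g. batch => fetch('/report', { method: 'POST', body: JSON.stringify(batch) })
    this.queue = [];
    this.timer = setInterval(() => this.flush(), intervalMs);
  }

  report(entry) {
    // Called once per page load; cheap, no network traffic here.
    this.queue.push(entry);
  }

  flush() {
    if (this.queue.length === 0) return;
    const batch = this.queue;
    this.queue = [];
    this.flushFn(batch); // one request carrying all queued reports
  }

  stop() {
    clearInterval(this.timer);
    this.flush(); // don't drop pending reports on teardown
  }
}
```

The server then inserts the whole batch in one transaction instead of handling N tiny requests, which is where most of the win comes from.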

And if your JS assets are content-hashed, you can add Cache-Control: immutable so the browser doesn't have to revalidate them when the user hits F5.
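A sketch of how a server might pick that header (the filename pattern is an assumption; adjust to whatever your bundler emits): content-hashed filenames change whenever the content changes, so they can safely be cached forever with the immutable directive.

```javascript
// Assumption: the bundler emits content-hashed names like app.3f9a1c.js.
// Such files never change in place, so they can be cached indefinitely;
// everything else stays revalidated on each load.
function cacheHeaderFor(filename) {
  const hashed = /\.[0-9a-f]{6,}\.(js|css)$/.test(filename);
  return hashed
    ? 'public, max-age=31536000, immutable'
    : 'no-cache';
}

console.log(cacheHeaderFor('app.3f9a1c.js'));
// -> public, max-age=31536000, immutable
```

With immutable set, supporting browsers skip the conditional revalidation request entirely on reload, not just the body transfer.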



