D3 is pretty ok with large datasets but I understand your point.
What Shiny does to get around this is 'evaluate' the plots natively on the backend, creating a rasterized PNG file. A similar approach could work for Pyxley (using matplotlib or Seaborn to render the plot, then sending that image file to the front end), but I suspect that with so much development time already spent on d3 support, such an approach won't be implemented natively.
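The server-side rendering idea is simple enough to sketch. This is a hypothetical helper, not Pyxley's actual API: render a matplotlib figure headlessly and grab the PNG bytes, which a web framework could then return in an HTTP response.

```python
# Sketch of server-side plot rasterization (assumed helper, not Pyxley's API).
import io

import matplotlib
matplotlib.use("Agg")  # headless backend: no display server required
import matplotlib.pyplot as plt


def render_png(x, y):
    """Render a simple line plot and return raw PNG bytes."""
    fig, ax = plt.subplots()
    ax.plot(x, y)
    buf = io.BytesIO()
    fig.savefig(buf, format="png")
    plt.close(fig)  # free the figure; long-running servers leak otherwise
    return buf.getvalue()


png = render_png(range(100), [v * v for v in range(100)])
```

The client just receives a static image, so the browser's workload is constant no matter how many data points went into the plot; the tradeoff is that you lose D3-style interactivity.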
>What Shiny does to get around this is natively 'evaluate' the plots on the backend, creating a rasterized PNG file
I don't think that's the case for D3 charts, because that would kill the interactivity that is so great about D3. I use RCharts to inject D3 into my Shiny applications, and have run into performance issues with just a couple hundred data points. I think this is because all the heavy lifting is done by the client (browser), not the server.
Unless you have a screen with 50,000 pixels you would need to downsample the data anyway. Wouldn't you normally do that instead of handing a visualization library the full data set?
You actually do have a screen with 50k pixels. Many more than that.
A continuous heat map over two dimensions, at 500x500 pixels, is 250,000 pixels. Of course, to generate the heat map, you have to aggregate (binning on two dimensions), but you don't have to downsample.
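The aggregation step above can be sketched in a few lines, assuming NumPy: bin a million raw points into a 500x500 grid, so only 250,000 counts (not a million coordinates) need to reach the browser.

```python
# Sketch: binning raw points into a 500x500 heat-map grid (assumes NumPy).
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1_000_000)
y = rng.normal(size=1_000_000)

# histogram2d does the two-dimensional binning the comment describes.
counts, xedges, yedges = np.histogram2d(x, y, bins=500)

# Every raw point lands in exactly one bin: this is aggregation,
# not downsampling -- no data is thrown away.
assert counts.shape == (500, 500)
assert counts.sum() == 1_000_000
```

The `counts` grid is what a D3 heat map would actually draw, one cell per pixel region.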