Hacker News

One key consideration is “provable fairness”. It’s my understanding that exchanges use techniques like long, equal-length fiber optic cables to all racks within the exchange datacenter to convince customers that everyone is on a level playing field.

This is a lot harder to do when a server is virtualized somewhere on some rack on EC2. Exactly as mentioned, people will try to optimize by spinning instances up and down as close to the exchange server as possible. Customers will be unhappy because they can’t prove that it’s fair, even if they have the closest server.

Overall, great thought-provoking writing btw



It's provable that it's not fair. AWS multicast is software based, not hardware based.


I think the lines between software-based and hardware-based are a little blurred these days with accelerator cards and whatnot. It's just a lot harder to come up with the same level of guarantees when you're basically running a hypervisor on top of it.


The only exchange I know that does the cable in a box trick is IEX. Everyone else is based on "the closer, the better". Colocation is king.


> This is a lot harder to do when a server is virtualized somewhere on some rack on EC2.

There are bare metal EC2 instances.


It's about the interconnect and the proximity.


and not having a for() loop doing multicast fanout in software

which sounds like what AWS Transit Gateway is
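To illustrate the point being made: hardware multicast replicates a packet in the switch fabric, so all subscribers see it at effectively the same instant, whereas a software fanout serves subscribers one after another. A minimal sketch (hypothetical names, purely illustrative — not AWS's actual implementation):

```python
def software_fanout(packet, subscribers, send):
    """A for() loop doing multicast fanout in software: each send
    completes strictly before the next one begins, so whoever is
    first in the list is always served first."""
    for sub in subscribers:
        send(sub, packet)

# Demonstration: record the order in which subscribers are served.
delivery_order = []

def fake_send(subscriber, packet):
    delivery_order.append(subscriber)

software_fanout(b"market-data-tick", ["rack-A", "rack-B", "rack-C"], fake_send)
print(delivery_order)  # rack-A is always served before rack-C
```

Even if each send takes only microseconds, the ordering is deterministic and biased — exactly the kind of thing equal-length fiber is meant to rule out.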


At some point, someone has the shortest route connecting to the exchange's bare metal EC2 instance, and that organisation has a significant advantage in high frequency trading.



