(Author here) IIUC you're saying that 707133f-a should be at 5th Ave & 9th Street, not 5th Ave & Union Street? Can you say more about why? The text on the back of the first image says "Union St. Station, 5th Ave," which is how it winds up there. On the other hand, the NYPL page[1] titles the image "Union St. - 18th St."
(I briefly got excited that there might be a street sign _in_ the photo, but if you zoom way in it says "DENTIST")
+1 to 1940s.nyc. Very different photos — those were taken for tax assessment, while the ones on OldNYC were taken to document the city as it changed. The photographer had an arrangement where he'd get tips from demolition crews and go shoot buildings before they were gone forever.
I'm pretty sure both are correct at Union St and 5th Ave. The Manhattan Savings Bank building (left edge in both photos) is still there, and fairly distinct.
you're right, this is actually correctly placed! I was confusing the orientation. I live right around there and recognize the M&T bank in the photo on the left, so it can't be down by 9th
An elephant in the room is that if you have too much data to process without AI, you have too many results to check for correctness when they come out of the AI.
This has been true since before LLMs, but now far more people and use cases are enabled far more easily. People are undisciplined and quick to take short-term gains and hand-wave the correctness.
It is less of a problem if the output is explicitly marked as AI-generated and unverified, so people can treat it as a rough first draft. But mix AI output with well-vetted human-reviewed data, and you've basically made your entire data set worthless.
I'm a big fan of Pastvu: go to the "gallery" view, choose one of the "-stan" former Soviet republics, set the date filter to 1986-1996, and enjoy nostalgia from a parallel world.
looks cool! one bit of feedback: make your demo gif get to the point faster. either practice typing a bit quicker or speed it up 2x for the typing section
on Bun's website, the runtime section features HTTP, networking, storage -- all are very web-focused. any plans to start expanding into native ML support? (e.g. GPUs, RDMA-type networking, cluster management, NFS)
Probably not. When we add new APIs in Bun, we generally base the interface off of popular existing packages. The bar is very high for a runtime to include libraries because the expectation is to support those APIs ~forever. And I can’t think of popular existing JS libraries for these things.
we've discovered some kind of differentiable computer[1] and as with all computers, people have their own interests and hobbies they use them for. but unlike computers, everyone pitches their interest or hobby as being the only one that matters.
one thing I've learned in my career is that escape hatches are one of the most important things in tools made for building other stuff.
dropping down into the familiar or the simple or the dumb is so innately necessary in the building process. many things meant to be "pure" tend to also be restrictive in that regard.
Functional languages are not necessarily pure though. Actually outside Haskell don't most functional first languages include escape hatches? F# is the one I have the most experience with and it certainly does.
what makes you say this? modern LLMs (the top players in this leaderboard) are typically equipped with the ability to execute arbitrary Python and regularly do math + random generations.
I agree it's not an efficient mechanism by any means, but I think a fine-tuned LLM could play near GTO for almost all hands in a small ring setting
To play GTO you currently need to play hand ranges. (For example, when looking at a hand I would think: I could have AKs-ATs or QQ-99, and she/he could have JT-98s or 99-44, so my next move will act like I have strength and they don't, because the board doesn't contain any low cards.) You have to do this because you can't always bet 4x pot when you have aces — otherwise your opponents will always know your hand strength directly.
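That range shorthand can be expanded mechanically. A hypothetical sketch — `expand_pair_range` is an invented helper (not from any poker library), and it assumes the common "QQ-99" pocket-pair convention:

```python
# High-to-low rank order, the usual poker convention.
RANKS = "AKQJT98765432"

def expand_pair_range(spec):
    """Expand pocket-pair shorthand like 'QQ-99' into explicit pairs."""
    hi, lo = spec.split("-")
    i, j = RANKS.index(hi[0]), RANKS.index(lo[0])
    return [r + r for r in RANKS[i:j + 1]]
```

A solver (or a player) reasons over every hand in the expanded range at once, not a single holding — which is exactly the part that's awkward to express to an LLM in plain chat.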
LLMs aren't capable of this deception. They can't be told that they have one thing, pretend they have something else, and then revert to ground truth. Their eager nature with large contexts leads to them getting confused.
On top of that there's a lot of precise math. In no limit the bets are not capped, so you can bet 9.2 big blinds in a spot. That could be profitable because your opponents will call and lose (e.g. the players willing to pay that sometimes have hands that you can beat). However, betting 9.8 big blinds might be enough to scare off the good hands. So there's a lot of probability math with multiplication.
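The bet-sizing tradeoff above is just an expected-value product. A toy sketch with made-up call and win frequencies (none of these numbers come from real play; `bet_ev` is an invented helper for illustration):

```python
def bet_ev(bet, p_call, p_win_when_called, pot):
    """EV of betting `bet` into `pot`: fold equity wins the current pot;
    when called, we win pot + bet or lose our bet."""
    p_fold = 1 - p_call
    return (p_fold * pot
            + p_call * (p_win_when_called * (pot + bet)
                        - (1 - p_win_when_called) * bet))

# A bigger bet folds out more hands but risks more when called
# (frequencies here are invented):
ev_small = bet_ev(bet=9.2, p_call=0.55, p_win_when_called=0.60, pot=10)
ev_large = bet_ev(bet=9.8, p_call=0.45, p_win_when_called=0.55, pot=10)
```

Each candidate size needs this kind of multiply-and-compare done precisely, across the whole range — which is where doing it "in the head" of an LLM falls apart.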
That kind of multiplication-heavy math, done accurately, is not the forte of LLMs.
Agreed. I tried it on a simple game of exchanging colored tokens from a small set of recipes. I challenged it to start with two red and end up with four white, for instance. It failed: it would make one or two correct moves, then either hallucinate a recipe, hallucinate the resulting set of tiles after a move, or just declare itself done!
``` 2x + y = \operatorname{eml}\Big(1,\; \operatorname{eml}\big(\operatorname{eml}(1,\; \operatorname{eml}(\operatorname{eml}(1,\; \operatorname{eml}(\operatorname{eml}(L_2 + L_x, 1), 1) \cdot \operatorname{eml}(y,1)),1)\big),1\big)\Big) ```
for me Gemini hallucinated EML to mean something else despite the paper link being provided: "elementary mathematical layers"