Hacker News | new | past | comments | ask | show | jobs | submit | learned's comments

CodeCatalyst is pretty surprising on that list. Maybe it tried to do too much?

Also, the deprecation alert on the CodeCatalyst site is incorrect at the moment:

> Important Notice: Amazon CodeCatalyst is longer open to new customers starting on November 7, 2025

https://codecatalyst.aws/explore


They tried to do a LOT. An absolutely huge amount of work trying to abstract all of the existing Code* services, big chunks of other AWS services, and then corp (and non-corp!) identity. The last part, getting human identity into AWS, is such a fundamental gap. In the end it's unsurprising that they couldn't get to a competitive place against GitLab/GitHub/etc. I do hope there's more success with Identity Center picking up some of those IdP pieces.


In my experience, any time AWS tries to create a service outside of the primitives, it’s a mess.


I'm guessing it's just harder to dogfood in a way that others can use without all of the other internal-only infra (including dev tooling) available internally. And to get to the point where you could dogfood at AWS scale, anything that's difficult to adopt incrementally is going to be a pain.


Exactly, no one internally is going to use something like Amplify or CodeCatalyst. Just like internal developers didn’t use CodeCommit (AWS’s now-deprecated Git service).

It did hurt me, though, when they got rid of CodeCommit. I work in consulting and I always ask for my own isolated dev AWS account in their organization with basically admin access. It was nice to just be able to put everything in CodeCommit without dealing with trying to become part of their GitHub organization if there was red tape.

I miss Cloud9 too. I didn’t have to bother with making sure their computers were set up with all of the prerequisites, and it gave me a known environment for the handover.


A big caveat mentioned in the article is that this experiment was done with a small set (N=47) of specific questions that they expected to have relatively simple relational answers:

> The researchers developed a method to estimate these simple functions, and then computed functions for 47 different relations, such as “capital city of a country” and “lead singer of a band.” While there could be an infinite number of possible relations, the researchers chose to study this specific subset because they are representative of the kinds of facts that can be written in this way.

About 60% of these relations were retrieved via a linear function in the model. The rest appeared to involve nonlinear retrieval and are still under investigation:

> Functions retrieved the correct information more than 60 percent of the time, showing that some information in a transformer is encoded and retrieved in this way. “But not everything is linearly encoded. For some facts, even though the model knows them and will predict text that is consistent with these facts, we can’t find linear functions for them. This suggests that the model is doing something more intricate to store that information,” he says.


I have a setup with poetry that runs the python data loaders in the poetry-managed virtualenv.

I just created a Python project, and then instead of `yarn run dev` to start the dev server, I run `poetry run yarn run dev` so the Python is executed within the virtualenv.

This setup also lets you use a custom python package to define reusable and unit-testable code for the dataloaders that you can import into the *.json.py files to keep those really simple.
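As a sketch of what one of those loaders can look like (filename and helper are hypothetical): Observable Framework runs the script and captures its stdout as the generated file, so the loader itself stays tiny and the logic can live in an importable package.

```python
# data/example.json.py -- a minimal data loader sketch. When the dev
# server is started with `poetry run yarn run dev`, this script runs
# under the Poetry-managed interpreter; whatever it writes to stdout
# becomes the contents of data/example.json.
import json
import sys

def build_rows():
    # In practice, import this from a reusable, unit-testable package
    # inside the Poetry project instead of defining it inline.
    return [{"n": n, "square": n * n} for n in range(5)]

json.dump(build_rows(), sys.stdout)
```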


Why do you need to bundle these, is it to simplify iterating on frontend and data loaders simultaneously? Why not run them separately?


It’s totally possible to decouple these if your Python outputs plain JSON/CSV into the data/ directory, which you either commit into the repo or generate just before build time. Then you can import that raw JSON data into an Observable .md file.

But if you want dynamically generated data at build time and want to make use of Observable’s dataloader automatic execution of data/*.json.py, for instance, while still maintaining a custom virtualenv for the project rather than your system python, you’ll need some way to specify that virtualenv’s interpreter while observable executes the build for the dev server or the full dist/ output.

So for both options it’s largely a matter of taste. I personally like using the poetry virtualenv because it’s simple to manage dependencies and the venvs in one tool, while letting me use observable’s dataloaders with third-party or custom python packages. It sounded like the parent comment wanted to use this type of approach so I focused it to that scenario specifically. I like the simplicity of the single command to generate the data and build the site.


Thank you for the thorough response, that makes sense


Whenever Tipasa comes up, I always think of Camus's quote from "Return to Tipasa":

> But in order to keep justice from shriveling up like a beautiful orange fruit containing nothing but a bitter, dry pulp, I discovered once more at Tipasa that one must keep intact in oneself a freshness, a cool wellspring of joy, love the day that escapes injustice, and return to combat having won that light. Here I recaptured the former beauty, a young sky, and I measured my luck, realizing at last that in the worst years of our madness the memory of that sky had never left me. This was what in the end had kept me from despairing. I had always known that the ruins of Tipasa were younger than our new constructions or our bomb damage. There the world began over again every day in an ever new light. O light! This is the cry of all the characters of ancient drama brought face to face with their fate. This last resort was ours, too, and I knew it now. In the middle of winter I at last discovered that there was in me an invincible summer.


Touché, someone who advocated colonialism and apartheid waxing poetic about injustice and the land he's colonizing.


This is beautifully written, thanks for sharing.


Very cool! What format/content does a typical card contain for this category of algorithm cards you use?


It’s usually just a problem and sometimes a simple example I can write as a test case for correctness.

Some cards I’ve put in lately are:

Calculate the level of a binary tree with the minimal sum of its elements.

Reconstruct a binary tree given an inorder and preorder list of the nodes.

The backs are usually blank or space and time complexities that I should be able to achieve.
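For the second card, one standard answer looks like this (assuming unique node values; `Node` and `rebuild` are just illustrative names). A value→index map over the inorder list makes it O(n) time and space:

```python
# Rebuild a binary tree from its preorder and inorder traversals.
# Preorder yields each subtree's root first; that root's position in
# the inorder list splits the remaining nodes into left and right.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    val: int
    left: "Optional[Node]" = None
    right: "Optional[Node]" = None

def rebuild(preorder, inorder):
    index = {v: i for i, v in enumerate(inorder)}  # assumes unique values
    it = iter(preorder)

    def helper(lo, hi):
        if lo > hi:
            return None
        node = Node(next(it))   # next preorder value is this subtree's root
        mid = index[node.val]
        node.left = helper(lo, mid - 1)   # build left before right:
        node.right = helper(mid + 1, hi)  # matches preorder consumption
        return node

    return helper(0, len(inorder) - 1)
```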


I usually keep Invoke installed globally; that way I can use it for setting up and interacting with pipenvs from outside. Most of my tasks besides the initial setup use 'pipenv run <command>', so I never actually have to enter the virtualenv in most cases.


I'll second invoke. I use it for all of my python projects now and I love working with it. It makes it very clean to manage more complex tasks that have a lot of conditionals involved.


Yanofsky and Mannucci's "Quantum Computing for Computer Scientists" is a smooth intro if you come from a CS background.

Nielsen and Chuang's "Quantum Computation and Quantum Information" is more thorough and advanced from a mathematical point of view. But it contains a primer on the linear algebra required.


Shopify


There is also some really interesting epigenetics research focusing on age-related disease.

For example, Steve Horvath's group at UCLA has been refining an "epigenetic clock" using DNA methylation data to predict all-cause mortality in several species with a relatively simple test. It has the potential to be an incredibly valuable metric in the field going forward.

