kevmo314's comments | Hacker News

If someone else finds a cryptocurrency vulnerability, they too will take as much of your allocation as they can and cash it out.

A fool and their money are easily parted.

This might be some journalistic confusion. If you go to the CERN documentation at https://twiki.cern.ch/twiki/bin/view/CMSPublic/AXOL1TL2025, it states:

> The AXOL1TL V5 architecture comprises a VICReg-trained feature extractor stacked on top of a VAE.
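
For anyone unfamiliar with the jargon, here's a purely illustrative PyTorch sketch of one plausible reading of that quoted architecture: a small encoder (imagine it pretrained with the VICReg self-supervised objective) feeding a VAE whose reconstruction error serves as an anomaly score. The layer sizes, input width, and exact wiring are all assumptions, not the actual AXOL1TL V5 implementation:

    # Purely illustrative; all sizes are made up.
    import torch
    import torch.nn as nn

    class FeatureExtractor(nn.Module):
        """Encoder, imagined as pretrained with the VICReg objective."""
        def __init__(self, in_dim=57, feat_dim=32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, feat_dim))

        def forward(self, x):
            return self.net(x)

    class VAE(nn.Module):
        """VAE over the extracted features."""
        def __init__(self, feat_dim=32, latent_dim=8):
            super().__init__()
            self.enc = nn.Linear(feat_dim, 2 * latent_dim)  # mu and logvar
            self.dec = nn.Linear(latent_dim, feat_dim)

        def forward(self, f):
            mu, logvar = self.enc(f).chunk(2, dim=-1)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
            return self.dec(z), mu, logvar

    features = FeatureExtractor()(torch.randn(4, 57))   # batch of 4 events
    recon, mu, logvar = VAE()(features)
    anomaly_score = (recon - features).pow(2).mean(dim=-1)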


What is git if not a database for source code?

Meh, then filesystems are databases for bytes. Airplanes are buses for flying.

I could make that argument, but I wouldn’t believe it.


Both of those statements are true.

It is funny that the AI's counterarguments amount to "you're hallucinating".

Hahaha, probably right though.

Someone with the ability to register .gov domains is trying to make a sneaky buck.

It used to be that anyone with a fax machine could register any .gov domain, but they fixed that after a news report about it.

I was curious about this, so for anyone else who’s interested, here’s a KrebsOnSecurity article from 2019 about how easy it was to fraudulently register a .gov domain:

https://krebsonsecurity.com/2019/11/its-way-too-easy-to-get-...

I haven’t seen any follow-up reporting, but it looks like the process now requires some semblance of identity verification:

https://get.gov/domains/before/


Not a lawyer, but as I understand it, copyright is bound to distribution, so if the person's perfect memorization of a book results in them reproducing it verbatim, then probably yes.


I don’t even trust myself to not mess up my git repo


The core search algorithm is very simple, though. A 4KB engine may not run that fast if it does exhaustive search, but it will be quite accurate.
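
For a sense of how simple the core is, here's a minimal negamax with alpha-beta pruning. Tic-tac-toe stands in for chess so the sketch stays self-contained; a real engine swaps in chess move generation and an evaluation heuristic, but the search skeleton is the same:

    # Minimal negamax with alpha-beta pruning over tic-tac-toe.
    # The board is a 9-character string; scores are from the mover's view.

    def lines(board):
        rows = [board[i:i + 3] for i in (0, 3, 6)]
        cols = [board[i::3] for i in range(3)]
        return rows + cols + [board[0::4], board[2:7:2]]  # plus diagonals

    def winner(board):
        for line in lines(board):
            if line[0] != "." and line.count(line[0]) == 3:
                return line[0]
        return None

    def negamax(board, player, alpha=-2, beta=2):
        """Return (score, move) from `player`'s point of view."""
        w = winner(board)
        if w is not None:
            return (1 if w == player else -1), None
        moves = [i for i, c in enumerate(board) if c == "."]
        if not moves:
            return 0, None  # draw
        best_score, best_move = -2, None
        opponent = "O" if player == "X" else "X"
        for m in moves:
            child = board[:m] + player + board[m + 1:]
            score = -negamax(child, opponent, -beta, -alpha)[0]
            if score > best_score:
                best_score, best_move = score, m
            alpha = max(alpha, score)
            if alpha >= beta:
                break  # cutoff: this branch can't change the result
        return best_score, best_move

    print(negamax("." * 9, "X"))  # perfect play from the empty board: a draw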

According to TCEC, the time control is 30 min + 3 sec. That's a lot of compute!


If you look at the current winner [1], it does a lot more than brute-force tree search. The state space of chess is simply too big to cover without good heuristics. Deep Blue may have been a pure brute-force approach when it beat Kasparov (after Deep Thought failed with the same core algorithm), but modern chess engines search far deeper on the tree with far fewer nodes than Deep Blue ever could, thanks to better heuristics.

[1] https://github.com/MinusKelvin/ice4


I'm not suggesting that it's only brute-force tree search, just that it's not very complicated to develop a theoretically perfect chess engine, in direct response to the parent:

> It's wild to think that 4096 bytes are sufficient to play chess on a level beyond anything humans ever achieved.


I wonder if GitHub is feeling the crush of fully automated development workflows? Must be a crazy number of commits now to personal repos that will never convert to paid orgs.


IME this all started after MSFT acquired GitHub but well before vibe coding took the world by storm.

ETA: Tangentially, private repos became free under Microsoft ownership in 2019. If they hadn't done that, they could've extracted $4 per month from every vibe coder forever(!)


Why would every vibe coder necessarily use private repos?

IIRC they were kinda forced to make private repos free because competitors like GitLab had it and they felt threatened.


Is someone who isn't really using GitHub's free service losing something important?


As an individual, likely not. As a team or organization there are nice benefits though.


This is the real scenario behind the scenes. They are struggling with scale.


How much has the volume increased, from what you know?


Over 100x is what I’m hearing. Though that could just be panic and they don’t know the real number because they can’t handle the traffic.


An anecdote: on one project, I use a skill + custom CLI to help get PRs through a sometimes long and winding CI process: `/babysit-pr`

This includes regularly polling CI checks using `gh`. My skill / CLI is broken right now:

`gh pr checks 8174 --repo [repo] 2>&1`

   Error: Exit code 1

   Non-200 OK status code: 429 Too Many Requests
   Body:
   {
     "message": "This endpoint is temporarily being throttled. Please try again later. For more on scraping GitHub and how it may affect your rights, please review our Terms of Service (https://docs.github.com/en/site-policy/github-terms/github-terms-of-service)",
     "documentation_url": "https://docs.github.com/graphql/using-the-rest-api/rate-limits-for-the-rest-api",
     "status": "429"
   }
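
In the meantime I'm wrapping the call in a backoff loop. A minimal sketch, assuming `gh` is on your PATH; the repo name, PR number, and retry policy here are placeholders:

    # Hypothetical backoff wrapper for the throttled `gh pr checks` call.
    import subprocess
    import time

    def pr_checks_with_backoff(pr: str, repo: str, max_tries: int = 5) -> str:
        delay = 5.0
        for _ in range(max_tries):
            result = subprocess.run(
                ["gh", "pr", "checks", pr, "--repo", repo],
                capture_output=True, text=True,
            )
            if result.returncode == 0:
                return result.stdout
            output = result.stdout + result.stderr
            if "429" not in output and "Too Many Requests" not in output:
                # Nonzero exit can also mean failing checks; don't retry those.
                raise RuntimeError(output)
            time.sleep(delay)  # throttled: back off and try again
            delay *= 2
        raise RuntimeError(f"still throttled after {max_tries} tries")

    print(pr_checks_with_backoff("8174", "someorg/somerepo"))  # placeholder repo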


Lmao, not even close. GitHub themselves have released the numbers: 121M new repos in 2025, with the total ending the year at 630M.

https://github.blog/news-insights/octoverse/octoverse-a-new-...


So the "x" might have been a "%".


So much for GitHub being a good source of training data.

Btw, someone prompt Claude Code: "make an equivalent to GitHub.com and deploy it wherever you think is best. No questions."


Goodness, if that's true... And I actually felt bad when they banned me from the free tier of LFS.


One hundred? Did I read that right?


Yes, millions of people running code agents around the clock, where every tiny change generates a commit, a branch, a PR, and a CI run.


I simply do not believe that all of these people can and want to set up CI. Some, maybe, but even if the agent recommends it, only a fraction of people would actually do it. Why would they?


But if you set up CI, you can pull up the mobile site on your phone, chat with Copilot about a feature, then ask it to open a PR, let CI run, iterate a couple of times, then merge the PR.

All the while, you're playing Wordle and reading the news on the morning commute.

It's actually a good workflow for silly throwaway stuff.


GitHub CI is extremely easy to set up, and agents can configure it from the local codebase.


Codex did it automatically for me without asking.


No, it's not. 121M repos were added on GitHub in 2025, and overall they have 630 million now. There has probably been at best a 2x increase in output (mostly trash output), but nowhere near 100x.

https://github.blog/news-insights/octoverse/octoverse-a-new-...


Published in Oct 2025... I think your estimate is off.

Note the hockey stick growth in the graph they showed in Oct.

Here we are in February.

It's gotten way worse now with additional Claudes, Claws, Ralphs, and such.

It may not be the 100x I was told, but it's definitely putting a strain on the entire org.


> It may not be the 100x I was told, but it's definitely putting a strain on the entire org.

But that's not even a top-5 strain on GitHub; their main issue is the forced adoption of Azure. I can guarantee you that about 99% of repos are still cold, as in very few pulls and no pushes, and that hasn't changed in 3 months. Storage itself doesn't add much strain on the system if the data is rarely accessed.


I put the blame squarely on GitHub and refuse to believe it's a vendor's fault. It's their fault. They may be forced to use Azure, but that doesn't stop one from being able to deliver a service.

I’ve done platforms on AWS, Azure, and GCP. The blame is not on the cloud provider unless everyone is down.


> Oct 2025

Doubling down by insisting the data is out of date is unconvincing when that data is only 3 months old and the latest available.

If you're telling me that in December it went from 2x to 100x then I don't believe you.


There's a huge uptick in people who weren't engineers suddenly using git for projects with AI.

This is all grapevine but yeah, you read that right.


I was wondering about that the other day: the sheer amount of code, repos, and commits being generated now with AI. And probably more large datasets as well.


No, it's because they are in the middle of an AWS->Azure migration, and because they cannot/will not be held accountable for downtime.


Live by the AI Agent hype, die by the AI Agent crush.


Really? I hardly think it's neglected. The Claude Code harness is the only reason I come back to it. I've tried Claude via OpenCode or others and it doesn't work as well for me. If anything, I would even argue that prior to 4.6, the main reason Opus 4.5 felt like it improved over months was the harness.

