
The claim is that SerpApi is not honoring robots.txt, and that they are pulling far more data from Google, and more often, than an indexing operation would need. At least, that is the best I can make of the claim from the article - I have not read the actual complaint.
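For context, "honoring robots.txt" just means checking a site's robots.txt rules before fetching a URL. A minimal sketch using Python's standard urllib.robotparser (the user-agent string here is hypothetical - the complaint would presumably turn on what SerpApi's crawler actually identifies itself as, if anything):

    import urllib.robotparser

    # Hypothetical user-agent string, for illustration only.
    USER_AGENT = "ExampleScraper/1.0"

    rp = urllib.robotparser.RobotFileParser()
    rp.set_url("https://www.google.com/robots.txt")
    rp.read()

    # Google's robots.txt disallows /search for generic crawlers,
    # so a compliant bot would refuse to fetch result pages at all.
    print(rp.can_fetch(USER_AGENT, "https://www.google.com/search?q=example"))
    # -> False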

People are generally fine with indexing operations so long as you don't use too much bandwidth.
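A polite crawler also throttles itself, e.g. by respecting a declared Crawl-delay and sleeping between requests. A rough sketch, with made-up agent name, URLs, and delay:

    import time
    import urllib.error
    import urllib.request
    import urllib.robotparser

    # Illustrative values only.
    USER_AGENT = "ExampleBot/1.0"
    DEFAULT_DELAY = 10.0  # seconds between requests if no Crawl-delay is declared

    rp = urllib.robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")
    rp.read()

    # urllib.robotparser exposes a site's Crawl-delay directive (Python 3.6+).
    delay = rp.crawl_delay(USER_AGENT) or DEFAULT_DELAY

    for url in ["https://example.com/", "https://example.com/about"]:
        if rp.can_fetch(USER_AGENT, url):
            req = urllib.request.Request(url, headers={"User-Agent": USER_AGENT})
            try:
                with urllib.request.urlopen(req) as resp:
                    resp.read()
            except urllib.error.URLError:
                pass  # skip unreachable pages in this sketch
        time.sleep(delay)  # throttle so the crawl's bandwidth footprint stays small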

Using AI to summarize content is still an open question - I wouldn't be surprised if this develops into some form of "you can index but not summarize", but only time will tell.


