It visits each article on Legifrance and builds a JSON file with all the articles, which book/section each belongs to, and, for each article, its versions (each version is a dated text). The crawler is in Go.
Then a Python script takes that JSON file, creates the .md files, and runs the git commands in the shell.
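The second step could look something like the sketch below. The JSON shape (`articles`, `section`, `versions` with `date`/`text` fields) and the sample data are assumptions for illustration, not the actual crawler output: for each version date, it writes the then-current text of every article into a section folder and makes one git commit per date.

```python
import subprocess
from pathlib import Path

# Hypothetical crawler output -- the real JSON schema may differ.
data = {
    "articles": [
        {
            "id": "L110-1",
            "section": "Livre Ier/Titre Ier",
            "versions": [
                {"date": "2020-01-01", "text": "Version initiale."},
                {"date": "2021-06-15", "text": "Version modifiee."},
            ],
        }
    ]
}

def run(*cmd, cwd):
    """Run a shell command, raising on failure."""
    subprocess.run(cmd, cwd=cwd, check=True)

def build_repo(data, repo="code-repo"):
    repo = Path(repo)
    repo.mkdir(exist_ok=True)
    run("git", "init", cwd=repo)
    # Every date on which at least one article changed, oldest first.
    dates = sorted({v["date"] for a in data["articles"] for v in a["versions"]})
    for date in dates:
        for article in data["articles"]:
            # Versions of this article that exist as of `date`.
            current = [v for v in article["versions"] if v["date"] <= date]
            if not current:
                continue
            folder = repo / article["section"]
            folder.mkdir(parents=True, exist_ok=True)
            (folder / f"{article['id']}.md").write_text(
                f"# Article {article['id']}\n\n{current[-1]['text']}\n"
            )
        run("git", "add", "-A", cwd=repo)
        # Inline -c config so the commit works in a fresh environment;
        # --date backdates the commit to the version date.
        run("git", "-c", "user.name=bot", "-c", "user.email=bot@example.com",
            "commit", "-m", f"State as of {date}", "--date", date, cwd=repo)

build_repo(data)
```

Committing once per change date is what makes `git log` and `git diff` double as a history browser for the law text.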
Ultimately, the sad thing is that I had to scrape this information at all. There were lots of pitfalls due to bad formatting and so on... well, scraping.