RE: Pizzagate repository just got censored by github - Backup links + Updated info
There are a few issues with that approach. I don't actually work with the zip files normally; those are generated on the git repos. I tried zipping up the files locally, but I end up with different file hashes from what gets generated on the sites...
I do understand why this is important, and I've been looking into a way to autogenerate HTML index pages for each folder of evidence. I think it would suit this sort of project better to have those include the shasums of each file and to sign the index files instead... That way we'd have a unique checksum for every single file in the repo, without needing to sign everything with gpg individually (only the indexes with updated files)..
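Something like this per-folder version might work (an untested sketch; the `SHA256SUMS` filename and the choice of sha256 are just my placeholders, not a settled decision):

```bash
#!/usr/bin/env bash
# Rough sketch: drop a SHA256SUMS checksum list into every folder, then
# gpg-sign only those lists instead of every individual file.
set -euo pipefail

find . -type d ! -path '*/.git*' | while read -r dir; do
  (
    cd "$dir"
    # Hash only the regular files directly inside this folder.
    find . -maxdepth 1 -type f ! -name 'SHA256SUMS*' -printf '%f\0' \
      | xargs -0 -r sha256sum > SHA256SUMS
    # Skip folders with no files; one detached signature per index.
    [ -s SHA256SUMS ] || { rm -f SHA256SUMS; exit 0; }
    gpg --armor --yes --detach-sign SHA256SUMS
  )
done
```

When a file changes, only its folder's SHA256SUMS needs to be regenerated and re-signed, not the whole archive.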
Yeah, I was worried the remote repos would produce a different hash.
But you mention an interesting idea. I wonder if I could write a bash or python script that would iterate over every file in a given folder, calculate its hash, then append the hash to a SHASUM text file. If you had that, you could just run the script and then sign the SHASUM file. Then downloaders could compare file hashes individually.
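In bash that could be as simple as this (a minimal sketch, assuming sha256 and a `SHASUM` output name, both placeholders):

```bash
#!/usr/bin/env bash
# Minimal sketch: hash every file under a folder into one SHASUM file,
# then sign that single file with gpg.
set -euo pipefail

TARGET="${1:-.}"   # folder to index; defaults to the current directory

# Sorted for a stable, diff-friendly order; skips .git and the output.
find "$TARGET" -type f ! -path '*/.git/*' ! -name 'SHASUM*' -print0 \
  | LC_ALL=C sort -z \
  | xargs -0 -r sha256sum > SHASUM

# One detached signature covers every hash in the index.
gpg --armor --detach-sign --output SHASUM.asc SHASUM
```

Downloaders would verify the signature with `gpg --verify SHASUM.asc SHASUM` and then check the individual files with `sha256sum -c SHASUM`.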
Glad you're thinking about it and understand its importance. I would just hate it if the repo were hacked and some horrible things were put into it in order to incriminate those who are just trying to be honest investigators.
If I get around to that script, I'll drop you a link so you can use it. Great work so far, ausbitbank.
Found a way to do it for now: https://steemit.com/pizzagate/@ausbitbank/pizzagate-git-repo-updated-now-includes-file-hashes-and-pgp-signature
Cool, yeah, I think it's definitely doable. I just want to come up with a system that lets everything be verified as easily as possible.
If the process involves downloading and signing an external ~300MB archive every time I push a new text file to the repo, it'll discourage me from using it. A script that runs before `git push` would be great, something like the hook sketched below.
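Git already has a hook point for exactly that: a `pre-push` hook runs before anything is sent and a non-zero exit aborts the push. A rough, untested sketch, reusing the `SHASUM` name from the script above:

```bash
#!/usr/bin/env bash
# Sketch of .git/hooks/pre-push: regenerate the checksum index before
# each push and refuse to push until the re-signed version is committed.
set -euo pipefail

sha="SHASUM"

find . -type f ! -path './.git/*' ! -name "${sha}*" -print0 \
  | LC_ALL=C sort -z | xargs -0 -r sha256sum > "$sha.new"

if ! cmp -s "$sha" "$sha.new"; then
  mv "$sha.new" "$sha"
  gpg --armor --yes --detach-sign --output "$sha.asc" "$sha"
  echo "$sha was stale; re-signed it, commit and push again." >&2
  exit 1   # non-zero exit makes git abort the push
fi
rm -f "$sha.new"
```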
Ideally, it would generate index files that not only contain the file size and hash of everything in the archive, but are also linked together, so the whole repository could be dropped onto a webhost (with directory listings disabled) and would already work as a navigable basic website.
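A generator for that could look something like this (another untested sketch; the page layout and filenames are just illustrative):

```bash
#!/usr/bin/env bash
# Sketch: write an index.html into every folder listing each file's
# size and sha256 next to its link, plus links into subfolder indexes,
# so the tree browses as a basic static site.
set -euo pipefail

find . -type d ! -path '*/.git*' | while read -r dir; do
  {
    echo "<html><body><h1>${dir#./}</h1><ul>"
    # Link down into each subfolder's own index page.
    for sub in "$dir"/*/; do
      [ -d "$sub" ] && echo "<li><a href=\"$(basename "$sub")/index.html\">$(basename "$sub")/</a></li>"
    done
    # List every file with its size and hash next to the download link.
    for f in "$dir"/*; do
      if [ -f "$f" ] && [ "$(basename "$f")" != "index.html" ]; then
        name=$(basename "$f")
        size=$(wc -c < "$f")
        hash=$(sha256sum "$f" | cut -d' ' -f1)
        echo "<li><a href=\"$name\">$name</a> ($size bytes, sha256 $hash)</li>"
      fi
    done
    echo "</ul></body></html>"
  } > "$dir/index.html"
done
```

Since each index.html would carry the hashes, signing just the index pages would cover every download on the site.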