by Dan East » Sep 7, 2001 @ 2:10pm
I started typing this hours ago. Jacco's answered some of this in the meantime:

Hmm. It depends on how Fredrick implemented it. gZip is just a single raw stream of data. If you want the ability to store and access multiple files within a gzip, then you have to implement a directory or some other mechanism so you know where individual files begin. You'll have to find out how Fredrick implemented that. That is what the Zip format is for: it is a directory of individual gZip streams that have been combined into one big file. Because each file in the archive is actually its own gZip stream, it can be efficiently extracted on its own.

I took an easier, less efficient route with PQ. Since id Software already used pak files as a way of lumping many files into one package, I just gzipped the whole thing up. The problem is that you can't efficiently access individual files randomly, because when gZip seeks, it decompresses (and if you seek backwards from the current file pointer, it has to reset to the beginning of the file and decompress all the way up to the desired point). Quake selectively loads files as they are needed, which of course is very efficient memory-wise, but does not lend itself to compression the way I implemented it. Fredrick may have used a similarly simple technique, and to optimize performance he may read and decompress the entire archive into memory in one pass. The expense is that you cannot pick and choose which individual files in the archive you want to load into memory.

It all depends on how Fredrick coded it...

Dan East
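The Zip-style approach described above (a directory of individually compressed files) can be sketched in a few lines of Python using the standard `zipfile` module. This is just an illustration of the access pattern, not anything from Fredrick's or PQ's actual code; the member names ("maps/e1m1.bsp", "sound/door.wav") are made up.

```python
import io
import zipfile

# Build a small archive in memory with two "files". ZIP_DEFLATED compresses
# each member as its own deflate stream, and the archive keeps a central
# directory recording where each member begins.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("maps/e1m1.bsp", b"level data " * 100)
    zf.writestr("sound/door.wav", b"sound data " * 100)

# Random access: the directory lets us decompress only the one member we
# want, without touching the rest of the archive.
with zipfile.ZipFile(buf) as zf:
    data = zf.read("sound/door.wav")  # decompresses just this entry
```

Because each member is compressed independently, pulling one file out costs only that file's decompression, which is exactly the property a single gzip stream lacks.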
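The gzipped-pak trade-off can also be sketched: with one gzip stream over the whole archive, the practical optimization is to decompress everything in one forward pass, and any backward seek forces a restart from byte zero. This is a minimal illustration using Python's `gzip` module, with a dummy payload standing in for a pak file's contents; none of it comes from PQ's source.

```python
import gzip
import io

# Stand-in for a whole pak file's contents: 2000 bytes, two "sections".
payload = b"A" * 1000 + b"B" * 1000

# Compress the entire "pak" as a single gzip stream.
packed = gzip.compress(payload)

# One-pass decompression: read the whole archive forward into memory.
f = gzip.GzipFile(fileobj=io.BytesIO(packed))
everything = f.read()  # single forward pass over the stream

# Seeking backwards forces GzipFile to rewind to the start of the stream
# and re-decompress forward to the target offset, which is the cost
# described above.
f.seek(500)  # re-reads from the beginning under the hood
byte = f.read(1)
```

With this layout, there is no cheap way to grab just one file out of the middle of the archive, so reading and decompressing the whole thing into memory at once is the sensible option.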