Is there a module for Apache/Nginx that serves static files out of an archive (zip, tgz, tbz …), so that if a file is not found at the specified location, the archive is then asked for that file?

I am unaware of this type of module.

If you write your own, I would suggest that you have a look at the try_files directive http://wiki.nginx.org/HttpCoreModule#try_files and hand the request arguments on to a script, e.g. a PHP file (see on the wiki page the try_files line ending in: /index.php?q=$uri&$args).
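To make that concrete, here is a minimal sketch of what such a fallback handler could look like, written as a CGI shell script rather than PHP purely for brevity. It assumes nginx hands the missed URI over as a q= query argument (as in the wiki example) and that requests reach the script through a CGI gateway such as fcgiwrap; the archive path is hypothetical, and URL-decoding and MIME-type detection are left out.

```sh
#!/bin/sh
# Minimal CGI fallback sketch. Assumes try_files passes the missed URI as ?q=...,
# e.g.: try_files $uri /archive-fallback.cgi?q=$uri&$args
# The archive location below is hypothetical.
ARCHIVE=/srv/site/content.zip

# Pull the requested path out of the query string and drop the leading slash.
path=$(printf '%s' "$QUERY_STRING" | sed -n 's/^q=\([^&]*\).*/\1/p' | sed 's|^/||')

# Reject empty or obviously dangerous paths.
case "$path" in
  ''|*..*) printf 'Status: 400 Bad Request\r\n\r\n'; exit 0 ;;
esac

if unzip -l "$ARCHIVE" "$path" >/dev/null 2>&1; then
  printf 'Content-Type: application/octet-stream\r\n\r\n'
  # Stream the archive member straight to the client without unpacking to disk.
  unzip -p "$ARCHIVE" "$path"
else
  printf 'Status: 404 Not Found\r\n\r\n'
fi
```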

Performance: going this way you will need to do some security checks in PHP, filter out search engine bots and perhaps even memcache some files once you have unpacked them, but that depends on your specific request statistics/patterns.

Some tools or PEAR packages may let you extract files to a pipe (stdout) and avoid writing to the filesystem, or the unpacking could happen in a ramdisk to speed things up. However, whether something like that is reliable depends on your file sizes.
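For illustration, the shell-level equivalents look roughly like this (archive paths and member names are made up, and the tmpfs size is arbitrary):

```sh
# Stream a single member of a .tgz straight to stdout, never touching the filesystem:
tar -xzOf /srv/site/content.tgz static/css/site.css

# Or unpack the whole archive into a RAM-backed directory so repeated reads avoid the disk:
mkdir -p /mnt/unpacked
mount -t tmpfs -o size=256m tmpfs /mnt/unpacked
tar -xzf /srv/site/content.tgz -C /mnt/unpacked
```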

In the case of .tgz and .tbz, most of the performance loss (especially for big archives) comes from the fact that you have to read from disk and decompress all the data up to the file you requested. If you request the last file in the archive, then whether it is a CGI script or a web server module, something still has to spend the time reading, decompressing, and discarding all of the preceding archive data just to get at your file.

The zip format does permit random access. If your CGI script is very simple (it could be an sh script) and basically just calls "unzip" with the right arguments, then the amount of speedup you could get from having a server module do it instead would be rather small.
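You can see the difference between the two formats with a quick, informal timing comparison (paths are examples; the exact numbers depend on archive size and disk speed):

```sh
# zip: the central directory lets unzip seek straight to the member
time unzip -p /srv/site/content.zip static/js/app.js > /dev/null

# tgz: everything preceding the member has to be decompressed first
time tar -xzOf /srv/site/content.tgz static/js/app.js > /dev/null
```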

Still, it is kind of surprising if a module for doing this does not exist (but no, I have not been able to find one).

Another possibility might be to use a compressed filesystem, depending on the types and distribution of the files, possibly also with deduplication.

Pros:

- Has almost the same effect as a .zip file (storage-wise)

- No change needed on the web server side

Cons:

- Possibly a new FS for the zip dir

- May not be available on the OS you use (e.g. ZFS)

Maybe there is another way if you clarify what you are trying to achieve.

You should take a look at SquashFS, it is a compressed filesystem.

You can think of it as a tar.gz archive; it is used mainly in LiveCD/DVD/USB ISOs but is perfectly applicable to your situation.

Here is a HowTo.
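If you go that route, the typical sequence looks roughly like the following (directory names are examples, and the compressor and mount point are up to you):

```sh
# Build a compressed, read-only image from the static content directory:
mksquashfs /srv/site/static /srv/site/static.squashfs -comp gzip

# Mount it where the web server expects to find the files:
mount -t squashfs -o loop,ro /srv/site/static.squashfs /var/www/static
```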

PS: Unlike other solutions you do not need a specific OS to use SquashFS, but if you happen to run Solaris or FreeBSD, go for ZFS compression instead, it is simply great!
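On ZFS that is just a dataset property; a rough sketch (pool and dataset names are made up, and lz4 may need to be replaced by on or gzip on older releases):

```sh
# Create a dataset for the static files and enable transparent compression:
zfs create tank/www-static
zfs set compression=lz4 tank/www-static

# Check how much space the compression is actually saving:
zfs get compressratio tank/www-static
```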

I would suspect that there is not, especially as a fallback for when regular files are not found. Nevertheless, a CGI script to do it could be quite simple. The performance hit would definitely be noticeable under load, however.

It is available: http://wiki.nginx.org/HttpGzipStaticModule
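For reference, gzip_static serves a pre-compressed .gz file lying next to the original whenever the client accepts gzip, rather than unpacking an archive on request; a rough sketch of preparing such files (the directory is an example):

```sh
# Pre-compress assets next to the originals; with "gzip_static on;" nginx can
# then send style.css.gz in place of style.css to clients that accept gzip.
for f in /var/www/static/*.css /var/www/static/*.js; do
  gzip -9 -c "$f" > "$f.gz"
done
```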

Directly streaming the archive skips decompressing and recompressing, or mounting anything. This is a good idea if the static content is sensibly organized, optimized and minified, small enough to reduce page load time, or if server sockets or CPU are constrained. Also, mounting filesystems for hundreds of customer asset bundles in a VPS, hosting environment or CDN may not be scalable.