I have a server running a Django application, but I have one small problem:
when I commit and push new changes to the server with Mercurial, there is a very short time (around a microsecond) during which the webpage is unreachable.
I'm running Apache on the server.
How do I solve this?
You could run multiple instances of the Django application (either on a single machine on different ports, or on different machines) and use Apache to reverse-proxy requests to each instance. It can fail over to instance B while instance A is restarting. See mod_proxy.
When the downtime is really as short as you say, though, it's unlikely to be a problem worth worrying about.
If you have enough traffic that downtime measured in microseconds matters, it's probably better to push new changes to your web servers individually, and remove each machine from the load balancer rotation while you are doing the upgrade on it.
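A minimal sketch of what that Apache reverse-proxy setup could look like; the balancer name and the two local ports are assumptions for illustration, not something from the question:

```apache
# Requires mod_proxy, mod_proxy_http and mod_proxy_balancer to be loaded.
<Proxy "balancer://djangocluster">
    # Two hypothetical Django instances on local ports 8001 and 8002.
    # retry=5: a member marked as failed is retried after 5 seconds,
    # so requests fail over to the other member while one is restarting.
    BalancerMember "http://127.0.0.1:8001" retry=5
    BalancerMember "http://127.0.0.1:8002" retry=5
</Proxy>

ProxyPass        "/" "balancer://djangocluster/"
ProxyPassReverse "/" "balancer://djangocluster/"
```

With this in place you can restart one instance, wait for it to come back, then restart the other, and Apache keeps serving from whichever member is up.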
With apachectl graceful, you minimize the time the web site is unavailable when 'restarting' Apache. All children are 'kindly' asked to restart and pick up their new configuration once they are not doing anything.
The USR1 or graceful signal causes the parent process to advise the children to exit after their current request (or to exit immediately if they are not serving anything). The parent re-reads its configuration files and re-opens its log files. As each child dies off, the parent replaces it with a child from the new generation of the configuration, which begins serving new requests immediately.
On a heavy-traffic website you will observe some performance loss, as some children will temporarily not accept new connections. In my experience, however, TCP recovers perfectly from this.
Considering that some websites take several minutes or hours to update, this is completely acceptable. If it is a really large problem, you could use a proxy, run multiple instances and upgrade them individually, or update at an off-peak moment.
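For reference, the graceful restart described above is a single command; the pid-file path below is an assumption and varies by distribution:

```
apachectl graceful
# equivalently, send SIGUSR1 to the Apache parent process:
# kill -USR1 "$(cat /var/run/apache2.pid)"
```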
If you are at the point of worrying about a 1/1,000,000th-of-a-second outage, then I highly recommend the following approach:
Front-end load balancers pointing to multiple backend servers.
Remove one backend server from the load balancer so that no traffic goes to it.
Wait until all the traffic that server was processing has been served.
Shut down the webserver on that instance.
Update the Django instance on that machine.
Add that instance back to the load balancers.
Repeat for every other server.
This will ensure that the 1/1,000,000th-of-a-second gap is eliminated.
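The steps above can be sketched as a simple loop. Everything here is a hypothetical stand-in: the host names and the three helper functions would be replaced by your real load-balancer API calls and deployment commands:

```shell
#!/bin/sh
# Stand-in helpers -- replace the echo bodies with real commands
# (load-balancer API calls, ssh, etc.).
drain()   { echo "removing $1 from rotation"; }   # stop sending new traffic
upgrade() { echo "updating Django on $1"; }       # stop webserver, deploy, start
restore() { echo "adding $1 back to rotation"; }  # re-enable in the balancer

for host in app1 app2 app3; do
    drain "$host"
    # wait here until the balancer reports no active connections
    # to $host before shutting its webserver down
    upgrade "$host"
    restore "$host"
done
```

Because at least one backend is always in rotation, clients never see the restart gap at all.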
I believe it's normal, since Django probably needs to restart its server after your update.