Just curious how I can judge performance on this one. We are running Apache with a PHP/PostgreSQL-based shopping website.
We have 2 servers: one for the website (PHP scripts, all static content) and one for the database.
Our orders run through a remote payment gateway as follows:
1. The customer completes checkout on the web server.
2. The customer is then redirected to the payment gateway for payment.
3. A notification is then delivered to a script on our web server (from the payment gateway) to inform us of the status of the order. At this point, we check whether the payment was successful and then post the order to the client's stock system accordingly.
Now, in step 3, there is a situation where we have to perform some tasks, pause for a minute (this is a custom workaround for the client's stock reservation system until it's upgraded later on), and then carry out the remaining tasks. The "pause for a minute" part is the bit I'm unsure about. The script must literally wait for a while (about a minute) before we perform the other tasks (transferring the order to the client).
I can think of doing this in two ways:
a) We set a timeout/sleep in the PHP script for a minute, and then finish the job. This is probably not workable since we're dealing with a payment gateway and we don't know whether they'll time out or whether any other issues could occur here.
b) We do the first round of tasks, and then a cron runs every minute to check for incomplete order transfers and does the remaining work for us. The cron must run every minute to catch those orders that need to be sent to the client.
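To make option b) concrete, here is a minimal sketch of what the per-minute cron job could look like. The table and column names (`orders`, `status`, `paid_at`) and the transfer helper are assumptions, not your actual schema; the one-minute stock-reservation pause is expressed both as a pure eligibility check and in the SQL.

```php
<?php
// Hypothetical cron script, fired every minute, e.g. via:
//   * * * * * php /path/to/transfer_pending_orders.php
// Picks up orders whose payment succeeded but whose transfer to the
// client is still pending, honouring the one-minute reservation pause.

// An order becomes eligible for transfer once the pause has elapsed.
function isDueForTransfer(int $paidAt, int $now, int $pauseSeconds = 60): bool
{
    return ($now - $paidAt) >= $pauseSeconds;
}

// Drains all due orders in one pass (schema and helper are assumptions).
function transferPendingOrders(PDO $pdo): void
{
    $rows = $pdo->query(
        "SELECT id FROM orders
          WHERE status = 'paid'
            AND paid_at <= now() - interval '60 seconds'"
    );
    foreach ($rows as $row) {
        // transferOrderToClient($row['id']); // copy the file, mark 'transferred'
    }
}
```

Because the script only selects orders that are both paid and past the pause window, a run that finds nothing due simply exits, which keeps the per-minute cost close to a single cheap query.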
The web server has many crons running, mostly at off-peak times. Can anybody advise:
Whether you can see a different solution to the above
If you agree with solution b), could this be a bit of an overload on the web server? What's the best way for me to judge this? The job itself is not demanding (it checks the database and transfers a file from one place to another on our web server), but I'm wondering whether a cron should execute every minute like this.
Given your constraints and timing, why not write a system daemon that does the job? It can run "constantly", sleeping when there's no work and waking as often as needed. It also eliminates the "there are 5 of me running" problems. You can build up a queue, as John mentioned, and maintain it. I have done this a few times with success.
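A minimal sketch of that daemon loop, with the queue lookup and transfer logic injected as callables (`$findPendingOrder` and `$transferOrder` are hypothetical stand-ins for your own code); the `$maxIterations` parameter only exists so the loop can be exercised in isolation:

```php
<?php
// Sketch of a long-running worker: one process polls for due orders,
// transfers them, and sleeps when the queue is empty. Run it under
// systemd or a supervisor so exactly one instance is alive.

function runWorker(
    callable $findPendingOrder,  // returns the next due order, or null
    callable $transferOrder,     // does the actual transfer work
    int $maxIterations = PHP_INT_MAX
): int {
    $processed = 0;
    for ($i = 0; $i < $maxIterations; $i++) {
        $order = $findPendingOrder();
        if ($order === null) {
            sleep(5);            // nothing to do: sleep, then poll again
            continue;
        }
        $transferOrder($order);
        $processed++;
    }
    return $processed;
}
```

The poll-then-sleep shape is what gives you the "as often as needed" behaviour: an idle daemon costs almost nothing, and a busy one drains the queue back-to-back without waiting for the next cron tick.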
The answer is b). If you're concerned about your server's performance at minute-level intervals, then you really just need a bigger machine/more robust architecture. Most performance issues on a web server come down to usage spikes at more granular levels.
If you're worried, the solution is fairly simple. Rather than dumbly firing off a bunch of processes every minute, build in a queue and some state management. Mark the machine as 'processing' when you start on your queue. If the cron fires again and the machine is still 'processing', then just bail out and wait for the next tick. When you're done with the queue, unmark the machine.
If you go 3-4 ticks without breaking out of 'processing', then you can call somebody's pager.
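One way to sketch that 'processing' flag is a non-blocking exclusive file lock: a cron tick that finds the previous run still holding the lock just exits, and a crashed run releases it automatically when its process dies. The lock-file path is an assumption.

```php
<?php
// 'Processing' state via flock(): at most one queue-drainer at a time.

// Returns an open handle holding the lock, or null if another run
// is still 'processing' (the cron tick should then just exit).
function acquireProcessingLock(string $path)
{
    $fh = fopen($path, 'c');               // create if missing, don't truncate
    if ($fh === false || !flock($fh, LOCK_EX | LOCK_NB)) {
        return null;
    }
    return $fh;
}

// Unmark the machine once the queue is drained.
function releaseProcessingLock($fh): void
{
    flock($fh, LOCK_UN);
    fclose($fh);
}
```

In the cron script this becomes: `$lock = acquireProcessingLock('/var/run/order-transfer.lock'); if ($lock === null) { exit(0); }` ... drain the queue ... `releaseProcessingLock($lock);`. Escalation after stuck ticks (the pager) would sit in the `null` branch, counting consecutive misses.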