Question:

How do I implement a shared memory variable in PHP without the semaphore package (http://php.net/manual/en/function.shm-get-var.php)?

Context

  • I've got a simple web application (really a WordPress plugin)
  • it receives a URL
  • it then checks the database to see whether that URL already exists
  • if not, it goes out and does some processing
  • and then creates a record in the database with the URL as a unique entry

What actually happens is that 4, 5, 6 ... sessions request the URL simultaneously, and I end up with up to 9 duplicate records in the database for that URL (possibly 9 because the processing time and database write of the first entry take just long enough to let 9 other requests fall through). After that, all requests read the correct entry and see that the record already exists, so that part works fine.

Since it is a WordPress plugin, it will run for many users on all kinds of shared hosting platforms with varying builds/configurations of PHP.

So I'm looking for a more generic solution. I can't use database or text file writes, as these will be too slow: by the time I've written to the DB, the next session will already have passed.

FYI, the database code: http://plugins.svn.wordpress.org/wp-favicons/trunk/includes/class-database.php

Update

Using a unique key on a new MD5 hash of the URI, with try/catch around the insert, seems to work.
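For illustration, a minimal sketch of that approach, assuming the WordPress $wpdb object, the table name from the query below, and a hypothetical uri_hash CHAR(32) column carrying a UNIQUE key:

<?php
// Sketch only: assumes a uri_hash CHAR(32) NOT NULL column with a UNIQUE key.
global $wpdb;
$table = 'edl40_21_wpfavicons_1';

$uri  = 'http://en.wikipedia.org/wiki/Book_of_the_Dead';
$hash = md5( $uri ); // fixed-length key; raw URLs can be too long to index uniquely

// Silence the duplicate-key warning; a FALSE return simply means another
// session inserted this URI first, which is exactly what we want.
$wpdb->suppress_errors( true );
$inserted = $wpdb->query( $wpdb->prepare(
    "INSERT INTO $table (uri, uri_hash) VALUES (%s, %s)", $uri, $hash
) );
$wpdb->suppress_errors( false );

if ( false === $inserted ) {
    // Entry already exists: nothing to do.
}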

I found one duplicate entry with:

SELECT uri, COUNT( uri ) AS NumOccurrences
FROM edl40_21_wpfavicons_1
GROUP BY uri
HAVING (
COUNT( uri ) >1
)
LIMIT 0 , 30

So at first I thought it didn't work, but that was only because the entries were:

http://en.wikipedia.org/wiki/Book_of_the_dead
http://en.wikipedia.org/wiki/Book_of_the_Dead

(capitals, grin)

Answer:

This can be accomplished with MySQL.

You could do it explicitly by locking the table against read access. This would prevent any read access to the entire table, though, so it may not be preferable. See http://dev.mysql.com/doc/refman/5.5/en/lock-tables.html
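As a rough sketch of that explicit-locking variant, assuming a plain mysqli connection (credentials are placeholders) and the table from the question:

<?php
// Sketch only: LOCK TABLES ... WRITE blocks every other session until
// UNLOCK TABLES, which is why this can be too heavy-handed on shared hosting.
$db  = new mysqli( 'localhost', 'user', 'pass', 'wordpress' ); // placeholder credentials
$uri = $db->real_escape_string( 'http://en.wikipedia.org/wiki/Book_of_the_Dead' );

$db->query( 'LOCK TABLES edl40_21_wpfavicons_1 WRITE' );

$result = $db->query( "SELECT uri FROM edl40_21_wpfavicons_1 WHERE uri = '$uri'" );
if ( 0 === $result->num_rows ) {
    // Not there yet, and no other session can write while we hold the lock.
    $db->query( "INSERT INTO edl40_21_wpfavicons_1 (uri) VALUES ('$uri')" );
}

$db->query( 'UNLOCK TABLES' );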

Otherwise, if the field in the table is defined as unique, then when the next session tries to write the same URL to the table it will get an error. You can catch that error and continue, since you don't need to do anything if the entry is already there. The only time wasted is the possibility of several sessions generating the same URL, but the result is still one record, because the database will not add the same unique URL again.
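A minimal sketch of that insert-then-catch pattern, assuming an open mysqli connection $db and a unique index on the uri column; 1062 is MySQL's duplicate-key error code (ER_DUP_ENTRY):

<?php
// Sketch only: make mysqli throw exceptions so a duplicate key becomes catchable.
mysqli_report( MYSQLI_REPORT_ERROR | MYSQLI_REPORT_STRICT );

$uri = 'http://en.wikipedia.org/wiki/Book_of_the_Dead';

try {
    $stmt = $db->prepare( 'INSERT INTO edl40_21_wpfavicons_1 (uri) VALUES (?)' );
    $stmt->bind_param( 's', $uri );
    $stmt->execute();
} catch ( mysqli_sql_exception $e ) {
    if ( 1062 === $e->getCode() ) {
        // Duplicate entry: another session already created this URL. Continue.
    } else {
        throw $e; // anything else is a real error
    }
}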

As discussed in the comments, since a URL can be quite long, a fixed-length unique hash of the URL can help overcome that problem.
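For instance, a one-off migration along these lines (column and index names are illustrative) adds a fixed-length MD5 column and makes it the unique key; inserts would then supply md5($uri) for it, as in the update at the top of the question:

<?php
// Sketch only: add a fixed-length hash column and put the UNIQUE key on it.
global $wpdb;
$wpdb->query(
    "ALTER TABLE edl40_21_wpfavicons_1
        ADD COLUMN uri_hash CHAR(32) NOT NULL,
        ADD UNIQUE KEY idx_uri_hash (uri_hash)"
);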

There are other shared memory modules in PHP (shmop or APC, for instance), but I think what you're saying is that there's a problem with depending on non-standard/not pre-installed libraries.
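For completeness, where APC does happen to be available it can act as a short-lived lock, since apc_add() only succeeds for the first caller of a given key; the key name here is made up for illustration:

<?php
// Sketch only: apc_add() returns TRUE only if the key did not already exist.
$uri = 'http://en.wikipedia.org/wiki/Book_of_the_Dead';
$key = 'wpfavicons_lock_' . md5( $uri );

if ( function_exists( 'apc_add' ) && apc_add( $key, 1, 30 ) ) {
    // We "hold the lock" for up to 30 seconds: safe to process this URL.
} else {
    // APC is missing, or another request is already processing this URL.
}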

My suggestion is that before you go and do the "other processing", you make an entry in the database, perhaps with a status of "generating" (or something similar), so you know it's not yet available. That way you don't run into problems with multiple records. I'd also make sure you're using transactions where they're available, so that your commits are atomic.

Then, when the "other processing" is done, update the database entry to "available" and do whatever else you need to do.
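Putting that together, a sketch of the reserve-then-fill pattern, assuming the unique uri_hash column from above plus a hypothetical status column; this version uses MySQL's INSERT IGNORE instead of catching the error, so the duplicate case simply shows up as an affected-row count of 0:

<?php
// Sketch only: reserve the row first, process, then flip the status.
global $wpdb;
$table = 'edl40_21_wpfavicons_1';
$uri   = 'http://en.wikipedia.org/wiki/Book_of_the_Dead';
$hash  = md5( $uri );

// INSERT IGNORE affects 0 rows on a duplicate key instead of raising an error,
// so exactly one session "wins" the reservation.
$reserved = $wpdb->query( $wpdb->prepare(
    "INSERT IGNORE INTO $table (uri, uri_hash, status) VALUES (%s, %s, 'generating')",
    $uri, $hash
) );

if ( 1 === $reserved ) {
    // We won the race: do the expensive "other processing" here ...

    // ... then mark the record ready for everyone else.
    $wpdb->query( $wpdb->prepare(
        "UPDATE $table SET status = 'available' WHERE uri_hash = %s", $hash
    ) );
}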