I am creating a "spot the difference" multiplayer game.

The specifications of the game are:

  1. There can be up to 10 players in every game.
  2. A player must never see the same picture twice.
    (a picture consists of four images and the user must "spot the difference" between them).

I have a collection of thousands, and perhaps even hundreds of thousands, of pictures to choose from. The problem I am facing is an extremely inefficient and non-scalable way of getting a picture that no player in the game has seen yet.

In my database I have a usage table with the following fields:

  1. picture_id
  2. user_id

My current solution is the following:

A user enters the game and the application selects a picture from the database that does not appear in the usage table for that user; for each additional user who enters, I run the same function, adding to the exclusion the pictures that the other users in the same game have already seen.
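Roughly, the selection looks like the sketch below (simplified; I am using placeholder names here, with picture_usage standing for the usage table above):

```sql
-- Simplified sketch of the current per-game selection.
-- (1, 2, 3) stands for the ids of the players already in the game.
SELECT p.picture_id
FROM pictures p
WHERE p.picture_id NOT IN (
    SELECT u.picture_id
    FROM picture_usage u
    WHERE u.user_id IN (1, 2, 3)
)
ORDER BY RAND()   -- RAND() is MySQL; other databases use RANDOM()
LIMIT 1;
```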

I am concerned that once there is a database of hundreds of thousands of pictures to choose from, and the usage table is already being filled up by previous games, the function will simply take too long, harming the flow of the game.

This approach is not very scalable, and I am expecting a fairly steady flow of constant traffic, meaning plenty of games being played.


Does anybody have suggestions on how to improve this logic, or recommendations for a better database structure?

I believe this is premature optimisation.

While "hundreds of 1000's" seems like a great deal to an individual, it's virtually absolutely nothing to a SQL engine. Some implementations will not make use of indexes on the table under 50-60k rows since it is faster simply to load the entire factor into memory.

I'd recommend writing the queries using EXISTS, which short-circuits in many implementations and should be fast enough for your purposes.
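Something along these lines, reusing the simplified names from your sketch (adjust to your real schema):

```sql
-- Pick one picture that none of the given players has seen yet.
-- (1, 2, 3) would be the ids of the players in the game.
SELECT p.picture_id
FROM pictures p
WHERE NOT EXISTS (
    SELECT 1
    FROM picture_usage u
    WHERE u.picture_id = p.picture_id
      AND u.user_id IN (1, 2, 3)
)
LIMIT 1;
```

A composite index on picture_usage (picture_id, user_id) lets the inner check stop at the first matching row.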

If you post some sample code for your table structures and/or some sample data, we can probably help with a query, but I think you are worrying about nothing.

You could simply add a field to the picture table (not the user/picture mapping) that flags whether a picture has been used. You can then set that flag whenever a picture is used, and index that field for fast identification of unused pictures. This is effectively caching the result of a hasBeenUsed() function.
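A rough sketch of that idea, using the table names from the question's sketch and an assumed flag column is_used:

```sql
-- Cache "has this picture ever been used?" directly on the picture row.
ALTER TABLE pictures ADD COLUMN is_used BOOLEAN NOT NULL DEFAULT FALSE;
CREATE INDEX idx_pictures_is_used ON pictures (is_used);

-- Set the flag whenever a picture is served to a game:
UPDATE pictures SET is_used = TRUE WHERE picture_id = 123;

-- Finding an unused picture is then a cheap indexed lookup:
SELECT picture_id FROM pictures WHERE is_used = FALSE LIMIT 1;
```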

Some people may object, on various grounds, that such premature optimisations lead to a very cluttered and very tightly coupled structure, penalising future maintainability.

An alternative is to have every picture in the user/picture mapping table. If a picture is unused, its associated user_id remains NULL. An index with picture_id first will make identification of unused pictures extremely swift.
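Sketched out with the same assumed names (every picture gets a mapping row up front, with user_id left NULL until somebody sees it):

```sql
-- Index as described above, picture_id leading.
CREATE INDEX idx_usage_picture_user ON picture_usage (picture_id, user_id);

-- Unused pictures are the mapping rows whose user_id is still NULL:
SELECT picture_id
FROM picture_usage
WHERE user_id IS NULL
LIMIT 1;
```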

But above all, it really depends on the query used to make this random selection. Very often (though not always) poorly scalable algorithms can be replaced with much more scalable ones without changing the database structure at all. But to know that, we would have to see your query, along with the schema and behavioural details about the data (constraints, likely percentage unused, etc.).

10 users against 10,000 pictures: by probability theory, you need to pick only 11 pictures (I suggest using some random approach) and then check those 11 to find the first one that no player in the game has seen.
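A sketch of that approach (table and column names assumed, as in the other answers): the application picks a handful of random picture ids, then one query finds the first of them that nobody in the game has seen.

```sql
-- The application picks ~11 random picture ids up front
-- (random keys, random offsets, whatever is convenient).
SELECT p.picture_id
FROM pictures p
WHERE p.picture_id IN (17, 204, 981, 1500, 2333, 3720,
                       4096, 5555, 6810, 8042, 9553)  -- the 11 random ids
  AND NOT EXISTS (
      SELECT 1
      FROM picture_usage u
      WHERE u.picture_id = p.picture_id
        AND u.user_id IN (1, 2, 3)                    -- players in this game
  )
LIMIT 1;
```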