I have a database for an engineering simulation/prediction application:
- Each user creates a "project".
- Within a project, the user specifies the number of items for two categories: cat_1 and cat_2.
- cat_1 and cat_2 may have n1 and n2 items respectively.
- The program then converts cat_1 and cat_2 into two matrices, pad_1 and pad_2, such that for each matrix rows = columns = n1 or n2. I.e. if a category has 3 items ('a', 'b', and 'c'), the matrix will be 3 by 3 with the rows and columns being 'a', 'b' and 'c'.
- Each matrix is then multiplied by a factor (K_r).
- The multiplication yields 96 matrices: 48 for cat_1 and 48 for cat_2.
- In each matrix there are control variables, pad_1_aij and pad_2_bi2j2, which the user specifies and provides values for.
- These variables represent field-collected data that the simulation should match.
- An algorithm iterates over both matrices many times (roughly 1,000) until the computed value(s) == pad_1_aij and pad_2_bi2j2 respectively.
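To make the flow concrete, here is a minimal sketch in Python of one category's matrix: build an n-by-n matrix from the category's items, scale it by K_r, and iterate until a chosen cell matches a user-supplied control value. The items, the factor, the update rule, and the control value are all illustrative placeholders, not the real simulation:

```python
items = ['a', 'b', 'c']            # one category with n = 3 items
n = len(items)
K_r = 1.5                          # scaling factor (placeholder value)

# n x n matrix, initialised to 1.0 in every cell, then scaled by K_r
pad_1 = [[1.0 * K_r for _ in range(n)] for _ in range(n)]

# user-specified control value for cell (i, j)
i, j, pad_1_aij = 0, 1, 2.0

# toy iteration: nudge the cell toward the observed value until it matches
for _ in range(1000):
    if abs(pad_1[i][j] - pad_1_aij) < 1e-9:
        break
    pad_1[i][j] += 0.1 * (pad_1_aij - pad_1[i][j])

print(round(pad_1[i][j], 6))       # -> 2.0
```

The real update rule would of course come from the simulation; the point is only the shape of the data and the convergence loop.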
What is the best-practice way to design/build/store this kind of database (particularly the matrices), and what design/implementation issues would you anticipate?
I believe matrices are commonly represented in the relational model as adjacency (coordinate) lists. That means you have one column for each dimension of the matrix, holding the coordinates, plus one (or more) columns for the value in that cell.
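A minimal sketch of that coordinate representation using SQLite; the table and column names are made up for illustration:

```python
import sqlite3

# One row per matrix cell: the matrix id plus the row/column labels form the
# key, and a single column holds the cell's value.
conn = sqlite3.connect(':memory:')
conn.execute("""
    CREATE TABLE matrix_cell (
        matrix_id INTEGER NOT NULL,
        row_item  TEXT    NOT NULL,
        col_item  TEXT    NOT NULL,
        value     REAL    NOT NULL,
        PRIMARY KEY (matrix_id, row_item, col_item)
    )
""")

items = ['a', 'b', 'c']
conn.executemany(
    "INSERT INTO matrix_cell VALUES (1, ?, ?, ?)",
    [(r, c, 0.0) for r in items for c in items],
)

# A point query for one cell -- possible, but slow if done in a loop:
(value,) = conn.execute(
    "SELECT value FROM matrix_cell"
    " WHERE matrix_id = 1 AND row_item = 'a' AND col_item = 'b'"
).fetchone()
print(value)   # -> 0.0
```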
This allows efficient querying, but you should avoid iterating over the matrix with point queries (i.e., asking for the value of a single cell). If at all possible, you should express your algorithm in (ANSI, cursor-free) SQL and have the DBMS execute it.
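As an example of pushing work into the DBMS rather than looping over cells, a single set-based `UPDATE` can apply the K_r scaling to every cell at once (schema and names are the same illustrative placeholders as above):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute(
    "CREATE TABLE matrix_cell (matrix_id INT, row_item TEXT,"
    " col_item TEXT, value REAL)"
)
items = ['a', 'b', 'c']
conn.executemany(
    "INSERT INTO matrix_cell VALUES (1, ?, ?, 1.0)",
    [(r, c) for r in items for c in items],
)

K_r = 1.5
# One statement scales all nine cells of matrix 1 -- the DBMS does the loop,
# instead of the application issuing nine point queries.
conn.execute("UPDATE matrix_cell SET value = value * ? WHERE matrix_id = 1",
             (K_r,))

total = conn.execute(
    "SELECT SUM(value) FROM matrix_cell WHERE matrix_id = 1"
).fetchone()[0]
print(total)   # -> 13.5  (9 cells * 1.5)
```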
If that is not feasible, you would end up reading the entire matrix (or the required portions) from the database, running the algorithm in application code, and writing the results back. If you get to that point, you may want to ask yourself whether a relational database is really what you need.
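That fallback pattern looks roughly like this read/compute/write-back round trip (again with placeholder names and a toy "algorithm" standing in for the real iteration):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute(
    "CREATE TABLE matrix_cell (matrix_id INT, row_item TEXT,"
    " col_item TEXT, value REAL)"
)
items = ['a', 'b']
conn.executemany(
    "INSERT INTO matrix_cell VALUES (1, ?, ?, 1.0)",
    [(r, c) for r in items for c in items],
)

# 1) Read the whole matrix into an in-memory dict keyed by (row, col).
matrix = {
    (r, c): v
    for r, c, v in conn.execute(
        "SELECT row_item, col_item, value FROM matrix_cell WHERE matrix_id = 1"
    )
}

# 2) Run the algorithm outside the database (here: a toy doubling step).
for key in matrix:
    matrix[key] *= 2.0

# 3) Write the updated values back in one batch.
conn.executemany(
    "UPDATE matrix_cell SET value = ?"
    " WHERE matrix_id = 1 AND row_item = ? AND col_item = ?",
    [(v, r, c) for (r, c), v in matrix.items()],
)
conn.commit()

total = conn.execute(
    "SELECT SUM(value) FROM matrix_cell WHERE matrix_id = 1"
).fetchone()[0]
print(total)   # -> 8.0  (4 cells, each doubled from 1.0)
```

If most of your workload ends up in step 2, the database is acting as little more than a file store, which is what prompts the question at the end of the answer.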