I am looking for a good example of how to split out comma-delimited data in a field of one table, and fill a second table with those individual elements, in order to create a one-to-many relational database schema. This is probably rather simple, but let me give an example:

I'll start with everything in one table, Widgets, with a "states" field containing the states that have that widget:

Table: WIDGET

===============================
| id | unit | states          |
===============================
|1   | abc  | AL,AK,CA        |
-------------------------------
|2   | lmn  | VA,NC,SC,GA,FL  |
-------------------------------
|3   | xyz  | KY              |
===============================

Now, what I'd like to create via code is a second table to be joined to WIDGET, called *Widget_ST*, which has widget id, widget state id, and widget state name fields, for example:

Table: WIDGET_ST

==============================
| w_id | w_st_id | w_st_name |
------------------------------
|1     | 1       | AL        |
|1     | 2       | AK        |
|1     | 3       | CA        |
|2     | 1       | VA        |
|2     | 2       | NC        |
|2     | 1       | SC        |
|2     | 2       | GA        |
|2     | 1       | FL        |
|3     | 1       | KY        |
==============================

I'm learning C# and PHP, so responses in either language would be great.

Thanks.

I wrote some scripts to import the Stack Overflow data dump into an SQL database. I split the tags list to populate a many-to-many table as you describe. I use a technique like the following (a consolidated sketch of the whole loop appears after the steps):

  1. Read a row from WIDGET

    while ($row = $pdoStmt->fetch()) {
    
  2. Use explode() to split on the comma

    $states = explode(",", $row["states"]);
    
  3. Loop over the elements, writing to a new CSV file

    $stateid = array();
    $stfile = fopen("states.csv", "w+");
    $mmfile = fopen("manytomany.csv", "w+");
    $i = 0;
    foreach ($states as $st) {
        if (!array_key_exists($st, $stateid)) {
            $stateid[$st] = ++$i;
            fprintf($stfile, "%d,%s\n", $i, $st);
        }
        fprintf($mmfile, "%s,%s\n", $row["id"], $stateid[$st]);
    }
    fclose($stfile);
    fclose($mmfile);
    
  4. When you're done, load the CSV files into the database. You can do this in the mysql client:

    mysql> LOAD DATA INFILE 'states.csv' INTO TABLE STATES FIELDS TERMINATED BY ',';
    mysql> LOAD DATA INFILE 'manytomany.csv' INTO TABLE WIDGET_ST FIELDS TERMINATED BY ',';
    

It might seem like a lot of work, but using the LOAD DATA command runs 20x faster than inserting one row at a time, so it's worthwhile if your data set is big.
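Putting the steps together, here is a minimal, self-contained sketch of the export loop. It assumes a PDO connection (the DSN and credentials below are placeholders) and the WIDGET table shown above; the target tables STATES (id, name) and WIDGET_ST (w_id, st_id) are my assumption of the normalized schema the LOAD DATA commands refer to:

    <?php
    // Minimal sketch; connection details are placeholders.
    // Assumed target tables: STATES (id, name) and WIDGET_ST (w_id, st_id).
    $pdo = new PDO("mysql:host=localhost;dbname=test", "user", "password");

    // Open both CSV files once, before the loop, so they accumulate all rows.
    $stfile = fopen("states.csv", "w");
    $mmfile = fopen("manytomany.csv", "w");

    $stateid = array();   // maps state name => generated state id
    $i = 0;

    $pdoStmt = $pdo->query("SELECT id, states FROM WIDGET");

    // Step 1: read each row from WIDGET
    while ($row = $pdoStmt->fetch(PDO::FETCH_ASSOC)) {
        // Step 2: split the comma-delimited states column
        $states = explode(",", $row["states"]);

        // Step 3: one STATES row per new state, one WIDGET_ST row per (widget, state) pair
        foreach ($states as $st) {
            $st = trim($st);
            if (!array_key_exists($st, $stateid)) {
                $stateid[$st] = ++$i;
                fprintf($stfile, "%d,%s\n", $i, $st);
            }
            fprintf($mmfile, "%s,%d\n", $row["id"], $stateid[$st]);
        }
    }

    fclose($stfile);
    fclose($mmfile);
    // Step 4: load the two CSV files with LOAD DATA INFILE, as shown above.

Note that the CSV files and the $stateid map are created once, outside the row loop, so the generated state ids stay unique across all widgets.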


Re your comment:

Right, I also have data in a database already. It turns out that the solution I show above, dumping to CSV files and re-importing in normalized format, is many times faster than doing INSERT statements in the loop that splits the data.
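For comparison, the slower row-at-a-time approach would run INSERT statements directly inside the split loop, roughly like this sketch (same assumed target tables as above, placeholder connection details; even with reused prepared statements it is one round trip per row):

    <?php
    // Sketch of the row-at-a-time alternative (assumed tables STATES and WIDGET_ST).
    $pdo = new PDO("mysql:host=localhost;dbname=test", "user", "password"); // placeholder DSN
    $insState = $pdo->prepare("INSERT INTO STATES (id, name) VALUES (?, ?)");
    $insMM    = $pdo->prepare("INSERT INTO WIDGET_ST (w_id, st_id) VALUES (?, ?)");

    $stateid = array();
    $i = 0;

    $pdoStmt = $pdo->query("SELECT id, states FROM WIDGET");
    while ($row = $pdoStmt->fetch(PDO::FETCH_ASSOC)) {
        foreach (explode(",", $row["states"]) as $st) {
            $st = trim($st);
            if (!array_key_exists($st, $stateid)) {
                $stateid[$st] = ++$i;
                $insState->execute(array($i, $st));            // one INSERT per new state
            }
            $insMM->execute(array($row["id"], $stateid[$st])); // one INSERT per pair
        }
    }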

Each brand of database has its own tool for importing bulk data. See my answer to Optimizing big import in PHP for a list of bulk import solutions per database.

You should use the tools provided by each database. Trying to stay cross-platform only makes your code a jack of all trades, master of none. Besides, in 90% of cases when people bend over backwards to make their code database-independent, it turns out they never use more than one database. And you can't achieve complete database independence anyway.