I’ve run into the classic, ever-present speed-versus-space dilemma in computer programming.

I have (currently one, but potentially more) WordPress posts which are (for now, at least) served from my desktop machine, each containing a (not currently too large, but certainly growable) table.

Rather than hard-coding the rows into the table and then having to use WordPress to edit the post whenever I change or add data in the table, I put a bit of PHP code in the table body which reads and prints the contents of an external text file containing the rows. That way, I can edit the rows in a text editor quickly and easily without having to update the WordPress database.

Unfortunately, this is still only partly convenient, because the text file contains all of the table markup, which makes the whole thing less readable. So what I did was make a copy of the file, strip the rows of markup, and reformat them into easy-to-read lines of plain text with vertically aligned, tab-delimited fields (the first field is variable length, so it is followed by several tabs, up to the longest entry in that field; the rest are fixed-width and have one tab between them). I then wrote a PHP script which reads that file, wraps the lines in the appropriate row and cell tags, and writes the result to another text file for reading into the post.
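The wrapping step the script performs can be sketched roughly like this (my actual script is PHP; this is shown in JavaScript for comparison with the client-side option, and the function name and sample data are made up):

```javascript
// Rough sketch of the row-wrapping step, shown in JavaScript rather than
// PHP. The function name and sample data are hypothetical.
function wrapRows(text) {
  return text
    .trim()
    .split("\n")
    .map(line => {
      // Split on runs of tabs: the first field is padded with extra tabs
      // for vertical alignment, so consecutive tabs count as one delimiter.
      const cells = line.split(/\t+/);
      return "<tr>" + cells.map(c => "<td>" + c + "</td>").join("") + "</tr>";
    })
    .join("\n");
}
```

For the offline approach the output would be written back to a second file; for the on-server approach it would be echoed straight into the page.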

This is all fine and well, and works effectively, but now I’m trying to decide between three options.

  1. Edit the rows in one text file, then manually run the PHP script to generate a second, inserting the fully marked-up text into the post and serving that to the user

  2. Read the unedited text file into the post with PHP, then process it there, again serving the tag-wrapped rows to the user

  3. Read the unedited text file into the post with PHP, then serve that to the user and have some JavaScript wrap the rows in tags client-side
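A minimal sketch of how option 3 could look, assuming the post serves the raw tab-delimited text inside a placeholder element such as `<pre id="table-data">` (that id, and the escaping helper, are my assumptions, not part of the existing setup):

```javascript
// Hypothetical client-side wrapper for option 3. Assumes the post emits the
// raw tab-delimited text inside <pre id="table-data">; the id is made up.
function rowsToHtml(rawText) {
  // Escape the raw data so stray & or < characters can't break the markup.
  const escapeHtml = s =>
    s.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");
  return rawText
    .trim()
    .split("\n")
    .map(line =>
      "<tr>" +
      line.split(/\t+/).map(f => "<td>" + escapeHtml(f) + "</td>").join("") +
      "</tr>")
    .join("");
}

// In the browser, swap the placeholder for the finished table on load.
// (Guarded so the function above can also run outside a browser.)
if (typeof document !== "undefined") {
  document.addEventListener("DOMContentLoaded", () => {
    const pre = document.getElementById("table-data");
    if (pre) {
      const table = document.createElement("table");
      table.innerHTML = rowsToHtml(pre.textContent);
      pre.replaceWith(table);
    }
  });
}
```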

Each one has benefits and drawbacks:

  Option 1:
    • Pros
      • The processing is done only once
    • Cons
      • An extra manual step is required
      • The processed data being served is larger, since it includes all of the tags

  Option 2:
    • Pros
      • The data is always current
      • No extra manual work is required
    • Cons
      • The processing is done on the server on every view of the post
      • The processed data being served is larger, since it includes all of the tags

  Option 3:
    • Pros
      • The page being served is smaller, since it contains no tags
      • The processing is done client-side
    • Cons
      • JavaScript
      • Users can see the unedited data

I’m wondering if there’s a way to determine which will work best (ideally short of running a bunch of tests to profile each one).