I'm currently trying to import a semicolon-delimited text file into a database in C# using OleDb, where I don't know the type of database (SQL Server, Access, Oracle, MySQL, PostgreSQL, etc.). At the moment I'm reading the file as a database using the Jet text reader, then creating a prepared insert statement, populating the fields, and committing at the end. While that works, it's slow, and for millions of rows it takes far too long.
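For reference, this is roughly what the current approach looks like (a sketch only; the table, column, and file names and the connection strings are placeholders, and a schema.ini next to the file is assumed to declare the semicolon delimiter):

    // Sketch of the current approach: read the text file through the Jet text
    // driver, then run a prepared, parameterized INSERT once per row.
    using System.Data.OleDb;

    class Importer
    {
        public static void Import(string folder, string destConnString)
        {
            string textConnString =
                "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + folder +
                ";Extended Properties=\"text;HDR=Yes;FMT=Delimited\"";

            using (var source = new OleDbConnection(textConnString))
            using (var dest = new OleDbConnection(destConnString))
            {
                source.Open();
                dest.Open();

                using (var insert = new OleDbCommand(
                    "INSERT INTO ImportTest (Field1, Field2) VALUES (?, ?)", dest))
                {
                    insert.Parameters.Add("@p1", OleDbType.VarChar, 255);
                    insert.Parameters.Add("@p2", OleDbType.VarChar, 255);
                    insert.Prepare();

                    using (var select = new OleDbCommand("SELECT * FROM [ImportData.txt]", source))
                    using (var reader = select.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            insert.Parameters[0].Value = reader.GetValue(0);
                            insert.Parameters[1].Value = reader.GetValue(1);
                            insert.ExecuteNonQuery();   // one round trip per row, hence the slowness
                        }
                    }
                }
            }
        }
    }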

So my question: does anybody have other ideas on how best to import a text file into a generic database, or comments on my approach that would lead to a faster import?

I can't use third-party libraries or software to do this, because it is part of a larger project.

Try this:

http://filehelpers.sourceforge.net

....why would you need to load the db into the dataset? Have another database keep track of the uniqueness (if there is such a word). While importing, check whether the row already exists in the logging database; if not, then load it into the generic database.

Wait for the other responses to this thread; we might get a better idea.

Not exactly elegant, but performance might be better:

  • load the entire file into a table with just one column "Line" as long text (much like what you do now locally)
  • use stored procedures to split the fields apart and build the inserts
  • execute the inserts on the server (a rough client-side sketch of this follows below)
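A minimal sketch of the client side of this idea, assuming a single-column staging table called RawLines and a server-side stored procedure called SplitAndInsert (both names are made up):

    using System.Data;
    using System.Data.OleDb;
    using System.IO;

    class StagingLoader
    {
        // Push the raw lines into a one-column staging table, then let a
        // server-side stored procedure do the splitting and the real inserts.
        public static void LoadRawLines(string filePath, OleDbConnection dest)
        {
            using (var insert = new OleDbCommand("INSERT INTO RawLines (Line) VALUES (?)", dest))
            {
                insert.Parameters.Add("@line", OleDbType.LongVarChar);
                insert.Prepare();

                foreach (string line in File.ReadLines(filePath))
                {
                    insert.Parameters[0].Value = line;
                    insert.ExecuteNonQuery();   // still one insert per line, but only raw text crosses the wire
                }
            }

            // Splitting and the field-level inserts happen entirely on the server.
            using (var split = new OleDbCommand("SplitAndInsert", dest))
            {
                split.CommandType = CommandType.StoredProcedure;
                split.ExecuteNonQuery();
            }
        }
    }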

While you're still inserting each line separately, you wouldn't create as much network traffic.

To elaborate: the original method creates the statements on the client and then executes them from the client, resulting in network traffic for every line. My suggestion is to create the statements on the server (in a stored procedure) and have them execute on the server, resulting in no additional network traffic.

The "correct" solution is always to make use of a database specific import tool (like SQL Loader for Oracle). The performance gains are enormous. (We're loading huge tables with 20 million lines within a few minutes). However, that's not so generic.

Well, I managed to get the rows from the text file into the database DataSet, and so far this approach seems to be faster. I used

Dataset.Tables[x].ImportRow(DataRow)

Of course, now it's just a matter of getting DataAdapter.Update(DataSet) to work. Looking online, that's going to be fun...
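For context, a minimal sketch of what that combination looks like (table and column names are placeholders; the SELECT is only there so the command builder can derive the insert):

    using System.Data;
    using System.Data.OleDb;

    class DataSetLoader
    {
        public static void CopyAndUpdate(DataTable sourceRows, DataSet targetSet, OleDbConnection dest)
        {
            // ImportRow copies each row together with its RowState, so the rows
            // must still be marked Added for Update to insert them later.
            foreach (DataRow row in sourceRows.Rows)
            {
                targetSet.Tables["ImportTest"].ImportRow(row);
            }

            using (var adapter = new OleDbDataAdapter("SELECT * FROM ImportTest", dest))
            using (var builder = new OleDbCommandBuilder(adapter))   // supplies the INSERT command
            {
                // Update generates and runs one INSERT per Added row under the covers.
                adapter.Update(targetSet, "ImportTest");
            }
        }
    }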

Update

This approach doesn't yield faster results, because the DataAdapter.Update command still does line-by-line insertions.

BULK INSERT dbo.ImportTest FROM 'C:\ImportData.txt' WITH ( FIELDTERMINATOR = ',', FIRSTROW = 2 )
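This is SQL Server specific, and the file path is resolved on the server, not on the client. If the target does happen to be SQL Server, the statement can be issued from C# like any other command; a sketch, with the connection and names as placeholders and the terminator switched to ';' for a semicolon-delimited file:

    using System.Data.OleDb;

    class BulkInsertRunner
    {
        public static void Run(OleDbConnection dest)
        {
            // SQL Server resolves 'C:\ImportData.txt' on the server machine.
            const string sql =
                "BULK INSERT dbo.ImportTest FROM 'C:\\ImportData.txt' " +
                "WITH (FIELDTERMINATOR = ';', FIRSTROW = 2)";

            using (var cmd = new OleDbCommand(sql, dest))
            {
                cmd.ExecuteNonQuery();
            }
        }
    }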

Your best bet is to buy an off-the-shelf application for doing this.

Professional off-the-shelf programs use native drivers and tweaks for each type of data source they'll run against. This is all under the covers, so you don't see how they do it. For instance, bulk copy is used against SQL Server, and Oracle has a Data Pump.
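As an example of what "native" means here: if the detected target is SQL Server, the framework's built-in SqlBulkCopy class (no third-party code needed) takes the bulk-copy path instead of row-by-row inserts. A sketch, with the table name and connection string as placeholders:

    using System.Data;
    using System.Data.SqlClient;

    class SqlServerBulkLoader
    {
        public static void Load(DataTable rows, string sqlServerConnString)
        {
            using (var bulk = new SqlBulkCopy(sqlServerConnString))
            {
                bulk.DestinationTableName = "dbo.ImportTest";
                bulk.BatchSize = 10000;          // stream in batches rather than one row at a time
                bulk.WriteToServer(rows);        // uses SQL Server's native bulk-copy protocol
            }
        }
    }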

The problem with rolling your own is that you either spend the money to tweak your application to work with each of the source types you are likely to encounter, or you take a huge performance hit using the generic ODBC / ADO / whatever drivers.

At the end of the day, you're better off either leaving this out of your product or simply living with the inevitably slow approach you're forced to take. In this case, that means using single insert statements for everything.

So, how much money do you have for development resources?