I have a considerable amount of text in MySQL tables. I want to do some analysis on the records and then some NLP on my text using the NLTK toolkit. I have two choices:

  1. Extracting all of the text at once from the DB table (perhaps putting it into files if needed) and using the NLTK functions on it.
  2. Extracting the text and turning it into a "corpus" that can be used with NLTK.

The latter seems quite complicated, and I haven't found any articles that actually describe how to do it. The only thing I found is this: Creating a MongoDB backed corpus reader, which uses MongoDB as its database; the code is quite complicated and also requires knowing MongoDB. The former, on the other hand, seems really straightforward, but it adds the overhead of extracting the texts from the DB. Now the question is: what are the advantages of a corpus in NLTK? In other words, if I take on the challenge and dig into overriding NLTK methods so that they can read from a MySQL database, would it be worth the hassle? Does turning my text into a corpus give me something that I cannot do (or can only do with a lot of difficulty) with the regular NLTK functions? Also, if you know anything about connecting MySQL to NLTK, please let me know. Thanks
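To make option 1 concrete, here is a minimal sketch of what I mean: select the rows, write each one to a plain-text file, and let NLTK read the directory. I use the standard-library sqlite3 module here just so the snippet is self-contained; MySQL Connector/Python (mysql.connector) exposes the same DB-API cursor interface, so the loop would look the same. The `documents` table and its columns are made-up names for illustration.

```python
# Sketch of option 1: dump text rows from the database into plain files,
# then point NLTK at those files. sqlite3 stands in for MySQL here
# because it ships with Python; a mysql.connector connection would be
# used the same way. Table and column names are hypothetical.
import sqlite3
import tempfile
from pathlib import Path

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE documents (id INTEGER PRIMARY KEY, body TEXT)")
conn.executemany(
    "INSERT INTO documents (body) VALUES (?)",
    [("The quick brown fox.",), ("Jumps over the lazy dog.",)],
)

out_dir = Path(tempfile.mkdtemp())
for doc_id, body in conn.execute("SELECT id, body FROM documents"):
    # one file per row; NLTK can treat the directory as a corpus
    (out_dir / f"doc_{doc_id}.txt").write_text(body, encoding="utf-8")

# NLTK's PlaintextCorpusReader could then read the whole directory:
#   from nltk.corpus import PlaintextCorpusReader
#   corpus = PlaintextCorpusReader(str(out_dir), r".*\.txt")
print(sorted(p.name for p in out_dir.glob("*.txt")))
```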

Well, after a lot of reading I found the answer. There are several very useful functions, such as collocations, concordance search, common_contexts, and similar, that can be used on texts stored as a corpus in NLTK; implementing them yourself would take quite a while. If I select my text from the database, put it into a file, and use the nltk.Text function, then I can use all of the functions I mentioned before without writing many lines of code or overriding methods so that NLTK can connect to MySQL. Here is the link for more information: nltk.Text
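A minimal sketch of that approach, assuming NLTK is installed. The raw string below stands in for text fetched from MySQL, and str.split() is used instead of nltk.word_tokenize so the example needs no extra model downloads:

```python
# Wrap tokenized text in nltk.Text to get concordance, similar, etc.
# for free. The raw string is a stand-in for rows pulled from MySQL;
# str.split() is a crude tokenizer, good enough for a demo.
from nltk.text import Text

raw = "the cat sat on the mat and the dog sat on the rug"
tokens = raw.split()
text = Text(tokens)

text.concordance("sat")   # prints each occurrence with surrounding context
text.similar("cat")       # prints words that appear in similar contexts
# text.common_contexts(["cat", "dog"]) works the same way, and
# text.collocations() does too, though it needs the stopwords corpus
# (nltk.download('stopwords')) the first time it is called.
print(text.vocab().most_common(3))
```

The point of the answer in code form: none of these analyses had to be written by hand; nltk.Text provides them once the tokens are loaded.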