Good day!

I have 350GB of unstructured data, split across 50-80 fields.

I have to store this data in a NoSQL database and perform a number of selection and map/reduce queries filtered by 40 fields.

I must use MongoDB, so I have a specific question: is this database able to handle this, and how should I implement its architecture within the existing provider hetzner.p?

Yes, large datasets like this are fine.

Apache Hadoop may also be worth looking at. It is targeted at processing and analyzing very large amounts of data.

MongoDB is an extremely scalable and versatile database, if used correctly. It can store as much data as you need; the real question is whether you can query your data effectively.
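
To make that concrete, here is a minimal sketch with pymongo; the database, collection, and field names are just placeholders, not anything from your setup:

    from pymongo import MongoClient

    # Connect to a local mongod; adjust host/port for your own server.
    client = MongoClient("mongodb://localhost:27017")
    posts = client["mydb"]["posts"]  # hypothetical database/collection names

    # Unstructured documents go in as plain dicts, no schema needed.
    posts.insert_one({"author": "alice", "tags": ["mongodb"], "body": "..."})

    # A selection query; how well this performs depends on your indexes.
    for doc in posts.find({"author": "alice"}):
        print(doc["_id"])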

A few comments:

  • You will have to make sure the proper indexes are in place and that a reasonable number of them can fit in RAM (see the index sketch after this list).
  • In order to achieve that you may want to use sharding to split the working set across several machines (see the sharding sketch below).
  • The current map/reduce is simple to use and can iterate over all of your data, but it is rather slow. It will become faster in the next MongoDB release, which will also bring a new Aggregation Framework to complement map/reduce (see the last sketch below).
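
On the index point, a rough sketch of creating an index on the fields you filter by and then checking how much RAM the indexes need; the field names are again placeholders:

    from pymongo import MongoClient, ASCENDING

    client = MongoClient("mongodb://localhost:27017")
    db = client["mydb"]

    # Index the fields your selection queries filter on.
    db["posts"].create_index([("author", ASCENDING), ("created_at", ASCENDING)])

    # collStats reports totalIndexSize in bytes; compare it with the RAM on
    # your server so the indexes can stay memory-resident.
    stats = db.command("collStats", "posts")
    print("total index size (MB):", stats["totalIndexSize"] / (1024 * 1024))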
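
On the sharding point, assuming you already run a sharded cluster (mongos router plus config servers), splitting a collection looks roughly like this; the shard key here is a made-up example, and choosing a good one deserves its own research:

    from pymongo import MongoClient

    # Connect to the mongos router, not to an individual mongod.
    client = MongoClient("mongodb://localhost:27017")

    # Enable sharding for the database, then shard the collection on a key
    # that spreads both storage and query load across the shards.
    client.admin.command("enableSharding", "mydb")
    client.admin.command("shardCollection", "mydb.posts", key={"author": 1})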
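
And on the map/reduce point, a toy comparison that counts documents per author, first with map/reduce and then with an aggregation pipeline; this assumes an older pymongo that still exposes the map_reduce helper (it was removed in pymongo 4.0):

    from bson.code import Code
    from pymongo import MongoClient

    posts = MongoClient("mongodb://localhost:27017")["mydb"]["posts"]

    # Map/reduce: flexible JavaScript, but single-threaded and slow on big data.
    mapper = Code("function () { emit(this.author, 1); }")
    reducer = Code("function (key, values) { return Array.sum(values); }")
    posts.map_reduce(mapper, reducer, "posts_per_author")

    # Aggregation framework equivalent: runs natively and is usually much faster.
    pipeline = [{"$group": {"_id": "$author", "count": {"$sum": 1}}}]
    for row in posts.aggregate(pipeline):
        print(row)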

Bottom line is that you shouldn't treat MongoDB as a magical store that will be perfect out of the box; make sure you read the good documentation and materials :)