Our web app collects a large amount of information about user actions, network traffic, database load, and so forth.

All of this information is saved in data warehouses, and we can get a great deal of interesting insight from this data.

If something odd happens, odds are it shows up somewhere in the data.

However, to spot by hand whether something unusual is happening, one would need to constantly examine this data and search for oddities.

My question: what is the easiest way to detect changes in dynamic data that could be considered 'out of the ordinary'?

Are Bayesian filters (I have seen these mentioned when reading about spam detection) the way to go?

Any pointers would be great!

EDIT: To clarify: the data shows, for instance, a regular curve of database load. This curve typically looks like yesterday's curve, and over time it may change gradually.

It would be nice if an alert could go off whenever today's curve deviates from the daily pattern by more than some threshold.

R

Have a look at control charts; they offer a way to track changes in your data visually and to flag when the data is "out of control" or anomalous. They are heavily used in manufacturing to ensure quality control.
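As a rough illustration, here is a minimal Shewhart-style control chart check in Python. It assumes your load metric arrives as a list of numeric samples; the 3-sigma limits are the classic default, and the baseline numbers below are invented for the example:

```python
# Minimal Shewhart-style control chart check (sketch).
# `history` is a list of past measurements (e.g. hourly DB load),
# `value` is the latest observation.

from statistics import mean, stdev

def control_limits(history, k=3.0):
    """Return (lower, upper) control limits as mean +/- k * stddev."""
    m = mean(history)
    s = stdev(history)
    return m - k * s, m + k * s

def is_out_of_control(value, history, k=3.0):
    lower, upper = control_limits(history, k)
    return value < lower or value > upper

# Example: yesterday's hourly load as the baseline, check the current hour.
baseline = [120, 115, 130, 128, 122, 119, 125, 131, 127, 118, 121, 124]
current = 180
if is_out_of_control(current, baseline):
    print("Alert: load is outside the control limits")
```

In practice you would recompute the limits on a rolling window so that the gradual drift you mention gets absorbed into the baseline instead of triggering alerts.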

This doesn't seem possible to answer without knowing a little more about the specific data you have. For an overview of the kinds of approaches that exist, see Anomaly Detection: A Survey by Chandola, Banerjee, and Kumar.

This depends a lot on what the data is. Take a statistics class and learn the fundamentals first. This is usually not a simple or easy problem.

Bayesian classification might help you find some anomalies in your data, depending on the kind of data and how well you train your Bayesian filter.

There's even one available as a web service at uClassify.com.
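For a very rough idea of what training such a filter could look like (a sketch only; the feature names, values, and labels below are made up for illustration, and this uses scikit-learn's naive Bayes rather than uClassify's API):

```python
# Sketch: train a naive Bayes classifier to label metric snapshots
# as "normal" vs "anomalous". You need labelled examples of both
# classes, which is usually the hard part.

from sklearn.naive_bayes import GaussianNB

# Each row: [db_load, requests_per_min, error_rate] -- hypothetical features.
X_train = [
    [120, 300, 0.01],   # normal
    [125, 310, 0.02],   # normal
    [118, 290, 0.01],   # normal
    [400, 900, 0.20],   # anomalous
    [380, 50,  0.35],   # anomalous
]
y_train = ["normal", "normal", "normal", "anomalous", "anomalous"]

clf = GaussianNB()
clf.fit(X_train, y_train)

print(clf.predict([[130, 305, 0.02]]))   # expected: ["normal"]
print(clf.predict([[390, 60, 0.30]]))    # expected: ["anomalous"]
```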