This could arguably be a serverfault.com question, but a) it doesn't exist yet and b) I'd like wider exposure whenever it does :~)

My employer has a couple of hundred servers (all *NIX) spread across several locations. As I suspect is typical, we don't really know how many servers we have: more than once I have been surprised to find a server that has been up for five years, apparently doing nothing but raising the planet's temperature slightly. We have many databases that hold pieces of server information -- Puppet, Cobbler, Nagios, Cacti, our load balancers, DNS, various internal Excel spreadsheets and so on -- but it is all very disparate, incomplete and overlapping. Maintaining this mess costs time and money.

So, I'd like to have just one database that holds the details of what each server is (hardware specs, role, etc.) and replaces (or at least feeds data to) the databases mentioned above. The database and web interface would probably be a Rails application, as that is what I have most experience with. I'm much more of a sysadmin than a coder.
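To give an idea of the kind of schema I have in mind, here is a rough Rails-style migration sketch. The table and column names are purely illustrative, not a finished design:

```ruby
# Rough sketch only -- the table and column names are guesses at what a
# server inventory might need, not anything final.
class CreateServers < ActiveRecord::Migration[7.0]
  def change
    create_table :servers do |t|
      t.string   :hostname,      null: false
      t.string   :serial_number               # e.g. from dmidecode
      t.string   :role                        # web, db, lb, ...
      t.string   :location                    # site / rack
      t.integer  :cpu_count
      t.integer  :ram_mb
      t.datetime :last_seen_at                # updated by whatever collects the data
      t.timestamps
    end
    add_index :servers, :hostname, unique: true
  end
end
```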

Has this problem already been solved? I can't find any open source software that actually does the job, and I'm generally not interested in bloated, GUI, vendor-provided solutions.

How should I implement the device information collection bit? For example, it would be great for the database to update device records when disks are added or removed, or when the server serial number changes because Hewlett-Packard replaces the motherboard. This information comes from a variety of sources: dmidecode, command-line disk tools, SNMP to the server or its onboard lights-out card, and so on. I could expose all of this through custom scripts and net-snmp, or I could run a local poller that reports the information to the central DB (maybe via a RESTful interface or something similar). It must be easily extensible.
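To make the local-poller idea concrete, here is a rough Ruby sketch; the endpoint URL and JSON field names are invented for illustration:

```ruby
#!/usr/bin/env ruby
# Rough sketch of a local poller that reports hardware facts to a central
# REST endpoint. The /servers URL and the JSON field names are made up here.
require 'json'
require 'net/http'
require 'socket'
require 'uri'

# Chassis serial number from dmidecode (needs root).
serial = `dmidecode -s system-serial-number`.strip

# Block devices from lsblk; empty list if the command is unavailable.
disks = `lsblk -dn -o NAME,SIZE 2>/dev/null`.lines.map(&:split)

payload = {
  hostname: Socket.gethostname,
  serial_number: serial,
  disks: disks.map { |name, size| { device: name, size: size } }
}

uri = URI('http://inventory.example.com/servers')   # hypothetical endpoint
http = Net::HTTP.new(uri.host, uri.port)
request = Net::HTTP::Post.new(uri.path, 'Content-Type' => 'application/json')
request.body = payload.to_json
response = http.request(request)
puts "Reported #{payload[:hostname]}: #{response.code}"
```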

Have you done this? How? Tell me about your experiences, discoveries, mistakes and recommendations!

This sounds like a problem looking for an LDAP solution. LDAP is made for this kind of thing: a catalogue of items that is optimised for data searches and retrieval (though not necessarily writes). There are many LDAP servers to choose from (OpenLDAP, Sun's OpenDS, Microsoft Active Directory, just to name a few ...), and I have seen LDAP used to catalogue servers before. LDAP is very standardised, and a "database" of information that is usually searched or read, but not frequently updated, is LDAP's strong suit.
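As a rough illustration of what querying such a catalogue might look like from Ruby, here is a sketch using the net-ldap gem; the host, base DN and the use of the stock `device` object class are assumptions, not a prescribed schema:

```ruby
# Sketch only: reading server entries from an LDAP catalogue with the
# net-ldap gem (gem install net-ldap). Host, credentials and base DN are
# placeholders; 'device' is the standard objectClass with cn/serialNumber.
require 'net/ldap'

ldap = Net::LDAP.new(
  host: 'ldap.example.com',
  port: 389,
  auth: { method: :simple,
          username: 'cn=reader,dc=example,dc=com',
          password: 'secret' }
)

filter = Net::LDAP::Filter.eq('objectClass', 'device')
ldap.search(base: 'ou=servers,dc=example,dc=com', filter: filter) do |entry|
  serial = entry[:serialNumber].first
  puts "#{entry[:cn].first}: serial=#{serial}"
end
```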

My team has been dumping our entire infrastructure into RDF for a few months now. We have the systems implementation people create the initial data in Excel, which is then transformed into N3 (RDF) using Perl.

We view the data in Gruff (http://www.franz.com/downloads.lhtml) and store the resulting RDF in Allegro (a triple store from the same people who make Gruff).

It's incredibly simple and flexible - no schema means we just augment the data on the fly, and with a multitude of RDF viewers and reasoning engines the presentation options are endless.

The best part for me? No coding - just create triples, throw them in the store, then view them as graphs.
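To make the "create triples" step concrete, here is a tiny sketch of turning one spreadsheet row into N3. We used Perl; this Ruby version and the ex: namespace are purely illustrative:

```ruby
# Illustrative only: one spreadsheet row becomes a handful of N3 triples.
PREFIX = "@prefix ex: <http://example.com/infra#> ."

def row_to_n3(row)
  subject = "ex:#{row[:hostname]}"
  row.reject { |key, _| key == :hostname }
     .map { |predicate, object| "#{subject} ex:#{predicate} \"#{object}\" ." }
end

puts PREFIX
puts row_to_n3(hostname: 'web01', role: 'webserver', location: 'london', ram_mb: 8192)
```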