In top, I noticed that my C program (using CUDA 3.2) has a virtual size of 28 GB or more (looking at VIRT), on every run, right from the start. This doesn't make any sense to me. The resident memory is reasonable, only around 2 GB on my largest data set. I know that at some point in the past the virtual size was not so large, but I'm not sure when the change happened.

Why would my process use 28 GB of virtual memory (or why would top's VIRT be so large)? I understand that VIRT includes the executable binary (only 437K), shared libraries, and the "data area". What is the "data area"? How can I find out how much memory the shared libraries need? What about the other parts of my process's total memory?

Contents of /proc/<pid>/smaps (1022 lines) here: http://pastebin.com/fTJJneXr

One of the entries from smaps shows that a single mapping accounts for most of it, but it has no label... how can I find out what this "blank" entry with 28 GB is?

200000000-900000000 ---p 00000000 00:00
Size:           29360128 kB
Rss:                   0 kB
Pss:                   0 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         0 kB
Referenced:            0 kB
Anonymous:             0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
Locked:                0 kB

--

ubuntu 11.04 64-bit
16 GB RAM

These two regions account for it:

200000000-900000000 ---p 00000000 00:00
Size:           29360128 kB
Rss:                   0 kB
Pss:                   0 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         0 kB
Referenced:            0 kB
Anonymous:             0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
Locked:                0 kB

7f2e9deec000-7f2f131ec000 rw-s 33cc0c000 00:05 12626                     /dev/nvidia0
Size:            1920000 kB
Rss:             1920000 kB
Pss:             1920000 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:   1920000 kB
Referenced:      1920000 kB
Anonymous:             0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
Locked:                0 kB

The first segment is a 28 GB anonymous private mapping, with no access to it permitted, mapped from 0x200000000 to 0x900000000. A bit mysterious, indeed - most likely something to do with the nvidia driver's internal workings (maybe it wants to prevent allocations at those specific addresses?). It isn't actually taking up any memory, though - Rss is zero, and the access flags (---p) deny all access, so (for now) nothing will actually be allocated in it. It's just a reserved section of your address space.
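To see how such a reservation can exist without costing any memory, here is a minimal sketch (my own illustration of the mechanism, not the driver's actual code, which we can't see) that creates a comparable mapping with an anonymous PROT_NONE mmap:

/* Sketch: reserve a large address range without committing physical memory.
 * VIRT grows by ~28 GB while RES barely moves, and /proc/<pid>/smaps gains a
 * "---p" anonymous entry much like the one quoted above. */
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 28UL * 1024 * 1024 * 1024;   /* 28 GiB, like the smaps entry */

    /* PROT_NONE forbids all access; MAP_NORESERVE skips swap accounting. */
    void *p = mmap(NULL, len, PROT_NONE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    printf("reserved %zu bytes at %p - check top/smaps now\n", len, p);
    getchar();   /* pause so /proc/<pid>/smaps can be inspected */

    munmap(p, len);
    return 0;
}

While the program waits in getchar(), top shows the inflated VIRT with Rss unchanged, which is exactly the pattern in your output.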

The other piece is the /dev/nvidia0 mapping, of about 2 GB. This is likely a direct mapping of part of the video card's RAM. It isn't taking up memory as such - it's just reserving a part of your address space to use to communicate with the hardware.
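For comparison, a "rw-s" entry like the /dev/nvidia0 one is what you get from mmap-ing a file or device node with MAP_SHARED. The sketch below is purely illustrative - the path and length are placeholders, and the real /dev/nvidia0 mapping is set up internally by the NVIDIA driver, not by your code:

/* Illustrative only: the kind of call that produces a "rw-s <offset> ... <path>"
 * smaps entry. Pass any writable file or device node as argv[1]. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const char *path = (argc > 1) ? argv[1] : "/dev/zero";  /* placeholder path */
    size_t len = 4096;                                       /* placeholder length */

    int fd = open(path, O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    /* Shared file/device mapping -> shows up as "rw-s" in /proc/<pid>/smaps. */
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    printf("mapped %zu bytes of %s at %p\n", len, path, p);
    getchar();                         /* inspect /proc/<pid>/smaps here */

    munmap(p, len);
    close(fd);
    return 0;
}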

So it's not really anything to worry about. If you want to know how much memory you're actually using, add up the Rss figures for all the other memory segments (use the Private_* entries instead if you want to skip shared libraries and the like).
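As a quick sketch of that sum (my own helper, not from the thread - swap the matched field for Private_Clean:/Private_Dirty: to exclude shared mappings):

/* Sum the Rss figures from /proc/<pid>/smaps.
 * Pass a PID as the first argument; defaults to "self". */
#include <stdio.h>

int main(int argc, char **argv)
{
    char path[64], line[256];
    unsigned long kb, total = 0;

    snprintf(path, sizeof path, "/proc/%s/smaps", argc > 1 ? argv[1] : "self");
    FILE *f = fopen(path, "r");
    if (!f) { perror(path); return 1; }

    while (fgets(line, sizeof line, f)) {
        /* Change "Rss:" to "Private_Dirty:" / "Private_Clean:" to skip
         * shared libraries and other shared mappings. */
        if (sscanf(line, "Rss: %lu kB", &kb) == 1)
            total += kb;
    }
    fclose(f);

    printf("total Rss: %lu kB\n", total);
    return 0;
}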

Please see post #5 in the following thread on the NVIDIA forums: