When folks talk about memory, they usually mean the expensive modules they buy and plug into their (typically x86-based) machines. The operating system, in our case Linux, manages that physical memory according to the mechanisms of the architecture (in this case x86). Most architectures in use today have the concept of virtual memory: a linear address space, in units of pages, where each page may be backed by a real page in memory or simply not be there at all.

The operating system knows about several types of such pages. There are file-backed pages, which hold the contents of a file or an executable (of course, such a file will usually span several pages); an important detail here is that these pages do not necessarily have to be in RAM. Then there are anonymous pages, which are used by applications (those are usually read-write); if swapping is enabled, those pages can also be written out to a swap area.

In addition to those types, there are a number of special pages and methods to handle them. For example, the so-called zero page is a page containing nothing but read-only zeros. It is typically used when you request a memory area from the memory management system (mm): the read-only property causes a trap (page fault) on the first write, at which point the zero page is replaced by actual memory. A similar thing happens with shared memory pages, which are marked read-only and copied on write. Pages which get swapped out (to swap space) are not freed immediately; they are kept as swap cache. Something similar happens to file caches (inode cache): the pages are marked as 'unused' but are not freed until somebody needs a page.
http://computer.howstuffworks.com/virtual-memory1.htm
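A small C program makes the copy-on-write behaviour visible (a minimal sketch, assuming Linux and glibc; not part of the original discussion): after fork(), parent and child share the same physical pages, and the first write gives the writer its own private copy.

```c
/* Sketch: copy-on-write after fork().  Both processes start out
 * sharing the same physical page; the child's write triggers a page
 * fault and it gets a private copy, so the parent still sees 42. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int *value = malloc(sizeof *value);
    *value = 42;

    pid_t pid = fork();
    if (pid == 0) {                 /* child: the write faults in a copy */
        *value = 7;
        printf("child sees  %d\n", *value);   /* prints 7 */
        exit(0);
    }
    waitpid(pid, NULL, 0);
    printf("parent sees %d\n", *value);       /* still prints 42 */
    return 0;
}
```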
Now, on x86, the total addressable space is 4GB, and this is also the maximum virtual address space an application (or the kernel) can see. To simplify transitions between userspace and kernelspace, that one address space is divided between kernel and userspace.
That's actually quite complex, so maybe let's start with something simple like 'true' first! Let's further assume the binary (program) is not dynamically linked but static, fits into 4k, and has no data section (usually not true, but it keeps things simple). When it is executed, the kernel 'maps' the executable into memory, creates a userspace task with a stack page, and starts executing the just-mapped memory. The file is read into real memory (i.e. RAM), marked read-only but executable, and added to the inode cache; the virtual address will be some fixed address coded into the executable, and the program will call the kernel via syscalls, requesting things like exit().
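Such a minimal 'true' can be sketched in a few lines of C without libc. This is an illustration only, assuming 32-bit x86 and a build along the lines of gcc -m32 -static -nostdlib:

```c
/* A libc-free "true" for 32-bit x86 (illustration only).
 * _start is the real entry point; it invokes the exit(0) syscall
 * directly (int 0x80, syscall number 1 on i386). */
void _start(void) {
    asm volatile("movl $1, %eax\n\t"     /* __NR_exit */
                 "xorl %ebx, %ebx\n\t"   /* exit status 0 */
                 "int  $0x80");          /* never returns */
}
```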
When you execute it a second time, the file is already in the inode cache, so all that happens is a new mapping into the virtual address space of that task. Now, I mentioned already that the address space is shared by userspace and the kernel. Typically the split is 3/1: userspace gets 3GB of "space" and the kernel only 1GB. There are split patches, and recent changes to mainline, which allow for other splits too, such as 2/2 or 1/3. In any case, the userspace part comes first, starting at 0, and (normally) the kernel starts at 0xC0000000, which is 3GB.
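You can watch that layout from userspace. Here is a small sketch (assuming a 32-bit build on a 3/1-split kernel) that prints where text, data, heap and stack end up; all of them stay below 0xC0000000:

```c
/* Sketch: print a few userspace addresses.  Built as a 32-bit binary
 * on a kernel with the default 3/1 split, everything stays below the
 * kernel boundary at 0xC0000000. */
#include <stdio.h>
#include <stdlib.h>

int global;                         /* data segment */

int main(void) {
    int local;                      /* stack */
    void *heap = malloc(16);        /* heap */

    printf("text : %p\n", (void *)main);
    printf("data : %p\n", (void *)&global);
    printf("heap : %p\n", heap);
    printf("stack: %p\n", (void *)&local);

    free(heap);
    return 0;
}
```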
Now this usually leads to the question: but what if I have only 1GB of RAM? The answer is simple: it will still be 3GB of userspace and 1GB of kernel space. As mentioned before, the virtual space does not have to be backed by any real RAM. You could, for example, fill the entire 3GB space with mappings of the zero page, using only a single 4k page of RAM. This also means that a physical address can be mapped at different virtual addresses, and of course several times, even in the same task's space. Also, the 3GB (or actually 4GB) address space is per process, so processes do not have to share that space in any way.
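A sketch to illustrate the difference between mapped and backed memory (assumes Linux and /proc): the mapping grows VSZ immediately, but RSS only grows once pages are actually touched, because untouched pages are satisfied by the zero page.

```c
/* Sketch: a large anonymous mapping raises VSZ immediately, but RSS
 * only grows when pages are actually touched (demand paging).
 * /proc/self/statm's first two fields are total program size and
 * resident set size, in pages. */
#define _DEFAULT_SOURCE             /* for MAP_ANONYMOUS on older glibc */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

static void show(const char *tag) {
    long vsz = 0, rss = 0;
    FILE *f = fopen("/proc/self/statm", "r");
    if (f && fscanf(f, "%ld %ld", &vsz, &rss) == 2)
        printf("%-8s VSZ=%ld pages  RSS=%ld pages\n", tag, vsz, rss);
    if (f) fclose(f);
}

int main(void) {
    show("start");
    char *p = mmap(NULL, 256 << 20, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) return 1;
    show("mapped");                 /* VSZ jumps, RSS barely moves */
    memset(p, 1, 16 << 20);         /* touch 16MB of the 256MB */
    show("touched");                /* now those pages are resident */
    return 0;
}
```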
Now, to get back to the editor example: executing the editor will cause the binary (and any libraries it needs) to be mapped into the task's address space, reusing whatever is already in the inode cache.
Then it will require stack and data pages to do the actual work (editing), and it will request a writeback to the file, which turns those pages into buffer caches for writeback I/O, which in turn may update the inode cache once the data is written. If, for some reason, the editor is very large (i.e. has many executable pages), it can happen, when you are low on physical RAM (or the swap system is tuned for optimistic swap-out), that some pages of your editor (which are not in use right now) are simply dropped and some of the data pages (editor memory) are swapped out.
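The writeback part can be sketched like this (illustrative only; file name and sizes are made up): a file edited through a shared mapping dirties page-cache pages, and msync() asks the kernel to write them back to disk.

```c
/* Sketch: editing a file through a shared mapping dirties page-cache
 * pages; msync() requests writeback of those pages to the file. */
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    int fd = open("demo.txt", O_RDWR | O_CREAT, 0644);
    if (fd < 0) return 1;
    if (ftruncate(fd, 4096) < 0) return 1;       /* one page of file data */

    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) return 1;

    memcpy(p, "edited\n", 7);                    /* dirties the page */
    msync(p, 4096, MS_SYNC);                     /* force the writeback */

    munmap(p, 4096);
    close(fd);
    return 0;
}
```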
So, as you can see, the relation between processes and physical RAM is not straight and simple :)
Yes, that is right, but let me answer that with another question: why is an int 32 bits and not just ld(N) bits? That is, to represent the value 20 you need just 5 bits (10100 in binary), so why 'waste' 32 bits on it? IMHO the answer is simple and straightforward: the hardware has to have certain limits; for an int this is 32 bits, and for the address space it is 4GB on x86.
Q. is that due to the instruction set on the CPU itself?
Well, actually the 32-bit address space and the MMU: 2^32 = 4294967296 bytes = 4GB. x86_64 has a much larger address space (as it is 64-bit based), and the MMU there usually handles at least 48 bits.
Q. if there's more than 4 gigs, does that mean it is or isn't used?
As I said before, a mapping is required between virtual addresses and physical RAM. Without 'dirty' tricks (read: PAE), 4GB is the maximum on x86. Beyond that it is the dreaded HIGHMEM support (which is a special case below the 4GB too), and accessing those pages comes at a cost, so it is slower. Even with 4GB of RAM, the kernel can only address 1GB (in the default split) directly. The thing here is that changing the mapping from virtual to physical memory is expensive, and a kernel which can address 1GB has to reserve a certain area for mapping the remaining 3GB (on a 4GB system) in and out. This "mapping window", where the "high" memory pages are mapped in and out, is called highmem. So with the default 3/1 split you can directly use roughly 970MB of memory; even with 2GB of RAM this does not change, and only enabling the dynamic mapping (highmem) gives you access to the rest.
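As a rough back-of-the-envelope for the default split (the exact size of the reserved area depends on the kernel configuration):

```
  4GB  total virtual address space (2^32)
- 3GB  userspace       (0x00000000 .. 0xBFFFFFFF)
= 1GB  kernel space    (0xC0000000 .. 0xFFFFFFFF)
-      mapping window and other reserved areas
=      roughly 970MB of directly addressable low memory
```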
Q. you said the virtual memory isn't backed by RAM, but is it backed by anything?
Sometimes; it depends on the mapping. File-backed pages are backed by the file on disk, swapped-out anonymous pages by swap space, and untouched mappings by nothing at all.
Q. so how does this all map onto VSZ and RSS (in vserver stats) or VIRT/RES/SHR in top stats?
Good questions, and well, for a single task they are simply answered:
18:47 < Bertl> the VSZ for a task is the number of pages which have a mapping
18:48 < Bertl> and the RSS (resident set size) is the number of pages which are currently in RAM
18:48 < micah> physical
18:48 < micah> VSZ=VIRT, RSS=RES I assume
18:49 < micah> and shared is memory that is mapped between two applications
18:51 < Bertl> now, this accounting is a little more problematic if you want to do it for, let's say two processes
18:51 < Bertl> first, what do you do about the address space? look for identical mappings and count them only once
18:51 < Bertl> or take the maximum of both?
18:52 < Bertl> or just add them up
18:52 < Bertl> and even more complicated for the RSS
18:52 < Bertl> because we can have, shared RAM (e.g. inode caches, executables)
18:53 < Bertl> and we can have shared but copy on write pages
18:53 < Bertl> we can also have purely anonymous pages
18:55 < gdm> that don't belong to any process?
18:55 < Bertl> no, that only belong to a single task
18:55 < Bertl> but yes, actually shared memory can belong to _no_ task
18:56 < gdm> ahh, yes, i just read back and saw that you said they belong to an application or something
18:56 < gdm> so, a single task
18:56 < Bertl> now, linux-vserver tries to be as unintrusive as possible here
18:56 < Bertl> and of course, we try to keep it simple and efficient too
18:57 < Bertl> so what we do is mainly accounting the allocations and deallocations of those pages per context
18:57 < gdm> yes, seems like it is pretty successful at that
18:57 < Bertl> which gives values (and, if limits are set, limits) which might not be directly mappable to physical RAM
18:58 < Bertl> (or swap space, which we didn't even mention yet :)
18:58 < Bertl> we decided to 'simply add up' the address space of all tasks and call that VM/AS
18:59 < gdm> right - virtual memory pages (total)
18:59 < Bertl> we also decided not to account the shared pages specially, as OVZ does; instead we simply add them up in separate counters
19:00 < Bertl> a currently missing accounting/limit is the swap space
19:01 < Bertl> because accounting the swap space properly would require 'tagging' each memory page to know which context it belongs to
19:01 < Bertl> which is something I don't want to do without good reason, as there are a) many pages and b) this stuff is really performance critical
19:03 < gdm> i guess it wouldn't matter if swap got totally used, tho, as long as you can keep some space in the "real" memory
19:03 < gdm> ie the RAM
19:03 < gdm> hmm.. Bertl thanks :) i think i need to sit down and re read over everything and try to make sure i understand so far
19:03 < gdm> and then probably come back and ask more questions
19:04 < Bertl> okay, you're welcome!
19:04 < gdm> i'll try and sort it out whilst i'm doing that, too, and stick it on the wiki