There are two patches that do similar things: per-context quota and context disk limits. This is a discussion I (pflanze) had with Bertl on IRC. I still haven't found the time to try these out, but since I promised to write up a page in the wiki, I've put together the questions and answers here for now.
BTW, http://www.13thfloor.at/old/VServer/Concepts.shtml is old; "This is no longer true" (about recording the ctx id with the inode). There's another howto page, http://www.13thfloor.at/old/VServer/HowTo_LVMQ.shtml - todo: check whether this latter page is also outdated.

Q: <pflanze> What are these two doing / what's the difference?

A: <Bertl> Simple: if you want user/group quota inside a vserver on a shared partition, then you are speaking of Per-Context Quota. If you want to account/limit a vserver's disk usage, we speak of Context Disk Limits.

Q: I want to use a shared partition and vunify, and limit each vserver's total un-shared space.

A: Use Context Disk Limits.

Q: Which filesystems do these work on?

A: The disk limits _will_ work on all filesystems capable of (and prepared for) storing xid info per file. The current (stable/devel) implementation requires the quota hashes to store the disk limits, so it is bound to ext2/ext3.

Q: Does using ctx quota allow one to quickly determine the space taken by each vserver?

A: No, that is Context Disk Limits ...

Q: What other documentation is there about these? Is there a wiki page already?

A: Unfortunately no. Someone (note by the editor: ask on the mailing list who it was) started to write a detailed howto, but it seems it was lost on the way somehow ... there is a basic setup explanation (outdated) in the 2.6 devel section.

Q: How is it implemented? Normal user disk quotas would be useless since the same user id would be used for different vservers; thus the user id has to be extended with the context id, right?

A: <Bertl> That was the old idea ... it is obsolete now for several reasons, mainly because of the way user quota 'actually' works. The quota information and the accounting information are stored in quota files (actually one for each quota type, user/group). Now those quota files are accessed not only by the kernel (unfortunately) but also by the much-too-smart quota tool, which means that they have to be available _inside_ a vserver too (if you do not want the host admin to have to change the user quota ;). So I had to find a workaround ... and actually I found one. The basic idea behind it is:
* quota files are per superblock atm
* the first step was to add something called quota hashes, which allows an arbitrary number of quota hashes per superblock
* the next step was to allow different quota files for each quota hash
* and finally, something to allow sending quota ioctls to the appropriate hash/kernel (the vroot device)

So today's quota solution requires one hash for each context _and_ filesystem (which has to be created with the cqhadd command). The hash is identified by (fs,xid), and it contains (uid,current,soft,hard) and (gid,current,soft,hard) tuples. In addition to that, each quota hash allows for storing 5 disk limit values: current inode count/blocks, maximum inodes/blocks, and reserved %. This doesn't have to be tied to the quota hash, but it was a kind of simplification.

:<pflanze> (btw xid = context id, right? (why not vid?))
:<Bertl> because a vserver actually consists of (xid,nid), nid being the network context
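To make the hash layout described above a bit more concrete, here is a purely illustrative C sketch. The type and field names (ctx_quota_hash, quota_entry, ctx_disk_limit) are invented for this page and do not match the actual patch; the sketch only shows which values a per-(filesystem, xid) hash is said to hold: per-uid and per-gid (current, soft, hard) tuples, plus the five context disk-limit values that are piggy-backed on the same hash as a simplification.

<pre>
/* Illustrative sketch only -- invented names, not the patch's real types. */
#include <sys/types.h>

struct quota_entry {            /* one (id, current, soft, hard) tuple          */
	unsigned int  id;       /* uid or gid                                   */
	unsigned long current;  /* blocks (or inodes) currently accounted       */
	unsigned long soft;     /* soft limit                                   */
	unsigned long hard;     /* hard limit                                   */
};

struct ctx_disk_limit {         /* the five extra values stored per hash        */
	unsigned long inodes_used;
	unsigned long blocks_used;
	unsigned long inodes_max;
	unsigned long blocks_max;
	unsigned int  reserved_pct; /* reserved space, in percent                */
};

struct ctx_quota_hash {         /* one hash per (filesystem, context) pair      */
	dev_t         fs;           /* the filesystem it belongs to              */
	unsigned int  xid;          /* the context id                            */
	struct quota_entry   *uid_entries; /* (uid, current, soft, hard) tuples  */
	struct quota_entry   *gid_entries; /* (gid, current, soft, hard) tuples  */
	struct ctx_disk_limit dlimit;      /* context disk limits stored here    */
};
</pre>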
Q: So the ctx quota patch implements the per-xid hashes, and hooks for userspace tools to create quota files?

A: The so-called q0.14 patch consists of 4 patches:
* the first, actually splitting up the quota structures into quota hashes (allowing for more than one per superblock) without changing anything for the user (requires ext2/ext3 atm)
* the second, implementing the xid file tagging, which is required to tell the xid of a given file (almost fs agnostic)
* the third, which uses the quota hashes to implement per-context quota (by looking at the xid) (requires ext2/ext3 atm)
* and the last one, which adds the context disk limits to the existing hash structure (could be done fs agnostic)

Q: Now with the quota patch, it *would* be possible to know the total storage of each vserver, by adding all "current" entries in one hash, right?

A: Hmm, yes, given that the quota info is correct ... (requires quotacheck to be run first ;)

Q: So where does the other patch (disk limits) come in?

A: It doesn't use the quota information at all. It relies on what you tell it the 'current' values are, and notes any changes ... further, the values are used to 'virtualize' the values for df and friends (see the sketch at the end of this page).

Q: OK, so you'd basically use "du" for a "check run" first?

A: du might not be the best choice, but basically yes. To have the system working correctly, you need to account only files belonging to the given xid.

Q: How does it work? It still needs to store the xid with each inode, right?

A: It currently uses the xid stored with a file, but it could also work without the xid tagging (if implemented differently, using current->xid instead of the xid stored with a file), as long as no other context (not even the host context) manipulates files belonging to a specific context. That's why du isn't such a smart choice, especially in a unified case.

Q: Which patch adds the xid to files?

A: The second one ... called xid file tagging; this is already included in current 2.6 exp (1.9.0preX).

Q: So the disk limit patch needs the quota patch as well?

A: Currently yes, because it 'reuses' the quota hash to store its 5 values ... but generally speaking, no.

Q: Context disk limits are only experimental?

A: No, they are in stable too, as an addon: http://www.13thfloor.at/vserver/s_addons/overview/
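As a rough illustration of how the five disk-limit values could be used to 'virtualize' df inside a context, here is a small user-space style sketch. It is an assumption about the mechanism, not the patch's actual code, and all names are invented: the reported filesystem size simply becomes the context's block/inode maximum, and the free space is whatever of that maximum is not yet accounted to the context, minus the reserved percentage.

<pre>
/* Illustrative sketch only -- invented names, not the actual kernel code. */
#include <stdio.h>

struct ctx_dlimit {                  /* minimal model of the five values        */
	unsigned long blocks_used;   /* 'current' blocks, as told at setup time */
	unsigned long blocks_max;    /* block limit for this context            */
	unsigned long inodes_used;
	unsigned long inodes_max;
	unsigned int  reserved_pct;  /* percentage reserved (e.g. for root)     */
};

/* What a df run inside the context might plausibly be shown:
 * total = the context's limit, free = limit minus what is accounted to it. */
static void virtual_df(const struct ctx_dlimit *d)
{
	unsigned long reserved = d->blocks_max * d->reserved_pct / 100;
	unsigned long bfree    = d->blocks_max > d->blocks_used
	                       ? d->blocks_max - d->blocks_used : 0;
	unsigned long bavail   = bfree > reserved ? bfree - reserved : 0;

	printf("blocks: total=%lu used=%lu free=%lu avail=%lu\n",
	       d->blocks_max, d->blocks_used, bfree, bavail);
	printf("inodes: total=%lu used=%lu free=%lu\n",
	       d->inodes_max, d->inodes_used,
	       d->inodes_max - d->inodes_used);
}

int main(void)
{
	/* Example: 1 GiB limit (in 1 KiB blocks), 5% reserved. */
	struct ctx_dlimit d = { 240000, 1048576, 9000, 65536, 5 };
	virtual_df(&d);
	return 0;
}
</pre>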