Tuesday, November 20, 2012

Setting limits with ulimit

By Sandra Henry-Stocker | November 18, 2012, 4:45 PM

Administering Unix servers can be a challenge, especially when the systems you manage are heavily used and performance problems reduce availability. Fortunately, you can put limits on certain resources to help ensure that the most important processes on your servers keep running and that competing processes don't consume far more resources than is good for the overall system. The ulimit command can keep disaster at bay, but you need to anticipate where limits will make sense and where they will cause problems.

It may not happen all that often, but a single user who starts too many processes can make a system unusable for everyone else. A fork bomb -- a denial-of-service attack in which a process continually replicates itself until available resources are depleted -- is the worst case of this. However, even friendly users can use more resources than is good for a system -- often without intending to. At the same time, legitimate processes can sometimes fail when they run up against limits designed for average users. In that case, you need to make sure those processes get beefed-up allocations of system resources that allow them to run properly, without making the same resources available to everyone.

To see the limits associated with your login, use the command ulimit -a. If you're using a regular user account, you will likely see something like this:

$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 32767
max locked memory       (kbytes, -l) 32
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 50
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

One thing you might notice right off the bat is that you can't create core dumps, because your max core file size is 0. Yes, that means nothing: no data, no core dump. If a process you are running aborts, no core file is going to be dropped into your home directory. As long as the core file size is set to zero, core dumps are not allowed.
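If you do need a core file for debugging a crashing program, you can raise that limit for your current shell session. A minimal sketch (the new limit applies only to this shell and to processes started from it, and raising it succeeds only as far as your hard limit allows):

$ ulimit -c              # check the current core file size limit (in blocks)
0
$ ulimit -c unlimited    # allow core dumps of any size in this session
$ ulimit -c
unlimited

Once you log out, the setting is gone; the next login starts over with the defaults.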
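The "max user processes" value (-u) shown in the listing above is the main defense against the fork bomb described earlier. As a rough illustration -- on a test box, not a busy server -- you can watch the limit do its job by capping your own shell and then trying to exceed the cap; the exact error text varies by shell and system:

$ ulimit -u 50                               # cap this shell and its children at 50 processes
$ for i in {1..100}; do sleep 60 & done      # try to start 100 background processes
bash: fork: retry: Resource temporarily unavailable

Note that ulimit -u with a value sets both the soft and hard limits by default, so you won't be able to raise the cap again in that same shell.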
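Per-session ulimit settings also vanish at logout, so they aren't how you'd give a particular account the beefed-up allocations mentioned earlier on a lasting basis. On Linux systems that use PAM, entries in /etc/security/limits.conf are applied by pam_limits each time a user logs in. A sketch, assuming a PAM-based Linux system; the account and group names here are only examples:

# /etc/security/limits.conf -- per-login limits applied by pam_limits
# <domain>   <type>  <item>   <value>
oracle       soft    nofile   4096     # soft limit the session starts with
oracle       hard    nofile   10240    # ceiling the user may raise the soft limit to
@students    hard    nproc    50       # cap processes for everyone in group students

The soft limit is what a process actually runs with; the hard limit is the ceiling an unprivileged user can raise the soft limit to with ulimit.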