
In UNIX computing, the system load is a measure of the amount of computational work that a computer system performs. The load average represents the average system load over a period of time. It conventionally appears in the form of three numbers which represent the system load during the last one-, five-, and fifteen-minute periods.

Unix-style load calculation
All Unix and Unix-like systems generate a dimensionless metric of three "load average" numbers in the kernel. Users can easily query the current result from a Unix shell by running the uptime command:

$ uptime
 14:34:03 up 10:43,  4 users,  load average: 0.06, 0.11, 0.09
The w and top commands show the same three load average numbers, as do a range of graphical user interface utilities.
In operating systems based on the Linux kernel, this information can be easily accessed by reading the /proc/loadavg file.
To explore this kind of information in depth: according to Linux's Filesystem Hierarchy Standard, architecture-dependent information of this kind is exposed in the file /proc/stat.
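Programmatically, the same three numbers can be retrieved without parsing /proc/loadavg at all. Below is a minimal sketch in C, assuming a system (Linux with glibc, or a BSD) that provides the getloadavg() library call:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    double loadavg[3];

    /* getloadavg() fills in up to three samples: the 1-, 5- and
       15-minute load averages. It returns the number of samples
       actually retrieved, or -1 on failure. */
    if (getloadavg(loadavg, 3) != 3) {
        fprintf(stderr, "getloadavg failed\n");
        return 1;
    }
    printf("load average: %.2f, %.2f, %.2f\n",
           loadavg[0], loadavg[1], loadavg[2]);
    return 0;
}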
An idle computer has a load number of 0 (the idle process is not counted). Each process using or waiting for CPU (the ready queue or run queue) increments the load number by 1. Each process that terminates decrements it by 1. Most UNIX systems count only processes in the running (on CPU) or runnable (waiting for CPU) states. However, Linux also includes processes in uninterruptible sleep states (usually waiting for disk activity), which can lead to markedly different results if many processes remain blocked in I/O due to a busy or stalled I/O system. This, for example, includes processes blocking due to an NFS server failure or too slow media (e.g., USB 1.x storage devices). Such circumstances can result in an elevated load average, which does not reflect an actual increase in CPU use (but still gives an idea of how long users have to wait).
Systems calculate the load average as the exponentially damped/weighted moving average of the load number. The three values of load average refer to the past one, five, and fifteen minutes of system operation.
Mathematically speaking, all three values always average all the system load since the system started up. They all decay exponentially, but at different speeds: they decay by a factor of e after 1, 5, and 15 minutes respectively. Hence, the 1-minute load average consists of 63% (more precisely: 1 - 1/e) of the load from the last minute and 37% (1/e) of the average load since startup, excluding the last minute. For the 5- and 15-minute load averages, the same 63%/37% ratio is computed over 5 and 15 minutes, respectively. Therefore, it is not technically accurate that the 1-minute load average only includes the last 60 seconds of activity, since it includes 37% of the activity from before that, but it is correct to say that it mostly reflects the last minute.
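Concretely, with a new sample n(t) of the load number taken every 5 seconds (as in the kernel code shown below) and an averaging window of m seconds (60, 300, or 900), each of the three values is an exponentially weighted moving average that can be written as

\mathrm{load}_m(t) = \mathrm{load}_m(t - 5\,\mathrm{s})\, e^{-5/m} + n(t)\,\bigl(1 - e^{-5/m}\bigr)

After twelve consecutive samples one minute has elapsed, and the weight of everything older has shrunk by a factor of (e^{-5/60})^{12} = e^{-1} \approx 0.37, which is where the 63%/37% split above comes from.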
Interpretation
For single-CPU systems that are CPU bound, one can think of load average as a measure of system utilization during the respective time period. For systems with multiple CPUs, one must divide the load by the number of processors in order to get a comparable measure.
For example, one can interpret a load average of "1.73 0.60 7.98" on a single-CPU system as:
- During the last minute, the system was overloaded by 73% on average (1.73 runnable processes, so that on average 0.73 processes had to wait for their turn on the single CPU).
- During the last 5 minutes, the CPU was idle 40% of the time, on average.
- During the last 15 minutes, the system was overloaded by 698% on average (7.98 runnable processes, so that on average 6.98 processes had to wait for their turn on the single CPU).
This means that this system (CPU, disk, memory, etc.) could have handled all the work scheduled for the last minute if it were 1.73 times as fast.
In a system with four CPUs, a load average of 3.73 would indicate that there were, on average, 3.73 processes ready to run, and each one could be scheduled onto a CPU.
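The per-processor normalization described above is easy to automate. A minimal sketch in C, assuming getloadavg() and the common sysconf(_SC_NPROCESSORS_ONLN) extension are both available:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    double loadavg[3];
    long ncpus = sysconf(_SC_NPROCESSORS_ONLN);  /* online CPUs */

    if (ncpus < 1 || getloadavg(loadavg, 3) != 3) {
        fprintf(stderr, "could not read CPU count or load\n");
        return 1;
    }
    /* A normalized value of 1.0 means "all CPUs busy on average";
       e.g. a raw load of 3.73 on 4 CPUs normalizes to ~0.93. */
    printf("normalized 1-minute load: %.2f\n", loadavg[0] / ncpus);
    return 0;
}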
On modern UNIX systems, the treatment of threading with respect to load averages varies. Some systems treat threads as processes for the purposes of load average calculation: each thread waiting to run will add 1 to the load. However, other systems, especially systems implementing so-called M:N threading, use different strategies such as counting the process exactly once for the purpose of load (regardless of the number of threads), or counting only threads currently exposed by the user-thread scheduler to the kernel, which may depend on the level of concurrency set on the process. Linux appears to count each thread separately as adding 1 to the load.
CPU load vis-à-vis CPU utilization
The comparative study of different load indices carried out by Ferrari et al. reported that load information based on CPU queue length performs much better in load balancing than CPU utilization does. The reason is probably that when a host is heavily loaded, its CPU utilization is likely to be close to 100%, at which point it can no longer reflect the exact load level, whereas queue lengths continue to reflect the amount of load directly. As an example, two systems, one with 3 and the other with 6 processes in the queue, are both very likely to have utilizations close to 100%, although they obviously differ.
Reckoning CPU load
On Linux systems, the load average is not calculated on each clock tick; instead, a counter derived from the HZ frequency setting is tested on each clock tick. HZ defines the kernel clock tick rate in hertz (ticks per second), and it defaults to 100, giving 10 ms ticks. Kernel activities use this number of ticks to time themselves. Specifically, the timer.c::calc_load() function, which calculates the load average, runs every LOAD_FREQ = (5*HZ+1) ticks, or about every five seconds:
unsigned long avenrun[3];

static inline void calc_load(unsigned long ticks)
{
    unsigned long active_tasks; /* fixed-point */
    static int count = LOAD_FREQ;

    count -= ticks;
    if (count < 0) {
        count += LOAD_FREQ;
        active_tasks = count_active_tasks();
        CALC_LOAD(avenrun[0], EXP_1, active_tasks);
        CALC_LOAD(avenrun[1], EXP_5, active_tasks);
        CALC_LOAD(avenrun[2], EXP_15, active_tasks);
    }
}
The avenrun array contains the 1-minute, 5-minute, and 15-minute averages. The CALC_LOAD macro and its associated values are defined in sched.h:
#define FSHIFT    11           /* nr of bits of precision */
#define FIXED_1   (1<<FSHIFT)  /* 1.0 as fixed-point */
#define LOAD_FREQ (5*HZ+1)     /* 5 sec intervals */
#define EXP_1     1884         /* 1/exp(5sec/1min) as fixed-point */
#define EXP_5     2014         /* 1/exp(5sec/5min) */
#define EXP_15    2037         /* 1/exp(5sec/15min) */

#define CALC_LOAD(load,exp,n) \
    load *= exp;              \
    load += n*(FIXED_1-exp);  \
    load >>= FSHIFT;
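The fixed-point constants follow directly from the exponential-decay formula given earlier: each EXP_* value is e^{-5/m} scaled by FIXED_1 = 2048 and rounded. A small userspace check (illustrative only, not kernel code) that reproduces them:

/* build: cc check.c -lm */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double fixed_1  = 2048.0;  /* 1.0 in 11-bit fixed point */
    const double interval = 5.0;     /* seconds between samples */
    const double windows[] = { 60.0, 300.0, 900.0 };

    for (int i = 0; i < 3; i++) {
        /* EXP_n = FIXED_1 / exp(interval / window), rounded */
        double e = fixed_1 * exp(-interval / windows[i]);
        printf("window %4.0fs -> %.0f\n", windows[i], e);
    }
    return 0;   /* prints 1884, 2014, 2037 */
}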
The "sampled" calculation of load averages is a somewhat common behavior; FreeBSD, too, only refreshes the value every five seconds. The interval is usually taken to not be exact so that they do not collect processes that are scheduled to fire at a certain moment.
A post on the Linux kernel mailing list argues that its +1 tick offset is insufficient to avoid Moiré artifacts from such synchronization, and suggests an interval of 4.61 seconds instead; with HZ = 100, the proposed LOAD_FREQ = (4*HZ+61) equals 461 ticks, i.e. 4.61 seconds. This change is common among Android system kernels, although the exact expression used assumes an HZ of 100.
Other system performance commands
Other commands for assessing system performance include:
- uptime – the system reliability and load average
- top – for an overall system view
- vmstat – reports information about runnable or blocked processes, memory, paging, block I/O, traps, and CPU activity
- htop – interactive process viewer
- dool (formerly dstat), atop – helps correlate all existing resource data for processes, memory, paging, block I/O, traps, and CPU activity
- iftop – interactive network traffic viewer per interface
- nethogs – interactive network traffic viewer per process
- iotop – interactive I/O viewer
- iostat – for storage I/O statistics
- netstat – for network statistics
- mpstat – for CPU statistics
- tload – load average graph for terminal
- xload – load average graph for X
- /proc/loadavg – text file containing load average
See also
- CPU usage
References
- "CPU load". Retrieved 4 October 2023.
- "/proc". Linux Filesystem Hierarchy. Retrieved 4 October 2023.
- "Miscellaneous kernel statistics in /proc/stat". Retrieved 4 October 2023.
- "Linux Tech Support: What exactly is a load average?". 23 October 2008.
- Walker, Ray (1 December 2006). "Examining Load Average". Linux Journal. Retrieved 13 March 2012.
- See http://serverfault.com/a/524818/27813
- Ferrari, Domenico; Zhou, Songnian (1988). "An Empirical Investigation of Load Indices for Load Balancing Applications". Proceedings of Performance '87, the 12th International Symposium on Computer Performance Modeling, Measurement, and Evaluation. Amsterdam: North Holland. pp. 515–528.
- "How is load average calculated on FreeBSD?". Unix & Linux Stack Exchange.
- Ripke, Klaus (2011). "Linux-Kernel Archive: LOAD_FREQ (4*HZ+61) avoids loadavg Moire". lkml.iu.edu. graph & patch
- "Patch kernel with the 4.61s load thing · Issue #2109 · AOSC-Dev/aosc-os-abbs". GitHub.
- Baker, Scott (28 September 2022). "dool - Python3 compatible clone of dstat". GitHub. Retrieved 22 November 2022. "...Dag Wieers ceased development of Dstat..."
- "Iotop(8) - Linux manual page".
External links
- Brendan Gregg (8 August 2017). "Linux Load Averages: Solving the Mystery". Retrieved 22 January 2018.
- Neil J. Gunther. "UNIX Load Average – Part 1: How It Works" (PDF). TeamQuest. Retrieved 12 August 2009.
- Andre Lewis (31 July 2009). "Understanding Linux CPU Load – when should you be worried?". Retrieved 21 July 2011. Explanation using an illustrated traffic analogy.
- Ray Walker (1 December 2006). "Examining Load Average". Linux Journal. Retrieved 21 July 2011.
- Karsten Becker. "Linux OSS load monitoring toolset". LoadAvg.