iostat Result Analysis

[kefu@SZ-8 linux]$ iostat -x -k
Linux 2.6.18-128.el5_cyou_1.0 (SZ-8.30)    09/08/2011

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          16.58    0.00    2.79    0.46    0.00   80.16

Device:  rrqm/s  wrqm/s    r/s    w/s    rkB/s    wkB/s  avgrq-sz  avgqu-sz  await  svctm  %util
sda        0.06   29.28   0.22  37.14    10.21   265.68     14.77      0.02   0.51   0.15   0.55
sda1       0.00    0.00   0.00   0.00     0.00     0.00     10.79      0.00   2.66   2.43   0.00
sda2       0.01    0.78   0.10   0.36     0.81     4.58     23.51      0.00   1.21   0.84   0.04
sda3       0.03   15.17   0.09  35.39     8.98   202.24     11.91      0.01   0.26   0.12   0.44
sda4       0.00    0.00   0.00   0.00     0.00     0.00      2.00      0.00  33.33  33.33   0.00
sda5       0.01    1.59   0.03   0.51     0.34     8.40     32.20      0.00   1.19   0.58   0.03
sda6       0.00    0.00   0.00   0.12     0.00     0.48      8.18      0.00   5.02   4.53   0.05
sda7       0.00    0.00   0.00   0.00     0.00     0.00     45.00      0.00   5.52   3.04   0.00
sda8       0.00    0.00   0.00   0.00     0.00     0.00     40.88      0.00   7.62   6.03   0.00
sda9       0.00    0.00   0.00   0.00     0.00     0.00     39.71      0.00   7.37   5.83   0.00
sda10      0.00    0.00   0.00   0.00     0.00     0.00     37.57      0.00   5.70   3.54   0.00
sda11      0.00   11.74   0.01   0.76     0.08    49.97    131.48      0.01  10.74   0.57   0.04
sdb        0.01    3.91  20.24  20.21  1262.95  1853.94    154.09      0.52  12.84   1.97   7.95

Field definitions:

rrqm/s: number of read requests merged per second, i.e. delta(rmerge)/s
wrqm/s: number of write requests merged per second, i.e. delta(wmerge)/s
r/s: number of read I/O requests completed per second, i.e. delta(rio)/s
w/s: number of write I/O requests completed per second, i.e. delta(wio)/s
rsec/s: number of sectors read per second, i.e. delta(rsect)/s
wsec/s: number of sectors written per second, i.e. delta(wsect)/s
rkB/s: kilobytes read per second; half of rsec/s, because each sector is 512 bytes
wkB/s: kilobytes written per second; half of wsec/s
avgrq-sz: average request size (in sectors) per device I/O operation, i.e. delta(rsect+wsect)/delta(rio+wio)
avgqu-sz: average I/O queue length, i.e. delta(aveq)/s/1000 (because aveq is in milliseconds)
await: average wait time (in milliseconds) per device I/O operation.
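The formulas above can be sketched directly in code. This is a minimal sketch, not iostat's actual implementation: `compute_metrics` is a hypothetical helper, and the counter names (rio, rmerge, rsect, ruse, use, aveq, etc.) simply follow the delta formulas given in the field definitions. On Linux these counters come from the per-device line in /proc/diskstats.

```python
# Hypothetical helper: recompute iostat -x metrics from two snapshots of the
# kernel's per-disk counters, using the delta formulas in the text.
# Counter names follow the text: rio/wio = completed reads/writes,
# rmerge/wmerge = merged requests, rsect/wsect = sectors transferred,
# ruse/wuse = ms spent on reads/writes, use = ms the device was busy,
# aveq = weighted ms of queue time.

def compute_metrics(prev, cur, seconds):
    d = {k: cur[k] - prev[k] for k in prev}   # deltas over the interval
    ios = d["rio"] + d["wio"]                 # total completed I/Os
    return {
        "rrqm/s":   d["rmerge"] / seconds,
        "wrqm/s":   d["wmerge"] / seconds,
        "r/s":      d["rio"] / seconds,
        "w/s":      d["wio"] / seconds,
        "rkB/s":    d["rsect"] / 2 / seconds,           # sectors are 512 B
        "wkB/s":    d["wsect"] / 2 / seconds,
        "avgrq-sz": (d["rsect"] + d["wsect"]) / ios if ios else 0.0,
        "avgqu-sz": d["aveq"] / seconds / 1000,         # aveq is in ms
        "await":    (d["ruse"] + d["wuse"]) / ios if ios else 0.0,
        "svctm":    d["use"] / ios if ios else 0.0,
        "%util":    d["use"] / seconds / 1000 * 100,    # use is in ms
    }
```

For example, a device that completed 100 reads and 100 writes of 16 sectors total each over a 10-second interval, spending 500 ms busy, would show r/s = 10, avgrq-sz = 16 and %util = 5%.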
That is, delta(ruse+wuse)/delta(rio+wio).
svctm: average service time (in milliseconds) per device I/O operation, i.e. delta(use)/delta(rio+wio)
%util: percentage of the time in a second spent on I/O operations, i.e. how much of each second the I/O queue is non-empty: delta(use)/s/1000 (because use is in milliseconds)

If %util is close to 100%, there are too many I/O requests, the I/O system is saturated, and the disk may be a bottleneck.

The most important parameters:

%util: percentage of the time in a second spent on I/O (how much of each second the I/O queue is non-empty)
svctm: average service time per device I/O operation
await: average wait time per device I/O operation
avgqu-sz: average I/O queue length

As a rule of thumb, %util above 70% already indicates heavy I/O pressure, and reads will spend more time waiting. You can cross-check with vmstat: look at the b column (number of processes blocked waiting for resources) and the wa column (percentage of CPU time spent waiting for I/O; above 30% indicates high I/O pressure).

The size of await generally depends on the service time (svctm), the length of the I/O queue, and the pattern in which I/O requests are issued. If svctm is close to await, I/O has almost no queueing delay; if await is much larger than svctm, the I/O queue is too long and the application's response time will suffer.
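The rules of thumb above can be encoded as a small checker. This is only a sketch: `diagnose` is a hypothetical helper, and the factor of 3 used to decide that await is "much larger" than svctm is an assumed threshold, not one the text specifies.

```python
# Hypothetical helper encoding the interpretation rules above:
# %util above ~70% signals heavy I/O pressure, and await much larger
# than svctm signals a long I/O queue.

def diagnose(util_pct, await_ms, svctm_ms):
    notes = []
    if util_pct >= 70:
        notes.append("heavy I/O pressure (%util > 70%); check vmstat b/wa columns")
    if svctm_ms > 0 and await_ms > 3 * svctm_ms:   # "much larger": assumed 3x
        notes.append("I/O queue too long (await >> svctm); application response will lag")
    elif abs(await_ms - svctm_ms) <= 0.2 * max(svctm_ms, 1e-9):
        notes.append("await ~= svctm: almost no queueing delay")
    return notes or ["I/O looks healthy"]
```

Applied to the sdb row above (%util 7.95, await 12.84, svctm 1.97), the checker flags a long queue even though the device is far from saturated: requests queue up behind each other despite the disk servicing each one quickly.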
A concrete metaphor (a supermarket checkout):

r/s + w/s: like the total number of customers being served
avgqu-sz: like the average number of people standing in line per unit time
svctm: like the cashier's speed at serving each customer
await: like the average time each person spends waiting
avgrq-sz: like the average amount each person buys
%util: like the fraction of time someone is standing at the checkout counter

svctm is generally smaller than await (because the wait time of queued requests is counted repeatedly). The size of svctm mainly reflects disk performance, though CPU and memory load also affect it, and too many requests can indirectly increase svctm. The size of await depends on the service time (svctm), the length of the I/O queue, and the pattern in which I/O requests are issued. If await is much larger than svctm, the I/O queue is too long and the application responds slowly; if the response time exceeds what users can tolerate, consider replacing the disks with faster ones, tuning the kernel's elevator (I/O scheduler) algorithm, optimizing the application, or upgrading the CPU.

The queue length (avgqu-sz) can also serve as an indicator of system I/O load, but since avgqu-sz is averaged over the sampling interval, it cannot reflect an instantaneous I/O flood.
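The averaging caveat can be made concrete with a toy calculation using the avgqu-sz formula from the field definitions (delta(aveq)/s/1000). The scenario numbers are assumptions chosen for illustration.

```python
# Sketch of the averaging caveat: the same total queue time (aveq, in ms)
# yields the same avgqu-sz over one long interval whether the load was
# steady or a short burst, so the burst is invisible at coarse sampling.

def avgqu_sz(delta_aveq_ms, seconds):
    return delta_aveq_ms / seconds / 1000

# Steady load: queue depth 1 for all 10 s  -> aveq grows by 10,000 ms.
steady = avgqu_sz(10_000, 10)
# Burst: queue depth 10 for 1 s, idle 9 s  -> aveq also grows by 10,000 ms.
burst = avgqu_sz(10_000, 10)
assert steady == burst == 1.0   # identical averages; the depth-10 burst is hidden

# Sampling the burst second alone would reveal it:
assert avgqu_sz(10_000, 1) == 10.0
```

This is why shortening the iostat sampling interval (e.g. `iostat -x -k 1`) matters when you suspect bursty I/O.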