
Writing a Python Script on Linux to Parse fio Test Logs (Part 4)

This article introduces how to write a Python script on Linux to parse fio test logs (Part 4). Hopefully it is a useful reference; if anything is wrong or not fully considered, corrections and suggestions are welcome.

In everyday drive testing, fio is the most widely used tool in the industry. With fio you can find out a drive's bandwidth, IOPS, latency, and so on. Take, for example, the following fio log:
[Screenshot: sample fio log output]

We need to pull a few parameters out of the log and turn them into a table like this:
[Screenshot: target result table]
Below is how to parse the log with Python and build that table.
1. As before, use Python's file function open() and import four modules: sys, os, re, and argparse:

#!/usr/bin/python
import argparse
import sys
import os
import re

sys.argv[n] is the nth argument passed to the Python script. Three arguments are used here:
inputfile (sys.argv[1]): the fio log file to open
rwm (sys.argv[2]): the read/write mode, one of read, write, mix
resultfile (sys.argv[3]): the file the parsed results are written to
The os module is used to call Linux commands, and the re module is used to filter strings.
Two cases have to be handled, mixed read/write and non-mixed read/write,
because the two produce logs in different formats.
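For reference, the script is invoked with the log file, the mode, and the result-file name in that order, matching the argv indices above. Using the file names produced by the shell script at the end of this article (the device name nvme0n1 is just an example):

python xuetao.py nvme0n1_128kB_seq_WR_1job_QD64.log write nvme0n1_table
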
2. The second step is to extract the values from the log:
iops = IOPS
bw = bandwidth
avg_lat = average lat (total latency)
max_lat = maximum lat
clat99 = clat at the 99th percentile
clat999 = clat at the 99.9th percentile
clat9999 = clat at the 99.99th percentile
clat_unit = the unit of the clat values
(1) IOPS and BW are on the same line, so they can be obtained by splitting on ",". For BW, the value inside the parentheses has to be picked out and then converted to MB/s.
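For reference, a fio 3.x summary line looks roughly like the one below (the numbers are made up); the regex picks out the value in parentheses and re.sub strips the digits to recover the unit:

import re

# Illustrative fio summary fragment (numbers invented for the example)
line = "   read: IOPS=88.2k, BW=345MiB/s (361MB/s)(20.2GiB/60001msec)"
for part in line.split(","):
    if "BW" in part:
        bw = re.findall(r'[(](.*?)[)]', part)[0]    # "361MB/s"
        unit = re.sub(r'[0-9.]+', '', bw)           # "MB/s"
        value = bw.split(unit)[0].strip()           # "361"
        print(value, unit)                          # prints: 361 MB/s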

#!/usr/bin/python
import argparse
import sys
import os
import re

inputfile = sys.argv[1]    # fio log file to parse
rwm = sys.argv[2]          # read/write mode: read, write or mix
resultfile = sys.argv[3]   # result file name (".csv" is appended later)

def not_mix_fio():
    datastr = open(inputfile).read()
    data = datastr.split(",")       # comma-separated pieces, used for IOPS/BW
    datalat = datastr.split("\n")   # individual lines, used for the latency part
    iops = ""
    bw = ""
    avg_lat = ""
    max_lat = ""
    clat99 = ""
    clat999 = ""
    clat9999 = ""
    clat_unit = ""
    for i in data:
        if "IOPS" in i:
            iops = i.split("IOPS=")[1].strip()
        if "BW" in i:
            # take the value inside the parentheses, e.g. "(361MB/s)"
            temp = re.findall(r'[(](.*?)[)]', i)
            bw = temp[0]
            unit = re.sub(r'[0-9.]+', '', bw)
            bw = bw.split(unit)[0].strip()
            if unit == "kB/s" or unit == "KB/s":
                bw = float(bw) / 1000
            if unit == "b/s" or unit == "B/s":
                bw = float(bw) / 1000000
    print("iops=", iops)
    print("bw=", bw)

Running it gives:
[Screenshot: script output showing iops and bw]
(2) Extract the maximum and average lat values, normalizing the unit to ms.
Splitting the log on the newline character is the simpler approach here.
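For reference, the total-latency line of a fio log looks roughly like the one below (values made up). The script picks out max= and avg=, handles values printed with a k or M suffix, and converts usec or nsec down to msec:

# Illustrative total-latency line from a fio log (values invented)
line = "     lat (usec): min=30, max=12412, avg=724.65, stdev=220.17"

max_lat = line.split("max=")[1].split(",")[0].strip()   # "12412"
avg_lat = line.split("avg=")[1].split(",")[0].strip()   # "724.65"
if "usec" in line:                                      # usec -> msec
    max_lat = str(float(max_lat) / 1000)                # "12.412"
    avg_lat = str(float(avg_lat) / 1000)                # "0.72465"
print(avg_lat, max_lat)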

#!/usr/bin/python
import argparse
import sys
import os
import re
inputfile = sys.argv[1]
resultfile = sys.argv[3]
rwm = sys.argv[2]
def not_mix_fio():
    datastr = open(inputfile).read()
    data=datastr.split(",")
    datalat=datastr.split("\n")
    iops = ""
    bw = ""
    avg_lat = ""
    max_lat = ""
    clat99 = ""
    clat999 = ""
    clat9999 = ""
    clat_unit = ""
    for i in data:
        if "IOPS" in i:
          iops = i.split("IOPS=")[1].strip()
        if "BW" in i:
          
            temp=re.findall(r'[(](.*?)[)]', i)
            bw=temp[0]
            unit= re.sub(r'[0-9.]+', '', bw)
            bw=bw.split(unit)[0].strip()
            if unit == "kB/s" or unit == "KB/s":
               bw=float(bw)/1000  
            if unit == "b/s" or unit == "B/s":
               bw=float(bw)/1000000 
    print("iops=",iops)
    print("bw=",bw)
    for line in datalat:
        if "lat" in line and "clat" not in line and "slat" not in line and "max" in line and "avg" in line:
            if "max" in line:
                max_lat = line.split("max=")[1].split(",")[0].strip()
            if "K" in max_lat.upper():
                max_lat = str(float(max_lat[:-1])*1000)
            if "M" in max_lat.upper():
                max_lat = str(float(max_lat[:-1])*1000000)
           
            if "usec" in line:
                    max_lat = str(float(max_lat)/1000)
            if "nsec" in line:
                    max_lat = str(float(max_lat)/1000000)

            if "avg" in line:
                avg_lat = line.split("avg=")[1].split(",")[0].strip()
            if "K" in avg_lat.upper():
                avg_lat = str(float(avg_lat[:-1])*1000)
            if "M" in avg_lat.upper():
                avg_lat = str(float(avg_lat[:-1])*1000000)
           
            if "usec" in line:
                    avg_lat = str(float(avg_lat)/1000)
            if "nsec" in line:
                    avg_lat = str(float(avg_lat)/1000000)
            print("avg_lat=",avg_lat)
            print("max_lat=",max_lat)
            sys.exit(1)

The output:
[Screenshot: script output showing avg_lat and max_lat]
(3) Extract the 99th, 99.9th, and 99.99th percentile values, again converting to ms.
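For reference, the clat percentile block of a fio log looks roughly like the two lines below (values made up). The percentile values sit inside square brackets, which is why the code splits on "[" and "]"; 99.00th and 99.90th appear on one line and 99.99th on the next:

# Illustrative lines from the "clat percentiles (usec)" block (values invented)
line1 = "     | 99.00th=[ 1385], 99.50th=[ 1516], 99.90th=[ 1860], 99.95th=[ 2073],"
line2 = "     | 99.99th=[ 3490]"

clat99 = line1.split("99.00th=")[1].split(",")[0].split("[")[1].split("]")[0].strip()    # "1385"
clat999 = line1.split("99.90th=")[1].split(",")[0].split("[")[1].split("]")[0].strip()   # "1860"
clat9999 = line2.split("99.99th=")[1].split("[")[1].split("]")[0].strip()                # "3490"
print(clat99, clat999, clat9999)   # divided by 1000 afterwards, since the block header says usec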

#!/usr/bin/python
import argparse
import sys
import os
import re
inputfile = sys.argv[1]
resultfile = sys.argv[3]
rwm = sys.argv[2]
def not_mix_fio():
    datastr = open(inputfile).read()
    data=datastr.split(",")
    datalat=datastr.split("\n")
    iops = ""
    bw = ""
    avg_lat = ""
    max_lat = ""
    clat99 = ""
    clat999 = ""
    clat9999 = ""
    clat_unit = ""
    for i in data:
        if "IOPS" in i:
          iops = i.split("IOPS=")[1].strip()
        if "BW" in i:
            temp=re.findall(r'[(](.*?)[)]', i)
            bw=temp[0]
            unit= re.sub(r'[0-9.]+', '', bw)
            bw=bw.split(unit)[0].strip()
            if unit == "kB/s" or unit == "KB/s":
               bw=float(bw)/1000  
            if unit == "b/s" or unit == "B/s":
               bw=float(bw)/1000000 
    print("iops=",iops)
    print("bw=",bw)
    for line in datalat:
        if "lat" in line and "clat" not in line and "slat" not in line and "max" in line and "avg" in line:
            if "max" in line:
                max_lat = line.split("max=")[1].split(",")[0].strip()
            if "K" in max_lat.upper():
                max_lat = str(float(max_lat[:-1])*1000)
            if "M" in max_lat.upper():
                max_lat = str(float(max_lat[:-1])*1000000)
            if "usec" in line:
                    max_lat = str(float(max_lat)/1000)
            if "nsec" in line:
                    max_lat = str(float(max_lat)/1000000)

            if "avg" in line:
                avg_lat = line.split("avg=")[1].split(",")[0].strip()
            if "K" in avg_lat.upper():
                avg_lat = str(float(avg_lat[:-1])*1000)
            if "M" in avg_lat.upper():
                avg_lat = str(float(avg_lat[:-1])*1000000)
            if "usec" in line:
                    avg_lat = str(float(avg_lat)/1000)
            if "nsec" in line:
                    avg_lat = str(float(avg_lat)/1000000)
            print("avg_lat=",avg_lat)
            print("max_lat=",max_lat)
            #sys.exit(1)
        if "clat percentiles" in line and "usec" in line:
            clat_unit = "usec"
        if "clat percentiles" in line and "nsec" in line:
            clat_unit = "nsec"
        if "99.00th" in line:
            clat99 = line.split("99.00th=")[1].split(",")[0].split("[")[1].split("]")[0].strip()
            clat999 = line.split("99.90th=")[1].split(",")[0].split("[")[1].split("]")[0].strip()
        if "99.99th" in line:
            clat9999=line.split("99.99th=")[1].split("[")[1].split("]")[0].strip()

    if clat_unit == "usec":
            clat9999 = str(float(clat9999)/1000)
            clat999 = str(float(clat999) / 1000)
            clat99 = str(float(clat99) / 1000)
    if clat_unit == "nsec":
            clat9999 = str(float(clat9999)/1000000)
            clat999 = str(float(clat999) / 1000000)
            clat99 = str(float(clat99) / 1000000)
    print("clat99=",clat99)
    print("clat999=",clat999)
    print("clat9999=",clat9999)
    sys.exit()

The output:
[Screenshot: script output showing clat99, clat999, and clat9999]
(4) The next step is to write these values into a CSV, again using the open() function:

#!/usr/bin/python
import argparse
import sys
import os
import re
inputfile = sys.argv[1]
resultfile = sys.argv[3]
rwm = sys.argv[2]
def not_mix_fio():
    datastr = open(inputfile).read()
    data=datastr.split(",")
    datalat=datastr.split("\n")
    iops = ""
    bw = ""
    avg_lat = ""
    max_lat = ""
    clat99 = ""
    clat999 = ""
    clat9999 = ""
    clat_unit = ""
    for i in data:
        if "IOPS" in i:
          iops = i.split("IOPS=")[1].strip()
        if "BW" in i:
            temp=re.findall(r'[(](.*?)[)]', i)
            bw=temp[0]
            unit= re.sub(r'[0-9.]+', '', bw)
            bw=bw.split(unit)[0].strip()
            if unit == "kB/s" or unit == "KB/s":
               bw=float(bw)/1000  
            if unit == "b/s" or unit == "B/s":
               bw=float(bw)/1000000 
    print("iops=",iops)
    print("bw=",bw)
    for line in datalat:
        if "lat" in line and "clat" not in line and "slat" not in line and "max" in line and "avg" in line:
            if "max" in line:
                max_lat = line.split("max=")[1].split(",")[0].strip()
            if "K" in max_lat.upper():
                max_lat = str(float(max_lat[:-1])*1000)
            if "M" in max_lat.upper():
                max_lat = str(float(max_lat[:-1])*1000000)
            if "usec" in line:
                    max_lat = str(float(max_lat)/1000)
            if "nsec" in line:
                    max_lat = str(float(max_lat)/1000000)

            if "avg" in line:
                avg_lat = line.split("avg=")[1].split(",")[0].strip()
            if "K" in avg_lat.upper():
                avg_lat = str(float(avg_lat[:-1])*1000)
            if "M" in avg_lat.upper():
                avg_lat = str(float(avg_lat[:-1])*1000000)
            if "usec" in line:
                    avg_lat = str(float(avg_lat)/1000)
            if "nsec" in line:
                    avg_lat = str(float(avg_lat)/1000000)
            print("avg_lat=",avg_lat)
            print("max_lat=",max_lat)
            #sys.exit(1)
        if "clat percentiles" in line and "usec" in line:
            clat_unit = "usec"
        if "clat percentiles" in line and "nsec" in line:
            clat_unit = "nsec"
        if "99.00th" in line:
            clat99 = line.split("99.00th=")[1].split(",")[0].split("[")[1].split("]")[0].strip()
            clat999 = line.split("99.90th=")[1].split(",")[0].split("[")[1].split("]")[0].strip()
        if "99.99th" in line:
            clat9999=line.split("99.99th=")[1].split("[")[1].split("]")[0].strip()

    if clat_unit == "usec":
            clat9999 = str(float(clat9999)/1000)
            clat999 = str(float(clat999) / 1000)
            clat99 = str(float(clat99) / 1000)
    if clat_unit == "nsec":
            clat9999 = str(float(clat9999)/1000000)
            clat999 = str(float(clat999) / 1000000)
            clat99 = str(float(clat99) / 1000000)
    print("clat99=",clat99)
    print("clat999=",clat999)
    print("clat9999=",clat9999)
    #sys.exit()
    write_line = "filename,bandwidth(MB/s),iops,avg_latency(msec),p99_lat(msec),p999_lat(msec),p9999_lat(msec),max_latency(msec)\n"
    fd = open(resultfile + ".csv","a")
    if not os.path.getsize(resultfile + ".csv"):
        fd.write(write_line)
    fd.close()
    write_line = "%s,%s,%s,%s,%s,%s,%s,%s\n" %(inputfile.split("/")[-1],bw,iops,avg_lat,clat99,clat999,clat9999,max_lat)
    fd=open(resultfile + ".csv","a")
    fd.write(write_line)
    fd.close()
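
The header row is written only when the result CSV is still empty (the os.path.getsize check), so repeated runs keep appending one data row each under a single header. As a side note, and not part of the original script, the same append logic could also be written with Python 3's csv module, which handles the joining and quoting for you (a minimal sketch):

import csv
import os

def append_row(resultfile, header, row):
    path = resultfile + ".csv"
    need_header = not (os.path.exists(path) and os.path.getsize(path))
    with open(path, "a", newline="") as fd:
        writer = csv.writer(fd)
        if need_header:
            writer.writerow(header)   # write the column names once
        writer.writerow(row)          # append this run's values

Here header and row would be lists holding the same fields the script above builds by hand.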

The resulting table:
[Screenshot: the result CSV opened as a table]
3. Finally, add the parsing for the mixed read/write case; it is almost identical. Here I use split("write:") to separate the read part from the write part.
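For reference, in a mixed-workload log the read summary comes first and the write summary starts with "write:", so splitting the whole log text on "write:" leaves the read section in data[0] and the write section in data[1]. A minimal illustration with a made-up two-line log:

# Illustrative mixed-workload summary (values invented)
datastr = "  read: IOPS=50.1k, BW=196MiB/s (205MB/s)\n  write: IOPS=21.5k, BW=84.0MiB/s (88.1MB/s)\n"
data = datastr.split("write:")
read_section = data[0]    # everything before "write:" -> read IOPS/BW and latencies
write_section = data[1]   # everything after "write:"  -> write IOPS/BW and latencies
print(read_section)
print(write_section)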


#!/usr/bin/python
# Author: xiao xuetao
import argparse
import sys
import os
import re
inputfile = sys.argv[1]
resultfile = sys.argv[3]
rwm = sys.argv[2]
def not_mix_fio():
    datastr = open(inputfile).read()
    data=datastr.split(",")
    datalat=datastr.split("\n")
    iops = ""
    bw = ""
    avg_lat = ""
    max_lat = ""
    clat99 = ""
    clat999 = ""
    clat9999 = ""
    clat_unit = ""
    for i in data:
        if "IOPS" in i:
          iops = i.split("IOPS=")[1].strip()
        if "BW" in i:
            temp=re.findall(r'[(](.*?)[)]', i)
            bw=temp[0]
            unit= re.sub(r'[0-9.]+', '', bw)
            bw=bw.split(unit)[0].strip()
            if unit == "kB/s" or unit == "KB/s":
               bw=float(bw)/1000  
            if unit == "b/s" or unit == "B/s":
               bw=float(bw)/1000000  
    for line in datalat:
        if "lat" in line and "clat" not in line and "slat" not in line and "max" in line and "avg" in line:
            if "max" in line:
                max_lat = line.split("max=")[1].split(",")[0].strip()
            if "K" in max_lat.upper():
                max_lat = str(float(max_lat[:-1])*1000)
            if "M" in max_lat.upper():
                max_lat = str(float(max_lat[:-1])*1000000)
            if "usec" in line:
                    max_lat = str(float(max_lat)/1000)
            if "nsec" in line:
                    max_lat = str(float(max_lat)/1000000)

            if "avg" in line:
                avg_lat = line.split("avg=")[1].split(",")[0].strip()
            if "K" in avg_lat.upper():
                avg_lat = str(float(avg_lat[:-1])*1000)
            if "M" in avg_lat.upper():
                avg_lat = str(float(avg_lat[:-1])*1000000)
            if "usec" in line:
                    avg_lat = str(float(avg_lat)/1000)
            if "nsec" in line:
                    avg_lat = str(float(avg_lat)/1000000)
            
        if "clat percentiles" in line and "usec" in line:
            clat_unit = "usec"
        if "clat percentiles" in line and "nsec" in line:
            clat_unit = "nsec"
        if "99.00th" in line:
            clat99 = line.split("99.00th=")[1].split(",")[0].split("[")[1].split("]")[0].strip()
            clat999 = line.split("99.90th=")[1].split(",")[0].split("[")[1].split("]")[0].strip()
        if "99.99th" in line:
            clat9999=line.split("99.99th=")[1].split("[")[1].split("]")[0].strip()

    if clat_unit == "usec":
            clat9999 = str(float(clat9999)/1000)
            clat999 = str(float(clat999) / 1000)
            clat99 = str(float(clat99) / 1000)
    if clat_unit == "nsec":
            clat9999 = str(float(clat9999)/1000000)
            clat999 = str(float(clat999) / 1000000)
            clat99 = str(float(clat99) / 1000000)

    write_line = "filename,bandwidth(MB/S),iops,avg_latency(msec),p99_lat(msec),p999_lat(msec),p9999_lat(msec),max_latency(msec)\n"
    fd = open(resultfile + ".csv","a")
    if not os.path.getsize(resultfile + ".csv"):
        fd.write(write_line)
    fd.close()
    write_line = "%s,%s,%s,%s,%s,%s,%s,%s\n" %(inputfile.split("/")[-1],bw,iops,avg_lat,clat99,clat999,clat9999,max_lat)
    fd=open(resultfile + ".csv","a")
    fd.write(write_line)
    fd.close()

def mix_fio():
    datastr = open(inputfile).read()
    data = datastr.split("write:")
    read_data = data[0].split("\n")
    write_data = data[1].split("\n")
    read_iops = ""
    read_bw = ""
    read_avg_lat = ""
    read_max_lat = ""
    read_clat99 = ""
    read_clat999 = ""
    read_clat9999 = ""
    read_clat_unit = ""
    write_iops = ""
    write_bw = ""
    write_avg_lat = ""
    write_max_lat = ""
    write_clat99 = ""
    write_clat999 = ""
    write_clat9999 = ""
    write_clat_unit = ""
    for i in data[0].split(","):
        if "IOPS" in i:
            read_iops = i.split("IOPS=")[1].strip()
        if "BW" in i:           
            temp=re.findall(r'[(](.*?)[)]', i)
            read_bw=temp[0]
            unit= re.sub(r'[0-9.]+', '', read_bw)
            read_bw=read_bw.split(unit)[0].strip()
            if unit == "kB/s" or unit == "KB/s":
               read_bw=float(read_bw)/1000  
            if unit == "b/s" or unit == "B/s":
               read_bw=float(read_bw)/1000000  

    for line in read_data:
        if "lat" in line and "clat" not in line and "slat" not in line and "max" in line and "avg" in line:
            if "max" in line:
                read_max_lat = line.split("max=")[1].split(",")[0].strip()
            if "K" in read_max_lat.upper():
                read_max_lat = str(float(read_max_lat[:-1]) * 1000)
            if "M" in read_max_lat.upper():
                read_max_lat = str(float(read_max_lat[:-1]) * 1000000)
            
            if "usec" in line:
                read_max_lat = str(float(read_max_lat) / 1000)
            if "nsec" in line:
                read_max_lat = str(float(read_max_lat) / 1000000)

            if "avg" in line:
                read_avg_lat = line.split("avg=")[1].split(",")[0].strip()
            if "K" in read_avg_lat.upper():
                read_avg_lat = str(float(read_avg_lat[:-1]) * 1000)
            if "M" in read_avg_lat.upper():
                read_avg_lat = str(float(read_avg_lat[:-1]) * 1000000)
            
            if "usec" in line:
                read_avg_lat = str(float(read_avg_lat) / 1000)
            if "nsec" in line:
                read_avg_lat = str(float(read_avg_lat) / 1000000)
        if "clat percentiles" in line and "usec" in line:
            read_clat_unit = "usec"
        if "clat percentiles" in line and "nsec" in line:
            read_clat_unit = "nsec"
        if "99.00th" in line:
            read_clat99 = line.split("99.00th=")[1].split(",")[0].split("[")[1].split("]")[0].strip()
            read_clat999 = line.split("99.90th=")[1].split(",")[0].split("[")[1].split("]")[0].strip()
        if "99.99th" in line:
            read_clat9999 = line.split("99.99th=")[1].split("[")[1].split("]")[0].strip()

    if read_clat_unit == "usec":
        read_clat9999 = str(float(read_clat9999) / 1000)
        read_clat999 = str(float(read_clat999) / 1000)
        read_clat99 = str(float(read_clat99) / 1000)
    if read_clat_unit == "nsec":
        read_clat9999 = str(float(read_clat9999) / 1000000)
        read_clat999 = str(float(read_clat999) / 1000000)
        read_clat99 = str(float(read_clat99) / 1000000)

    for i in data[1].split(","):
        if "IOPS" in i:
            write_iops = i.split("IOPS=")[1].strip()
        if "BW" in i:
            temp=re.findall(r'[(](.*?)[)]', i)
            write_bw=temp[0]
            unit= re.sub(r'[0-9.]+', '', write_bw)
            write_bw=write_bw.split(unit)[0].strip()
            if unit == "kB/s" or unit == "KB/s":
               write_bw=float(write_bw)/1000  
            if unit == "b/s" or unit == "B/s":
               write_bw=float(write_bw)/1000000 
    for line in write_data:
        if "lat" in line and "clat" not in line and "slat" not in line and "max" in line and "avg" in line:
            if "max" in line:
                write_max_lat = line.split("max=")[1].split(",")[0].strip()
            if "K" in write_max_lat.upper():
                write_max_lat = str(float(write_max_lat[:-1]) * 1000)
            if "M" in write_max_lat.upper():
                write_max_lat = str(float(write_max_lat[:-1]) * 1000000)
            
            if "usec" in line:
                write_max_lat = str(float(write_max_lat) / 1000)
            if "nsec" in line:
                write_max_lat = str(float(write_max_lat) / 1000000)

            if "avg" in line:
                write_avg_lat = line.split("avg=")[1].split(",")[0].strip()
            if "K" in write_avg_lat.upper():
                write_avg_lat = str(float(write_avg_lat[:-1]) * 1000)
            if "M" in write_avg_lat.upper():
                write_avg_lat = str(float(write_avg_lat[:-1]) * 1000000)
            
            if "usec" in line:
                write_avg_lat = str(float(write_avg_lat) / 1000)
            if "nsec" in line:
                write_avg_lat = str(float(write_avg_lat) / 1000000)
        if "clat percentiles" in line and "usec" in line:
            write_clat_unit = "usec"
        if "clat percentiles" in line and "nsec" in line:
            write_clat_unit = "nsec"
        if "99.00th" in line:
            write_clat99 = line.split("99.00th=")[1].split(",")[0].split("[")[1].split("]")[0].strip()
            write_clat999 = line.split("99.90th=")[1].split(",")[0].split("[")[1].split("]")[0].strip()
        if "99.99th" in line:
            write_clat9999 = line.split("99.99th=")[1].split("[")[1].split("]")[0].strip()

    if write_clat_unit == "usec":
        write_clat9999 = str(float(write_clat9999) / 1000)
        write_clat999 = str(float(write_clat999) / 1000)
        write_clat99 = str(float(write_clat99) / 1000)
    if write_clat_unit == "nsec":
        write_clat9999 = str(float(write_clat9999) / 1000000)
        write_clat999 = str(float(write_clat999) / 1000000)
        write_clat99 = str(float(write_clat99) / 1000000)
    write_line = "filename,read,bandwidth(MB/S),iops,avg_latency(msec),p99_lat(msec),p999_lat(msec),p9999_lat(msec),max_latency(msec),write,bandwidth(MB/s),iops,avg_latency(msec),p99_lat(msec),p999_lat(msec),p9999_lat(msec),max_latency(msec)\n"
    fd = open(resultfile + ".csv", "a")
    if not os.path.getsize(resultfile + ".csv"):
        fd.write(write_line)
    fd.close()
    write_line = "%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s\n" % (
    inputfile.split("/")[-1], "read", read_bw, read_iops, read_avg_lat, read_clat99, read_clat999, read_clat9999, read_max_lat,
    "write", write_bw, write_iops, write_avg_lat, write_clat99, write_clat999, write_clat9999, write_max_lat)
    fd = open(resultfile + ".csv", "a")
    fd.write(write_line)
    fd.close()

if rwm == "mix":
    mix_fio()
else:
    not_mix_fio()


The resulting table:
[Screenshot: the mixed read/write result CSV opened as a table]
4. The Python part is done. Next, a small shell script to run fio, using an Intel P5510 NVMe drive as an example:

#!/bin/bash
DEV=$1
TIME=$2

if [ $# != 2 ]; then
	echo "You must input 2 args: 1st is the device name
	                             2nd is the run time of each testcase (in seconds)"
	exit 1
fi

echo "Testing device: $DEV"
echo "Run time of every testcase is ${TIME}s"
nvme format /dev/$DEV
mkdir "$DEV"
mkdir ./"$DEV"/test_log
mkdir ./"$DEV"/test_data

dmidecode | grep "System Information" -A9 | egrep "Manufacturer|Product|Serial" > ./"$DEV"/test_log/server_info.log
lscpu > ./"$DEV"/test_log/cpu_info.log
dmidecode -t memory > ./"$DEV"/test_log/memory_info.log
uname -a > ./"$DEV"/test_log/os_info.log
lspci | grep -i non > ./"$DEV"/test_log/lspci_info.log
nvme id-ctrl /dev/$DEV > ./"$DEV"/test_log/"$DEV"_id_info.log
nvme smart-log /dev/$DEV > ./"$DEV"/test_log/"$DEV"_smart_info_before.log

#************************One thread sequential write Precondition*************************#
echo "One thread sequential write predonditon"
date
#fio --name=Precondition --filename=/dev/$DEV --ioengine=libaio --direct=1 --thread=1 --numjobs=1 --iodepth=128 --rw=write --bs=128k --loops=2 --size=100% --group_reporting  >> "$DEV"_one_thread_seq_wr_precondition.log

#**************************One thread sequential Test Start*************************#
echo "One thread sequential write test"
date
for blk_size in 1024 256 128 
do
	for jobs in 1
	do
		for queue_depth in 64
		do
			job_name="${blk_size}kB_seq_WR_${jobs}job_QD${queue_depth}"
			echo "$job_name"
			fio --name=${job_name} --filename=/dev/$DEV --ioengine=libaio --direct=1 --thread=1 --numjobs=${jobs} --iodepth=${queue_depth} --rw=write --bs=${blk_size}k --runtime=$TIME --time_based=1 --size=100% --group_reporting  > "$DEV"_${blk_size}kB_seq_WR_${jobs}job_QD${queue_depth}.log
      python xuetao.py "$DEV"_${blk_size}kB_seq_WR_${jobs}job_QD${queue_depth}.log write ${DEV}_table
		done
	done
done



echo "One thread sequential read test"
date
for blk_size in 1024 256 128 
do
	for jobs in 1
	do
		for queue_depth in 64
		do
			job_name="${blk_size}kB_seq_RD_${jobs}job_QD${queue_depth}"
			echo "$job_name"
			fio --name=${job_name} --filename=/dev/$DEV --ioengine=libaio --direct=1 --thread=1 --numjobs=${jobs} --iodepth=${queue_depth} --rw=read --bs=${blk_size}k --runtime=$TIME --time_based=1 --size=100% --group_reporting  >> "$DEV"_${blk_size}kB_seq_RD_${jobs}job_QD${queue_depth}.log
      python xuetao.py "$DEV"_${blk_size}kB_seq_RD_${jobs}job_QD${queue_depth}.log read ${DEV}_table
		done
	done
done

echo "One thread sequential rw test"
date
for blk_size in 1024 256 128 
do
	for jobs in 1
	do
		for queue_depth in 64
		do
			job_name="${blk_size}kB_rw_${jobs}job_QD${queue_depth}"
			echo "$job_name"
			fio --name=${job_name} --filename=/dev/$DEV --ioengine=libaio --direct=1 --thread=1 --numjobs=${jobs} --iodepth=${queue_depth} --rw=rw --bs=${blk_size}k --runtime=$TIME --time_based=1 --size=100% --group_reporting  >> "$DEV"_${blk_size}kB_rw_${jobs}job_QD${queue_depth}.log
      python xuetao.py "$DEV"_${blk_size}kB_rw_${jobs}job_QD${queue_depth}.log mix ${DEV}_mix_table
		done
	done
done


nvme smart-log /dev/$DEV > ./"$DEV"/test_log/"$DEV"_smart_info_after.log
nvme error-log /dev/$DEV > ./"$DEV"/test_log/"$DEV"_error_log_after.log

mv "$DEV"_*.log ./"$DEV"/test_log
mv "$DEV"_*.csv ./"$DEV"/test_data

date
echo "test done"

Running it gives:
[Screenshot: the shell script running]

Mixed read/write:
[Screenshot: mixed read/write run output]

That concludes this article on writing a Python script under Linux to parse fio test logs (Part 4).
