Top (Parameters)
2016-01-26 21:23
Recently, while optimizing the high-cost statements and procedures on a database server, I found that one stored procedure still showed up in the Profiler trace after it had been optimized. I took the procedure's execution statement out of the Profiler trace file, opened a new query window (SPID = 144), enabled SET STATISTICS IO ON, and at the same time started a Profiler trace on the statements executed by SPID = 144.
Below is the execution result in the query window; the logical reads were only 17:
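A minimal sketch of that verification setup; dbo.usp_SomeProc is a hypothetical placeholder for the actual statement taken from the trace file:

-- Check which SPID this query window uses, so the Profiler trace can filter on it
select @@SPID;  -- here it returned 144
go

-- Enable I/O statistics for this session
set statistics io on;
go

-- dbo.usp_SomeProc is a hypothetical stand-in: run the exact statement
-- captured in the Profiler trace file here, then compare the logical reads
-- reported by STATISTICS IO with the Reads column in the trace.
exec dbo.usp_SomeProc;
go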
The script below imports the trace data with ReadTrace and then extracts the top-consuming statements from the PerfAnalysis baseline database:
--Import the trace data from an RML Cmd Prompt
--(-I input trace file, -o output directory, -S target server, -d baseline database, -E Windows authentication)
ReadTrace -I"F:\TroubleShooting\Trace\HostName_InstanceName_HighCPU33_after.trc" -o"F:\TroubleShooting\Trace\output" -S"127.0.0.1,7777" -d"PerfAnalysis" -E

USE PerfAnalysis
GO

--Inspect the imported trace
select MIN(StartTime), MAX(EndTime), COUNT(*)
from fn_trace_gettable(N'F:\TroubleShooting\Trace\HostName_InstanceName_HighCPU33_after.trc', default)
where cpu is not null

select * from ReadTrace.tblTimeIntervals
select top 10 * from ReadTrace.tblBatchPartialAggs
select top 10 * from ReadTrace.tblUniqueBatches

--Originally this returned the top N statements by total CPU, Duration, Reads, and Writes, grouped by HashID.
--DBID and LoginNameID were added to the grouping later as needed; average costs can be computed as well.
select *, row_number() over(order by sum_CPU_ms desc) as QueryNumber
from
(
    select
        --a.HashID,
        sum(CompletedEvents) as Executes,
        sum(TotalCPU) as sum_CPU_ms,
        sum(TotalCPU)/sum(CompletedEvents) as avg_CPU_ms, --add
        sum(TotalDuration)/1000 as sum_Duration_ms,
        sum(TotalDuration)/(sum(CompletedEvents)*1000) as avg_Duration_ms, --add
        sum(TotalReads) as sum_Reads,
        sum(TotalReads)/sum(CompletedEvents) as avg_Reads, --add
        sum(TotalWrites) as sum_Writes,
        sum(TotalWrites)/sum(CompletedEvents) as avg_Writes, --add
        --sum(AttentionEvents) as sum_Attentions,
        --(select StartTime from ReadTrace.tblTimeIntervals i where TimeInterval = @StartTimeInterval) as [StartTime],
        --(select EndTime from ReadTrace.tblTimeIntervals i where TimeInterval = @EndTimeInterval) as [EndTime],
        --add DatabaseName and LoginName
        (select distinct DatabaseName from fn_trace_gettable(N'F:\TroubleShooting\Trace\HostName_InstanceName_HighCPU33_after.trc', default) where DatabaseID = a.DBID) as DatabaseName,
        (select LoginName from ReadTrace.tblUniqueLoginNames where iID = a.LoginNameID) as LoginName,
        (select cast(NormText as nvarchar(4000)) from ReadTrace.tblUniqueBatches b where b.HashID = a.HashID) as [NormText],
        row_number() over(order by sum(TotalCPU) desc) as CPUDesc,
        row_number() over(order by sum(TotalCPU) asc) as CPUAsc,
        row_number() over(order by sum(TotalDuration) desc) as DurationDesc,
        row_number() over(order by sum(TotalDuration) asc) as DurationAsc,
        row_number() over(order by sum(TotalReads) desc) as ReadsDesc,
        row_number() over(order by sum(TotalReads) asc) as ReadsAsc,
        row_number() over(order by sum(TotalWrites) desc) as WritesDesc,
        row_number() over(order by sum(TotalWrites) asc) as WritesAsc
    from ReadTrace.tblBatchPartialAggs a
    --where TimeInterval between @StartTimeInterval and @EndTimeInterval
    --and a.AppNameID = isnull(@iAppNameID, a.AppNameID)
    --and a.LoginNameID = isnull(@iLoginNameID, a.LoginNameID)
    --and a.DBID = isnull(@iDBID, a.DBID)
    group by a.HashID, a.DBID, a.LoginNameID
) as Outcome
where (CPUDesc <= 20      --or CPUAsc <= @TopN
    or DurationDesc <= 20 --or DurationAsc <= @TopN
    or ReadsDesc <= 20    --or ReadsAsc <= @TopN
    or WritesDesc <= 20   --or WritesAsc <= @TopN
      )
order by sum_CPU_ms desc
option (recompile)

--Pull the raw events for one suspicious batch straight from the trace file
select top 100 EventClass, TextData, DatabaseName, DatabaseID, Duration/1000 Duration_ms, CPU CPU_ms,
       Reads, Writes, StartTime, EndTime, HostName, LoginName, ApplicationName
from fn_trace_gettable(N'F:\TroubleShooting\Trace\HostName_InstanceName_HighCPU33_after.trc', default)
where TextData like '%GAMEWEB_ADMIN_GAMESCORE_LOG%'
The code corresponds to the Top Unique Batches page of the RML report, with grouping by database and login added (the commented-out filters also support application name), plus average-cost columns. Some report components throw errors when exporting to Excel, Word, or PDF, so these statements can be used to pull the cost list directly from the database instead.
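One caveat with the average columns: if sum(CompletedEvents) can ever be 0 for a group (for example, an interval containing only partial events), the averages divide by zero. A defensive variant of the aggregation, as a sketch; the NULLIF guard is my addition, not part of the original report logic:

use PerfAnalysis
go

-- Sketch: same average-cost calculation with a NULLIF guard;
-- a NULL average then means "no completed executions in this group".
select
    a.HashID,
    sum(a.CompletedEvents) as Executes,
    sum(a.TotalCPU) / nullif(sum(a.CompletedEvents), 0) as avg_CPU_ms,
    sum(a.TotalDuration) / (nullif(sum(a.CompletedEvents), 0) * 1000) as avg_Duration_ms,
    sum(a.TotalReads) / nullif(sum(a.CompletedEvents), 0) as avg_Reads,
    sum(a.TotalWrites) / nullif(sum(a.CompletedEvents), 0) as avg_Writes
from ReadTrace.tblBatchPartialAggs a
group by a.HashID
order by avg_CPU_ms desc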