16 SQL Tuning Overview
2015-11-21 23:30
This chapter discusses goals for tuning, explains how to identify high-resource SQL statements and what data should be collected, provides tuning suggestions, and describes how to create SQL test cases to troubleshoot problems in SQL.

This chapter contains the following sections:
Introduction to SQL Tuning
Goals for Tuning
Identifying High-Load SQL
Automatic SQL Tuning Features
Developing Efficient SQL Statements
Building SQL Test Cases
See Also:
Oracle Database Concepts for an overview of SQL
Oracle Database 2 Day DBA to learn how to monitor the database
16.1 Introduction to SQL Tuning
SQL tuning involves the following basic steps:

Identifying high-load or top SQL statements that are responsible for a large share of the application workload and system resources, by reviewing past SQL execution history available in the system
Verifying that the execution plans produced by the query optimizer for these statements perform reasonably
Implementing corrective actions to generate better execution plans for poorly performing SQL statements
The previous steps are repeated until the system performance reaches a satisfactory level or no more statements can be tuned.
16.2 Goals for Tuning
The objective of tuning a system is either to reduce the response time for end users of the system, or to reduce the resources used to process the same work. You can accomplish both of these objectives in several ways:

Reduce the Workload
Balance the Workload
Parallelize the Workload
16.2.1 Reduce the Workload
SQL tuning commonly involves finding more efficient ways to process the same workload. It is possible to change the execution plan of the statement without altering the functionality to reduce the resource consumption. Two examples of how you can reduce resource usage are as follows:
If a commonly executed query must access a small percentage of data in the table, then the database can execute it more efficiently by using an index. By creating such an index, you reduce the amount of resources used.
If a user is looking at the first twenty rows of the 10,000 rows returned in a specific sort order, and if the query (and sort order) can be satisfied by an index, then the user does not need to access and sort the 10,000 rows to see the first 20 rows.
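As a sketch of the second case, an index on the sort column lets the database read rows already in order and stop after the first 20. The index and table names below are illustrative, not taken from this chapter:

```sql
-- Hypothetical example: an index on hire_date lets the optimizer
-- return rows in sorted order without sorting all 10,000 rows.
CREATE INDEX emp_hire_date_ix ON employees (hire_date);

SELECT *
FROM  (SELECT employee_id, last_name, hire_date
       FROM   employees
       ORDER  BY hire_date)
WHERE  ROWNUM <= 20;
```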
16.2.2 Balance the Workload
Systems often tend to have peak usage in the daytime when real users are connected to the system, and low usage in the nighttime. If you can schedule noncritical reports and batch jobs to run in the nighttime and reduce their concurrency during the daytime, then the database frees up resources for the more critical programs in the day.
16.2.3 Parallelize the Workload
Queries that access large amounts of data (typical data warehouse queries) can often run in parallel. Parallelism is extremely useful for reducing response time in a low-concurrency data warehouse. However, for OLTP environments, which tend to be high concurrency, parallelism can adversely impact other users by increasing the overall resource usage of the program.
16.3 Identifying High-Load SQL
This section describes the steps involved in identifying and gathering data on high-load SQL statements. High-load SQL statements are poorly performing, resource-intensive SQL statements that impact the performance of an Oracle database. The following tools can identify high-load SQL statements:
Automatic Database Diagnostic Monitor
Automatic SQL tuning
Automatic Workload Repository
V$SQL view
Custom Workload
SQL Trace
16.3.1 Identifying Resource-Intensive SQL
The first step in identifying resource-intensive SQL is to categorize the problem you are attempting to fix:
Is the problem specific to a single program (or small number of programs)?
Is the problem generic over the application?
16.3.1.1 Tuning a Specific Program
If you are tuning a specific program (GUI or 3GL), then identifying the SQL to examine is a simple matter of looking at the SQL executed within the program. Oracle Enterprise Manager (Enterprise Manager) provides tools for identifying resource-intensive SQL statements, generating explain plans, and evaluating SQL performance.
If it is not possible to identify the SQL (for example, the SQL is generated dynamically), then use SQL_TRACE to generate a trace file that contains the SQL executed, and then use TKPROF to generate an output file.
The SQL statements in the TKPROF output file can be ordered by various parameters, such as the execution elapsed time (exeela), which usually assists in identification by ordering the SQL statements by elapsed time (with the highest elapsed time SQL statements at the top of the file). This makes the job of identifying the poorly performing SQL easier if there are many SQL statements in the file.
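For example, a trace file can be formatted with TKPROF and sorted by elapsed execution time; the trace and output file names here are placeholders:

```shell
# Sort statements by elapsed time spent executing (exeela),
# placing the most expensive SQL at the top of the report.
tkprof orcl_ora_12345.trc report.prf sort=exeela
```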
See Also:
Chapter 21, "Using Application Tracing Tools"
Chapter 17, "Automatic SQL Tuning"
16.3.1.2 Tuning an Application / Reducing Load
If the whole application is performing poorly, or if you are attempting to reduce the overall CPU or I/O load on the database server, then identifying resource-intensive SQL involves the following steps:
Determine which period in the day you would like to examine; typically this is the application's peak processing time.
Gather operating system and Oracle Database statistics at the beginning and end of that period. The minimum of Oracle Database statistics gathered should be file I/O (V$FILESTAT), system statistics (V$SYSSTAT), and SQL statistics (V$SQLAREA, V$SQL, or V$SQLSTATS, V$SQLTEXT, V$SQL_PLAN, and V$SQL_PLAN_STATISTICS).
See Also:
Chapter 6, "Automatic Performance Diagnostics" to learn how to gather Oracle database instance performance data
"Real-Time SQL Monitoring" for information about the V$SQL_PLAN_MONITOR view
Using the data collected in step two, identify the SQL statements using the most resources. A good way to identify candidate SQL statements is to query V$SQLSTATS. V$SQLSTATS contains resource usage information for all SQL statements in the shared pool. The data in V$SQLSTATS should be ordered by resource usage. The most common resources are:
Buffer gets (V$SQLSTATS.BUFFER_GETS, for high CPU using statements)
Disk reads (V$SQLSTATS.DISK_READS, for high I/O statements)
Sorts (V$SQLSTATS.SORTS, for many sorts)
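A minimal sketch of such a query follows; V$SQLSTATS and the listed columns are standard Oracle dynamic performance view names, while the row limit of 10 is an arbitrary choice:

```sql
-- Top 10 statements in the shared pool by buffer gets.
SELECT sql_id, buffer_gets, disk_reads, sorts, executions
FROM   (SELECT sql_id, buffer_gets, disk_reads, sorts, executions
        FROM   v$sqlstats
        ORDER  BY buffer_gets DESC)
WHERE  ROWNUM <= 10;
```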
One method to identify which SQL statements are creating the highest load is to compare the resources used by a SQL statement to the total amount of that resource used in the period. For BUFFER_GETS, divide each SQL statement's BUFFER_GETS by the total number of buffer gets during the period. The total number of buffer gets in the system is available in the V$SYSSTAT table, for the statistic session logical reads. Similarly, it is possible to apportion the percentage of disk reads a statement performs out of the total disk reads performed by the system by dividing V$SQLSTATS.DISK_READS by the value for the V$SYSSTAT statistic physical reads. The SQL sections of the Automatic Workload Repository report include this data, so you do not need to perform the percentage calculations manually.
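The calculation can be sketched as follows; the 1% cutoff is an arbitrary illustration, not a recommendation from this chapter:

```sql
-- Statements responsible for more than 1% of all buffer gets,
-- using the V$SYSSTAT "session logical reads" total as the denominator.
SELECT s.sql_id,
       ROUND(100 * s.buffer_gets / t.total_gets, 2) AS pct_of_gets
FROM   v$sqlstats s,
       (SELECT value AS total_gets
        FROM   v$sysstat
        WHERE  name = 'session logical reads') t
WHERE  s.buffer_gets > t.total_gets / 100
ORDER  BY s.buffer_gets DESC;
```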
See Also:
Oracle Database Reference for information about dynamic performance views
After you have identified the candidate SQL statements, the next stage is to gather information that is necessary to examine the statements and tune them.
16.3.2 Gathering Data on the SQL Identified
If you are most concerned with CPU, then examine the top SQL statements that performed the most BUFFER_GETS during that interval. Otherwise, start with the SQL statement that performed the most DISK_READS.
16.3.2.1 Information to Gather During Tuning
The tuning process begins by determining the structure of the underlying tables and indexes. The information gathered includes the following:

Complete SQL text from V$SQLTEXT
Structure of the tables referenced in the SQL statement, usually by describing the table in SQL*Plus
Definitions of any indexes (columns, column orders), and whether the indexes are unique or non-unique
Optimizer statistics for the segments (including the number of rows in each table, selectivity of the index columns), including the date when the segments were last analyzed
Definitions of any views referred to in the SQL statement
Repeat steps two, three, and four for any tables referenced in the view definitions found in step five
Optimizer plan for the SQL statement (either from EXPLAIN PLAN, V$SQL_PLAN, or the TKPROF output)
Any previous optimizer plans for that SQL statement
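An optimizer plan can be captured with EXPLAIN PLAN and formatted with the standard DBMS_XPLAN package; the statement being explained is a placeholder:

```sql
-- Capture and display the optimizer plan for a statement.
EXPLAIN PLAN FOR
  SELECT last_name FROM employees WHERE department_id = 10;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```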
Note:
It is important to generate and review execution plans for all of the key SQL statements in your application. Doing so lets you compare the optimizer execution plan of a SQL statement when the statement performed well to the plan when the statement is not performing well. Having the comparison, along with information such as changes in data volumes, can assist in identifying the cause of performance degradation.
16.4 Automatic SQL Tuning Features
Because the manual SQL tuning process poses many challenges to the application developer, the SQL tuning process has been automated by the automatic SQL tuning features of Oracle Database. These features are designed to work equally well for OLTP and data warehouse-type applications:
ADDM
SQL Tuning Advisor
SQL Tuning Sets
SQL Access Advisor
See Also:
Chapter 17, "Automatic SQL Tuning".
16.4.1 ADDM
The Automatic Database Diagnostic Monitor (ADDM) analyzes the information collected by the AWR for possible performance problems with Oracle Database, including high-load SQL statements. See "Overview of the Automatic Database Diagnostic Monitor".
16.4.2 SQL Tuning Advisor
SQL Tuning Advisor optimizes SQL statements that have been identified as high-load SQL statements. By default, Oracle Database automatically identifies problematic SQL statements and implements tuning recommendations using SQL Tuning Advisor during system maintenance windows as an automated maintenance task, searching for ways to improve the execution plans of the high-load SQL statements. You can also choose to run SQL Tuning Advisor at any time on any given SQL workload to improve performance. See "Tuning Reactively with SQL Tuning Advisor".
16.4.3 SQL Tuning Sets
When multiple SQL statements serve as input to ADDM, SQL Tuning Advisor, or SQL Access Advisor, the database constructs and stores a SQL tuning set (STS). The STS includes the set of SQL statements along with their associated execution context and basic execution statistics. See "Managing SQL Tuning Sets".
16.4.4 SQL Access Advisor
In addition to SQL Tuning Advisor, SQL Access Advisor provides advice on materialized views, indexes, and materialized view logs. SQL Access Advisor helps you achieve performance goals by recommending the proper set of materialized views, materialized view logs, and indexes for a given workload. In general, as the number of materialized views and indexes and the space allocated to them is increased, query performance improves. SQL Access Advisor considers the trade-offs between space usage and query performance, and recommends the most cost-effective configuration of new and existing materialized views and indexes. See "Using SQL Access Advisor".
16.5 Developing Efficient SQL Statements
This section describes ways you can improve SQL statement efficiency:
Verifying Optimizer Statistics
Reviewing the Execution Plan
Restructuring the SQL Statements
Restructuring the Indexes
Modifying or Disabling Triggers and Constraints
Restructuring the Data
Maintaining Execution Plans Over Time
Visiting Data as Few Times as Possible
Note:
The guidelines described in this section are oriented to production of frequently executed SQL. Most techniques that are discouraged here can legitimately be employed in ad hoc statements or in applications run infrequently where performance is not critical.
16.5.1 Verifying Optimizer Statistics
The query optimizer uses statistics gathered on tables and indexes when determining the optimal execution plan. If these statistics have not been gathered, or if the statistics are no longer representative of the data stored within the database, then the optimizer does not have sufficient information to generate the best plan.
Things to check:
If you gather statistics for some tables in your database, then it is probably best to gather statistics for all tables. This is especially true if your application includes SQL statements that perform joins.
If the optimizer statistics in the data dictionary are no longer representative of the data in the tables and indexes, then gather new statistics. One way to check whether the dictionary statistics are stale is to compare the real cardinality (row count) of a table to the value of DBA_TABLES.NUM_ROWS. Additionally, if there is significant data skew on predicate columns, then consider using histograms.
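The staleness check can be sketched for a single table; the HR schema and employees table are illustrative names only:

```sql
-- Compare the dictionary row count with the actual row count.
SELECT t.num_rows AS dictionary_rows,
       t.last_analyzed,
       (SELECT COUNT(*) FROM hr.employees) AS actual_rows
FROM   dba_tables t
WHERE  t.owner = 'HR'
AND    t.table_name = 'EMPLOYEES';
```

A large gap between dictionary_rows and actual_rows, or an old last_analyzed date, suggests gathering fresh statistics.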
16.5.2 Reviewing the Execution Plan
When tuning (or writing) a SQL statement in an OLTP environment, the goal is to drive from the table that has the most selective filter. This means that fewer rows are passed to the next step. If the next step is a join, then fewer rows are joined. Check to see whether the access paths are optimal.
When examining the optimizer execution plan, look for the following:
The driving table has the best filter.
The join order in each step returns the fewest number of rows to the next step (that is, the join order should reflect, where possible, going to the best not-yet-used filters).
The join method is appropriate for the number of rows being returned. For example, nested loop joins through indexes may not be optimal when the statement returns many rows.
The database uses views efficiently. Look at the SELECT list to see whether access to the view is necessary.
There are no unintentional Cartesian products (even with small tables).
Each table is being accessed efficiently:
Consider the predicates in the SQL statement and the number of rows in the table. Look for suspicious activity, such as full table scans on tables with a large number of rows, which have predicates in the WHERE clause. Determine why an index is not used for such a selective predicate.
A full table scan does not mean inefficiency. It might be more efficient to perform a full table scan on a small table, or to perform a full table scan to leverage a better join method (for example, hash_join) for the number of rows returned.
If any of these conditions are not optimal, then consider restructuring the SQL statement or the indexes available on the tables.
16.5.3 Restructuring the SQL Statements
Often, rewriting an inefficient SQL statement is easier than modifying it. If you understand the purpose of a given statement, then you might be able to quickly and easily write a new statement that meets the requirement.
16.5.3.1 Compose Predicates Using AND and =

To improve SQL efficiency, use equijoins whenever possible. Statements that perform equijoins on untransformed column values are the easiest to tune.
16.5.3.2 Avoid Transformed Columns in the WHERE Clause
Use untransformed column values. For example, use:

WHERE a.order_no = b.order_no
rather than:
WHERE TO_NUMBER (SUBSTR(a.order_no, INSTR(a.order_no, '.') - 1)) = TO_NUMBER (SUBSTR(b.order_no, INSTR(b.order_no, '.') - 1))
Do not use SQL functions in predicate clauses or WHERE clauses. Any expression using a column, such as a function having the column as its argument, causes the optimizer to ignore the possibility of using an index on that column, even a unique index, unless there is a function-based index defined that the database can use.
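Where the function cannot be avoided, a function-based index restores index access. The index, table, and column names below are illustrative:

```sql
-- Without this index, the UPPER(last_name) predicate would
-- prevent use of a plain index on last_name.
CREATE INDEX emp_upper_name_ix ON employees (UPPER(last_name));

SELECT employee_id, last_name
FROM   employees
WHERE  UPPER(last_name) = 'SMITH';
```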
Avoid mixed-mode expressions, and beware of implicit type conversions. When you want to use an index on the VARCHAR2 column charcol, but the WHERE clause looks like this:

AND charcol = numexpr

where numexpr is an expression of number type (for example, 1, USERENV('SESSIONID'), numcol, numcol+0, ...), Oracle Database translates that expression into:

AND TO_NUMBER(charcol) = numexpr
Avoid the following kinds of complex expressions:

col1 = NVL(:b1, col1)
NVL(col1, -999) = ...
TO_DATE(), TO_NUMBER(), and so on
These expressions prevent the optimizer from assigning valid cardinality or selectivity estimates and can in turn affect the overall plan and the join method.
Add the predicate instead of using the NVL() technique. For example:

SELECT employee_num, full_name Name, employee_id FROM mtl_employees_current_view WHERE (employee_num = NVL (:b1,employee_num)) AND (organization_id=:1) ORDER BY employee_num;
Also:
SELECT employee_num, full_name Name, employee_id FROM mtl_employees_current_view WHERE (employee_num = :b1) AND (organization_id=:1) ORDER BY employee_num;
If a column of type NUMBER is used in a WHERE clause to filter predicates with a literal value, then use a TO_NUMBER function in the WHERE clause predicate to ensure you can use the index on the NUMBER column. For example, if numcol is a column of type NUMBER, then a WHERE clause containing numcol=TO_NUMBER('5') enables the database to use the index on numcol.
If a query joins two tables, and if the join columns have different data types (for example, NUMBER and VARCHAR2), then Oracle Database implicitly performs data type conversion. For example, if the join condition is varcol=numcol, then the database implicitly converts the condition to TO_NUMBER(varcol)=numcol. If an index exists on the varcol column, then explicitly set the type conversion to varcol=TO_CHAR(numcol), thus enabling the database to use the index.
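A sketch of the explicit conversion, using hypothetical table names:

```sql
-- varcol is an indexed VARCHAR2 column; numcol is NUMBER.
-- Converting numcol instead of varcol keeps the index on varcol usable.
SELECT a.varcol, b.numcol
FROM   taba a, tabb b
WHERE  a.varcol = TO_CHAR(b.numcol);
```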
See Also:
Chapter 14, "Using Indexes and Clusters" for more information on function-based indexes
16.5.3.3 Write Separate SQL Statements for Specific Tasks
SQL is not a procedural language. Using one piece of SQL to do many different things usually results in a less-than-optimal result for each task. If you want SQL to accomplish different things, then write various statements, rather than writing one statement to do different things depending on the parameters you give it.
Note:
Oracle Forms and Reports are powerful development tools that allow application logic to be coded using PL/SQL (triggers or program units). This helps reduce the complexity of SQL by allowing complex logic to be handled in the Forms or Reports. You can also invoke a server-side PL/SQL package that performs the few SQL statements in place of a single large complex SQL statement. Because the package is a server-side unit, there are no issues surrounding client-to-database round-trips and network traffic.
It is always better to write separate SQL statements for different tasks, but if you must use one SQL statement, then you can make a very complex statement slightly less complex by using the UNION ALL operator.
Optimization (determining the execution plan) takes place before the database knows the values substituted in the query. An execution plan cannot, therefore, depend on what those values are. For example:
SELECT info FROM tables WHERE ...
AND somecolumn BETWEEN DECODE(:loval, 'ALL', somecolumn, :loval) AND DECODE(:hival, 'ALL', somecolumn, :hival);
Written as shown, the database cannot use an index on the somecolumn column, because the expression involving that column uses the same column on both sides of the BETWEEN.
This is not a problem if there is some other highly selective, indexable condition you can use to access the driving table. Often, however, this is not the case. Frequently, you might want to use an index on a condition like that shown but need to know the values of :loval, and so on, in advance. With this information, you can rule out the ALL case, which should not use the index.
To use the index whenever real values are given for :loval and :hival (if you expect narrow ranges, even ranges where :loval often equals :hival), you can rewrite the example in the following logically equivalent form:
SELECT /* change this half of UNION ALL if other half changes */ info FROM tables WHERE ...
AND somecolumn BETWEEN :loval AND :hival AND (:hival != 'ALL' AND :loval != 'ALL')
UNION ALL SELECT /* Change this half of UNION ALL if other half changes. */ info FROM tables WHERE ...
AND (:hival = 'ALL' OR :loval = 'ALL');
If you run EXPLAIN PLAN on the new query, then you seem to get both a desirable and an undesirable execution plan. However, the first condition the database evaluates for either half of the UNION ALL is the combined condition on whether :hival and :loval are ALL. The database evaluates this condition before actually getting any rows from the execution plan for that part of the query.
When the condition comes back false for one part of the UNION ALL query, that part is not evaluated further. Only the part of the execution plan that is optimum for the values provided is actually carried out. Because the final conditions on :hival and :loval are guaranteed to be mutually exclusive, only one half of the UNION ALL actually returns rows. (The ALL in UNION ALL is logically valid because of this exclusivity. It allows the plan to be carried out without an expensive sort to rule out duplicate rows for the two halves of the query.)
16.5.4 Controlling the Access Path and Join Order with Hints
You can influence the optimizer's choices by setting the optimizer approach and goal, and by gathering representative statistics for the query optimizer. Sometimes, the application designer, who has more information about a particular application's data than is available to the optimizer, can choose a more effective way to execute a SQL statement. You can use hints in SQL statements to instruct the optimizer about how the statement should be executed.
Hints, such as /*+ FULL */, control access paths. For example:
SELECT /*+ FULL(e) */ e.last_name FROM employees e WHERE e.job_id = 'CLERK';
See Also:
Chapter 11, "The Query Optimizer" and Chapter 19, "Using Optimizer Hints"
Join order can have a significant effect on performance. The main objective of SQL tuning is to avoid performing unnecessary work to access rows that do not affect the result. This leads to three general rules:
Avoid a full-table scan if it is more efficient to get the required rows through an index.
Avoid using an index that fetches 10,000 rows from the driving table if you could instead use another index that fetches 100 rows.
Choose the join order so as to join fewer rows to tables later in the join order.
The following example shows how to tune join order effectively:
SELECT info FROM taba a, tabb b, tabc c WHERE a.acol BETWEEN 100 AND 200
AND b.bcol BETWEEN 10000 AND 20000 AND c.ccol BETWEEN 10000 AND 20000 AND a.key1 = b.key1 AND a.key2 = c.key2;
Choose the driving table and the driving index (if any).
The first three conditions in the previous example are filter conditions applying to only a single table each. The last two conditions are join conditions.
Filter conditions dominate the choice of driving table and index. In general, the driving table is the one containing the filter condition that eliminates the highest percentage of the table. Thus, because the range of 100 to 200 is narrow compared with the range of acol, but the ranges of 10000 and 20000 are relatively large, taba is the driving table, all else being equal.
With nested loop joins, the joins all happen through the join indexes, the indexes on the primary or foreign keys used to connect that table to an earlier table in the join tree. Rarely do you use the indexes on the non-join conditions, except for the driving table. Thus, after taba is chosen as the driving table, use the indexes on b.key1 and c.key2 to drive into tabb and tabc, respectively.
Choose the best join order, driving to the best unused filters earliest.
You can reduce the work of the following join by first joining to the table with the best still-unused filter. Thus, if "bcol BETWEEN ..." is more restrictive (rejects a higher percentage of the rows seen) than "ccol BETWEEN ...", then the last join becomes easier (with fewer rows) if tabb is joined before tabc.
You can use the ORDERED or STAR hint to force the join order.
See Also:
"Hints for Join Orders"
16.5.4.1 Use Caution When Managing Views
Be careful when joining views, when performing outer joins to views, and when reusing an existing view for a new purpose.

16.5.4.1.1 Use Caution When Joining Complex Views
Joins to complex views are not recommended, particularly joins from one complex view to another. Often this results in the entire view being instantiated, and then the query is run against the view data.
For example, the following statement creates a view that lists employees and departments:
CREATE OR REPLACE VIEW emp_dept AS SELECT d.department_id, d.department_name, d.location_id, e.employee_id, e.last_name, e.first_name, e.salary, e.job_id FROM departments d ,employees e WHERE e.department_id (+) = d.department_id;
The following query finds employees in a specified state:
SELECT v.last_name, v.first_name, l.state_province FROM locations l, emp_dept v WHERE l.state_province = 'California' AND v.location_id = l.location_id (+);
In the following plan table output, note that the emp_dept view is instantiated:
--------------------------------------------------------------------------------
| Operation                 |  Name    |  Rows | Bytes|  Cost | Pstart| Pstop |
--------------------------------------------------------------------------------
| SELECT STATEMENT          |          |       |      |       |       |       |
|  FILTER                   |          |       |      |       |       |       |
|   NESTED LOOPS OUTER      |          |       |      |       |       |       |
|    VIEW                   |EMP_DEPT  |       |      |       |       |       |
|     NESTED LOOPS OUTER    |          |       |      |       |       |       |
|      TABLE ACCESS FULL    |DEPARTMEN |       |      |       |       |       |
|      TABLE ACCESS BY INDEX|EMPLOYEES |       |      |       |       |       |
|       INDEX RANGE SCAN    |EMP_DEPAR |       |      |       |       |       |
|    TABLE ACCESS BY INDEX R|LOCATIONS |       |      |       |       |       |
|     INDEX UNIQUE SCAN     |LOC_ID_PK |       |      |       |       |       |
--------------------------------------------------------------------------------
16.5.4.1.2 Do Not Recycle Views
Beware of writing a view for one purpose and then using it for other purposes to which it might be ill-suited. Querying from a view requires all tables from the view to be accessed for the data to be returned. Before reusing a view, determine whether all tables
in the view need to be accessed to return the data. If not, then do not use the view. Instead, use the base table(s), or if necessary, define a new view. The goal is to refer to the minimum number of tables and views necessary to return the required data.
Consider the following example:
SELECT department_name FROM emp_dept WHERE department_id = 10;
The entire view is first instantiated by performing a join of the employees and departments tables and then aggregating the data. However, you can obtain department_name and department_id directly from the departments table. It is inefficient to obtain this information by querying the emp_dept view.
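The same result can be obtained directly from the base table:

```sql
-- Querying the base table avoids instantiating the emp_dept view.
SELECT department_name
FROM   departments
WHERE  department_id = 10;
```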
16.5.4.1.3 Use Caution When Unnesting Subqueries
Subquery unnesting merges the body of the subquery into the body of the statement that contains it, allowing the optimizer to consider them together when evaluating access paths and joins.
See Also:
Oracle Database Data Warehousing
Guide for an explanation of the dangers with subquery unnesting
16.5.4.1.4 Use Caution When Performing Outer Joins to Views
In the case of an outer join to a multi-table view, the query optimizer (in Release 8.1.6 and later) can drive from an outer join column, if an equality predicate is defined on it.
An outer join within a view is problematic because the performance implications of the outer join are not visible.
16.5.4.2 Store Intermediate Results
Intermediate, or staging, tables are quite common in relational database systems, because they temporarily store some intermediate results. In many applications they are useful, but Oracle Database requires additional resources to create them. Always consider whether the benefit they could bring is more than the cost to create them. Avoid staging tables when the information is not reused multiple times.
Some additional considerations:
Storing intermediate results in staging tables could improve application performance. In general, whenever an intermediate result is usable by multiple following queries, it is worthwhile to store it in a staging table. The benefit of not retrieving data multiple
times with a complex statement at the second usage of the intermediate result is better than the cost to materialize it.
Long and complex queries are hard to understand and optimize. Staging tables can break a complicated SQL statement into several smaller statements, and then store the result of each step.
Consider using materialized views. These are precomputed tables comprising aggregated or joined data from fact and possibly dimension tables.
See Also:
Oracle Database Data Warehousing
Guide for detailed information on using materialized views
16.5.5 Restructuring the Indexes
Often, there is a beneficial impact on performance by restructuring indexes. This can involve the following:

Remove nonselective indexes to speed the DML.
Index performance-critical access paths.
Consider reordering columns in existing concatenated indexes.
Add columns to the index to improve selectivity.
Do not use indexes as a panacea. Application developers sometimes think that performance improves when they create more indexes. If a single programmer creates an appropriate
index, then this index may improve the application's performance. However, if 50 developers each create an index, then application performance will probably be hampered.
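For instance, reordering the columns of an existing concatenated index so that the more selective column leads can be sketched as follows; all table, column, and index names are hypothetical:

```sql
-- The original index led with the low-selectivity status column.
DROP INDEX orders_status_cust_ix;

-- Leading with the selective customer_id column helps queries
-- that filter primarily on customer_id.
CREATE INDEX orders_cust_status_ix ON orders (customer_id, status);
```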
16.5.6 Modifying or Disabling Triggers and Constraints
Using triggers consumes system resources. If you use too many triggers, then performance may be adversely affected. In this case, you might need to modify or disable the triggers.
16.5.7 Restructuring the Data
After restructuring the indexes and the statement, consider restructuring the data:

Introduce derived values. Avoid GROUP BY in response-critical code.
Review your data design. Change the design of your system if it can improve performance.
Consider partitioning, if appropriate.
16.5.8 Maintaining Execution Plans Over Time
You can maintain the existing execution plan of SQL statements over time either using stored statistics or SQL plan baselines. Storing optimizer statistics for tables will apply to all SQL statements that refer to those tables. Storing an execution plan as a SQL plan baseline maintains the plan for a set of SQL statements. If both statistics and a SQL plan baseline are available for a SQL statement, then the optimizer first uses a cost-based search method to build a best-cost plan, and then tries to find a matching plan in the SQL plan baseline. If a match is found, then the optimizer proceeds using this plan. Otherwise, it evaluates the cost of each of the accepted plans in the SQL plan baseline and selects the plan with the lowest cost.
See Also:
Chapter 15, "Using SQL Plan Management"
Chapter 13, "Managing Optimizer Statistics"
16.5.9 Visiting Data as Few Times as Possible
Applications should try to access each row only once. This reduces network traffic and database load. Consider doing the following:Combine Multiple Scans Using CASE Expressions
Use DML with RETURNING Clause
Modify All the Data Needed in One Statement
16.5.9.1 Combine Multiple Scans Using CASE Expressions
Often, it is necessary to calculate different aggregates on various sets of tables. Usually, you achieve this goal with multiple scans on the table, but it is easy to calculate all the aggregates with a single scan. Eliminating n-1 scans can greatly improve performance.
You can combine multiple scans into one scan by moving the WHERE condition of each scan into a CASE expression, which filters the data for the aggregation. For each aggregation, there could be another column that retrieves the data.
The following example asks for the count of all employees who earn less than 2000, between 2000 and 4000, and more than 4000 each month. You can obtain this result by executing three separate queries:
SELECT COUNT (*) FROM employees WHERE salary < 2000;
SELECT COUNT (*) FROM employees WHERE salary BETWEEN 2000 AND 4000;
SELECT COUNT (*) FROM employees WHERE salary > 4000;
However, it is more efficient to run the entire query in a single statement. Each number is calculated as one column. The count uses a filter with the CASE expression to count only the rows where the condition is valid. For example:
SELECT COUNT (CASE WHEN salary < 2000 THEN 1 ELSE null END) count1,
       COUNT (CASE WHEN salary BETWEEN 2001 AND 4000 THEN 1 ELSE null END) count2,
       COUNT (CASE WHEN salary > 4000 THEN 1 ELSE null END) count3
  FROM employees;
This is a very simple example. The ranges could be overlapping, the functions for the aggregates could be different, and so on.
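The single-scan technique can be tried end to end with any SQL engine. The following sketch uses Python's built-in sqlite3 module and a small made-up employees table to show that one CASE-filtered scan produces the same three counts as the three separate queries would:

```python
import sqlite3

# Small illustrative data set (not Oracle's HR schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (employee_id INTEGER PRIMARY KEY, salary INTEGER)")
conn.executemany("INSERT INTO employees (salary) VALUES (?)",
                 [(1500,), (1800,), (2500,), (3900,), (4100,), (5000,)])

# One scan computes all three counts; each CASE expression filters the
# rows that its COUNT should see.
count1, count2, count3 = conn.execute("""
    SELECT COUNT(CASE WHEN salary < 2000 THEN 1 END),
           COUNT(CASE WHEN salary BETWEEN 2001 AND 4000 THEN 1 END),
           COUNT(CASE WHEN salary > 4000 THEN 1 END)
    FROM employees
""").fetchone()
print(count1, count2, count3)  # 2 2 2
```

COUNT ignores NULLs, so omitting the ELSE branch (which defaults to NULL) has the same effect as the explicit ELSE null in the text.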
16.5.9.2 Use DML with RETURNING Clause
When appropriate, use INSERT, UPDATE, or DELETE ... RETURNING to select and modify data with a single call. This technique improves performance by reducing the number of calls to the database.
See Also:
Oracle Database SQL Language Reference for syntax on the INSERT, UPDATE, and DELETE statements
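The round-trip saving from RETURNING can be sketched outside Oracle as well. SQLite also supports a RETURNING clause (from version 3.35), so the following Python sketch, with hypothetical table names, modifies a row and reads it back in one call, falling back to two calls on older SQLite builds:

```python
import sqlite3

# Hypothetical task table for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tasks (task_id INTEGER PRIMARY KEY, state TEXT)")
conn.executemany("INSERT INTO tasks (state) VALUES (?)", [("NEW",), ("NEW",)])

if sqlite3.sqlite_version_info >= (3, 35):
    # One statement: modify and read back the affected row.
    updated = conn.execute(
        "UPDATE tasks SET state = 'DONE' WHERE task_id = 1 RETURNING task_id, state"
    ).fetchall()
else:
    # Fallback for older SQLite builds: two separate calls.
    conn.execute("UPDATE tasks SET state = 'DONE' WHERE task_id = 1")
    updated = conn.execute(
        "SELECT task_id, state FROM tasks WHERE task_id = 1").fetchall()

print(updated)  # [(1, 'DONE')]
```

In a client/server database such as Oracle, collapsing the SELECT and the DML into one statement also saves a network round trip, which is where most of the benefit comes from.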
16.5.9.3 Modify All the Data Needed in One Statement
When possible, use array processing. This means that an array of bind variable values is passed to Oracle Database for repeated execution. This is appropriate for iterative processes in which multiple rows of a set are subject to the same operation. For example:
BEGIN
  FOR pos_rec IN (SELECT *
                    FROM order_positions
                   WHERE order_id = :id) LOOP
    DELETE FROM order_positions
     WHERE order_id = pos_rec.order_id
       AND order_position = pos_rec.order_position;
  END LOOP;
  DELETE FROM orders
   WHERE order_id = :id;
END;
Alternatively, you could define a cascading constraint on orders. In the previous example, one SELECT and n DELETEs are executed. When a user issues the DELETE on orders (DELETE FROM orders WHERE order_id = :id), the database automatically deletes the positions with a single DELETE statement.
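The cascading-constraint alternative can be demonstrated with any SQL engine that enforces foreign keys. The following sketch uses Python's sqlite3 module with the same illustrative orders/order_positions names; a single DELETE on the parent removes all child rows through the ON DELETE CASCADE constraint, replacing the row-by-row loop:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite disables FK enforcement by default
conn.executescript("""
CREATE TABLE orders (order_id INTEGER PRIMARY KEY);
CREATE TABLE order_positions (
    order_id INTEGER REFERENCES orders(order_id) ON DELETE CASCADE,
    order_position INTEGER
);
""")
conn.execute("INSERT INTO orders VALUES (1)")
# executemany is the client-side analogue of array processing: one call,
# many bound rows.
conn.executemany("INSERT INTO order_positions VALUES (1, ?)", [(1,), (2,), (3,)])

# One DELETE on the parent; the constraint removes the child rows.
conn.execute("DELETE FROM orders WHERE order_id = 1")
remaining = conn.execute("SELECT COUNT(*) FROM order_positions").fetchone()[0]
print(remaining)  # 0
```

In Oracle the equivalent is REFERENCES orders(order_id) ON DELETE CASCADE on the child table; no pragma is needed because foreign keys are always enforced.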
See Also:
Oracle Database Administrator's Guide or Oracle Database Heterogeneous Connectivity User's Guide to learn how to tune distributed queries
16.6 Building SQL Test Cases
For many SQL-related problems, obtaining a reproducible test case makes it easier to resolve the problem. Starting with Oracle Database 11g Release 2 (11.2), Oracle Database contains the SQL Test Case Builder, which automates the somewhat difficult and time-consuming process of gathering and reproducing as much information as possible about a problem and the environment in which it occurred.
SQL Test Case Builder captures information pertaining to a SQL-related problem, along with the exact environment under which the problem occurred, so that you can reproduce and test the problem on a separate database. After the test case is ready, you can upload
the problem to Oracle Support to enable support personnel to reproduce and troubleshoot the problem.
The information gathered by SQL Test Case Builder includes the query being executed, table and index definitions (but not the actual data), PL/SQL functions, procedures, and packages, optimizer statistics, and initialization parameter settings.
16.6.1 Creating a Test Case 创建测试案例
You can access the SQL Test Case Builder from Enterprise Manager or manually using the DBMS_SQLDIAG package.
16.6.1.1 Accessing SQL Test Case Builder from Enterprise Manager
From Enterprise Manager, the SQL Test Case Builder is accessible only when a SQL incident occurs. A SQL-related problem is referred to as a SQL incident, and each SQL incident is identified by an incident number. You can access the SQL Test Case Builder from the Support Workbench page in Enterprise Manager.
You can access the Support Workbench page in either of the following ways:
In the Database Home page of Enterprise Manager, under Diagnostic Summary, click the link to Active Incidents (indicating the number of active incidents). This opens the Support Workbench page, with the incidents listed in a table.
Click Advisor Central under Related Links to open the Advisor Central page. Next, click SQL Advisors and then Click here to go to Support Workbench to open the Support Workbench page.
From the Support Workbench page, to access the SQL Test Case Builder:
Click an incident ID to open the problem details for the particular incident.
Next, click Oracle Support in the Investigate and Resolve section.
Click Generate Additional Dumps and Test Cases.
For a particular incident, click the icon in the Go To Task column to run the SQL Test Case Builder.
The output of the SQL Test Case Builder is a SQL script that contains the commands required to re-create all the necessary objects and the environment. SQL Test Case Builder stores the file in the following location, where inc_num refers to the incident number and run_num refers to the run number:
$ADR_HOME/incident/incdir_inc_num/SQLTCB_run_num
For example, a valid output file name could be as follows:
$ORACLE_HOME/log/diag/rdbms/dbsa/dbsa/incident/incdir_2657/SQLTCB_1
16.6.1.2 Accessing SQL Test Case Builder Using DBMS_SQLDIAG
You can also invoke the SQL Test Case Builder manually, using the DBMS_SQLDIAGpackage. This package consists
of various subprograms for the SQL Test Case Builder, some of which are listed in Table
16-1.
Table 16-1 SQL Test Case Builder Procedures in DBMS_SQLDIAG
| Procedure Name | Function |
|---|---|
| EXPORT_SQL_TESTCASE | Generates a SQL test case |
| EXPORT_SQL_TESTCASE_DIR_BY_INC | Generates a SQL test case corresponding to the incident ID passed as an argument |
| EXPORT_SQL_TESTCASE_DIR_BY_TXT | Generates a SQL test case corresponding to the SQL text passed as an argument |
See Also:
Oracle Database PL/SQL Packages and Types Reference for more information about the DBMS_SQLDIAG package