
System Tuning: Using Partitioned Tables to Speed Up Queries and Deletes

1. To remove a large set of rows from a table, the preferred statements are:
alter table t_name drop partition p_name;
alter table t_name truncate partition p_name;
Drawbacks of the DELETE approach: it consumes a large amount of system resources and does not release the space below the table's high-water mark.
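An aside not in the original post: if a conventional DELETE has to be used, the free space it leaves below the high-water mark can later be reclaimed with a segment shrink. A minimal sketch, assuming Oracle 10g+ and an ASSM tablespace:

-- reclaim space after a large DELETE; row movement must be enabled first
alter table t_name enable row movement;
alter table t_name shrink space cascade;   -- compacts rows and lowers the high-water mark
alter table t_name disable row movement;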

SQL> create user test identified by test account unlock;
User created.
SQL> grant dba to test;
Grant succeeded.
SQL> grant resource,connect to test;
Grant succeeded.
SQL> conn test/test as sysdba
Connected.
SQL> create table t as select object_id,object_name from dba_objects;
Table created.
SQL> exec dbms_stats.gather_table_stats(user,'t');

PL/SQL procedure successfully completed.

SQL> set autotrace trace exp stat;
SQL> delete from t where object_id <10000;

9708 rows deleted.

Execution Plan
----------------------------------------------------------
Plan hash value: 3335594643

---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | DELETE STATEMENT | | 9715 | 48575 | 96 (2)| 00:00:02 |
| 1 | DELETE | T | | | | |
|* 2 | TABLE ACCESS FULL| T | 9715 | 48575 | 96 (2)| 00:00:02 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("OBJECT_ID"<10000)

Statistics
----------------------------------------------------------
200 recursive calls
10106 db block gets
373 consistent gets
0 physical reads
2609544 redo size
678 bytes sent via SQL*Net to client
608 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
5 sorts (memory)
0 sorts (disk)
9708 rows processed
In this case the DELETE reads 10106 + 373 data blocks (db block gets + consistent gets) and generates about 2.6 MB of redo.
Even creating an index on the filter column does not avoid this resource consumption:
SQL> rollback;

Rollback complete.

SQL> create index ind_t on t(object_id);

Index created.
SQL> exec dbms_stats.gather_index_stats(user,'ind_t')

PL/SQL procedure successfully completed.

SQL> delete from t where object_id <10000;

9708 rows deleted.

Execution Plan
----------------------------------------------------------
Plan hash value: 3974964266

---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | DELETE STATEMENT | | 9715 | 48575 | 23 (0)| 00:00:01 |
| 1 | DELETE | T | | | | |
|* 2 | INDEX RANGE SCAN| IND_T | 9715 | 48575 | 23 (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("OBJECT_ID"<10000)

Statistics
----------------------------------------------------------
282 recursive calls
10370 db block gets
90 consistent gets
0 physical reads
2777076 redo size
680 bytes sent via SQL*Net to client
609 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
8 sorts (memory)
0 sorts (disk)
9708 rows processed
In both cases the DELETE touches well over ten thousand data blocks and generates several megabytes of redo. By contrast, TRUNCATE and DROP consume far fewer resources. To demonstrate this, build a range-partitioned copy of the table:

create table t1(object_id int,object_name varchar2(1000)) partition by range(object_id)
(partition p1 values less than(10000),
partition p2 values less than(20000),
partition p3 values less than(30000),
partition p4 values less than(40000)
,partition pm values less than(maxvalue));

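As a quick sanity check (my addition, not part of the original session), the partition layout can be confirmed from the data dictionary:

-- assumes the query is run as the owning schema (TEST)
select partition_name, high_value
  from user_tab_partitions
 where table_name = 'T1'
 order by partition_position;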
SQL> insert into t1 select * from t;

62751 rows created.

Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | 72459 | 2052K| 95 (0)| 00:00:02 |
| 1 | LOAD TABLE CONVENTIONAL | T1 | | | | |
| 2 | TABLE ACCESS FULL | T | 72459 | 2052K| 95 (0)| 00:00:02 |
---------------------------------------------------------------------------

Statistics
----------------------------------------------------------
1170 recursive calls
39860 db block gets
38041 consistent gets
0 physical reads
10866864 redo size
680 bytes sent via SQL*Net to client
602 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
62751 rows processed

SQL> commit;
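An aside not measured in the original trace: for bulk loads like this one, a direct-path insert reduces undo and, with NOLOGGING (or in noarchivelog mode), redo as well, at the cost of an exclusive table lock and loading above the high-water mark:

insert /*+ append */ into t1 select * from t;
commit;   -- a commit is required before the same session can query the table again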
Enable the SQL TRACE facility for the current session:
SQL> alter session set sql_trace=true;

Session altered.
SQL> alter table t1 truncate partition p1;

SQL> truncate table t1;

Table truncated.

SQL> insert into t1 select * from t;

62751 rows created.

SQL> commit;

Commit complete.

SQL> alter session set sql_trace=true;

Session altered.

SQL> alter table t1 drop partition p1;

Table altered.
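One caveat worth adding here (not covered in the original post): if T1 carried global indexes, dropping or truncating a partition would mark them UNUSABLE unless the indexes are maintained in the same statement:

-- only relevant when global indexes exist on T1 (none were created above)
alter table t1 drop partition p1 update global indexes;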
Disable SQL TRACE for the current session:
SQL> alter session set sql_trace=false;

Session altered.
Get the session information and pick the session to trace:
select s.USERNAME,s.SID,s.SERIAL#,s.COMMAND from v$session s where s.USERNAME='TEST';

USERNAME                              SID    SERIAL#    COMMAND
------------------------------ ---------- ---------- ----------
TEST                                   33         52          2

SQL> show user
USER is "SYS"
Start tracing the target session:
SQL> exec sys.dbms_system.set_sql_trace_in_session(33,52,true)

PL/SQL procedure successfully completed.

Wait a moment while the traced session runs its workload so that its SQL operations are captured...
Stop tracing:
SQL> exec sys.dbms_system.set_sql_trace_in_session(33,52,false);

PL/SQL procedure successfully completed.
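As an alternative to the undocumented DBMS_SYSTEM package, Oracle 10g and later expose the same capability through the supported DBMS_MONITOR package:

-- trace SID 33, serial# 52, including wait events but not bind values
exec dbms_monitor.session_trace_enable(session_id => 33, serial_num => 52, waits => true, binds => false);
-- ... let the session run ...
exec dbms_monitor.session_trace_disable(session_id => 33, serial_num => 52);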

tkprof is the executable utility that analyzes an Oracle trace file and produces a much more readable, human-friendly report.
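To find which trace file to feed to tkprof, the traced session's file name can be looked up first (a sketch, assuming Oracle 11g, where V$PROCESS exposes the TRACEFILE column):

select p.tracefile
  from v$session s
  join v$process p on p.addr = s.paddr
 where s.sid = 33;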
[oracle@haoxy trace]$ pwd
/u01/app/diag/rdbms/hxy/hxy/trace
[oracle@haoxy trace]$ tkprof hxy_ora_10283.trc tra.txt print=600 record=sql.txt sys=no
vi tra.txt

alter table t1 truncate partition p1

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.06 0 4 1 0
Execute 1 0.03 0.11 5 1 82 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.04 0.18 5 5 83 0

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 91

alter table t1 drop partition p1

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.02 0.12 0 0 1 0
Execute 1 0.04 0.11 1 1 35 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.06 0.23 1 1 36 0

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 91
This experiment shows that DDL consumes far less resource than DML: the cost of the DDL is essentially the data-dictionary updates, which stay roughly constant regardless of table size, whereas the cost of DML grows with the volume of data it touches.
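Finally, the same range partitioning also speeds up queries through partition pruning, which is what the title refers to. A minimal sketch, not run in the original session:

set autotrace trace exp
-- the plan should show PARTITION RANGE SINGLE, touching only partition P1
select count(*) from t1 where object_id < 10000;
-- a partition can also be addressed directly by name
select count(*) from t1 partition (p1);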