Inserting data with the APPEND hint: a performance comparison with conventional INSERT.
2012-12-20 10:36
1. Test environment
Operating system: Red Hat 5.4
Database: Oracle 11g R2
2. Comparison in NOARCHIVELOG mode:
SQL> conn jack/jack
Connected.
SQL> drop table t purge;

Table dropped.

SQL> conn /as sysdba
Connected.
SQL> archive log list;
Database log mode              No Archive Mode      -- Noarchivelog mode
Automatic archival             Disabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     5
Current log sequence           7
SQL> conn jack/jack
Connected.
SQL> create table t as select * from dba_objects;

Table created.

SQL> set autotrace traceonly;
SQL> insert into t select * from t;

72544 rows created.


Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

---------------------------------------------------------------------------------
| Id  | Operation                | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------
|   0 | INSERT STATEMENT         |      |   285K|    56M|   590   (1)| 00:00:08 |
|   1 |  LOAD TABLE CONVENTIONAL | T    |       |       |            |          |
|   2 |   TABLE ACCESS FULL      | T    |   285K|    56M|   590   (1)| 00:00:08 |
---------------------------------------------------------------------------------

Note
-----
   - dynamic sampling used for this statement (level=2)


Statistics
----------------------------------------------------------
        586  recursive calls
       9343  db block gets
       2449  consistent gets
       1033  physical reads
    8428664  redo size
        673  bytes sent via SQL*Net to client
        601  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
      72544  rows processed

SQL> set autotrace off;
SQL> drop table t purge;

Table dropped.

SQL> create table t as select * from dba_objects;

Table created.

SQL> set autotrace traceonly;
SQL> insert /*+ append */ into t select * from t;

72544 rows created.
Execution Plan
----------------------------------------------------------
ERROR:
ORA-12838: cannot read/modify an object after modifying it in parallel

SP2-0612: Error generating AUTOTRACE EXPLAIN report


Statistics
----------------------------------------------------------
        594  recursive calls
       1292  db block gets
       1218  consistent gets
       1033  physical reads
      22472  redo size
        662  bytes sent via SQL*Net to client
        615  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
      72544  rows processed
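The SP2-0612/ORA-12838 error above is expected, not a failure of the test: after a direct-path (APPEND) insert, the loaded table cannot be read or modified in the same transaction until you commit, and AUTOTRACE tries to do exactly that when it builds the explain report. A minimal sketch of the behavior, continuing in the same session (the ORA-12838 output is the standard error for this case):

```sql
SQL> insert /*+ append */ into t select * from t;

72544 rows created.

SQL> select count(*) from t;
select count(*) from t
                     *
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel

SQL> commit;

Commit complete.

SQL> select count(*) from t;   -- readable again after the commit
```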
Summary:
2.1 APPEND reduces db block gets: 9343 for the conventional insert vs 1292 with /*+ append */;
2.2 APPEND reduces consistent gets: 2449 for the conventional insert vs 1218 with /*+ append */;
2.3 APPEND cuts redo generation: redo size is 8428664 for the conventional insert vs 22472 with /*+ append */.
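For readers who want to reproduce the redo numbers without AUTOTRACE, the session's cumulative 'redo size' statistic can be read straight from v$mystat. This is a sketch, not part of the original test run, and assumes the session has SELECT privilege on v$mystat and v$statname:

```sql
-- Snapshot this session's cumulative 'redo size', run the insert,
-- then snapshot again: the delta is the redo generated by the statement.
SELECT n.name, s.value
  FROM v$mystat s
  JOIN v$statname n ON s.statistic# = n.statistic#
 WHERE n.name = 'redo size';

INSERT /*+ APPEND */ INTO t SELECT * FROM t;
COMMIT;

SELECT n.name, s.value
  FROM v$mystat s
  JOIN v$statname n ON s.statistic# = n.statistic#
 WHERE n.name = 'redo size';
```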
3. Comparison in ARCHIVELOG mode (without alter table t nologging;)
SQL> conn /as sysdba
Connected.
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.

Total System Global Area  372449280 bytes
Fixed Size                  1336624 bytes
Variable Size             146803408 bytes
Database Buffers          218103808 bytes
Redo Buffers                6205440 bytes
Database mounted.
SQL> alter database archivelog;

Database altered.

SQL> alter database open;

Database altered.

SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     5
Next log sequence to archive   7
Current log sequence           7
SQL> conn jack/jack
Connected.
SQL> drop table t purge;

Table dropped.

SQL> create table t as select * from dba_objects;

Table created.

SQL> set linesize 160;
SQL> set autotrace traceonly;
SQL> insert into t select * from t;

72544 rows created.


Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

---------------------------------------------------------------------------------
| Id  | Operation                | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------
|   0 | INSERT STATEMENT         |      |   285K|    56M|   590   (1)| 00:00:08 |
|   1 |  LOAD TABLE CONVENTIONAL | T    |       |       |            |          |
|   2 |   TABLE ACCESS FULL      | T    |   285K|    56M|   590   (1)| 00:00:08 |
---------------------------------------------------------------------------------

Note
-----
   - dynamic sampling used for this statement (level=2)


Statistics
----------------------------------------------------------
        586  recursive calls
       9341  db block gets
       2395  consistent gets
       1033  physical reads
    8428488  redo size
        679  bytes sent via SQL*Net to client
        601  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
      72544  rows processed

SQL> set autotrace off;
SQL> drop table t purge;

Table dropped.

SQL> create table t as select * from dba_objects;

Table created.
SQL> set autotrace traceonly;
SQL> insert /*+ append */ into t select * from t;

72544 rows created.


Execution Plan
----------------------------------------------------------
ERROR:
ORA-12838: cannot read/modify an object after modifying it in parallel

SP2-0612: Error generating AUTOTRACE EXPLAIN report


Statistics
----------------------------------------------------------
        594  recursive calls
       1292  db block gets
       1218  consistent gets
       1033  physical reads
    8536868  redo size
        665  bytes sent via SQL*Net to client
        615  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
      72544  rows processed
Summary:
3.1 APPEND reduces db block gets: 9341 for the conventional insert vs 1292 with APPEND;
3.2 APPEND reduces consistent gets: 2395 for the conventional insert vs 1218 with APPEND;
3.3 Redo generation barely changes: redo size is 8428488 for the conventional insert and 8536868 with APPEND.
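Point 3.3 matches the documented behavior: in ARCHIVELOG mode, a direct-path insert into a LOGGING table still generates full redo for every loaded block so the load remains recoverable from the archived logs; only combining direct path with NOLOGGING suppresses it. Whether the table is currently LOGGING can be confirmed from the dictionary (a sketch; assumes table T is in the current schema):

```sql
SELECT table_name, logging
  FROM user_tables
 WHERE table_name = 'T';
```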
4. Comparison in ARCHIVELOG mode (with alter table t nologging;)
SQL> conn /as sysdba
Connected.
SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     7
Next log sequence to archive   9
Current log sequence           9
SQL> drop table t purge;

Table dropped.

SQL> create table t as select * from dba_objects;

Table created.

SQL> alter table t nologging;

Table altered.

SQL> set autotrace traceonly;
SQL> insert into t select * from t;

72544 rows created.


Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

---------------------------------------------------------------------------------
| Id  | Operation                | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------
|   0 | INSERT STATEMENT         |      |   285K|    56M|   590   (1)| 00:00:08 |
|   1 |  LOAD TABLE CONVENTIONAL | T    |       |       |            |          |
|   2 |   TABLE ACCESS FULL      | T    |   285K|    56M|   590   (1)| 00:00:08 |
---------------------------------------------------------------------------------

Note
-----
   - dynamic sampling used for this statement (level=2)


Statistics
----------------------------------------------------------
        766  recursive calls
       9347  db block gets
       2352  consistent gets
       1033  physical reads
    8434036  redo size
        681  bytes sent via SQL*Net to client
        601  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          6  sorts (memory)
          0  sorts (disk)
      72544  rows processed

SQL> drop table t purge;

Table dropped.

SQL> set autotrace off;
SQL> create table t as select * from dba_objects;

Table created.

SQL> alter table t nologging;

Table altered.

SQL> set autotrace traceonly;
SQL> insert /*+ append */ into t select * from t;

72544 rows created.
Execution Plan
----------------------------------------------------------
ERROR:
ORA-12838: cannot read/modify an object after modifying it in parallel

SP2-0612: Error generating AUTOTRACE EXPLAIN report


Statistics
----------------------------------------------------------
        774  recursive calls
       1292  db block gets
       1237  consistent gets
       1033  physical reads
      22472  redo size
        666  bytes sent via SQL*Net to client
        615  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          6  sorts (memory)
          0  sorts (disk)
      72544  rows processed
Summary:
4.1 APPEND reduces db block gets: 9347 for the conventional insert vs 1292 with APPEND;
4.2 APPEND reduces consistent gets: 2352 for the conventional insert vs 1237 with APPEND;
4.3 APPEND cuts redo generation: redo size is 8434036 for the conventional insert vs 22472 with APPEND.
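A caveat before relying on 4.3 in production: if the database or tablespace is in FORCE LOGGING mode (typical on Data Guard primaries), the NOLOGGING attribute is overridden and the direct-path insert generates full redo regardless of the hint. A quick check (a sketch; requires access to v$database and dba_tablespaces):

```sql
SELECT force_logging FROM v$database;

SELECT tablespace_name, force_logging
  FROM dba_tablespaces;
```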
5. Conclusions
5.1 Only when inserting into a LOGGING table in ARCHIVELOG mode does the APPEND hint leave redo generation roughly unchanged; in every other case tested, APPEND cuts the redo generated dramatically.
5.2 In every case tested, APPEND reduces both db block gets and consistent gets.
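One operational note to go with conclusion 5.1: data loaded via a NOLOGGING direct-path insert cannot be rebuilt from the redo stream, so the affected datafiles should be backed up after such a load. Oracle records the exposure in v$datafile (a sketch; column names as in 11g):

```sql
-- Datafiles containing changes that media recovery cannot reproduce:
SELECT file#, unrecoverable_change#, unrecoverable_time
  FROM v$datafile
 WHERE unrecoverable_change# > 0;
```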