Give a user an inch, and he wants a mile. If you change a database query so that it runs in one minute instead of five, the user will want it to work in 30 seconds. No matter how fast a database runs, there is always the need to make it go faster.
Ultimately, this task falls to the DBA. A DBA really has two levels of responsibility: actual and perceived.
Actual responsibility means the tasks for which a DBA is genuinely responsible: keeping the database available for day-to-day business needs, creating new user accounts, monitoring the overall health of the database, and so on. Perceived responsibility
means the responsibility incurred when there is any problem with the database, or even a conflict in the corporate IS structure. A DBA is often asked why the database is down when a link has broken in the WAN, or why the database is performing slowly
when a poorly written application is deployed into a production environment.
Because all database problems are perceived to be the responsibility of the DBA, it falls to him, whether he likes it or not, to validate the claims or dispel the rumors. The DBA must have a solid foundation of knowledge on which to base his decisions.
In many larger IS departments, the DBA may not be responsible for performance tuning. In others, the DBA may be responsible only for database, but not application, performance tuning. At some sites, the DBA is responsible for all performance
tuning functions of the database.
This chapter deals with the art of performance tuning.
When you are called on to optimize or tune a system, it is of paramount importance that you distinguish between the two levels of performance tuning: applications tuning and database tuning. They are distinct areas of expertise and are often handled by
different people. The DBA should have at least an overview of the importance and functions of each type of tuning.
At the base of everything is the operating system, which drives the physical functionality, such as how to access the physical disk devices. On top of this level rests the RDBMS, which interacts with the operating system to store information
physically. Applications communicate with the RDBMS to perform business tasks.
Applications tuning deals with how the various applications (forms, reports, and so on) are put together to interact with the database. Previous chapters discussed how a database is little more than a series of physical data files. Essentially,
an application is nothing more than a program that issues calls to the database, which in turn are interpreted as physical reads and writes from the physical data files. Applications tuning means controlling the frequency and amount of data that the
application requests from or sends to the database.
Here are some general guidelines for tuning applications:
These are only guidelines for applications tuning. Each site has its own specific issues that shape the problems that occur in its applications. More often than not, it is the duty of the developers to tune and modify their own programs
without the involvement of the DBA. Because of perceived responsibility, however, the DBA must work with the applications development staff to resolve these problems.
Whereas applications tuning addresses how a task is accomplished, tuning at the database level is more of a nuts-and-bolts affair. Performance tuning at the applications level relies on a methodical approach to isolating potential areas to improve.
Tuning at the database level, however, is more hit-and-miss. It concentrates on things such as enlarging database buffers and caches by increasing INIT.ORA parameters, or balancing database files to achieve optimum throughput.
Unlike applications tuning, which can be done by an applications group or the DBA depending on the environment, database tuning is the almost exclusive province of the DBA. Only in rare cases where there are multiple DBA groups, one of which specializes
in performance tuning, does database tuning fall outside the domain of the DBA.
At the database level, there are three kinds of tuning: memory tuning, I/O tuning, and contention tuning.
Each kind has a distinct set of areas that the DBA must examine. Memory tuning deals with optimizing the numerous caches, buffers, and shared pools that reside in memory and compose the core memory structures for the Oracle RDBMS. I/O tuning is
concerned with maximizing the speed and efficiency with which the RDBMS accesses the physical data files that make up its basic storage units. Contention tuning seeks to resolve problems in which the database fights against itself for database resources.
There are only four basic steps involved in database tuning. They hold true for all three types of tuning:
As with applications tuning, the more proactively the process is done, the more effective it is. The process is seldom effective when it is done on the fly or without the proper amount of research.
Tuning at the operating system level is beyond the scope of this chapter. This task falls to the system administrator; only in rare cases does it fall to the DBA. However, it is often the role of the DBA to offer suggestions. Some issues to consider are
In tuning a database, the first and most crucial step is gathering statistics on current database performance. These statistics provide a benchmark of how the database is currently performing and enable the DBA to gauge progress by measuring improvement.
Use the Oracle Server*Manager to view current parameter settings for an Oracle RDBMS instance. The show sga command shows the current size and makeup of the SGA. You can also display the INIT.ORA parameters with the show parameter command. To display
only a particular parameter, add it to the command. For example,
% svrmgrl
SVRMGR> connect internal
Connected.
SVRMGR> show parameter block
All the database parameters are shown, even ones that have not been explicitly set in the INIT.ORA parameter file. Parameters that the DBA has not set are shown with their default values. By spooling this list to a data file, the DBA can get an accurate
snapshot of a database's settings.
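A minimal Server Manager session along these lines captures the full list; the spool file name here is arbitrary:

% svrmgrl
SVRMGR> connect internal
Connected.
SVRMGR> spool /tmp/init_params.lst
SVRMGR> show parameter
SVRMGR> spool off

The spooled file can then be kept with the instance's other records and compared against later snapshots.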
To determine what needs to be fixed in an Oracle RDBMS instance, you must first determine what is broken. In some cases, performance problems occur sporadically; usually, however, they follow a specific pattern. Do they occur around lunchtime? At
night? Early in the morning? One of the keys to successful performance tuning is being able to identify when the problem occurs.
Oracle provides tools that enable you to examine in detail what the Oracle RDBMS was doing during a specific period of time. They are the begin statistics utility (utlbstat) and the end statistics utility (utlestat). These scripts enable you to take a
snapshot of how the instance was performing during an interval of time. They use the Oracle dynamic performance (V$) tables to gather information.
To use utlbstat and utlestat, the database must have been started with the value of TIMED_STATISTICS in the INIT.ORA parameter file set to TRUE. Oracle does not collect some of the information required for the report if this parameter is not set to
TRUE. Setting TIMED_STATISTICS to TRUE, however, causes the database instance to incur overhead. The amount is small, only about 4 to 8 percent in quantitative terms, and it is necessary for taking an accurate snapshot of database performance. Many
DBAs set this parameter to TRUE only when they gather statistics.
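The setting itself is a single INIT.ORA line; for example:

# INIT.ORA: enable timed statistics before taking a snapshot
# (set back to false and restart once statistics gathering is complete)
timed_statistics = true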
Once you have set the required parameters, the database has run for a sufficient period of time, and you have identified the window, you take the snapshot by using utlbstat. To execute either script, you must have the ability to connect internal to the
database. Running utlbstat tells the RDBMS instance to begin gathering statistics until told otherwise. It is executed as follows:
% svrmgrl
SVRMGR> @$ORACLE_HOME/rdbms/admin/utlbstat
From the moment when this script is executed, the Oracle RDBMS instance gathers performance statistics. It continues to do so until you run the utlestat script, which stops gathering performance statistics. It is important that the database remain
active and not be shut down while utlbstat is running.
% svrmgrl
SVRMGR> @$ORACLE_HOME/rdbms/admin/utlestat
When you run utlestat, the database creates a report called REPORT.TXT in the current directory, which contains the statistical information gathered. Each report contains the following information:
A sample report called REPORT.TXT is included on the CD-ROM and shows what a report produced by utlestat might look like.
Generating the report is simple; interpreting it is another matter entirely. The rest of this chapter looks at what this information means. The report itself gives some brief hints. When in doubt, always remember to keep hit rates high and wait times
low.
Performance tuning does not always have to happen at the global, database level. In fact, most tuning should take place at much lower levels, where the performance impact is more easily measured. A fundamental truth of database tuning and
optimization is that performance tuning is not sorcery or magic. Optimizing a database will not make a poorly tuned application run faster; the reverse is also true, though less common. It is important, therefore, to examine how the database handles processing at the
application, or SQL, level.
To do this, Oracle provides a tool in the form of the EXPLAIN PLAN, which enables the DBA to pass a SQL statement through the Oracle optimizer and learn how the statement will be executed by the database (the execution plan). That way, it is
possible to learn whether the database is performing as expected; for example, whether it uses an index on a table instead of scanning the entire table.
Several factors can affect the results returned by an EXPLAIN PLAN. They include
It is important to understand that the results of an EXPLAIN PLAN are, therefore, by no means fixed and final. The DBA must be aware of changes made to database objects, such as adding new indexes, and of how fast the tables are growing.
The Oracle RDBMS implements the EXPLAIN PLAN by storing information about how a query executes in a table within the user's schema. The table must exist for the EXPLAIN PLAN to work. To create the table, the user must execute the following script. Of
course, he must have the CREATE TABLE privilege and either the RESOURCE role or a quota on his default tablespace.
% svrmgrl
SVRMGR> connect scott/tiger
Connected.
SVRMGR> @$ORACLE_HOME/rdbms/admin/utlxplan.sql
Statement processed.
Once the table has been created, an EXPLAIN PLAN can be generated from a query by prefacing the query with the command to perform an EXPLAIN PLAN. The following script shows how to format a query for an EXPLAIN PLAN:
CONNECT /
EXPLAIN PLAN
   SET STATEMENT_ID = 'QUERY1'
   INTO PLAN_TABLE
   FOR
SELECT O.ORDER_DATE, O.ORDERNO, O.PARTNO, P.PART_DESC, O.QTY
FROM ORDER O, PART P
WHERE O.PARTNO = P.PARTNO
Note the SET STATEMENT_ID and INTO clauses of the EXPLAIN PLAN. The value of STATEMENT_ID is used to make this execution of the EXPLAIN PLAN unique within the table; it can be virtually any string up to 30 characters in length. Specifying a table in
the INTO clause, on the other hand, tells the EXPLAIN PLAN where to place the information about the query execution. In the previous example, the execution of the query is identified as QUERY1 and has its information stored in the table PLAN_TABLE.
Now that the EXPLAIN PLAN has loaded the table with information, there is the obvious question of how to retrieve and interpret the information provided. Oracle provides a script in the Oracle7 Server Utilities Guide that displays information in
a tree-like fashion. It is
SELECT LPAD(' ', 2*(LEVEL-1)) || operation || ' ' || options, object_name "QUERY PLAN"
FROM plan_table
START WITH id = 0 AND statement_id = 'QUERY1'
CONNECT BY PRIOR id = parent_id
/
By running a SQL query through the EXPLAIN PLAN, a pseudo-graph similar to the following is produced:
QUERY PLAN
------------------------------------------------------------------------------
SORT ORDER BY
  NESTED LOOPS
    FILTER
      NESTED LOOPS OUTER
        TABLE ACCESS FULL HEADER
        TABLE ACCESS BY ROWID DETAIL
          INDEX RANGE SCAN DETAIL_PK
        INDEX RANGE SCAN DETAIL_PK
      TABLE ACCESS FULL HEADER
      TABLE ACCESS BY ROWID DETAIL
        INDEX RANGE SCAN DETAIL_PK
      INDEX RANGE SCAN DETAIL_PK
    NESTED LOOPS OUTER
      TABLE ACCESS FULL HEADER
      TABLE ACCESS BY ROWID DETAIL
        INDEX RANGE SCAN DETAIL_PK
      INDEX RANGE SCAN DETAIL_PK
      TABLE ACCESS FULL HEADER
      TABLE ACCESS BY ROWID DETAIL
        INDEX RANGE SCAN DETAIL_PK
      INDEX RANGE SCAN DETAIL_PK
    TABLE ACCESS BY ROWID DETAIL
      INDEX RANGE SCAN DETAIL_PK
    INDEX RANGE SCAN DETAIL_PK
    FILTER
      TABLE ACCESS FULL HEADER
      TABLE ACCESS BY ROWID DETAIL
        INDEX RANGE SCAN DETAIL_PK
      INDEX RANGE SCAN DETAIL_PK
      TABLE ACCESS FULL HEADER
      TABLE ACCESS BY ROWID DETAIL
        INDEX RANGE SCAN DETAIL_PK
      INDEX RANGE SCAN DETAIL_PK
    NESTED LOOPS OUTER
      TABLE ACCESS BY ROWID DETAIL
        INDEX RANGE SCAN DETAIL_PK
      INDEX RANGE SCAN DETAIL_PK
      TABLE ACCESS FULL HEADER
      TABLE ACCESS BY ROWID DETAIL
        INDEX RANGE SCAN DETAIL_PK
      INDEX RANGE SCAN DETAIL_PK
When you interpret the output, it is important to understand that all operations, as reported by the EXPLAIN PLAN, are basically operation/option combinations. There is no way to discuss all these combinations or the possible interpretations of all the
EXPLAIN PLAN scenarios. As with many aspects of the IS industry, especially relational databases, the only true teacher is experience. However, here are some of the more common operation/option pairs that an EXPLAIN PLAN returns:
FILTER              Eliminates rows from a table by conditions specified in the WHERE clause of a SQL statement
INDEX/RANGE SCAN    Accesses information in the table via a non-unique index (specified in the object_name column)
INDEX/UNIQUE        Accesses information in the table via a unique or primary key index (specified in the object_name column)
MERGE/JOIN          Combines two sorted lists of data into a single, sorted list; used on multi-table queries
SORT/GROUP BY       Sorts table data as specified in a GROUP BY clause of the SQL statement
SORT/JOIN           Performs a sort on the data from the tables before a MERGE JOIN operation
SORT/ORDER BY       Sorts table data as specified in an ORDER BY clause of a SQL statement
SORT/UNIQUE         Performs a sort on table data being returned and eliminates duplicate rows
TABLE ACCESS/FULL   Performs a full scan of the database table to locate and return required data
TABLE ACCESS/ROWID  Locates a row in a database table by using its unique ROWID
VIEW                Returns information from a database view
The EXPLAIN PLAN is a powerful tool for software developers because it enables them to ensure that their queries are properly tuned. Of course, changes made to database objects can adversely affect the results of an EXPLAIN PLAN, but it remains useful in
determining where the performance drains on an application will occur.
Oracle SQL*Trace and EXPLAIN PLAN are similar in that both are used for performance tuning at the application level and both show the manner in which the Oracle RDBMS executes a query. Unlike the EXPLAIN PLAN, which simply shows how the
database optimizer chooses to execute a query, SQL*Trace reveals the quantitative numbers behind the SQL execution: in addition to an execution plan, it reports factors such as CPU and disk resource usage. This is often considered a lower-level view
of how a database query is performing, for it shows factors at both the operating system and RDBMS levels.
To use SQL*Trace, you must first set some parameters in the INIT.ORA parameter file:
MAX_DUMP_FILE_SIZE  The maximum size of an Oracle-generated trace file, expressed in operating system blocks (which may differ in size from database blocks)
SQL_TRACE           When set to TRUE, causes a trace file to be written for every user who connects to the database; because of disk space requirements and database overhead, it should be used judiciously
TIMED_STATISTICS    When set to TRUE, causes the database to gather timing statistics, at the usual overhead of 4 to 8 percent
USER_DUMP_DEST      The directory path where trace files are written
Once you have set the INIT.ORA parameters, you can invoke the SQL*Trace utility manually. If the SQL_TRACE parameter is set to TRUE, it is not necessary to invoke SQL*Trace manually because a trace file is written automatically for every session; however, it
is more common to call the facility manually, using either SQL or PL/SQL.
Use SQL when there is a specific query to be analyzed. For example,
% sqlplus
SQL> ALTER SESSION SET SQL_TRACE = TRUE;
SQL> @/tmp/enter_your_query.sql
SQL> ALTER SESSION SET SQL_TRACE = FALSE;
SQL> EXIT
You can either type in the query at the SQL prompt or source it in from an external file that contains the query.
In many cases, especially through applications such as SQL*Forms, it is necessary to invoke the trace facility by using PL/SQL. This is especially helpful when you are dealing with a third-party application for which the SQL syntax is not readily
obvious. To invoke SQL*Trace, use the following PL/SQL statement:
BEGIN
   DBMS_SESSION.SET_SQL_TRACE (TRUE);
   /* PL/SQL code goes here */
As with SQL*Plus, the trace gathers information until the session disconnects or is deactivated.
   /* PL/SQL code goes here */
   DBMS_SESSION.SET_SQL_TRACE (FALSE);
END;
After the trace file has been generated, it must be converted into a readable format. Oracle provides the TKPROF utility to accomplish this task. Once the trace file has been located, run TKPROF against it to produce readable output. This
information is statistical and shows how queries perform at the database and operating system levels. The report produced by TKPROF contains CPU usage, disk utilization, and the count of rows returned by the query (or queries) captured in the trace
file. You can also have TKPROF return EXPLAIN PLAN information for each query in the trace. TKPROF is invoked as follows:
% tkprof ora_4952.trc ora_4952.log
This statement takes the trace output from the ORA_4952.TRC SQL*Trace file and generates its output in the file named ORA_4952.LOG. This particular statement does not generate an EXPLAIN PLAN for any of the queries contained in the trace file.
Supplemental options enable you to control, to a certain extent, the information that is produced. They are
EXPLAIN  Enables you to specify a username and password that will generate an EXPLAIN PLAN for each query TKPROF analyzes
INSERT   Specifies where to dump both the SQL statements in the trace file and the data contained in the insert statements
PRINT    Designates the number of queries in the trace file to examine; especially useful for trace files that contain many SQL statements
RECORD   Enables you to specify an output file that will contain all the statements in the trace file
SORT     Enables you to control the order in which the analyzed queries are displayed
SYS      Indicates whether to include queries run against the SYS tables (the data dictionary) in the trace output
TABLE    Specifies the schema.tablename to use when generating a report with the EXPLAIN option
When you run the trace file through TKPROF, it generates a report. For example,
************************************************************
select o.ordid, p.partid, o.qty, p.cost, (o.qty * p.cost)
from part p, order o
where o.partid = p.partid

call     count    cpu  elapsed  disk  query  current  rows
-------  -----  -----  -------  ----  -----  -------  ----
Parse        1   0.02     0.02     0      0        0     0
Execute      1   0.00     0.00     0      0        0     0
Fetch        4   0.03     0.03     1     20       10    50
-------  -----  -----  -------  ----  -----  -------  ----
total        6   0.05     0.05     1     20       10    50

Misses in library cache during parse: 0
Misses in library cache during execute: 1
Optimizer hint: CHOOSE
Parsing user id: 22 (MERLIN)
************************************************************
As with interpreting the utlbstat/utlestat report and the EXPLAIN PLAN, interpreting the results produced by TKPROF and SQL*Trace is more art than science. The following guidelines are helpful:
Another useful tool for database tuning is the dynamic performance tables, also called the V$ tables (which are really views, despite the name). The V$ views are views on the Oracle X$ tables, which are SGA-held memory structures created by the database
at startup. These tables, and their views, are updated in real time as the database runs, and they give the DBA a good view of the current status of the database. Several third-party applications use the V$ views to access statistical or performance
monitoring data, and the views are used by the monitor component of Oracle Server*Manager. After database creation, the V$ views can be accessed only by the user SYS, who can make grants on them to other users.
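A sketch of such a grant follows; the user name merlin and the SYS password are illustrative only. In Oracle7, the grants are made on the underlying V_$ views, for which the familiar V$ names are synonyms:

% sqlplus
SQL> connect sys/change_on_install
Connected.
SQL> grant select on v_$sysstat to merlin;
Grant succeeded.
SQL> grant select on v_$filestat to merlin;
Grant succeeded.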
The v$ views are useful in many applications, such as backup and recovery, administrative monitoring, and performance tuning. Here are some of the more commonly used views as they apply to performance tuning:
V$DB_OBJECT_CACHE  Contains information about all the database objects currently in the library cache of the SGA
V$FILESTAT         Contains the number of physical reads and writes taking place on each data file associated with the database
V$LATCH            Contains a current statistical picture of all the latches within the database
V$LATCHHOLDER      Contains the name of the current holder of each latch specified in V$LATCH
V$LATCHNAME        Contains the name of each latch in the V$LATCH view
V$LIBRARYCACHE     Contains statistics that represent the overall performance of the library cache area of the SGA
V$ROLLSTAT         Contains statistics on all the online rollback segments in the database
V$ROWCACHE         Contains statistical information about the performance of the data dictionary cache of the SGA
V$SESSION_WAIT     Provides information on what sessions are waiting for, when one session is waiting for another to complete a task or event
V$SESSTAT          Contains current statistical information for each active database session
V$SESS_IO          Contains current logical and physical I/O information for each active database session
V$SGASTAT          Summarizes statistical information on the overall SGA
V$SQLAREA          Contains statistical information on the cursor cache of the SGA
V$STATNAME         Contains the names of all the statistics in V$SESSTAT
V$WAITSTAT         Contains information on block contention; maintained only when TIMED_STATISTICS is set to TRUE
Here is a query that uses the V$ views. It displays the name and current value of each database statistic. It is useful for quickly seeing how the database is performing.
select n.statistic#, n.name, s.value
from v$statname n, v$sysstat s
where n.statistic# = s.statistic#
and value > 0
/
There are many more V$ views that are not mentioned here. Many gather I/O, cache, and buffering statistics that are invaluable for performance tuning. Consult the Oracle7 Server Administrator's Guide and the Oracle7 Applications Developer's
Guide for more information on these views.
SQL*DBA, the predecessor to Oracle Server*Manager, is being made obsolete in mid-1996. It contains several scripts that the monitor utility uses. Located in the $ORACLE_HOME/rdbms/sqldba directory, they give you insight into how to use some of
the V$ views. They also show how many of the V$ views relate to one another.
Problems with database applications often involve memory and disk drives. When the CPU runs faster than the throughput of the input/output devices (such as the disk drives), the system is called I/O bound. When the throughput of the input/output
devices is faster than the CPU, the system is called CPU bound.
Most systems are I/O bound, so it is easy to dismiss poor performance as a by-product of poor throughput. Many DBAs, therefore, perform load balancing and contention analysis to optimize performance. What is often forgotten, however, is that poor
memory management can aggravate many performance problems. Poor use of the available memory can itself degrade throughput; for example, sorts that could run in memory spill to disk, or the operating system
pages and swaps processes to disk.
It is important, therefore, to understand the single most memory-intensive part of the Oracle RDBMS: the system global area (SGA). By definition, the SGA is simply a combination of buffers and caches, stored in virtual memory, that enables the
database to function. To be efficient, Oracle performs many of its operations in memory, writing to disk only in bulk to minimize the performance cost. This is good because, from the standpoint of software development, accessing the disk drive
is "expensive" in terms of performance cost, whereas running a process in memory is "inexpensive."
As Figure 15.1 shows, the SGA is composed of three primary units: the database buffer cache, the redo log buffer, and the shared pool.
Figure 15.1. Architecture of the Oracle SGA.
It is important to ensure that the SGA is large enough to fit comfortably into the system's existing memory. It should also be small enough to coexist with other applications and not allocate more memory than it requires. It is equally important to make
certain that enough shared memory and semaphores are available to support the database instance. Like all other aspects of performance tuning, memory management means balancing available resources against needed resources and reaching an
effective compromise.
To tune the SGA of a database instance, you must first determine the current size of the SGA. There are several ways to do this, including extracting the information from the DBA views or V$ tables, or calculating it from values in the INIT.ORA
parameter file. The simplest method, however, is to issue the show sga command from Oracle Server*Manager. For example,
% svrmgrl
SVRMGR> connect internal
Connected.
SVRMGR> show sga
Total System Global Area      95243632 bytes
Fixed Size                       46384 bytes
Variable Size                 70588480 bytes
Database Buffers              24576000 bytes
Redo Buffers                     32768 bytes
The size of the SGA remains constant as long as the database is running, although the DBA can change it when the database is restarted.
The sum of the parts equals the whole, and sizing the SGA is no exception. Changing the size of the SGA requires modifying the values of certain INIT.ORA parameters, which in turn alters the overall size of the SGA. The following parameters control the size
of the SGA:
DB_BLOCK_BUFFERS  The number of database blocks (of size DB_BLOCK_SIZE) allocated to the database buffer cache
LOG_BUFFER        The size (in bytes) of the redo log buffer
SHARED_POOL_SIZE  The size (in bytes) of the shared SQL area
Once the size of an SGA buffer is set, the SGA remains that size as long as the database continues to run. If the values of these three parameters are changed, the changes take effect when the database is restarted. You should make a
backup of the INIT.ORA parameter file before making considerable changes. For example:
#
# SGA Size Parameters
#
# each database block is 8192 (8K) bytes
db_block_size = 8192
# buffer cache is 25M (8192 bytes x 3200 blocks)
db_block_buffers = 3200
# redo log buffer is 32K
log_buffer = 32768
# shared pool is 50M
shared_pool_size = 52428800
You should also ensure that these values are always set appropriately high, but not inordinately high. The ramifications of changing the size of the SGA are discussed later in this chapter.
The "building blocks" of any database are the size of its blocks. You set this value with the DB_BLOCK_SIZE parameter, and its range is operating system-specificapproximately 512 bytes to 16M.
This value determines how large the data "pieces" transferred to and from the instance's SGA are during an operation. The more data the database can transfer in a single operation, the fewer operations it has to perform; consequently, the
overall performance of the instance improves. The value of DB_BLOCK_SIZE should be a multiple of the operating system block size. On some systems, the default operating system block size is sufficient; on others, the best speed comes from twice that value.
The best way to determine this is to generate a test instance, use different block sizes, and conduct benchmark testing. Always keep in mind the limits imposed by the operating system when you do this. As with all other areas of performance tuning, a
trade-off occurs: setting the block size too high can actually degrade performance.
Once a database is created, the only way to change the value of DB_BLOCK_SIZE is to recreate the database. This makes sense: whenever a database instance is created, Oracle physically allocates several database files of a given size in which it will
store various forms of information (the data dictionary, tables, indexes, and so on). These files are created with blocks of size DB_BLOCK_SIZE and are mapped so that the database can recognize each one. If the value of DB_BLOCK_SIZE is changed, the
blocks no longer begin and end where the database expects, and the RDBMS cannot correctly manipulate data if it cannot recognize the blocks.
To change the size of the blocks:
This process is time-consuming and should be done only if the performance increase will be significant.
The database buffer cache is the memory buffer within the SGA that holds copies of data blocks read from the physical database files (and often subsequently changed). There are as many buffers in this cache as the value of DB_BLOCK_BUFFERS. They include
Dirty buffers   Buffers that have been changed but not yet written back to disk
Pinned buffers  Buffers that are currently being accessed
Free buffers    Buffers that are available for use
Because it is desirable to have Oracle work within the SGA memory area as much as possible, the hit rate within the database buffer cache should be very high (greater than 70 percent). To determine the rate, execute the following query against the
statistics for the database buffer cache:
select name, value
from v$sysstat
where name in ('consistent gets', 'db block gets', 'physical reads')
/
The query returns three values, which you can plug into the following mathematical formula to obtain the current database buffer cache hit ratio:
hit ratio = 1 - (physical reads / (db block gets + consistent gets) )
If the hit ratio returned is less than 70 percent, you should seriously consider raising the number of blocks allocated to the database buffer cache of the SGA. To do that, increase the value of the INIT.ORA parameter DB_BLOCK_BUFFERS.
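The formula can also be folded directly into a single query; a sketch (the column alias is arbitrary):

select 1 - (pr.value / (db.value + cg.value)) "Buffer Cache Hit Ratio"
from v$sysstat pr, v$sysstat db, v$sysstat cg
where pr.name = 'physical reads'
and   db.name = 'db block gets'
and   cg.name = 'consistent gets'
/

As a worked example, 1,000 physical reads against 9,000 db block gets and 21,000 consistent gets works out to 1 - (1000 / 30000), a hit ratio of roughly 97 percent.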
The SGA shared pool area is composed primarily of two entities: the shared SQL cache and the data dictionary cache. Each serves a distinct function. The shared SQL cache retains previously executed queries, procedures, and other SQL-based
operations in the SGA. Thus, frequently executed SQL statements reside in memory and do not have to be reparsed by the database before each execution. The data dictionary cache holds the results of calls made to the data dictionary, which occur before every
single action in the database. In previous versions of Oracle, the data dictionary cache had individually tunable parameters, but those areas are now encompassed by the shared pool.
As with the database buffer cache, the efficiency of the shared pool cache is determined by a hit ratio that indicates how often the Oracle RDBMS can process information in memory and how often it must retrieve information from disk. The database should
work as much from memory as possible without going to disk. Although that is not always practical, you should examine the various caches to ensure that their values are in acceptable ranges.
The following script compares the number of pins (how often an item was executed) to the number of reloads (how often a miss occurred):
select sum(pins) pins, sum(reloads) reloads
from v$librarycache
/
Use the following formula to determine the ratio of reloads to pins. If the result is 1 or greater, you need to tune the shared SQL area by increasing the size of the shared pool.
ratio = (reloads / pins) * 100
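The check can be collapsed into a single statement; a sketch:

select (sum(reloads) / sum(pins)) * 100 "Reload Ratio"
from v$librarycache
/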
Similarly, the data dictionary cache determines how often the RDBMS goes to disk when it accesses information on users, privileges, tables, indexes, and so on. Most database systems reuse the same database objects repeatedly. Therefore, if a high degree
of disk access takes place for operations that run the same programs, the information is likely being aged out too often. The same rule holds true for the other shared pool areas.
The following code segment enables the DBA or the user to retrieve the number of gets (information requests on an object) and getmisses (requests that were not found in the cache):
select sum(gets) gets, sum(getmisses) getmisses
from v$rowcache
/
The formula for the ratio of gets to getmisses is
ratio = ( getmisses / gets) * 100
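Again, the ratio can be computed in one statement; a sketch:

select (sum(getmisses) / sum(gets)) * 100 "Getmiss Ratio"
from v$rowcache
/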
If the ratio is greater than 10 percent, you should consider increasing the value of the SHARED_POOL_SIZE parameter. It is usually a good idea to have a large shared pool; in a few cases, however, an oversized pool can adversely affect the database.
During a single day, a database instance performs many operations that involve sorting. They include everything from an explicit command to sort (such as the SQL ORDER BY or GROUP BY clause) to an implicit one (such as creating an index on a
database table). Working in memory is faster than working on disk, and sorting is no exception.
Whenever an operation is undertaken that requires sorting, Oracle attempts to do it in the memory of the user process that requests the sort. Sorts are constrained by the following INIT.ORA parameters:
SORT_AREA_SIZE           The maximum amount of space (in bytes) that a user process has available to perform a sort
SORT_AREA_RETAINED_SIZE  The minimum amount of sort space (in bytes) that a user process will ever have available
Exceeding SORT_AREA_SIZE causes a sort to disk to occur.
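Both parameters are ordinary INIT.ORA entries; the values below are illustrative only, not recommendations:

# INIT.ORA: per-process sort space (example values)
sort_area_size = 262144            # 256K available to each user process for sorting
sort_area_retained_size = 65536    # 64K retained by the process after the sort completes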
To determine whether sorts are performing efficiently, you must first determine the level, memory or disk, at which the sorts occur. For example,
select name, value
from v$sysstat
where name like 'sort%'
/
produces output similar to
NAME                                               VALUE
-------------------------------------------------- -----
sorts (memory)                                       370
sorts (disk)                                           7
sorts (rows)                                        1997
Interpreting the output from the sort statistics is not as simple as calculating a hit ratio. Obviously, the lower the number of sorts to disk, the better. However, many sorts to disk do not necessarily mean that the database
is sorting suboptimally. You should consider whether you can safely raise the value of SORT_AREA_SIZE without causing an adverse impact on the database. Likewise, some of these sorts might come from batch jobs that process an inordinate amount of data;
because of the volume of data processed, it is impossible to increase SORT_AREA_SIZE enough to eliminate those sorts to disk.
When you deal with sorts, it is as important to know why certain results occur as it is to know that they occur. Watched diligently and with knowledge of current operations, the sorts on a database are low-maintenance items.
It is relatively easy to change the size of the buffers in the SGA, but you must consider the ramifications of making changes.
The most obvious benefit of increasing the size of the SGA is that the larger the SGA, the more information can be processed in memory. By enabling the database to have most of its data cached, physical disk I/O is minimized, which results in a system
that is constrained more by the speed of the processor than by the speed of the I/O devices. The law of diminishing marginal utility applies, however. Depending on the size of the database and the amount of activity being performed, increasing the size of
the SGA buffers ceases to have any positive effect after a certain point. Once this occurs, the database begins to hoard memory that could be better used by the operating system or other applications.
Another concern in tuning a database SGA is failing to consider that some parameters allocate memory for every connection instead of only once. Consider, for example, the scenario in which the DBA wants to increase the size of the sort area. After some
investigation, he concludes that having a 10M sort area would greatly improve performance because many sorts are going to disk. The system also experiences a high level of user activity: 500 users. Instead of creating a single 10M sort area,
the DBA has actually created 500 10M sort areas. The total memory cost is approximately 5G, more RAM than most systems have.
Don't forget to factor in user processes and other non-Oracle applications that might reside on the system; DBAs often think that they are the only people on a hardware platform. The Oracle Installation and Configuration Guide has charts that
enable you to calculate memory requirements based on the products being used. It is far better to make adjustments before you create an instance; otherwise, you must expend the time and frustration of tracking down SGA settings that artificially induce
paging and swapping on the system.
Consider the following guidelines when you adjust the SGA and its associated buffers:
Database instances require more disk space as they grow larger. The same is true with memory. A growing database will eventually outstrip the memory available on the system. Don't make the mistake of ignoring possible memory problems when you do a
performance analysis.
DBAs often ignore the physical aspects of a system. With all the logical structures that a DBA must deal with on a day-to-day basis, it is easy to forget about the physical elements that support them, such as SCSI cards, bandwidth, or an I/O bus.
Whenever you fail to consider the physical elements, contention can occur within the database.
Like spoiled children, database elements fight over resources. This is the most basic definition of contention. When contention happens, the database must wait for an event to occur. This event, such as writing a block of data to a physical device
or locking a row inside a database table, causes an appreciable slowdown in database performance. It is the responsibility of the DBA and others, such as the system administrator, to work with the database to minimize contention. When contention is
minimized, the database performs at consistent, efficient speeds.
Contention among physical storage devices is the most common type of contention. Each disk drive has heads that travel back and forth across the magnetic medium (the disk) to read and write information. A database is made up of several physical data
files, many of which reside on the same physical disk, so it is easy to see how contention can occur. If the database requests access to several data files on the same disk, the result is contention as the drive head moves across the disk to the first
location and accesses the file, moves to the second location and accesses the file, and so on. Fortunately, you can minimize I/O contention.
It is important to understand types of database files and the types of operations performed on them. Figure 15.2 compares the types of files and operations.
In a perfect world, you could place each database file on a separate disk. On smaller database instances, this might even be possible, but they are the exception. In practice, most databases have multiple files and a limited number of disks on which
to place them; generally, just the amount of space that is needed. So it becomes important to work with the system administrator to determine the optimal layout of the physical database files.
As Figure 15.2 shows, each type of database file has its own access pattern. Redo logs, for example, handle straightforward, sequential output, whereas database files handle intensive read and write operations. You should put the following files on a
physical disk separate from the other database files:
The files are separated so that access to these areas is not in contention with access to other files, such as database files and the control file. It is important to optimize the physical layout so that contention is minimized between the ARCH, DBWR,
and LGWR processes. Because of the information generated for each transaction, it is usually best to place rollback segments on their own disk.
One of the most important things that you can do to achieve I/O balancing is to put table and index data on separate physical devices, as shown in Figure 15.3. If table and index data exist on the same disk drive, any type of table access that uses
indexes doubles the I/O operations on a single disk device. Take, for example, a SELECT operation. The index must be accessed to determine the fastest way to access table information, and then the table itself is accessed. This causes all the operations to
wait on the access to the table or the index, and it drastically cuts throughput on all operations, especially those with many users who access data in the same tables simultaneously. By splitting the tables and the indexes across disk drives, the disk
drive heads can work in tandem with the database to quickly access table data and return it to users.
Figure 15.3. Table and index splitting.
Splitting tables and indexes is the first step in setting up efficient throughput and minimizing I/O contention, but it is hardly enough to ensure optimal performance. You must pinpoint which database files are accessed most heavily and spread them
across disks to balance the load. By issuing the following query as the SYS user or as another user who has access to the V$ views, you can determine the current I/O load on the database files. It is important to take these readings several times over a
span of time to ensure accurate statistics. For example,
SQL> select d.name, f.phyrds, f.phywrts
  2  from v$datafile d, v$filestat f
  3  where d.file# = f.file#
  4  /

NAME                                         PHYRDS    PHYWRTS
---------------------------------------- ---------- ----------
/u04/oradata/norm/system01.dbf               383336      23257
/u20/oradata/norm/rbs01.dbf                   13740     332604
/u05/oradata/norm/temp01.dbf                   3037     147963
/u08/oradata/norm/tools01.dbf                  5338        243
/u05/oradata/norm/users01.dbf                     0          0
/u03/oradata/norm/aold01.dbf                 133879      63698
/u06/oradata/norm/aolx01.dbf                  59108      91757
/u06/oradata/norm/apd01.dbf                   68733       8119
/u09/oradata/norm/apx01.dbf                   34358      29941
/u06/oradata/norm/ard01.dbf                  107335      21018
/u09/oradata/norm/arx01.dbf                   28967      13770
Unfortunately, it is difficult to know what the load on the disks will be before the database is implemented. For this reason, once you determine that a significant degree of contention is occurring on a single disk, move the affected database file. Use the
alter database rename file command on a mounted, but not open, database instance. That way, you can ensure that the load on the disk drives stays optimal.
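A sketch of the sequence, assuming the data file has already been copied to its new location at the operating system level (the paths are illustrative):

% svrmgrl
SVRMGR> connect internal
Connected.
SVRMGR> startup mount
SVRMGR> alter database rename file '/u08/oradata/norm/tools01.dbf'
     2> to '/u10/oradata/norm/tools01.dbf';
Statement processed.
SVRMGR> alter database open;
Statement processed.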
There are also situations in which key database files take the brunt of the I/O. Moving them to another disk might not be possible or might not provide the best solution. For situations like these, Oracle provides striping: taking a single,
large database file and splitting it into smaller pieces that can be distributed across multiple disks. For example:
SVRMGR> create tablespace dba_ts
     2> datafile '/u03/oradata/norm/dbats01.dbf' size 50M,
     3>          '/u05/oradata/norm/dbats02.dbf' size 50M,
     4>          '/u07/oradata/norm/dbats03.dbf' size 50M,
     5>          '/u09/oradata/norm/dbats04.dbf' size 50M
     6> /
Statement processed.
When you distribute a tablespace across several database files on several disks, you stripe it; there is a 50M stripe of data on each disk. Striping enables the database to distribute its data across the disks, and it speeds I/O access by
minimizing contention on the disk drives.
One of the features of an Oracle7 database is the ability to undo, or roll back, uncommitted changes to the database. In short, a transaction that physically changes database data (INSERT, UPDATE, or DELETE SQL statements) produces information
that Oracle writes to its online rollback segments. Many DBAs fail to realize that because Oracle attempts to provide data consistency when a query is issued, SELECT statements use rollback segments when they access data. When a query is issued, if a row
has been changed but not committed, the Oracle RDBMS returns information from the rollback segments to provide read consistency. Rollback segments are also used when an instance is forced down or ends with an abnormal termination.
Rollback segment contention occurs whenever a transaction accesses a block within a rollback segment that another transaction needs. Use the following query to determine the amount of contention being experienced within the rollback segments:
select r.name, s.gets, s.waits
from v$rollstat s, v$rollname r
where s.usn = r.usn
/
The following ratio compares how often a rollback segment was accessed with how often the database waited to access information within a rollback segment:
ratio = ( waits/gets ) * 100
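A sketch that folds the formula into the query itself (segments with no gets yet are excluded to avoid dividing by zero):

select r.name, (s.waits / s.gets) * 100 "Wait Ratio"
from v$rollstat s, v$rollname r
where s.usn = r.usn
and s.gets > 0
/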
If the result is 2 or greater, there is contention within the rollback segments. Create more rollback segments; this reduces the chance that transactions hit the same rollback segment blocks at the same time. Doing so reduces contention, but it cannot
eliminate it entirely. Here are some guidelines for the number of rollback segments that you should use:
There is also the question of how large to make rollback segments. This is less complicated because you need to consider only two environments: OLTP and non-OLTP. OLTP (online transaction processing) environments are those in which users process a
large volume of database transactions, as with an order entry system. OLTP environments do better with a large number of smaller rollback segments. For non-OLTP environments, assign larger rollback segments so that data is retained
longer for long transactions and long queries. It is acceptable to mix large and small rollback segments and to select explicitly which rollback segment a transaction uses.
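A transaction names its rollback segment explicitly with SET TRANSACTION; for example, assuming a large segment called rbs_large has been created for batch work:

set transaction use rollback segment rbs_large;

The assignment lasts only until the transaction commits or rolls back.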
Like tables, rollback segments are constrained by the maximum number of extents to which they can grow and by the amount of physical space available in their tablespace. Once these limits are reached, the rollback segment cannot acquire a new extent. Therefore, if
a rollback segment or its tablespace is sized incorrectly, it is possible for the amount of rollback space needed to exceed the total size of the rollback segment.
There is a buffer cache area in the SGA for redo information. This information is stored in memory and regulated through the use of two latches, or RAM-level locks. The redo allocation latch controls the allocation of space for writing redo
information to the buffer. The redo copy latch is used to copy information to the buffer.
Wait latch requests wait to make a request, sleep, and then make the request again until they acquire the latch. Immediate latch requests, conversely, do not wait; instead, they continue processing. Use the following query to determine the status
of both types of latch requests:
select name, gets, misses, sleeps, immediate_gets, immediate_misses
from v$latch
where name in ('redo allocation', 'redo copy')
/
Information about wait requests appears on the left; immediate requests, on the right. After you execute this query, as SYS or another user with access to the V$ views, calculate the contention values:
immediate contention = ( immediate_misses / (immediate_gets + immediate_misses) ) * 100
wait contention      = ( misses / (gets + misses) ) * 100
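A sketch that computes both values in one statement (gets and immediate gets are assumed to be nonzero on a running instance):

select name,
       (misses / (gets + misses)) * 100 "Wait Contention",
       (immediate_misses / (immediate_gets + immediate_misses)) * 100 "Immediate Contention"
from v$latch
where name in ('redo allocation', 'redo copy')
/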
If either value is greater than 1, contention is occurring for that latch. To alleviate contention for the redo allocation latch, reduce the amount of time that a process holds the latch by lowering the value of the LOG_SMALL_ENTRY_MAX_SIZE
parameter in the INIT.ORA parameter file. To alleviate contention for redo copy latches, increase the number of latches by raising the value of the LOG_SIMULTANEOUS_COPIES parameter.
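Both are ordinary INIT.ORA entries; the values below are illustrative:

# INIT.ORA: redo latch tuning (example values)
log_small_entry_max_size = 64    # redo entries larger than this use a redo copy latch
log_simultaneous_copies = 8      # number of redo copy latches, often sized from the CPU count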
A checkpoint is an event in which information is written from the caches within the SGA to disk. Checkpoints occur periodically to bring the database files and control files in sync with the SGA. Disk I/O slows down processing time, and that holds true
for checkpoints: the database must synchronize the contents of its memory and its data files, but synchronizing too frequently can reduce overall performance.
Checkpoints generally occur at various intervals that the DBA can control, as a result of certain events that the DBA cannot control, or when the DBA forces them.
Checkpoints occur based on one of two intervals: quantity or time. The value of the LOG_CHECKPOINT_INTERVAL parameter in the INIT.ORA parameter file specifies a number of redo log blocks; when that many blocks have been filled, a
checkpoint occurs. Likewise, when the amount of time specified by the LOG_CHECKPOINT_TIMEOUT parameter has elapsed since the last checkpoint, a checkpoint occurs. These values should usually be set to minimize checkpoints. In other words, set
LOG_CHECKPOINT_INTERVAL to a value greater than the size of the largest redo log, and set LOG_CHECKPOINT_TIMEOUT to zero, which disables it.
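In INIT.ORA terms, that advice looks something like this; the interval value is illustrative and simply needs to exceed the size of the largest redo log:

# INIT.ORA: checkpoint only at redo log switches
log_checkpoint_interval = 1000000   # in OS blocks; larger than the largest redo log
log_checkpoint_timeout = 0          # disable time-based checkpoints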
Checkpoints also continue to occur whenever the database is shut down (normal or immediate) and whenever a redo log switch occurs. The only tuning you can do at this level is to create larger redo logs so that log switches occur less
frequently.
To force a checkpoint, issue the following SQL command:
alter system switch logfile;
You might have to force a log switch when you perform maintenance on redo logs, such as when you relocate them from one physical disk to another.
Performance tuning does not stop with checking buffer caches and adjusting parameters in the INIT.ORA file. You also need to optimize the database's objects, which includes monitoring them for changes in condition, such as fragmentation, that can
adversely impact performance. Unlike memory and contention problems, which generally remain stable unless the database itself changes, many database objects must be tuned on a regular basis.
The database objects that cause the most problems are tables and indexes. Because transactions are constantly extracted from and inserted into database tables and indexes, problems such as chaining, migration, dynamic extension, and fragmentation can
occur regularly. Because they occur often, most DBAs wait until these problems exceed a threshold, or they follow a maintenance schedule.
After the Oracle RDBMS places information in a row, the information remains in that row, and utilizes its allocated space, until a change occurs. For example, an UPDATE might cause the row no longer to fit in its database block. The RDBMS searches
for a free block in which the row will fit; if it locates one, it moves the row to the new database block, and the row is said to have been migrated. On the other hand, if a single database row is too large to fit in any single database block, the Oracle RDBMS
stores the pieces of the row across several database blocks. This is called chaining.
Migrated and chained rows reduce performance for input and output operations. This is because the data spans multiple blocks. Instead of being able to return a single row in a single I/O operation, the database must perform multiple reads to return one
row. Depending on the number of rows being returned and the number of rows that are chained or migrated, this can double or even triple the number of reads in a database operation.
Oracle provides a tool for detecting chaining and migration within the database: the SQL command analyze (also used with the cost-based optimizer) searches for chained rows. Before you run it, however, you must run the utlchain.sql script
provided with the database. The analyze command looks for a table called chained_rows, in which it stores the information returned; the command cannot run unless chained_rows exists. The utlchain.sql script creates this table:
SQL> @$ORACLE_HOME/rdbms/admin/utlchain
SQL> analyze table table_name list chained rows;
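A quick way to see what analyze found is to summarize the chained_rows table; a sketch:

select table_name, count(*) "Chained/Migrated Rows"
from chained_rows
group by table_name
/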
You must perform this operation on every table that you want checked for chaining or migration. If any rows appear in the chained_rows table, you should remove them. To remove chained or migrated rows,
You should now analyze the table again. Rows that remain are chained rows; rows that no longer appear were migrated rows. To remove the chained rows, recreate the table with a higher pctfree value. The steps are
Analyze the table again. If chained rows still exist, they might be impossible to eliminate without recreating the database with a new database block size. It is sometimes impossible to eliminate all chaining from a database, especially in databases
that store information in LONG or RAW column types.
Whenever you create a table, you must decide how large it should be, how fast it should grow, and how often its data will change. Unfortunately, the only way to gauge a table's growth is to rely on experience and trends. For that reason, you must deal
with dynamic extension.
Every database object is created with an initial size. Information is added to the table or index until there is no more room left in the initial space allocation. Then the size of the table is incremented by a fixed amount. This is called dynamic
extension.
Allocation is based on the arguments passed in the storage clause of the create table or create index SQL commands. If no storage clause is specified, the default storage parameters defined in the tablespace definition are used. Consider the following
statement:
storage (initial x next y minextents a maxextents b pctincrease m)
These arguments control how large each extension is and how far the object can extend. To determine the initial size of a database object, multiply the size of the initial extent by minextents. When the amount of data in the table or
index exceeds the initial allocation, another extent, of size next, is allocated. This process continues until the free space in the tablespace is exhausted or until the maximum number of extents is reached.
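As a worked example (the table name and sizes are illustrative), the following table is allocated 20M at creation (initial x minextents, two 10M extents) and then grows 10M at a time, up to 50 extents:

create table order_history
  (orderno     number,
   partno      number,
   order_date  date)
storage (initial 10M next 10M minextents 2 maxextents 50 pctincrease 0);

With pctincrease set to 0, each new extent stays the same size as next rather than growing geometrically.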
Dynamic extension degrades database performance because it generates recursive calls: additional requests to the data dictionary for information that is not currently in the cache. Use the following query to determine whether excessive dynamic extension is
occurring:
select owner, segment_name, sum(extents)
from dba_segments
where segment_type in ('TABLE', 'INDEX')
group by owner, segment_name
order by owner, segment_name
/
Monitor the extents closely to ensure that the number of extents does not come too close to the value set in maxextents. It is necessary to recreate the table periodically with a single extent. The steps are
Indexes are much simpler. The steps are
If you do not resize tables and indexes periodically, and correctly, a table can "max out," meaning it has extended to the limit dictated by the maxextents storage parameter. To fix it, issue the following SQL command:
alter table table_name storage (maxextents new_maximum);
The maximum extent for a database object is determined by the block size and the operating system. Consult the Oracle Installation and Configuration Guide to determine what limits are imposed. Not knowing the maximum extension of a database
object and not adequately monitoring database objects as they approach this size can effectively shut down the database for production users.
Fragmentation of the tablespaces in which database objects reside also reduces performance. Tablespaces are initially allocated as contiguous units of storage, and database objects are created within them as contiguous units as well. As objects
extend, however, the new blocks are generally not contiguous with the previous blocks of data.
As tables are created, dropped, and extended, the free space in a tablespace is broken into ever-smaller noncontiguous pieces. For example, a tablespace might have 1M of free space, but all in 1K pieces. If you issue a create table command with an initial
extent of 50K, it fails because the database cannot allocate a contiguous area in which to create the table. This is an especially common scenario in environments in which tables or indexes are frequently added and dropped.
To check the amount of free space available and the level of fragmentation on a tablespace, issue the following query:
select tablespace_name, sum(bytes), max(bytes), count(tablespace_name)
from dba_free_space
group by tablespace_name
order by tablespace_name
/
The results of this query tell how much free space is available within a tablespace (sum), the size of the largest contiguous free extent (max), and how many extents of free space make up the tablespace (count). If the count of free
extents is greater than 10 to 15, you should defragment the tablespace. The steps are
Just as chained and migrated rows reduce database performance, fragmentation reduces performance by causing the disk drive head to move excessively when a database table is queried. Obviously, fragmented tablespaces should be defragmented whenever
possible. To minimize fragmentation, create and drop new tables and indexes (especially those used as temporary or development tables) only in tablespaces set aside for that purpose.
Views are SQL statements that are treated as virtual tables. They enable you to hide the details of complex joins and filters so that the same code does not have to be repeated in every statement that performs a similar operation. It is important,
however, to keep in mind that the view's underlying query is not executed until a SQL statement is issued against the view.
The best performance tuning that can be done on a view is preventative in nature. Run each view that you create through an EXPLAIN PLAN, and analyze it for performance. Except in rare circumstances, views that are inefficient and take a long time to
return data should not be used. If a view that previously performed acceptably suddenly becomes sluggish, run another EXPLAIN PLAN or execute SQL*Trace against a query on the view.
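As a sketch of this preventative check (the view, table, and statement_id are illustrative, and the plan_table must first be created by running utlxplan.sql):
create or replace view dept_salaries as
    select deptno, sum(sal) total_sal
    from emp
    group by deptno;

explain plan set statement_id = 'v1' for
    select * from dept_salaries where deptno = 10;

select lpad(' ', 2 * (level - 1)) || operation || ' ' ||
       options || ' ' || object_name plan
from plan_table
where statement_id = 'v1'
connect by prior id = parent_id
start with id = 0;
A full table scan buried in the plan of a frequently used view is exactly the kind of problem this check catches early.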
Usually, views fail to perform as expected when changes are made (such as adding or removing indexes) or when the query is not properly optimized for a large amount of data.
Another new feature of Oracle7 that presents a tuning challenge is database triggers. If you have worked with SQL*Forms/Oracle*Forms or other event-driven processing, you are familiar with triggers. If you have not, they can be difficult to understand.
A trigger fires when a certain event happens (such as before or after a database table is modified), at which time a section of PL/SQL code is executed. If the SQL contained within the PL/SQL segment is tuned (based on an EXPLAIN
PLAN), triggers work well. Triggers can cause unexpected problems, however, generally when they are written by an inexperienced developer or have not been tested adequately.
A common problem with triggers is an infinite loop: one trigger activates another trigger, which activates another, and so forth, until one of the triggers causes a change that sets off the original trigger, starting the process again. These
errors are difficult to find and can create phantom problems. Adequate research and testing before implementing new triggers go a long way toward heading off trigger problems.
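As an illustration, here is a well-behaved audit trigger (the table, trigger, and column names are hypothetical):
create or replace trigger emp_sal_audit
after update of sal on emp
for each row
begin
    insert into sal_audit (empno, old_sal, new_sal, changed_on)
    values (:old.empno, :old.sal, :new.sal, sysdate);
end;
/
It is safe because sal_audit carries no trigger of its own. If sal_audit in turn had a trigger that updated emp, the two triggers would set each other off in exactly the loop described above.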
Database locking is important to the DBA because locks can slow a database, and locking is a frustrating performance problem to locate because it is often not obvious.
Locks within the database prevent users in a multi-user environment from changing the same data simultaneously. Database locks ensure the integrity of the data within a database by enforcing concurrency and consistency. Concurrency means that
users can read data from a database block without worrying about whether that block is currently being written; a user writing data must wait for the write operations that precede it to complete. Consistency means
that a database query returns data exactly as it appeared when the query began executing; changes made after the query was issued are not returned.
An Oracle7 database has two types of locks: data dictionary locks (DDL) and data manipulation locks (DML). A DDL ensures that the layout of a database object (its definition) does not change while the object is used within a database query. A DML
protects data that multiple users are trying to access simultaneously.
All transactions fall into one of two categories: exclusive or shared. Exclusive transactions do not allow other users to access the data. Shared transactions allow other users to read the data, although they cannot change
it. Locks are released whenever a commit or a rollback occurs.
Whenever a SQL statement accesses data within a table, a DDL is acquired on the table. The lock prevents the DBA from changing the table's definition while it is in use.
DML locks, on the other hand, are employed against the data in database tables. The five types of DML locks are
RS      Locks a specific row in a database table in shared mode, enabling other database queries to access the information (for example, a SELECT...FOR UPDATE OF... operation)
RX      Locks a specific row in a database table in exclusive mode, restricting access to the row to the database session that acquired the lock (for example, an UPDATE operation)
S       Locks a table in shared mode and prohibits activities other than queries against the table (for example, a LOCK TABLE...IN SHARE MODE operation)
SRX     Locks a table in shared mode and provides row-level locks as required to modify and update data (for example, a LOCK TABLE...IN SHARE ROW EXCLUSIVE MODE operation)
X       Locks an entire table, preventing access to the table by any session except the current one (for example, a LOCK TABLE...IN EXCLUSIVE MODE operation)
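Each mode arises from ordinary SQL. The following statements (run against the illustrative emp table) acquire the five modes in order:
select * from emp where deptno = 10 for update of sal;   -- RS
update emp set sal = sal * 1.1 where empno = 7369;       -- RX
lock table emp in share mode;                            -- S
lock table emp in share row exclusive mode;              -- SRX
lock table emp in exclusive mode;                        -- X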
A common database locking situation is unresolved locking, also called a deadlock. In a deadlock, two database operations wait for each other to release a lock.
Oracle7 is designed to detect deadlocks, but it is not always successful. You might encounter transactions that have acquired locks and are waiting on each other to free those locks so that they can proceed. Unfortunately, the only sure way to resolve
this problem is to detect deadlocks as they occur and deal with them individually. Oracle recommends two ways to avoid deadlocks: have all transactions that use the same tables acquire their locks in the same order, and acquire the most restrictive lock mode a transaction will need up front rather than converting locks later.
To resolve a deadlock, you must kill one of the processes (or both) at either the database or operating system level.
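A sketch of killing a session from the database side (the sid and serial# values shown are illustrative; look them up in v$session first):
select sid, serial#, username from v$session;

alter system kill session '12,345';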
Oracle provides a utility script that checks the current lock state of the database. This script, utllockt.sql, prints a tree showing which locks are held and which processes are waiting on them. To run it, enter
SQL> @$ORACLE_HOME/rdbms/admin/utllockt
You can query the DBA_WAITERS view to determine which sessions are waiting on locks and which sessions hold them. It does not show all the sessions holding locks, only the ones that cause wait states. This query enables you to view only
the sessions that might cause locking problems:
select waiting_session, holding_session, lock_type,
       mode_held, mode_requested
from dba_waiters
/
Other views provide additional locking information. The information that each one shows is

DBA_BLOCKERS       Sessions that have another session waiting on a lock and are not in a wait status themselves
DBA_DDL_LOCKS      DDL locks held and requested within the database
DBA_DML_LOCKS      DML locks held and requested within the database
DBA_LOCKS          All locks held or requested within the database
DBA_WAITERS        Sessions that are waiting for database locks and what session is currently holding the lock
V$ACCESS           Locked database objects and the sessions that are accessing them
V$LOCK             Database locks
V$SESSION_WAIT     Database sessions that are waiting
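For example, this query (a sketch that you can trim or extend) maps each lock in V$LOCK back to the session and user that holds or requests it:
select s.sid, s.username, l.type, l.lmode, l.request
from v$lock l, v$session s
where l.sid = s.sid
order by s.sid
/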
Unlike other performance tuning and optimizing operations, monitoring locks is usually reactive. Locks are not a problem until a deadlock or similar event occurs. Locking is generally stable and requires less DBA interaction than other performance
tuning tasks.
Performance tuning is the art of balancing raw statistics with intuition and experience to arrive at the best possible solution. Entire volumes have been written on this topic.
In this chapter, you learned some of the fundamental concepts of performance tuning. You learned how to extract and analyze memory and disk statistics to resolve contention, and you saw guidelines and scripts that you can use to check the performance of a
database.
Oracle responds differently on each platform, and the examples presented in this chapter are configured for a UNIX environment. You must determine how much of this material applies to your own site.