Saturday 23 November 2013

SQL Trace and TKProf

Hi All,

When you enable SQL trace on your user session, Oracle creates a trace file on the server. The trace file gives you the statistics of the SQL, such as consistent gets, CPU utilization and memory usage. Consider a procedure that deletes 35 million records and takes a huge amount of time. Now you can do some thinking and work out what is going wrong.

1) Is the SELECT statement you run inside the procedure to identify the records taking all the time? Say you are using select * from t1 where exists ( select * from t2 where t1.a = t2.a ); if t1 is big you will be in a lot of trouble, with a lot of consistent gets and high CPU usage. Now, assuming you did not know this and had to investigate what is going wrong, you need SQL trace.
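For context, here is a minimal sketch of the kind of procedure I mean; the names delete_old_rows, t1, t2 and a are placeholders, not a real schema:

CREATE OR REPLACE PROCEDURE delete_old_rows AS
BEGIN
  -- the statement we suspect is slow: an EXISTS lookup against t2 for every row of t1
  DELETE FROM t1
   WHERE EXISTS (SELECT NULL FROM t2 WHERE t2.a = t1.a);
  COMMIT;
END delete_old_rows;
/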

Autotrace and explain plan will not work on a procedure to identify where the bottleneck is. SQL trace is like autotrace for a procedure: it gives all the same information, such as consistent gets and CPU utilization. If you are not familiar with autotrace, you can read my article on reading explain plans Here

ALTER SESSION SET sql_trace = true;

Run the procedure you want to trace, then switch tracing off again:

ALTER SESSION SET sql_trace = false;
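If you are not sure which trace file on the server is yours, one option (assuming 11g or later, where V$DIAG_INFO exists) is to tag the file name before you start tracing and then ask the database where the file is:

ALTER SESSION SET tracefile_identifier = 'delete_test';

SELECT value FROM v$diag_info WHERE name = 'Default Trace File';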

Now we need to format the output of SQL trace, because the raw trace file is quite unreadable.

Oracle has a formatting utility for SQL trace called TKPROF. You can't run it from SQL Developer; run it from the command prompt:

tkprof C:\Dummy_folder_for_informatica_textfiles\orcl_ora_4432.trc C:\Dummy_folder_for_informatica_textfiles\output.prf EXPLAIN=DWBUILD/DWBUILD SYS=NO
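If the trace file contains many statements, TKPROF can also sort them so the most expensive come first. For example, using the same hypothetical file names as above:

tkprof C:\Dummy_folder_for_informatica_textfiles\orcl_ora_4432.trc C:\Dummy_folder_for_informatica_textfiles\output_sorted.prf EXPLAIN=DWBUILD/DWBUILD SYS=NO SORT=prsela,exeela,fchela

Here prsela, exeela and fchela sort by the elapsed time spent parsing, executing and fetching.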

Again, a good article on Ask Tom about understanding TKPROF:

http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:969160000346108326


Below are some questions that Ask Tom asks us to consider.


Now, we have a bunch of FACTS

fact: we did a lot of physical IO
fact: we did a lot of logical IO
fact: we used about 100 cpu seconds
fact: it took about 262 seconds elapsed time, therefore we waited lots of seconds for something
fact: we waited for db file scattered read for a long time, this is the read of a full scan
fact: we full scanned a huge table
fact: we can see our query
fact: we have the plan that was used

Now, what can we do with these facts? We can use our knowledge of

a) the data
b) how oracle works

to "tune". What are some obvious things to think about there?

Well, we see the query result was about 700,000 rows out of 128,000,000; that gives us another fact or two.


PARSE

Translates the SQL statement into an execution plan, including checks for proper security authorization and checks for the existence of tables, columns, and other referenced objects.

EXECUTE

Actual execution of the statement by Oracle. For INSERT, UPDATE, and DELETE statements, this modifies the data. For SELECT statements, this identifies the selected rows.

FETCH

Retrieves rows returned by a query. Fetches are only performed for SELECT statements.

Query --- consistent gets (reads from the buffer cache in consistent mode)

Disk --- physical reads

db file sequential read --- single-block reads, typically index access

db file scattered read --- multi-block reads, typically full table scans
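To see how these fit together: in the TKPROF report Parse, Execute and Fetch are the rows, and the statistics above are the columns. The layout (header only, numbers omitted) looks like this:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse
Execute
Fetch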

When we say a sequential read takes time, we should think about how long it takes to load the data blocks from disk into the SGA, in short, your RAM.

A high number of consistent gets is often accompanied by a lot of time spent on db file sequential read -- I need to read more about why it spends so much time on sequential reads.
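As a side note (my addition, not from the article linked below): if you just want a quick look at where a session is spending its wait time without tracing, V$SESSION_EVENT breaks it down per event. The SID value 123 is only a placeholder, and TIME_WAITED is in hundredths of a second:

SELECT event,
  total_waits,
  time_waited
FROM v$session_event
WHERE sid = 123
ORDER BY time_waited DESC;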

http://asktom.oracle.com/pls/asktom/f?p=100:11:0:::%3AP11_QUESTION_ID:6265095774206


Alternative to TKPROF (poor developer's version)

You can use V$SQL to get some of the information, like disk reads and buffer gets. Autotrace can also be used for some of this information:

SELECT sql_id,
  Elapsed_time,
  Cpu_time,
  Fetches,
  executions,
  Buffer_gets,
  disk_reads,
  Direct_writes,
  Physical_read_bytes,
  Physical_write_bytes,
  concurrency_wait_time,
  user_io_wait_time,
  rows_processed,
  optimizer_mode,
  ((IO_interconnect_bytes)/1024)/1024 Interconnect_MB
FROM v$sql;
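To narrow that down to the statements fired by your procedure, you can filter on SQL_TEXT and sort by elapsed time. The LIKE pattern below is just an example, and ELAPSED_TIME and CPU_TIME are in microseconds:

SELECT sql_id,
  sql_text,
  elapsed_time,
  cpu_time,
  buffer_gets,
  disk_reads
FROM v$sql
WHERE sql_text LIKE 'DELETE FROM T1%'
ORDER BY elapsed_time DESC;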


http://docs.oracle.com/cd/B10501_01/server.920/a96533/autotrac.htm

db file sequential read is due to INDEXED reads by the way -- it is single block IO, it is not the result of a full scan. So.... you may well be looking at the wrong place. The p1, p2, p3 info in v$session_wait can be used to determine the actual object being waited on. A sequential read is usually a single-block read, although it is possible to see sequential reads for more than one block (see P3). This wait may also be seen for reads from datafile headers (P2=1 indicates a file header read).
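A minimal sketch of what that looks like in practice, assuming you have access to DBA_EXTENTS; &file_id and &block_id are SQL*Plus substitution variables you fill in from the P1 and P2 values returned by the first query:

SELECT sid, event, p1 AS file_id, p2 AS block_id, p3 AS blocks
FROM v$session_wait
WHERE event = 'db file sequential read';

SELECT owner, segment_name, segment_type
FROM dba_extents
WHERE file_id = &file_id
  AND &block_id BETWEEN block_id AND block_id + blocks - 1;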


Block reads are fairly inevitable so the aim should be to minimise un-necessary IO. This is best achieved by good application design and efficient execution plans. Changes to execution plans can yield orders of magnitude changes in performance. Tweaking at system level usually only achieves percentage gains. The following points may help:

    * Check for SQL using unselective index scans

    * A larger buffer cache can (not will, "might") help 

    * A less obvious issue which can affect the IO rates is how well data is clustered physically. Eg: Assume that you frequently fetch rows from a table where a column is between two values via an index scan. If there are 100 rows in each index block then the two extremes are:
         1. Each of the table rows is in a different physical block (100 blocks need to be read for each index block)
         2. The table rows are all located in the few adjacent blocks (a handful of blocks need to be read for each index block)
      Pre-sorting or re-organising data can help to tackle this in severe situations (see the clustering factor query after this list).

    * See if partitioning can be used to reduce the amount of data you need to look at.
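One rough way to gauge how well a table's data is clustered relative to an index (my addition, not part of the note quoted above) is to compare the index's clustering factor with the table's block and row counts; a clustering factor close to BLOCKS is good, close to NUM_ROWS is bad. T1 here is the placeholder table from earlier:

SELECT i.index_name,
  i.clustering_factor,
  t.blocks,
  t.num_rows
FROM user_indexes i,
  user_tables t
WHERE i.table_name = t.table_name
  AND i.table_name = 'T1';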



a db file sequential read is a physical IO - not a read from the buffer cache. it will be *followed* by a read from the buffer cache - but the wait for a db file sequential read is a wait for a physical IO. 



http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:6071779300346659903
