We are creating a summary table on the fly from the source data. This approach takes around 6 hours and uses a lot of temp space. The fix we had implemented earlier was to build a vertical fact table that holds all the data: summing everything up from the vertical fact is quick because all the columns are readily available in one place. Sometimes, as in our case, a poor design makes it difficult to maintain a vertical table that is constantly updated and deleted, so you may still need to build the summary on the fly for the population; a sketch of both approaches follows.
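A minimal sketch of the two approaches, using purely hypothetical table and column names (txn_detail, acct_dim, vertical_fact, daily_summary):

-- On-the-fly build: join and aggregate the raw sources on every run.
-- This is the slow, temp-heavy path described above.
INSERT /*+ APPEND */ INTO daily_summary (region_cd, txn_dt, txn_amt)
SELECT /*+ PARALLEL(t, 64) */
       a.region_cd, t.txn_dt, SUM(t.txn_amt)
FROM   txn_detail t
JOIN   acct_dim   a ON a.acct_id = t.acct_id
GROUP  BY a.region_cd, t.txn_dt;

-- From the vertical fact: every needed column already sits in one table,
-- so the load reduces to a single scan plus GROUP BY.
INSERT /*+ APPEND */ INTO daily_summary (region_cd, txn_dt, txn_amt)
SELECT /*+ PARALLEL(v, 64) */
       v.region_cd, v.txn_dt, SUM(v.txn_amt)
FROM   vertical_fact v
GROUP  BY v.region_cd, v.txn_dt;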
Below I am tracking the Oracle wait events while the query runs.
Summary - Oracle can read data from disk with full table scans very fast. It then hits a lot of send-blocked events ('PX Deq Credit: send blkd'), which means the parallel slaves are sending data to the query coordinator, and because so many processes are sending at once, the coordinator cannot receive it fast enough. When the hash tables grow too large, Oracle writes them to temp files and then has to read them back from temp. So the major blocker is a single query coordinator doing most of the join work, and some of the slowdown comes from PGA limits that force the spill to temp. Teradata has a truly parallel design where all joins are done by the individual AMPs; in Oracle the work funnels through a single query coordinator.
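A rough way to sample what the coordinator and the slaves are waiting on is to poll gv$session while the load runs; a minimal sketch, assuming the load runs under a hypothetical ETL_USER schema:

-- Snapshot of current wait events for the loading schema; run it repeatedly.
-- ETL_USER is a placeholder for whatever schema drives the load.
SELECT event,
       state,
       COUNT(*) AS sessions
FROM   gv$session
WHERE  username = 'ETL_USER'
AND    status   = 'ACTIVE'
GROUP  BY event, state
ORDER  BY sessions DESC;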
Observations
- First we saw waits on 'direct path read', with only 4 processes doing it
- Then we saw 1 wait on 'direct path write temp' and 64 waits on 'PX Deq Credit: send blkd'
- An 83 GB hash segment and an 82 GB sort segment were sitting in temp (the temp usage query after this list shows how these sizes can be tracked)
- 'direct path read temp' started and the send-blocked events went away. The point to note is that only one process is doing the direct path read from temp; all the parallel slaves are idle on 'PX Deq: Execution Msg'
- The 83 GB hash segment went away; only the 82 GB sort segment remained, plus 5 GB of data segment
- The 82 GB sort looks to be due to the grouping. The 5 GB data segment keeps growing: 10 GB, 15 GB, 28 GB, 46 GB, 50 GB
- Observation - there is only a 'direct path read temp' event and no write event, yet the data segment keeps growing
- Only one process is handling this reading of data, the one doing 'direct path read temp'
- What it is not showing us is that, after reading, it stores the intermediate rows in a temp data segment which it will write to the main table later
- My opinion 1 - this single process reading from the 84 GB sort segment in temp is the QC (query coordinator)
- My opinion 2 - I expected Oracle to start writing rows to the table as soon as it could generate them, but because we used APPEND, which does a direct-path load, it waits until all the data is available and pushes it in one go (see the direct-path sketch after this list)
- The sort segment is zero now and hash is 58 GB; the event has changed to 'db file scattered read'
- Again the send-blocked events and 'db file scattered read' went away
- Now one process is on 'direct path read temp' and multiple processes are on 'IPC send completion sync' along with 'PX Deq Credit: send blkd'. Surprisingly a 23 GB hash segment is there and growing; it went up to 69 GB and then down to 46 GB
- 48 processes are on 'direct path read temp', and 2 processes are on 'direct path read temp'
- Now there are 40 processes on 'PX Deq Credit: send blkd', after the 'direct path read temp' we saw
- Now it is doing 'direct path write temp'
- Hash is 30 GB, sort is 32 GB and growing, and data is 58 GB
- The hash and sort segments went away
- Now the wait is 'row cache lock'
- 2,296,573 rows inserted. The table contains the summarised rows
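The hash, sort, and data segment sizes noted in the observations can be tracked from v$tempseg_usage; a minimal sketch, assuming an 8 KB temp block size:

-- Temp usage broken down by segment type (SORT / HASH / DATA).
SELECT segtype,
       ROUND(SUM(blocks) * 8192 / POWER(1024, 3), 1) AS gb_used  -- assumes 8 KB blocks
FROM   v$tempseg_usage
GROUP  BY segtype
ORDER  BY gb_used DESC;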
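On the direct-path point in the opinion above, here is a sketch of why no rows show up in the target until the very end, again with hypothetical names:

-- APPEND formats blocks above the high water mark and publishes them
-- only at commit, so the rows land in one burst at the end of the load.
INSERT /*+ APPEND */ INTO daily_summary (region_cd, txn_dt, txn_amt)
SELECT region_cd, txn_dt, SUM(txn_amt)
FROM   vertical_fact
GROUP  BY region_cd, txn_dt;

-- In the same transaction any further read of daily_summary raises
-- ORA-12838; other sessions see the new rows only after COMMIT.
COMMIT;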