Sunday 29 September 2013

Installing Informatica on Windows 7

Hi, below are my notes taken during an Informatica 8.1 install on my laptop.


The download we get from the Oracle eDelivery site contains 8.1 plus a service pack for the 8.5 upgrade, so there are two different things in that download.


1) First error: "not enough space", even though I have plenty of space. You need to run the installer as admin and in a compatibility (Windows XP) mode, as it can't install directly on Windows 7; the 8.1 installer is not able to recognise the free space.

%USERPROFILE%\AppData\Local\Temp

I tried changing the temp location, but that is not the problem; there is plenty of space in the temp folder on my C drive.

The issue is that you need to run the 8.1 installer in compatibility mode for Windows Server 2003: right-click install.exe, go to Properties, then the Compatibility tab, and select Windows Server 2003. That should do it.

Then you are asked to create a domain. The domain is the primary logical unit for management and administration within PowerCenter; the Service Manager runs on it and supports the domain and the application services.

In the order of services, the Informatica 8.6.1 service is the first one.

You need to create a user in Oracle called informatica to store the PowerCenter repository, meaning Informatica jobs will be saved in the Oracle database:

create user informatica identified by informatica;

grant all privileges to informatica;
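Granting all privileges is fine for a throwaway lab install, but it is broader than needed. A narrower alternative would be something like the sketch below; the exact privileges required are listed in the PowerCenter installation guide for your version, so treat this as an assumption rather than the official list:

-- illustrative, scoped alternative to "grant all privileges"
grant connect, resource to informatica;
grant create view to informatica;
alter user informatica quota unlimited on users;   -- repository tables need space in the default tablespace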


After that, Informatica is not able to accept this username; it gives an Oracle JDBC error:

Error establishing connection to host and port 1521

Check if the listener is up:

In cmd, type lsnrctl, then type status.

I tried to start the listener manually from Services, but it starts and then stops again after 2 seconds.

I had changed my PC name (which I had got from an earlier command).

kapil 123 is the password for the domain.

For the Informatica services the password is also kapil`123.
---------------------------------------------------------------------------------------------------------

Use the error below and catalina.out and node.log in the server/tomcat/logs directory on the current machine to get more information.

Select Retry to continue the installation.

STDOUT: Installing the service '"Informatica Services 8.1.1"' on node 'node01_Bhagvant-PC'...
Using CURRENT_DIR:      F:\Informatica_install\server\tomcat\bin
Using INFA_HOME:        F:\Informatica_install
The service '"Informatica Services 8.1.1"' has been installed.


STDERR: The filename, directory name, or volume label syntax is incorrect.
System error 1069 has occurred.

The service did not start due to a logon failure.



EXITCODE: 2
---------------------------------------------------------------------------------------

Maybe it gave this error because my Apache service was up. So drop the informatica user from Oracle (it already has some repository tables) and try the installation again after disabling Apache.

System error 1069 has occurred

OK, finally solved. You don't need to provide any username and password, since we are running as admin and I don't have any user set up for my machine. If you have a user set up, enter it; otherwise continue.



----------------------------
PowerCenter Domain Creation : Success
PowerCenter Ping Domain : Failed
Repository Service Creation : Skipped
Repository Service Startup : Skipped
PowerCenter Repository Creation : Skipped

The installation debug log file can be found at: F:/Informatica_install/Informatica_Installation_Server_Debug.log

-------------------------------------------------------------------------------------------

Now my admin console is not starting. So I deleted Informatica manually from its folder, as the uninstaller was not working, then removed it from startup by deleting the entries manually. Then go to the Windows registry (Run > regedit), search for Informatica and delete the entries, so that you are good as new.

My Informatica service was not started. Go to Control Panel > Services and try to start the Informatica service. If it does not start, check your PATH variable in the environment variables; it should contain the bin folder path where Informatica is installed. After doing this, my admin console is working.

To log into the admin console, the username is admin and the password is whatever you gave during the install.


Try to create a repository service from the admin console: log in, right-click on the node and create the repository service. Before that, add the licence by right-clicking on the node.


After you create the repository service, you need to create the integration service.

I had changed my computer name from admin-pc, but my tnsnames file was still showing admin-pc; that was one issue.

After that, when trying to set up the repository from Repository Manager, I got an error:

"unable to handle request because repository content do not exists"

There are over 500 tables and views in an Informatica 8.5.x repository. All table names start with "OPB_" and the view names start with "REP_".

I did not have these in my Oracle user informatica, which means the repository content did not get created even though I have a repository service.

Then connect to the repository from Repository Manager by giving your Oracle username and password for the repository; there were no tables at all.





Saturday 21 September 2013

How to install Teradata on Ubuntu

Hi Guys,

I did not manage to do this. I got stuck at a point where VMware Player could not find the C header files, so I finally installed on Windows.


How to configure a hard drive in VMware Player:
1) It's not Windows, so you won't get things like a C drive in Ubuntu.
2) Drives are under the /dev folder.
3) After you add a drive you need to format it with a particular file system (NTFS, ext).
4) You need to mount the drives. All steps are covered nicely at the link below:

http://www.matttopper.com/2006/05/adding-a-new-disk-to-a-vmware-virtual-machine-in-linux/

While extracting and copying files you need to know how Linux handles permissions.

chmod 777 filename --- will generally help

http://www.howtogeek.com/howto/36845/the-beginners-guide-to-managing-users-and-groups-in-linux/

What files make up a virtual machine:

http://www.vmware.com/support/ws55/doc/ws_learning_files_in_a_vm.html

How to install VMware Player on Ubuntu:

http://www.webupd8.org/2012/06/how-to-install-vmware-player-in-ubuntu.html

Why you need build-essential:

https://help.ubuntu.com/community/CompilingEasyHowTo

Some Notes while installing 

.gz and .7z are two compressed file formats on Linux. You need 7-Zip to unzip a file with the .7z extension.

Install it with apt-get:

sudo apt-get install p7zip-full

then extract with:

7z e -y filename

df -h gives total disk space in human-readable form

fdisk -l (lists all disk partitions)

df -k (mounted partitions and space available)

File permissions in Linux:

Owner    Group    All users
rwx      rwx      rwx
421      421      421

Each set is the sum of read (4), write (2) and execute (1), so rwx = 7; for example rwxr-xr-x is chmod 755, and chmod 777 gives everyone full access.

How to make sure that you are the owner of all files under a folder:

chown kapil teradata

chown -R kapil:kapil /software   --- the first kapil is the user and the second is the group



A lot of users, even quite experienced ones, have issues going from the tarball to the installed program, because they just do not know the fairly easy steps required to get the job done. But it's only easy if you already know how to do it! So here's a quick guide on how to install stuff downloaded from a developer's site.

So the bottom line is that Linux does not come with a 'next, next' installer like Windows; you might need to manually put binaries into the bin folder and so on, and there is no Program Files in Linux. If you get build-essential, you will get the 'next, next' screens.

Otherwise, refer to my article on how to install the JDK to see how to install without build-essential.

To run the bundle file, use the command below:

sudo sh VMware-Player-2.5.1-126130.i386.bundle


I got the error below while installing the virtual machine:

https://bbs.archlinux.org/viewtopic.php?id=128185


You need to install the kernel headers, otherwise the VMware kernel modules won't build.

uname -n

uname -r will give you the current kernel version.

sudo apt-get install linux-headers-generic ---- don't do this, as it will install headers for the latest kernel and you might be running a different one.


sudo apt-get install linux-headers-$(uname -r)

http://kernelnewbies.org/KernelHeaders

C header files matching your running kernel were not found.  Refer to your distribution's documentation for installation instruction

uname -a ------------- to find the kernel version

Copying the version.h file might work:

cp /usr/include/linux/version.h /lib/modules/3.5.0-23-generic/build/include/linux/

Be sure to put in your own kernel version.

When nothing worked --- try getting build-essential:

sudo apt-get update

sudo apt-get install build-essential dkms --- this will get everything



Step 12: Create the Partition
After we've logged in and accessed a terminal window as root (or another user with root/sudo privs), we first want to run fdisk on the newly created drive. In Linux the first SCSI drive is sda, the second sdb, the third sdc, etc. Since this was the second SCSI drive we added to the system, the device is known as /dev/sdb.
The first command we want to run is fdisk /dev/sdb. This utility works very much like the DOS utility of the old days and allows you to create and manage partitions. To create a new partition we enter the command n. This is going to be a primary partition (p), and the first partition, number 1. Because I want this disk to consume the full 12 GB I specified earlier, we start at the first cylinder and end at the last cylinder. We then want to write the partition table with the new partition we have just created, so we enter the command w, which writes the new table and exits fdisk.
Create new partition with fdisk

Step 13: Format the partition
Now that we've created the partition, we want to format it with the new file system. I've decided to use the ext3 filesystem for this disk; ext3 provides all the features of the classic ext2 file system plus journaling, which helps to prevent disk corruption in the event of an improper shutdown and speeds up the recovery process. For a good overview of Linux standard file systems check out this article: http://linux.org.mt/article/filesystems. So, to format the new partition we enter the command mkfs -t ext3 /dev/sdb1. This command makes a new file system of type (-t) ext3 on the /dev/sdb1 partition, the first partition on the sdb disk.
Create new filesystem in the virtual machine

Step 14: Create the mount point
Determine where you want to add the new virtual disk you’ve created. I like to create a partition specifically for all the software I install after the basic Linux install called /software to do that we run mkdir /software, just a simple make directory command. Once that is complete we then want to mount the newly created partition. Because we haven’t added the partition to the /etc/fstab yet we have to mount it manually. To do that we run mount -t ext3 /dev/sdb1 /software. To break down this command we run mount with the ext3 filesystem type, the partition /dev/sdb1 to the directory /software. Pretty simple and straight forward. To check that the partition is properly mounted we run df -k which shows us the mounted partitions and the amount of available space.
Create new mount point and mount the new drive

Step 15: Open the fstab file
The fstab file holds all of the used disks and partitions, and determines how they are supposed to be used by the operating system. So we edit the file to add the newly created partition


Step 16: Modify the fstab for the new partition
After we open the fstab file in the previous step we add the following line:
/dev/sdb1 /software ext3 defaults 1 1
The first column is the partition name, the second is the default mount point, and the third is the filesystem type. The fourth is the mount options; in this case I used defaults, which mounts the drive rw, suid, dev, exec, auto, nouser and async. The 5th and 6th options are for the dump and fsck options. If dump is set to 1 the filesystem is marked to be backed up; if you are going to have sensitive material on the drive it's a good idea to set it to 1. If fsck is set to greater than 1, the operating system uses the number to determine in what order fsck should be run during start up. If it is set to 0 it will be ignored, such as in the case of a cdrom drive. For more information on the fstab file check out this article: http://www.tuxfiles.org/linuxhelp/fstab.html
Lastly, we write and quit the file with the :wq command.
Editing of the fstab

So now that the fstab has been written the drive will be mounted and unmounted when the machine is either started or shutdown. So there you have it, the quick and dirty process for adding a brand new disk to a virtual machine. Until next time


Basic Teradata Admin activities 


For those new to Teradata, Teradata Express for VMware Player and Teradata Express for Amazon EC2 are great ways to get started with our database. These preconfigured, ready to go virtual images make getting started quick and painless. All that's left to do is fire up a query tool and write some SQL, right? Well, that is mostly true, but what if something doesn't seem to be working properly? What do you do then? For these situations, I've compiled this short "2 minute guide" to help get you over some basic issues.  These are all command line tools, so first open a Linux terminal session and let's get started.

Checking Teradata Status
Let's start with the basic status tool, pdestate:

# pdestate -a

If all is well and Teradata is running, you will see this output:

PDE state is RUN/STARTED.
DBS state is 5: Logons are enabled - The system is quiescent


Stopping the database
Should you need to stop the database in a controlled manner, the command is:

# tpareset -x bringing down the database

The syntax for this command is tpareset -x {comment}, where the comments can be used for logging/auditing purposes to capture the reason that the database was stopped.

Here is output from this command.  Notice the prompt as a last check to make sure that this is what you really want to do.

You are about to shutdown the database
on the system
's10-1300'
Do you wish to continue (default: n) [y,n] y

Once the database has been stopped, you can check the status of the database like we did earlier:

# pdestate -a
PDE state is STOP/KILLTASKS.

But if there was a problem that halted Teradata instead of a controlled shutdown, you will see this message from pdestate:

# pdestate -a
"down/hardstop" is down


Restarting Teradata
To restart Teradata after manually stopping the database or to restart after a 'hardstop' event, run this command:

# /etc/init.d/tpa start
Which should give this output:

Teradata Database Initiator service is starting...
Teradata Database Initiator service started successfully.


To restart Teradata if it is in any state other than "down/hardstop", use the command:

# tpareset -f restarting
This asks for a confirmation before beginning the restart process:

You are about to restart the database
on the system
's10-1300'
Do you wish to continue (default: n) [y,n] y


You can then watch the various start-up levels by issuing the pdestate command every few seconds:

# pdestate -a
PDE state is START/RECONCILE.


# pdestate -a
PDE state is START/STARTTPA.


# pdestate -a
PDE state is RUN/STARTED.
DBS state is 1/5: DBS Startup - Voting for Transaction Recovery


# pdestate -a
PDE state is RUN/STARTED.
DBS state is 1/4: DBS Startup - Starting PE Partitions


# pdestate -a
PDE state is RUN/STARTED.
DBS state is 5: Logons are enabled - The system is quiescent


Troubleshooting
If you are running into problems getting Teradata started, the first place to check for clues is in the log file:

tail /var/log/messages
And finally, to check your storage, use the verify_pdisks command:

# verify_pdisks
All pdisks on this node verified.

You may see some warning messages with this, but what we're looking for is the final 'verified' message.

Saturday 14 September 2013

Myths about 100% CPU Utilization

Hi Guys,

In the case of Oracle servers using virtual memory, 100% CPU can be normal, and I don't know why most people are scared by this.

Understand what virtual memory is:

http://searchstorage.techtarget.com/definition/virtual-memory


http://www.dba-oracle.com/t_high_cpu.htm

Myth 2 --- I have heard from friends in the office that if you have an Oracle OLTP system with millions of records, continuously accessed by users, and you fire a query without a date range, it will cause the system to crash from overload because there is not enough CPU resource... Let's investigate what is actually happening.


An Oracle DB crash can occur through CPU starvation, also called resource starvation: suppose a stats utility has been running for a long time at high priority, and other SQL requests are not able to get CPU; then the server can crash.

Consider SQL which has been running for hours. The important thing here is not CPU utilization but whether, at 100%, there are any processes in the queue waiting to get CPU time.

Hadoop vs RAID

Hi Guys

HDFS allows you to store data across multiple disks so that it provides redundancy in case of failure; that is, you have a copy of the same data on another hard disk. The second use is speed of access: if one file is distributed across two hard disks, the same file can be read from both disks at 100 MB/s each, so access is faster.

The same can be done with RAID; most enterprise software, like your DB or web server, is definitely using RAID. It is like hard disks connected in parallel: a way of arranging your hard disks so that there is redundancy in case of failure and access is faster.

It basically distributes the file over many hard disks so that parts of the file are read simultaneously, in parallel.

For more information on RAID and how your office servers work, read the article below:

http://cognossimplified.blogspot.com/2013/08/basics-about-servers.html

So which one to use: the RAID we already have, or HDFS?

This may sound like a silly question, and I thought it over for a long time. What you need to understand is that RAID and HDFS are different. RAID is a way of arranging hard disks for redundancy and speed. You can have your HDFS system sitting on RAID-configured disks; the only thing you will do is keep the HDFS replication factor at 1 (a single copy, no extra replicas). HDFS is able to recover from disk failure by keeping data on a separate machine, but in an enterprise (corporate) environment you will not use commodity hardware, so you don't need HDFS replication; you already have world-class hardware which is fail-safe and has the speed. You still need HDFS or Cassandra or another file system on which your Hadoop map and reduce jobs can work.

So HDFS replication was designed with low-cost commodity hardware in mind, for people who could not afford enterprise hardware.

So you might think: what will happen if a processor fails? There is a backup server on the same rack which will take care of this. In most cases things are managed in the cloud; to know more, read up on cloud computing.

So you will not have just a physical server but a virtual one, and right down to the last level, like a cooling fan failure, everything will have a backup. So you can keep the replication factor at 1 and use Hadoop to work on your big data saved on your enterprise hardware.

A good article on Hadoop from the Yahoo developer site:

http://developer.yahoo.com/hadoop/tutorial/module1.html









BLOB, CLOB and Character Set

Hi Guys

We don't usually come across columns of these types, so I'm just noting some facts.

CLOB (character large object): it can store up to 4 GB. It can store any text file that is in a character set supported by the database; for example, you can't store Japanese files if your DB only supports an English character set.

BLOB (binary large object): max size 4 GB; it can store images, audio and video. So can you store your favourite movie in Oracle and just query it when required? I will try this.
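Out of curiosity, here is roughly how that could look with DBMS_LOB; the directory path, table and file names below are made up for illustration, and I have not run this as part of these notes:

create directory media_dir as '/u01/media';   -- OS path is an assumption; needs the CREATE ANY DIRECTORY privilege

create table movie_store (
  id    number primary key,
  name  varchar2(100),
  video blob
);

declare
  v_blob  blob;
  v_bfile bfile := bfilename('MEDIA_DIR', 'my_movie.avi');   -- hypothetical file in that directory
begin
  insert into movie_store (id, name, video)
  values (1, 'my_movie.avi', empty_blob())
  returning video into v_blob;
  dbms_lob.fileopen(v_bfile, dbms_lob.file_readonly);
  dbms_lob.loadfromfile(v_blob, v_bfile, dbms_lob.getlength(v_bfile));
  dbms_lob.fileclose(v_bfile);
  commit;
end;
/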

What is a character set?

Numeric codes represent characters, and the character set determines which languages you can store.

In ASCII every character has a hexadecimal value, for example:

'1' -- 31
'#' -- 23
'2' -- 32

Languages written in the Latin script, like English and German, fit in a 1-byte character set, as they need at most 256 characters; but languages like Japanese and Chinese have more than 10,000 characters, so they need a different (multi-byte) character set.

So the amount of space it takes to store Chinese or Japanese characters is different from the number of bytes it takes to store English characters.
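A quick way to see this on your own database (treat this as a sketch; LENGTH counts characters while LENGTHB counts bytes, and AL32UTF8 is just an example character set):

select value from nls_database_parameters where parameter = 'NLS_CHARACTERSET';

select length('abc')  as char_count,
       lengthb('abc') as byte_count
from   dual;
-- on a single-byte character set both are 3; on an AL32UTF8 database a Japanese
-- character still counts as 1 character but takes 3 bytes

select dump('a') from dual;   -- shows the internal numeric code(s) behind a character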

MPP (Massively Parallel Processing) & SMP

Hi Guys,

We come across terms like MPP (massively parallel processing) while going through the documentation for Teradata, DataStage and other data processing or ETL tools.

MPP is a general term which can be applied to a cluster of computers where one server rack holds multiple CPUs, usually in 1U casings; multiple racks are also connected together to form a cluster.

It can also describe grid computing, where commodity hardware (normal desktop PCs) is connected together for computing.

SMP (Symmetric Multiprocessing)

Here two (or more) processors share the same memory (RAM). In the old days they had two processors on the same motherboard sharing the same RAM; now every dual-core or better laptop is actually SMP, with two built-in processor cores sharing the same memory. SMP is great for multithreading and multiprocessing: processes are broken into threads and supplied to individual processors.

So what is a process 

A computer program is a passive set of instructions. A process is the actual execution of those instructions, with its current state saved in memory. Usually if you open a program twice you will see two processes in Task Manager.

So whenever you open a program it occupies part of RAM. Threads use a part of this memory allocated to the process; then, in the case of dual cores, these threads are given to two different cores.

Multithreading makes it possible to utilize the CPU to the maximum: if the running thread does not utilize the full CPU, another thread can take advantage. Similarly, if two threads are working on the same result set, the cache can be shared. In the case of a single CPU, time sharing can be done among threads.

So how much CPU utilization is normal?

If you open Task Manager you will see that the CPU is utilized at less than 30%. Why is that? Don't we want our CPU to be utilized at 100%?


Very few databases are MPP (shared-nothing architecture) based.

Oracle 11g is shared-everything SMP.

Oracle Exadata is the nearest to MPP (not sure if fully MPP).

Teradata is MPP.

HP Vertica is MPP.

Kognitio is MPP.


What is Pega

Hi

This is a placeholder for Pega: what Pega is and what its capabilities are.

SAP and Data Warehousing

Hi

This is for discussing what SAP is and what role data warehousing plays.

What is Middleware

Hi

As you know, in today's world, in applications like net banking, thousands of users may be sending requests simultaneously, so getting immediate access to the DB is not possible, as the DB needs to follow ACID properties; so the requests are put in a queue.

Have you heard that your project has an Apache and a JBoss server? So why do we need this JBoss in the middle, and what does it do?

IBM MQ and IBM WebSphere: I will write about these when I get time.

Why do we need an application server in Java like JBoss or WebSphere? Is it possible to develop a large-scale website with Java alone (Apache Tomcat), where thousands of users connect to the site at a moment? An example is a B2B website.
What is the cost of an application server? I will be thankful if you compare prices among different application servers, and if there is any free version kindly highlight it.

Application Servers are mostly used if you want to use advanced features like transaction management, hot code swapping and advanced persistence.

In short Application Servers provide you with few services like
  • Transaction Management
  • Load Balancing
  • Security
  • Threading
etc.
You have to take care of these things yourself in a Web Server.

Big data case studies in Healthcare

Hi Guys,

We can all get tons of documents on Hadoop and what it does, but below are some real-life stories of what big data can do and is already doing for healthcare, and what questions it can answer.

Below are from the CIO site:

Big data in Pharma and Medicine-CIO

Big data in healthcare

Preventing veteran suicides -CIO


If you want to learn more about Pig, Hive and how they are used in a data warehouse, go to the Hortonworks tutorials; they are very good. Download the Sandbox. I will be doing it and sharing my experience using Pig and Hive -- update on 4 Sept 2014.

http://hortonworks.com/products/hortonworks-sandbox/#tutorial_gallery

Wednesday 4 September 2013

Installing JDK on Ubuntu (Linux)

Hi

I am trying to install Hadoop on C


$ tar xvfz archive_name.tar.gz

x means extract, z compressed, v verbose, f file name

There is no single directory that is the exact equivalent of the Program Files folder. The way Linux arranges things is a lot different than Windows.

In Windows, every program that we install gets its own directory inside the Program Files directory. In that directory, further sub-directories are created for different kinds of files. There is no fixed structure for sub-directories; programs decide for themselves what they want to call each directory and where they want to put what.

But in Linux, when a program is installed, different kinds of files are copied to different locations. Executables are copied to /usr/bin, library files to /usr/lib, documentation to one or more of /usr/man, /usr/info and /usr/doc. If there are configuration files, they are usually in the user's home directory or in /etc.

A good link for the install is below. You can always do apt-get install jdk, but then you won't know where the files go since it's automatic; manually downloading and installing will give you a better idea.

http://www.liberiangeek.net/2012/04/install-oracle-java-jdk-7-in-ubuntu-12-04-precise-pangolin/


To check if Java is working properly:

file /etc/alternatives/java /etc/alternatives/javac

you will get

/usr/bin/java:  symbolic link to `/etc/alternatives/java'
/usr/bin/javac: symbolic link to `/etc/alternatives/javac'


From the previous steps, it is expected that you have installed JDK 1.6 on your system. The only remaining step is to set the class path of Java with the following commands:

export JAVA_HOME=/usr/lib/jvm/java-6-sun

We know that if JAVA_HOME is not set correctly on Windows, programs which use the JRE will not run, so I am assuming this is also the case on Ubuntu.

echo $JAVA_HOME


Installing Hadoop on Ubuntu

http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/

Using Consistent Gets to tune SQL

Note -- this article is only for those who have gone through explain plans and have already tried simple improvement ideas like indexes, avoiding full table scans, etc.

What are Consistent Gets?

Consistent gets -- Oracle fetches data blocks into RAM, the buffer cache (reads satisfied from there are what we call buffer or logical reads). When you run a query, if the data is in RAM it will be fetched from there, but there is an overhead: Oracle has to make sure the block it returns is consistent with the data as of the time the query started, i.e. it has to account for any updates or inserts made while the query is running. So the higher the consistent gets, the higher the overhead on the processor to keep the data it returns in a consistent state.

Suppose you have a query, someone rewrote it, and they say theirs is better than yours. The explain plans look identical, with the same cost. How do you say which is better? We look at logical reads.

This is a very good thing to consider. I am assuming you have gone through my earlier Oracle basics 1, 2 and 3; if not, look in August 2013. This is a fairly advanced statistic to check after you have tried all the others.

Below is my analysis for tuning a query.

To tune a query we can clear the buffer cache, to get the timing it takes without the data being in the buffer. If the data is in the buffer, then we need to look at consistent gets.
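On a test database (never in production) the cache can be flushed between runs so the timings are comparable; a minimal sketch:

alter system flush buffer_cache;   -- empties the data block cache (10g and later)
alter system flush shared_pool;    -- optional: also forces the statement to be re-parsed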

Consistent gets – the number of times a buffer is hit.

We know that reading from the buffer is faster than reading from disk. But consider the case of a table which is already loaded into the buffer: now the speed of the query will depend on the consistent gets, that is, how many times it hits the cache to get the data. We want to write the query so that it hits the buffer as little as possible; that query will be faster, assuming it has the same physical reads.

Your idea does reduce the consistent gets in the test environment, so I think it's a good approach.






I got this idea from the Ask Tom site; below is the link. If for some reason it's not working (that site keeps changing links), type "logical reads vs physical reads" into Google and select the Ask Tom link that appears.

Logical reads v/s Physical reads

http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:6749454952894

Also read the discussion on array size and consistent gets on Ask Tom:



http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:880343948514





A very important statement that everyone should remember ----- almost all SQL operations running in parallel will read data directly from disk, bypassing the buffer cache.

The block buffer cache is part of the SGA.

The Buffer Cache (also called the database buffer cache) is where Oracle stores data blocks.  With a few exceptions, any data coming in or going out of the database will pass through the buffer cache.

A consistent get is where Oracle returns a block from the block buffer cache, but has to check to make sure it is the version of the block current at the time the query started.

Consistent gets are a function of array size. If the array size is 15, consistent gets will be higher, because the same block in the buffer cache will be visited more often.

db block gets + consistent gets = logical I/O

The table has 20 blocks in it. You would expect 20 consistent gets on a full scan. But with an array size of 15 we will revisit some blocks more than once; in fact, we might revisit up to 10000/15 = 666 blocks more than once. This is because we "start" and "stop" the query in the middle of a block.

If you full scan a table T that has N blocks and R rows and you use an array fetch size of A, we will typically perform the following number of consistent gets:

N + R/A

We have to read N blocks (that should be obvious), and we might revisit R/A blocks more than once (since we stop in the middle).

db block gets are blocks gotten in CURRENT (as opposed to consistent) mode. They are block gets we did to discover the extent map for the table (to know what to read). We get that in "current, as of right now" mode instead of "in consistent read, as of the point in time the query began" mode. Oracle did 4 current-mode gets in both cases to figure out what blocks to read. In the first example, it did 686 logical I/Os; in the second, only 30. It did no physical I/O.
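A quick way to see the N + R/A effect for yourself in SQL*Plus is to turn on autotrace statistics and change the array size between runs; the table name t below is just a placeholder:

set autotrace traceonly statistics
set arraysize 15
select * from t;          -- consistent gets roughly N + R/15
set arraysize 500
select * from t;          -- far fewer block revisits, so fewer consistent gets
set autotrace off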


The LIO can be reduced by increasing the array size, but that by itself should not be relied on as tuning.

The other problem with consistent gets

to get a block from the cache, we have to latch (to prevent someone from modifying the data structures we are currently reading).

A latch is a lock.

Locks are serialization devices

Serialization devices inhibit scalability, the more you use them, the less concurrency you get.

A physical IO from temp will not, that bypasses the cache.

When Oracle performs a full table scan of a large table, the blocks are read into the buffer cache but placed at the least recently used end of the LRU list. This causes the blocks to be aged out quickly, and prevents one large full table scan from wiping out the entire buffer cache.

Where does the joining of data take place? Where are hash tables created?

1 – RAM cache for data blocks
2 – RAM cache for SQL statements
3 – RAM for caching the data dictionary
4 – RAM for caching program execution code


The first time an Oracle Database user process requires a particular piece of data, it searches for the data in the database buffer cache. If the process finds the data already in the cache (a cache hit), it can read the data directly from memory. If the process cannot find the data in the cache (a cache miss), it must copy the data block from a datafile on disk into a buffer in the cache before accessing the data. Accessing data through a cache hit is faster than data access through a cache miss.

What is the PGA

Oracle Database allocates a program global area (PGA) for each server process. The PGA is used to process SQL statements and to hold logon and other session information. For the purposes of memory management, the collection of all PGAs is known as the instance PGA. Using an initialization parameter, you set the size of the instance PGA, and the database distributes memory to individual PGAs as needed.

PGA memory is divided into the following areas

Session Memory
Private SQL Area

The private SQL area contains data such as bind variable values, query execution state information, and query execution work areas. Each session that issues a SQL statement has a private SQL area. Each user that submits the same SQL statement has his or her own private SQL area that uses a single shared SQL area. Thus, many private SQL areas can be associated with the same shared SQL area.

Private SQL Area Components
The private SQL area of a cursor is itself divided into two areas whose lifetimes are different:

The persistent area—This area contains bind variable values. It is freed only when the cursor is closed.

The runtime area—Oracle Database creates this area as the first step of an execute request. It contains the following structures:

◦Query execution state information

For example, for a full table scan, this area contains information on the progress of the scan

◦SQL work areas

These areas are allocated as needed for memory-intensive operations like sorting or hash-joins. More detail is provided later in this section.

For DML, the run-time area is freed when the statement finishes running. For queries, it is freed after all rows are fetched or the query is canceled.

SQL Work Areas
SQL work areas are allocated to support memory-intensive operators such as the following:

•Sort-based operators (order by, group-by, rollup, window function)

•Hash-join

•Bitmap merge

•Bitmap create

For example, a sort operator uses a work area (sometimes called the sort area) to perform the in-memory sort of a set of rows. Similarly, a hash-join operator uses a work area (also called the hash area) to build a hash table from its left input. If the amount of data to be processed by these two operators does not fit into a work area, the input data is divided into smaller pieces. This enables some data pieces to be processed in memory while the rest are spilled to temporary disk storage to be processed later. Although bitmap operators do not spill to disk when their associated work area is too small, their complexity is inversely proportional to the size of their work area. Thus, these operators run faster with larger work area.

The size of a work area can be controlled and tuned. The database automatically tunes work area sizes when automatic PGA memory management is enabled. See "Overview of Memory Management Methods" for more information.

Generally, bigger work areas can significantly improve the performance of a particular operator at the cost of higher memory consumption. Optimally, the size of a work area is big enough to accommodate the input data and auxiliary memory structures allocated by its associated SQL operator. If not, response time increases, because part of the input data must be spilled to temporary disk storage. In the extreme case, if the size of a work area is far too small compared to the input data size, multiple passes over the data pieces must be performed. This can dramatically increase the response time of the operator.
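If you want a rough idea of whether work areas on your own system are spilling to temp, the workarea statistics in v$sysstat are a quick check (just an indicator, not a tuning method):

select name, value
from   v$sysstat
where  name like 'workarea executions%';
-- 'optimal'   = work area ran entirely in memory
-- 'onepass'   = spilled to temp once
-- 'multipass' = needed several passes over temp; the expensive case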



Tuesday 3 September 2013

NLS settings and Timestamp issues

DESC TRIAL

Here TRIAL has COLA as a DATE column. The point to notice is that if you compare a timestamp with a date, Oracle will convert both to timestamps for the comparison.

One more point: even if you store a date, a time component is always stored in the table. So we should be careful while comparing dates and timestamps; it will convert both to timestamps.

SELECT * FROM TRIAL
WHERE COLA >= TO_DATE( SYSDATE,'DD-MON-YY')

SELECT * FROM TRIAL
WHERE COLA >=  SYSDATE


Bhagvant,
Since the session's NLS date format is a timestamp-style format, sysdate shows you a timestamp, and in Oracle, to store date or time, we have only one data type, "DATE".

Now in the scenario below, while comparing, I think you need to trunc the sysdate. This is just my instinct :)

select * from client_program
where upper(cli_pgm_nm)  like '%US10207%'
and TRUNC(SYSDATE) BETWEEN to_Date(EFF_DT,'dd-mon-yy') AND to_date(trm_dt,'dd-mon-yy')


This is part of the mail below that I had sent. This is a fun question on SQL, something I noticed :)

And we always thought DATE did not have a time component :) ... In the NLS settings of SQL Developer I have set the date format to DD-MON-YY HH24.MI.SS.


So below is a question on how to handle this:

select * from client_program
where upper(cli_pgm_nm)  like '%US10207%'
and SYSDATE BETWEEN to_Date(EFF_DT,'dd-mon-yy') AND to_date(trm_dt,'dd-mon-yy')

This does not return the record, though EFF_DT and TRM_DT are both 29-Aug-13. Now the question is why. Is it because we are comparing SYSDATE, which carries a time, with EFF_DT, which is a date, so it converts both to timestamps before comparing?
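A quick check that convinces you a DATE always carries a time of day (so by the afternoon SYSDATE is already greater than a TRM_DT stored as 29-Aug-13 00:00:00, and the BETWEEN fails):

select to_char(sysdate, 'DD-MON-YY HH24:MI:SS')        as sysdate_with_time,
       to_char(trunc(sysdate), 'DD-MON-YY HH24:MI:SS') as truncated_to_midnight
from   dual;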



In Preferences, if you keep the date format as DD-MON-RR, you will get the result below for select to_timestamp(sysdate-1) from dual; that is, it will always show 12 AM. But if you change the SQL Developer setting to DD-MON-RR HH24.MI.SS, it shows a different time. I was wondering which one is correct.





After changing the preferences