Friday, 27 May 2016

Oracle Database Architecture



In order to understand the Oracle database, it is very important to understand the basics of the database, i.e., how the database has been designed and built up, and how it works.

This is especially important for anyone who is really interested in learning the internal workings of the Oracle database, particularly DBAs. Without knowledge of the architecture, you simply cannot be a DBA; unless we know how the database works, we cannot work on the database.

So, in this article I am trying to explain the architecture of the Oracle database with all its components.

Let's first have a look at how the Oracle architectural components are arranged, with the help of the figure below.

[Figure: Oracle Database Architecture]



The architecture of the Oracle database is designed in such a way that it can manage a large amount of data in a multiuser environment, so that many users can concurrently access the same data without affecting performance.

Let's have a look at the Oracle database components and their uses.

We will go in order, so that it is easy to understand.

User Processes

               
A user process is created whenever a user requests a connection to the database; in simple terms, it is the client side of the connection.
The user process starts when a tool is started by the user. The tool may be any application used to connect to the database.
The user process may run on the same machine where the database is running or on any other machine in any location.
The user process cannot connect directly to the database; it has to go through a server process, which helps the user process connect to the database.


Server Processes

               
The Server process handles user requests.
The user requests are submitted by the user process to the server process.
The server process only runs on the same machine on which the Oracle server runs.
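As a quick illustration, once connected you can see which server process is serving your session by joining V$SESSION and V$PROCESS (column output varies by version):

SQL> select s.username, s.program, p.spid
     from v$session s, v$process p
     where s.paddr = p.addr and s.username is not null;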

 

Oracle Instance

               
The Oracle architecture consists of an Oracle instance and an Oracle database.
The Oracle instance consists of a memory region called the System Global Area (SGA) and a set of background processes.
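A simple way to confirm that the instance is running is to query V$INSTANCE (an illustrative query):

SQL> select instance_name, status, startup_time from v$instance;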
                               

SGA

                An Oracle database allocates memory to the SGA when an instance starts, and deallocates it when the instance shuts down.
The SGA consists of a group of memory structures, which are created at startup.
There are three major memory structures.
1. Shared pool
   a. Library cache
   b. Data dictionary cache
2. Database buffer cache
3. Redo log buffer
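To see how memory is currently allocated to these structures, we can ask the instance directly (illustrative commands; exact component names vary slightly by version):

SQL> show sga
SQL> select name, round(bytes/1024/1024) size_mb from v$sgainfo;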

Shared pool


Oracle Database uses the shared pool to cache many different types of data. Cached data includes the textual and executable forms of PL/SQL blocks and SQL statements, dictionary cache data, result cache data, and other data.
The shared pool is used to store information such as the most recently executed SQL statements and the most recently used data dictionary information.
The main components of the shared pool are:


  
Library cache
The library cache stores the executable (parsed or compiled) form of recently referenced SQL and PL/SQL code.
Shared SQL areas are accessible to all users, so the library cache is contained in the shared pool within the SGA.
Data dictionary cache
The data dictionary cache stores data referenced from the data dictionary.

Apart from the above main components, there is one more optional component that is part of the shared pool.
Server result cache (depending on the configuration)
The server result cache is an optional cache that stores query and PL/SQL function results within the shared pool.
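As an illustration, the individual allocations inside the shared pool (library cache, dictionary cache, and so on) can be inspected through V$SGASTAT; component names differ between versions, so treat this only as a sketch:

SQL> select name, bytes from v$sgastat
     where pool = 'shared pool' order by bytes desc;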

Database Buffer cache


            The database buffer cache is one of the most important components of the System Global Area (SGA). It is the place where data blocks are copied from the data files so that SQL operations can be performed on them. The buffer cache is a shared memory structure and is concurrently accessed by all server processes.

The types of buffers are:

  • Pinned: a buffer that is currently being used.
  • Clean: a buffer that is available for reuse.
  • Free (unused): a buffer that is empty and has not been used yet.
  • Dirty: a buffer that has been modified and needs to be moved to the write list so it can be written to disk.

 What happens inside DB buffer?

When the Oracle server process requires a specific data block, it first searches for it in the buffer cache. If it finds the required block, the block is accessed directly; this event is known as a cache hit. If the search in the buffer cache fails, the block is read from the data file on disk; this event is called a cache miss. In that case the process needs a free buffer into which it can read the data from disk. It starts searching for a free buffer from the least recently used (LRU) end of the LRU list. While searching, if the server process finds dirty buffers on the LRU list, it moves them to the write list. If the process cannot find a free buffer within a certain amount of time, it signals the DBWn process to write dirty buffers to disk.
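As a rough illustration, the cache hit ratio can be estimated from V$SYSSTAT using the classic formula 1 - (physical reads / (db block gets + consistent gets)); this is only an approximation, not a definitive tuning metric:

SQL> select name, value from v$sysstat
     where name in ('db block gets', 'consistent gets', 'physical reads');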

Redo Log Buffer

The redo log buffer is a circular buffer in the SGA that holds information about changes made to the database, called redo.
Redo is generated by each transaction in the database as it makes a change.
The server processes generate redo data into the log buffer as they make changes to the data blocks in the buffer cache. LGWR subsequently writes entries from the redo log buffer to the online redo log.
As changes occur in the database, the redo generated by those changes is stored in the redo log buffer. When any of the conditions below occurs, the redo log buffer is flushed to the online redo logs on physical disk by the Oracle background process called the log writer (LGWR):
  • A session issues a commit or a rollback command.
  • The log buffer becomes 1/3 full.
  • A timeout (every 3 seconds) occurs.
  • A checkpoint occurs.
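As a quick check, the configured size of the redo log buffer and the amount of redo generated so far can be looked at like this (illustrative commands):

SQL> show parameter log_buffer
SQL> select name, value from v$sysstat
     where name in ('redo entries', 'redo size');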

Some other memory structures are




  Large pool

The database administrator can configure an optional memory area called the large pool to provide large memory allocations for:
  • Session memory for the shared server and the Oracle XA interface (used where transactions interact with more than one database)
  • I/O server processes
  • Oracle backup and restore operations
  • Parallel execution message buffers, if the initialization parameter PARALLEL_AUTOMATIC_TUNING is set to true (otherwise, these buffers are allocated to the shared pool)

By allocating session memory from the large pool for shared server, Oracle XA, or parallel query buffers, Oracle can use the shared pool primarily for caching shared SQL and avoid the performance overhead caused by shrinking the shared SQL cache.
The large pool does not have an LRU list. It is different from reserved space in the shared pool, which uses the same LRU list as other memory allocated from the shared pool.
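For example, the large pool can be checked and sized manually as sketched below (64M is only an example value, and SCOPE=BOTH assumes the instance was started with an spfile):

SQL> show parameter large_pool_size
SQL> REM 64M below is an example value only; scope=both assumes an spfile is in use
SQL> alter system set large_pool_size = 64M scope = both;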

Java pool


The Oracle JVM memory manager uses the Java pool (sized by JAVA_POOL_SIZE) mainly for the in-memory representation of Java method and class definitions, and for static Java state that is migrated to session space at end-of-call in shared server mode.

The Java pool holds the Java execution code.

Streams Pool

  • The Streams pool stores buffered queue messages and provides memory for Oracle Streams capture processes and apply processes. The Streams pool is used exclusively by Oracle Streams.
  • Unless you specifically configure it, the size of the Streams pool starts at zero. The pool size grows dynamically as required by Oracle Streams (see the query sketch below).
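The current sizes of these optional pools can be checked from V$SGAINFO, for example (row names may differ slightly by version):

SQL> select name, bytes from v$sgainfo
     where name in ('Large Pool Size', 'Java Pool Size', 'Streams Pool Size');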

 Background Processes.

The main background processes in the Oracle architecture are listed below:
  • PMON
  • SMON
  • DBWR (DBWn)
  • LGWR
  • CKPT
  • ARCn
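The background processes actually running in the instance can be listed with a query like the one below; filtering on PADDR is a commonly used idiom to show only started processes (an illustrative sketch):

SQL> select name, description from v$bgprocess
     where paddr <> '00' order by name;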


PMON
PMON is the process monitor, which is responsible for cleaning up after failed user processes.
PMON performs the cleanup by carrying out the following tasks:
• Rolls back the user's current transaction
• Releases all the locks held on tables or rows
• Frees other resources used by the user
• Restarts dead dispatchers




SMON
SMON is the System Monitor which is responsible for recovering the system after a failure.

If an Oracle instance crashes, any changes that are made in the SGA are not written to the data files. When you restart the instance, the SMON background process automatically performs instance recovery by performing the following tasks:

• Rolling forward changes that are recorded in the online redo log files but not yet in the data files. Since all committed transactions are written to the online redo log files, they are successfully recovered by rolling these changes forward from the online redo log files to the data files.
• Opening the database. After the database is opened, users can log on and access any data that is not locked by unrecovered transactions.
• Rolling back all the uncommitted transactions.


DBWR
The database writer process (DBWn) writes the contents of buffers to the data files. The DBWn processes are responsible for writing modified (dirty) buffers in the database buffer cache to disk.
When a buffer in the database buffer cache is modified, it is marked dirty. The primary job of the DBWn process is to keep the buffer cache clean by writing dirty buffers to disk.
Note that DBWn does not write blocks to the data files at commit time; it writes dirty buffers when a checkpoint occurs or when a server process cannot find a free buffer. Committed changes are protected by the redo that LGWR writes to the online redo log.

LGWR
Redo entries are stored in the redo log buffer, and the log writer process (LGWR) writes these entries to the redo log files. The redo log buffer works in a circular fashion, which means that it overwrites old entries; but before they are overwritten, old entries must be copied to the redo log files.

The LGWR process writes to the redo log files under the conditions below:
  • When the user performs a commit.
  • Every three seconds.
  • When the redo log buffer is 1/3 full.


CKPT
The checkpoint process (CKPT) is responsible for updating the control file and data file headers with checkpoint information, and it signals DBWn to write blocks to disk. Checkpoint information includes the checkpoint position, the SCN, the location in the online redo log at which recovery should begin, and so on.
Remember CKPT does not write changed data to the data files.

ARCn
The archiver processes (ARCn) copy online redo log files to offline storage after a redo log switch occurs. These processes can also collect transaction redo data and transmit it to standby database destinations. ARCn processes exist only when the database is in ARCHIVELOG mode and automatic archiving is enabled.



DATABASE:


This is the part where the actual data resides. Let's have a look at the different types of files we have in an Oracle database.
DATA FILES
In an Oracle database, there can be one or more data files. Data files store the data dictionary, the user objects, and the before-images (undo) of the data blocks that are modified by current transactions.
            The data associated with schema objects in a tablespace is physically stored in one or more of the data files that constitute the tablespace.
            The data in a data file is read when required and stored in the database buffer cache. Modified or new data is not immediately written to the data file; it is first kept in the database buffer cache and is written to the data files from the buffer cache by the background process DBWn.
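The data files of a database can be listed as below (an illustrative query):

SQL> select file_name, tablespace_name, round(bytes/1024/1024) size_mb
     from dba_data_files;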


Control File
A control file contains information about the associated database that is required for access by an instance, both at startup and during normal operation. Control file information can be modified only by Oracle.
No database administrator or user can edit a control file.
The control file stores information such as the database name, the timestamp of database creation, and the names and locations of the data files and redo log files.
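For example, the control file locations can be checked with either of the following (illustrative commands):

SQL> select name from v$controlfile;
SQL> show parameter control_files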




Redo log file
Redo log files are filled with redo records. A redo record, also called a redo entry, is made up of a group of change vectors, each of which is a description of a change made to a single block in the database.
Redo log files are helpful to recover database.
Redo entries record data that you can use to reconstruct all changes made to the database, including the undo segments. Therefore, the redo log also protects rollback data. When you recover the database using redo data, the database reads the change vectors in the redo records and applies the changes to the relevant blocks. Every Oracle database has at least two redo log groups, each having at least one redo log file.
Redo records are buffered in a circular fashion in the redo log buffer of the SGA and are written to one of the redo log files by the Log Writer (LGWR) database background process. Whenever a transaction is committed, LGWR writes the transaction redo records from the redo log buffer of the SGA to a redo log file, and assigns a system change number (SCN) to identify the redo records for each committed transaction. Only when all redo records associated with a given transaction are safely on disk in the online logs is the user process notified that the transaction has been committed.
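The redo log groups and their member files can be listed as below (illustrative queries):

SQL> select group#, sequence#, bytes, status from v$log;
SQL> select group#, member from v$logfile;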

Parameter File
A parameter file is a file that contains a list of initialization parameters and a value for each parameter. You specify initialization parameters in a parameter file that reflect your particular installation. Oracle supports the following two types of parameter files:
Server Parameter Files

A server parameter file is a binary file that acts as a repository for initialization parameters. The server parameter file can reside on the machine where the Oracle database server executes. Initialization parameters stored in a server parameter file are persistent: any changes made to the parameters while an instance is running can persist across instance shutdown and startup. Some of the parameters can be changed dynamically using ALTER SYSTEM SET commands, without a shutdown and startup.
Note: we should not open or edit the server parameter file manually, because it is a binary file.
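A small sketch of working with the server parameter file is shown below (the parameter values and the backup path are examples only; SCOPE=BOTH works for dynamic parameters, while static parameters need SCOPE=SPFILE and a restart):

SQL> show parameter spfile
SQL> REM the values and the pfile path below are examples only
SQL> alter system set open_cursors = 500 scope = both;
SQL> alter system set processes = 300 scope = spfile;
SQL> create pfile = '/tmp/init_backup.ora' from spfile;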




Initialization Parameter Files

An initialization parameter file is a text file that contains a list of initialization parameters.
The name of the initialization parameter file contains the keyword init; by default, Oracle identifies and accesses the file by the name init<SID>.ora in the default location $ORACLE_HOME/dbs.

However, we can give the file any name and keep it in any location, but then at startup we need to specify the path to the parameter file. It is a client-side file.

If we need to change or add any parameter, we have to shut down the database, open and edit the parameter file, and then start up the database again.
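For example, an instance can be started with an explicit pfile, or an spfile can be created from it (the path and the SID ORCL below are example values only):

SQL> REM the pfile path and SID (ORCL) below are examples; adjust to your environment
SQL> startup pfile = '/u01/app/oracle/product/12.1.0/dbhome_1/dbs/initORCL.ora'
SQL> create spfile from pfile;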


Password file

The password file is a binary file used to authenticate administrative users (for example, SYSDBA connections) to the database.
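A password file is typically created with the orapwd utility, and the users listed in it can be checked from V$PWFILE_USERS. In the sketch below, the file name, the SID ORCL, and the password are example values only:

$ # the file name (SID ORCL) and password below are example values only
$ orapwd file=$ORACLE_HOME/dbs/orapwORCL password=MyPassword1 entries=5

SQL> select username, sysdba, sysoper from v$pwfile_users;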

Archive Log File
Oracle Database lets you save filled groups of redo log files to one or more offline destinations, known collectively as the archived redo log, or more simply the archive log. The process of turning redo log files into archived redo log files is called archiving. This process is only possible if the database is running in ARCHIVELOG mode.

An archived redo log file is a copy of one of the filled members of a redo log group. It includes the redo entries and the unique log sequence number of the identical member of the redo log group.
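As a quick sketch, the archiving status can be checked and ARCHIVELOG mode enabled as below (the database must be mounted, not open, when the mode is switched):

SQL> archive log list
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;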






Let's look at how a SQL query works inside the database.

The user/client initiates a connection to the database:

sqlplus user/password

a. Once we run the sqlplus command, the user process contacts the SQL*Net listener.
b. The listener confirms that the database is open for activity and creates a server process.
c. The server process allocates the PGA.
d. A 'Connected' message is returned to the user.

After connecting, the user tries to do some activity:

SQL>select * from table_name;

a. The server process checks the SGA to see if the data is already in the DB buffer cache.
b. If not, the data is retrieved from the data file on the storage disk and copied into the SGA (DB cache).
c. The data is returned to the user via the PGA and the server process.

SQL>update table_name set row=value where row=condition;

a. The server process (via the PGA) checks the SGA to see if the data is already in the buffer cache.
b. We assume the data is still in the SGA (DB cache).
c. The data is updated in the DB cache and the buffer is marked dirty.
d. The update (redo) is placed into the redo log buffer.
e. A 'row updated' message is returned to the user.

SQL>commit;

a. A new SCN (system change number) is generated for the transaction.
b. The data in the DB cache is marked as committed, but it is still a dirty buffer that has not yet been saved to the data files.
c. A commit record is placed into the redo log buffer.
d. LGWR writes the redo log buffer entries to the redo log files and frees that space in the buffer.
e. A 'commit complete' message is returned to the user.
f. Later, at checkpoint time rather than at commit time, DBWn writes the updated blocks to the data files and CKPT updates the data file headers and the control file with the latest checkpoint SCN.
           




SQL>exit;

a. Any uncommitted changes are rolled back.
b. The server process de-allocates the PGA.
c. The server process is terminated.
After some period of time, the filled redo log files are archived by the ARCn background process.




If I have missed any components or if there is any mistake in the article, please feel free to comment below; that will help me correct it.

If you have any questions, please feel free to ask in the comments.




Please subscribe for latest updates.

3 comments:

  1. Nice explanation Ajeet kumar

  2. Hi Ajeet,
    Could you pls explain what happen inside the database when user fires
    1.Select statement
    2.Insert statement
    3.Update statement
    4.Delete statement

    Where and what oracle components comes into picture. Becoz this question is asked in interview .

    Thanks in Advance


    1. Hi,

      If you read my full article at the end i have explained the same with the statements.

      Thanks for writing.

      Ajeet.


