Wednesday 28 December 2016

Manually Installing Spatial Component

Spatial 


           Oracle Spatial is an integrated set of functions and procedures that enables spatial data to be stored, accessed, and analyzed quickly and efficiently in an Oracle database.

           Spatial data represents the essential location characteristics of real or conceptual objects as those objects relate to the real or conceptual space in which they exist.
What Is Oracle Spatial?

        Oracle Spatial, often referred to as Spatial, provides a SQL schema and functions that facilitate the storage, retrieval, update, and query of collections of spatial features in an Oracle database. Spatial consists of the following:
·         A schema (MDSYS) that prescribes the storage, syntax, and semantics of supported geometric data types
·         A spatial indexing mechanism
·         Operators, functions, and procedures for performing area-of-interest queries, spatial join queries, and other spatial analysis operations
·         Functions and procedures for utility and tuning operations
·         Topology data model for working with data about nodes, edges, and faces in a topology
·         Network data model for representing capabilities or objects that are modeled as nodes and links in a network
·         GeoRaster, a feature that lets you store, index, query, analyze, and deliver GeoRaster data, that is, raster image and gridded data and its associated metadata
The spatial component of a spatial feature is the geometric representation of its shape in some coordinate space. This is referred to as its geometry.
                  (For more details on Spatial, refer to: https://docs.oracle.com/cd/B28359_01/appdev.111/b28400/sdo_intro.htm#SPATL441)


Below are the steps to install Spatial manually.

Scenario.

--We upgraded the database from Oracle 9i (9.2.0.8) to 11gR1 (11.1.0.7).
--The Spatial component was already INVALID prior to the upgrade.
--Even after executing utlrp.sql several times, Spatial remained INVALID.
--We ignored it and proceeded with the upgrade.
--As Spatial was INVALID prior to the upgrade, we had to drop the MDSYS user and re-create it (if Spatial is valid this is not required; all the other steps remain the same). A quick check for invalid MDSYS objects is shown below.
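
Before deciding to drop and re-create MDSYS, it can help to see which Spatial objects are actually invalid. A minimal check, assuming you are connected as SYSDBA:

SQL> select object_name, object_type, status from dba_objects where owner = 'MDSYS' and status = 'INVALID';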

Implementation.

oracle 9i
**********

SQL> select COMP_NAME, VERSION,STATUS from dba_registry;

COMP_NAME                                VERSION         STATUS
---------------------------------------- --------------- ----------
Oracle Database Catalog Views            11.1.0.7.0      VALID
Oracle Database Packages and Types       11.1.0.7.0      VALID
JServer JAVA Virtual Machine             11.1.0.7.0      VALID
Oracle Database Java Packages            11.1.0.7.0      VALID
Oracle XDK                               11.1.0.7.0      VALID
Spatial                                  11.1.0.7.0      INVALID
Oracle XML Database                      11.1.0.7.0      VALID
Oracle Multimedia                        11.1.0.7.0      VALID


oracle 11gR1
*************

SQL> select COMP_NAME, VERSION,STATUS from dba_registry;

COMP_NAME                                VERSION         STATUS
---------------------------------------- --------------- ----------
Oracle Database Catalog Views            11.1.0.7.0      VALID
Oracle Database Packages and Types       11.1.0.7.0      VALID
JServer JAVA Virtual Machine             11.1.0.7.0      VALID
Oracle Database Java Packages            11.1.0.7.0      VALID
Oracle XDK                               11.1.0.7.0      VALID
Spatial                                  11.1.0.7.0      INVALID
Oracle XML Database                      11.1.0.7.0      VALID
Oracle Multimedia                        11.1.0.7.0      VALID


Steps to be followed to solve the issue
*****************************************
1) Remove Spatial

SQL> drop user MDSYS cascade;

2) Re-install Spatial

Create the user MDSYS by running the following command:

SQL> create user MDSYS identified by MDSYS default tablespace SYSAUX account lock;

Grant the required privileges to MDSYS by running:

SQL> @?/md/admin/mdprivs.sql


Install Spatial by executing the steps shown below. Note you need to run this as a SYSDBA user!

SQL> connect / as sysdba
SQL> spool spatial_installation_14jan2016.lst
SQL> @?/md/admin/mdinst.sql
SQL> spool off

Check whether it has been installed and the status is VALID.

column comp_name format a35
column comp_id format a8
column version format a12
column status format a10

Select comp_name, comp_id, version, status from dba_registry;

COMP_NAME                           COMP_ID  VERSION      STATUS
----------------------------------- -------- ------------ ----------
Oracle Database Catalog Views       CATALOG  11.1.0.7.0   VALID
Oracle Database Packages and Types  CATPROC  11.1.0.7.0   VALID
JServer JAVA Virtual Machine        JAVAVM   11.1.0.7.0   VALID
Oracle Database Java Packages       CATJAVA  11.1.0.7.0   VALID
Oracle XDK                          XML      11.1.0.7.0   VALID
Oracle XML Database                 XDB      11.1.0.7.0   VALID
Oracle Multimedia                   ORDIM    11.1.0.7.0   VALID
Spatial                             SDO      11.1.0.7.0   VALID
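
In addition to dba_registry, you may also want to confirm that no MDSYS objects remain invalid and recompile if any do. A small sketch, assuming SYSDBA access:

SQL> select count(*) from dba_objects where owner = 'MDSYS' and status = 'INVALID';
SQL> @?/rdbms/admin/utlrp.sql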



                                                            There you go, you are good to go!




Please subscribe for latest updates.

Saturday 8 October 2016

SCRIPT FOR HOUSEKEEPING ORACLE TRACE & AUDIT FILES.


Hello Friends,




As we all know, housekeeping is very important, whether it is in our home and surroundings or in a database/server. When we talk about an Oracle database, the constantly growing files that need housekeeping attention are the audit and trace files. Even though these files may not consume much space individually, a large number of small files can create minor problems, and deleting a very large number of files at the OS level becomes difficult. So we need to delete these files at regular intervals, but often we don't, either because we forget or because we feel it is not really important.

Deleting these files manually every time is tedious work for a DBA, so what could be the solution?
The solution is very easy: just automate it! In this article I am going to share a script that will delete all the trace (.trc and .trm) and audit (.aud) files older than 3 days (you can change the number of days as per your requirement).

 Just schedule the script below to run as often as you require, and relax; the old files are deleted for you. (An example cron schedule is included as a comment at the top of the script.)

SCRIPT
*************

#!/bin/sh
#######################################################################################################################
############# SCRIPT TO DELETE AUDIT AND TRACE FILES ###########
############# SCRIPT CREATED AND TESTED BY AJEETH ###########
############# THIS SCRIPT WILL DELETE TRACE & AUDIT FILES FROM ALL THE INSTANCES RUNNING ON THE SERVER ###########
############# WE CAN IGNORE THE ASM INSTANCE ERROR ###########
#######################################################################################################################
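# Example cron entry to run this script daily at 02:00.
# (The script path below is only an assumption; change it to wherever you save this script.)
# 00 02 * * * /apps/backup/housekeep_oracle_files.sh >/dev/null 2>&1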
set -x
for ORACLE_SID in `cat /etc/oratab|egrep ':N|:Y'|grep -v \*|cut -f1 -d':'`
do
ORACLE_HOME=`cat /etc/oratab|grep ^$ORACLE_SID:|cut -d":" -f2`; export ORACLE_HOME
# NOTE: this script assumes ORACLE_BASE is already set in the environment (it is used for the adump and diag paths below)
DAY_OLD=3; export DAY_OLD
OUTFILE=/apps/backup/house_keep.log ; export OUTFILE
#
echo `date '+%m/%d/%y %A %X'` > ${OUTFILE}
echo >> ${OUTFILE}
echo "SCRIPT NAME:   $0" >> ${OUTFILE}
echo "SERVER:        "`uname -n` >> ${OUTFILE}
echo "KEEP DURATION: $DAY_OLD days" >> ${OUTFILE}
echo >> ${OUTFILE}
echo >> ${OUTFILE}
#####################################
####Audit .aud file deletion
#####################################
echo "Delete audit files owned by oracle..." >> ${OUTFILE}
echo >> ${OUTFILE}
echo >> ${OUTFILE}
# change directory to the audit file directory
cd ${ORACLE_BASE}/admin/${ORACLE_SID}/adump
echo >> ${OUTFILE}
echo "Deleting from directory:" >> ${OUTFILE}
echo "[${ORACLE_BASE}/admin/${ORACLE_SID}/adump]" >> ${OUTFILE}
echo >> ${OUTFILE}
echo "The total number of audit files in directory is:" >> ${OUTFILE}
# output the total count of audit files to OUTFILE
ls -al | wc -l >> ${OUTFILE}
echo >> ${OUTFILE}
echo "Total number of audit files to be deleted is:" >> ${OUTFILE}
# output the total number of audit files that will be deleted
find . -maxdepth 1 -type f -mtime +$DAY_OLD -name "*.aud" | wc -l >> ${OUTFILE}
echo >> ${OUTFILE}
# delete the audit files
find . -maxdepth 1 -type f -mtime +$DAY_OLD -name \*.aud -exec rm {} \;
echo "Files successfully deleted." >> ${OUTFILE}
echo "Total number of files remaining:" >> ${OUTFILE}
# output the remaining count of audit files in the adump directory
ls -al | wc -l >> ${OUTFILE}
echo >> ${OUTFILE}
#####################################
####Trace .trc and trm file deletion
#####################################
# change directory to the trace file directory
cd ${ORACLE_BASE}/diag/rdbms/${ORACLE_SID}/${ORACLE_SID}/trace
echo >> ${OUTFILE}
echo "Deleting from directory:" >> ${OUTFILE}
echo "[${ORACLE_BASE}/diag/rdbms/${ORACLE_SID}/${ORACLE_SID}/trace]" >> ${OUTFILE}
echo >> ${OUTFILE}
echo "The total number of trace files in directory is:" >> ${OUTFILE}
# output the total count of trace files to OUTFILE
ls -al | wc -l >> ${OUTFILE}
echo >> ${OUTFILE}
echo "Total number of trace files to be deleted is:" >> ${OUTFILE}
# output the total number of trace files that will be deleted
find . -maxdepth 1 -type f -mtime +$DAY_OLD -name "*.trc" | wc -l >> ${OUTFILE}
find . -maxdepth 1 -type f -mtime +$DAY_OLD -name "*.trm" | wc -l >> ${OUTFILE}
echo >> ${OUTFILE}
# delete the trace files
find . -maxdepth 1 -type f -mtime +$DAY_OLD -name \*.trc -exec rm {} \;
find . -maxdepth 1 -type f -mtime +$DAY_OLD -name \*.trm -exec rm {} \;
echo "Files successfully deleted." >> ${OUTFILE}
echo "Total number of files remaining:" >> ${OUTFILE}
# output the remaining count of trace files in the trace directory
ls -al | wc -l >> ${OUTFILE}
echo >> ${OUTFILE}
#
echo "Completed deleting files." >> ${OUTFILE}
#
# Now email results
echo >> $OUTFILE
echo `date '+%m/%d/%y %A %X'` >> $OUTFILE
cat $OUTFILE | /bin/mailx -s "`uname -n` : delete old Oracle audit files" ajeet@abc.com
done
exit


* I highly recommend testing it before implementing it on production servers.
** The script has been tested by me on my test machine, but I do not take any responsibility for the above script working in your environment.














Please subscribe for latest updates.

Friday 27 May 2016

Oracle Database Architecture



Oracle Database Architecture. In order to understand the Oracle database, it is very important to understand the basics of the database, i.e., how the database has been designed and built up, and how it works.

It is very important for anyone really interested in learning the internal workings of the Oracle database, especially DBAs. Without knowledge of the architecture you cannot really be a DBA; unless we know how the database works, we cannot work on the database.

So here, in this article, I am trying to explain the architecture of the Oracle database with all its components.

 Let's first have a look at how the Oracle architectural components are arranged with the help of the figure below.

Architecture



The architecture of the Oracle database is designed in such a way that it can manage a large amount of data in a multiuser environment, so that many users can concurrently access the same data without affecting performance.

Let's have a look at the Oracle database components and their uses.

   We will go in order, so that it is easy to understand.

User Processes

               
The user process is created whenever a user requests a connection to the database; in very simple terms, it is the client connection.
The user process starts when a tool is started by the user. The tool may be any application used to connect to the database.
The user process may connect from the same machine where the database is running or from any other machine in any location.
The user process cannot connect directly to the database; it has to go through a server process, and the server process helps the user process connect to the database.


Server Processes

               
The Server process handles user requests.
The user requests are submitted by the user process to the server process.
The server process only runs on the same machine on which the Oracle server runs.
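
To see the user sessions and the server processes serving them, a simple illustrative query (assuming access to the v$ views) is:

SQL> select s.sid, s.username, s.program, p.spid from v$session s, v$process p where s.paddr = p.addr and s.username is not null;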

 

Oracle Instance

               
The Oracle architecture consists of an Oracle instance and an Oracle database.
The Oracle instance consists of a memory region called the System Global Area (SGA) and a set of background processes.
                               

SGA

                An Oracle database allocates memory to the SGA when an instance starts and deallocates it when the instance shuts down.
The SGA consists of a group of memory structures, which are created at startup.
There are three major memory structures (a quick way to look at the current SGA allocations is shown after this list):
1.      Shared pool
        a.       Library cache
        b.      Data dictionary cache
2.      Database buffer cache
3.      Redo log buffer
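
A simple way to see how the SGA is currently carved up (assuming you can query the v$ views from SQL*Plus) is:

SQL> show sga
SQL> select name, round(bytes/1024/1024) as mb from v$sgainfo order by bytes desc;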

Shared pool


Oracle Database uses the shared pool to cache many different types of data. Cached data includes the textual and executable forms of PL/SQL blocks and SQL statements, dictionary cache data, result cache data, and other data.
The shared pool is used to store information such as the most recently executed SQL statements and the most recently used data dictionary information.
The main components of the shared pool:


  
Library cache
The library cache stores the executable (parsed or compiled) form of recently referenced SQL and PL/SQL code.
Shared SQL areas are accessible to all users, so the library cache is contained in the shared pool within the SGA.
Data dictionary cache
The data dictionary cache stores data referenced from the data dictionary.

Apart from the above main components, there is one more optional component that is part of the shared pool:
Server result cache (depending on the configuration)
The server result cache is an optional cache that stores query and PL/SQL function results within the shared pool.
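
To get a rough idea of what is currently occupying the shared pool, one illustrative query (assuming access to v$sgastat) is:

SQL> select pool, name, round(bytes/1024/1024) as mb from v$sgastat where pool = 'shared pool' order by bytes desc;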

Database Buffer cache


            Database Buffer cache is one of the most important components of System Global Area (SGA). Database Buffer Cache is the place where data blocks are copied from data files to perform SQL operations. Buffer Cache is shared memory structure and it is concurrently accessed by all server processes.

Types of buffer are

  • Pinned: Buffer that is currently being used.
  • Clean: Buffer that is available for reuse.
  • Free (Unused): Buffer that is empty and has not been used yet.
  • Dirty: Buffer that has been modified and needs to be moved to the write list so it can be written to disk.

 What happens inside DB buffer?

When the Oracle server process requires a specific data block, it first searches for it in the buffer cache. If it finds the required block, the block is accessed directly and this event is known as a cache hit. If the search in the buffer cache fails, the block is read from the data file on disk, and this event is called a cache miss. If the required block is not found in the buffer cache, the process needs a free buffer into which to read the data from disk. It starts searching for a free buffer from the least recently used end of the LRU list. While searching, if the server process finds dirty blocks in the LRU list, it moves them to the write list. If the process cannot find a free buffer within a certain amount of time, it signals the DBWn process to write dirty buffers to disk. (A simple query to check how effective the buffer cache is appears below.)
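
A rough sketch of the classic buffer cache hit-ratio check (the ratio you should aim for depends on the workload; this assumes access to v$buffer_pool_statistics):

select name, physical_reads, db_block_gets, consistent_gets,
       round(1 - (physical_reads / (db_block_gets + consistent_gets)), 4) as hit_ratio
from v$buffer_pool_statistics;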

Redo Log Buffer

 A redo log buffer is a circular buffer in the SGA that holds information about changes made to the database, called redo.
Redo is generated by each transaction in the database as it makes a change.
The server processes generate redo data into the log buffer as they make changes to the data blocks in the buffer. LGWR subsequently writes entries from the redo log buffer to the online redo log.
As changes occur in the database, the redo generated by those changes is stored in the redo log buffer. When the conditions below occur, the redo log buffer is flushed to the online redo logs on physical disk by the Oracle background process called the Log Writer (LGWR); a quick way to inspect the log buffer is shown after the list.
  • A session issues a commit or a rollback command.
  • The log buffer becomes 1/3 full.
  • A timeout (every 3 seconds) occurs.
  • A checkpoint occurs.
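
To see the configured log buffer size and how much redo the instance has generated so far (illustrative, assuming access to the v$ views):

SQL> show parameter log_buffer
SQL> select name, value from v$sysstat where name in ('redo entries', 'redo size');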

Some other memory structures are




  Large pool

The database administrator can configure an optional memory area called the large pool to provide large memory allocations for:
  • Session memory for the shared server and the Oracle XA interface (used where transactions interact with more than one database)
  • I/O server processes
  • Oracle backup and restore operations
  • Parallel execution message buffers, if the initialization parameter PARALLEL_AUTOMATIC_TUNING is set to true (otherwise, these buffers are allocated to the shared pool)

By allocating session memory from the large pool for shared server, Oracle XA, or parallel query buffers, Oracle can use the shared pool primarily for caching shared SQL and avoid the performance overhead caused by shrinking the shared SQL cache.
The large pool does not have an LRU list. It is different from reserved space in the shared pool, which uses the same LRU list as other memory allocated from the shared pool.

Java pool


The Oracle JVM memory manager uses the Java pool (sized by JAVA_POOL_SIZE) mainly for the in-memory representation of Java method and class definitions, and for static Java state that is migrated to session space at end-of-call in shared server mode.

The Java pool holds the Java execution code.

Streams Pool

·         The Streams pool stores buffered queue messages and provides memory for Oracle Streams capture processes and apply processes. The Streams pool is used exclusively by Oracle Streams.
·         Unless you specifically configure it, the size of the Streams pool starts at zero. The pool size grows dynamically as required by Oracle Streams. (The current sizes of these optional pools can be checked as shown below.)
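
A quick way to see how much memory the large pool, Java pool and Streams pool currently have (a sketch, assuming automatic SGA management and access to the v$ views):

select component, round(current_size/1024/1024) as current_mb
from v$sga_dynamic_components
where component in ('shared pool', 'large pool', 'java pool', 'streams pool', 'DEFAULT buffer cache');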

 Background Processes.

 The main background processes in the Oracle architecture are listed below (a query to see which background processes are running on your instance follows the list):
  • PMON
  • SMON
  • DBWR
  • LGWR
  • CKPT
  • ARCn
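
To list the background processes that are actually running on an instance, a commonly used query (assuming access to v$bgprocess) is:

SQL> select name, description from v$bgprocess where paddr <> '00' order by name;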


PMON
PMON is the Process Monitor which is responsible for recovering processes when the user process fails.
The PMON process performs the cleanup operations by performing the following tasks:
• Rolls back the user’s current transaction
• Releases all the locks that are held on tables or rows
• Frees other resources used by the users
• Restarts the dead dispatcher




SMON
SMON is the System Monitor which is responsible for recovering the system after a failure.

If an Oracle instance crashes, any changes that are made in the SGA are not written to the data files. When you restart the instance, the SMON background process automatically performs instance recovery by performing the following tasks:

• Rolling forward changes that are made in the online redo log files but not in the data files. Since all the committed transactions are written to the online redo log files, these are successfully recovered as a result of rolling forward the changes from the online redo log files to the data files.
• Opening the database. After the database is opened, users can log on and access any data that is not locked by unrecovered transactions.
• Rolling back all the uncommitted transactions.


DBWR
The database writer process (DBWn) writes the contents of buffers to data files. The DBWn processes are responsible for writing modified (dirty) buffers in the database buffer cache to disk.
When a buffer in the database buffer cache is modified, it is marked dirty. The primary job of the DBWn process is to keep the buffer cache clean by writing dirty buffers to disk
Note that DBWn does not write blocks at commit time; it writes dirty buffers when a server process cannot find a clean, reusable buffer after scanning a threshold number of buffers, when a checkpoint occurs, and at regular intervals.

LGWR
Entries are stored in the redo log buffer. The log writer process (LGWR) writes these redo entries to the redo log files. The redo log buffer works in a circular fashion, which means that it overwrites old entries. But before being overwritten, old entries must be copied to the redo log files.

LGWR process writes to redo file on below conditions.
  • When user performs commit.
  • After every three seconds.
  • When redo log buffer is 1/3 full.


CKPT
The checkpoint process (CKPT) is responsible for updating the control file and data file headers with checkpoint information and signals DBWn to write blocks to disk. Checkpoint information includes the checkpoint position, SCN, location in on-line redo log to begin recovery, and so on…
Remember CKPT does not write changed data to the data files.

ARCn
The archiver processes (ARCn) copy online redo log files to offline storage after a redo log switch occurs. These processes can also collect transaction redo data and transmit it to standby database destinations. ARCn processes exist only when the database is in ARCHIVELOG mode and automatic archiving is enabled.



DATABASE:


This is the part where the actual data resides. Let's have a look at the different types of files we have in an Oracle database.
DATA FILES
In an Oracle database, there can be one or more data files. Data files store the data dictionary, the user objects, and the before-images of the data blocks that are modified by current transactions.
            The data associated with schema objects in a tablespace is physically stored in one or more of the data files that constitute the tablespace.
            The data in a data file is read, when required, and stored in the database buffer cache. Modified or new data is not immediately written to the data file; it is first stored in the database buffer cache. Data is written to the data files from the DB buffer cache by the background process DBWn. (A quick listing of the data files is shown below.)
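
For example, to list the data files and their tablespaces (illustrative, assuming DBA privileges):

SQL> select file_name, tablespace_name, round(bytes/1024/1024) as mb from dba_data_files order by tablespace_name;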


Control File
A control file contains information about the associated database that is required for access by an instance, both at startup and during normal operation. Control file information can be modified only by Oracle.
No database administrator or user can edit a control file.
Control files store information such as the database name, the time stamp of the database creation, and the names and locations of the data files and redo log files.
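
To see where the control files of an instance are located (a simple check, assuming access to v$controlfile):

SQL> select name from v$controlfile;
SQL> show parameter control_files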




Redo log file
Redo log files are filled with redo records. A redo record, also called a redo entry, is made up of a group of change vectors, each of which is a description of a change made to a single block in the database.
Redo log files are used to recover the database.
Redo entries record data that you can use to reconstruct all changes made to the database, including the undo segments. Therefore, the redo log also protects rollback data. When you recover the database using redo data, the database reads the change vectors in the redo records and applies the changes to the relevant blocks. Every Oracle database has at least two redo log groups, each having at least one redo log file.
Redo records are buffered in a circular fashion in the redo log buffer of the SGA and are written to one of the redo log files by the Log Writer (LGWR) database background process. Whenever a transaction is committed, LGWR writes the transaction redo records from the redo log buffer of the SGA to a redo log file, and assigns a system change number (SCN) to identify the redo records for each committed transaction. Only when all redo records associated with a given transaction are safely on disk in the online logs is the user process notified that the transaction has been committed.
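
To look at the redo log groups and their members (illustrative, assuming access to the v$ views):

SQL> select group#, sequence#, round(bytes/1024/1024) as mb, members, status from v$log;
SQL> select group#, member from v$logfile order by group#;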

Parameter File
A parameter file is a file that contains a list of initialization parameters and a value for each parameter. You specify initialization parameters in a parameter file that reflect your particular installation. Oracle supports the following two types of parameter files:
Server Parameter Files

A server parameter file is a binary file that acts as a repository for initialization parameters. The server parameter file resides on the machine where the Oracle database server executes. Initialization parameters stored in a server parameter file are persistent, in that any changes made to the parameters while an instance is running can persist across instance shutdown and startup. Some of the parameters can be changed dynamically using ALTER SYSTEM SET commands, without a shutdown and startup.
Note: we cannot open and edit the server parameter file with a text editor.
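
For example, to check whether the instance was started with an SPFILE and to change a parameter persistently (the parameter and value below are purely illustrative):

SQL> show parameter spfile
SQL> alter system set open_cursors = 500 scope=both;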




Initialization Parameter Files

An initialization parameter file is a text file that contains a list of initialization parameters.
The name of the initialization parameter file is by default init<SID>.ora, and Oracle identifies and accesses the file in the default location $ORACLE_HOME/dbs.

However, we can give it any name we require and place it in any location, but at startup we then need to specify the path to the parameter file. It is a client-side file.

If we need to change or add any parameter, we have to edit the parameter file and then shut down and start up the database for the change to take effect. (An example of creating and using a PFILE is shown below.)
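
A small sketch of creating a text PFILE from the SPFILE and starting the instance with it (the path and file name are assumptions; adjust them to your environment):

SQL> create pfile='/tmp/initORCL.ora' from spfile;
SQL> shutdown immediate
SQL> startup pfile='/tmp/initORCL.ora'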


Password file

The password file is a binary file used to authenticate users who connect to the database with administrative privileges (for example, as SYSDBA).
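
To see which users are present in the password file (assuming access to v$pwfile_users):

SQL> select username, sysdba, sysoper from v$pwfile_users;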

Archive Log File
Oracle Database lets you save filled groups of redo log files to one or more offline destinations, known collectively as the archived redo log, or more simply the archive log. The process of turning redo log files into archived redo log files is called archiving. This process is only possible if the database is running in ARCHIVELOG mode.

An archived redo log file is a copy of one of the filled members of a redo log group. It includes the redo entries and the unique log sequence number of the identical member of the redo log group.
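
To check whether a database is running in ARCHIVELOG mode (a simple check from SQL*Plus as a privileged user):

SQL> archive log list
SQL> select log_mode from v$database;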

