The instance comprises the collection of Oracle-related memory and background processes that run on a computer system.
This allows an application or user to connect to either computer and have access to a single coordinated set of data.
The main aim of Oracle RAC is to implement a clustered database that provides performance, scalability, resilience, and high availability of data at the instance level.[4] RAC administrators can use the srvctl tool to manage RAC configurations.[5] Prior to Oracle 9, network-clustered Oracle databases used a storage device as the data-transfer medium (meaning that one node would write a data block to disk and another node would read that data from the same disk), which had the inherent disadvantage of lackluster performance.
Oracle 9i addressed this issue: RAC uses a dedicated network connection for communications internal to the cluster.[6] The Trace File Analyzer (TFA) aids in collecting RAC diagnostic data.
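The tools mentioned above can be illustrated with a few typical commands. This is a sketch only: the database name `orcl` and instance name `orcl2` are hypothetical, the commands assume a working Grid Infrastructure installation, and exact option spellings vary between Oracle versions.

```shell
# Hypothetical database "orcl" on a RAC cluster; requires Oracle
# Grid Infrastructure and the srvctl/tfactl tools on the PATH.

# Check the status of all instances of the RAC database:
srvctl status database -d orcl

# Show the stored configuration for the database:
srvctl config database -d orcl

# Stop a single instance without stopping the whole database:
srvctl stop instance -d orcl -i orcl2

# Collect recent diagnostic data with Trace File Analyzer:
tfactl diagcollect -last 1h
```

Because srvctl operates on the cluster registry rather than on one node's processes, the same commands work from any node in the cluster.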
In RAC, a write transaction must take ownership of the relevant area of the database: typically, this involves a request across the cluster interconnect (a local IP network) to transfer data-block ownership from another node to the node that wishes to write. This takes a relatively long time (from a few to tens of milliseconds) compared to a single database node operating in memory.
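The ownership-transfer cost described above can be modeled with a toy sketch. This is not Oracle's actual Cache Fusion implementation; it is a minimal illustration, with invented names, of the rule that a node must acquire block ownership (possibly from a remote node, over the interconnect) before it can write.

```python
# Toy model (not Oracle code): per-block ownership that must move
# between nodes before a write, mimicking the interconnect round trip
# that makes remote writes slower than local in-memory operations.

class ToyCluster:
    """Tracks which node currently owns each data block."""

    def __init__(self, nodes):
        self.nodes = set(nodes)
        self.owner = {}       # block_id -> owning node
        self.transfers = 0    # count of cross-node ownership moves

    def write(self, node, block_id):
        current = self.owner.get(block_id)
        if current is not None and current != node:
            # Simulates the interconnect request: ownership must be
            # transferred from the current owner before writing.
            self.transfers += 1
        self.owner[block_id] = node

cluster = ToyCluster({"node1", "node2"})
cluster.write("node1", 42)  # first write: node1 takes ownership
cluster.write("node1", 42)  # local re-write: no transfer needed
cluster.write("node2", 42)  # remote write: ownership moves to node2
print(cluster.transfers)    # 1
```

The key point the model captures is that only writes touching blocks owned elsewhere pay the transfer cost, which is why workloads that partition their writes by node scale better on RAC.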
Oracle 11g made many enhancements in this area and performs considerably better than earlier versions for read-only workloads.
DBMS vendors and industry analysts regularly debate the matter; for example, Microsoft touts a comparison of its SQL Server 2005 with Oracle 10g RAC.
In late 2009, IBM announced DB2 pureScale, a shared-disk clustering scheme for DB2 9.8 on AIX that mimics the Parallel Sysplex implementation behind DB2 data sharing on the mainframe.