SQL Server clustering vs. Oracle RAC

 

I did some research on SQL Server clustering a while ago to find out what high-availability and failover options SQL Server provides. Below are my findings. In short, SQL Server offers failover, but no high-availability option comparable to Oracle RAC.

1. SQL Server 2008 Peer-to-Peer Replication

  • Multiple nodes, each running its own instance
  • Each node has its own copy of the data
  • Every node is both a publisher and a subscriber
  • Does not scale well because of the complex architecture
  • Schema modifications are complex
  • Conflicts may arise if two nodes update the same row
  • In case of a conflict, the topology remains inconsistent until the conflict is resolved
  • Conflicts are resolved manually using the method described in
    http://technet.microsoft.com/en-us/library/bb934199.aspx

2. SQL Server 2008 Mirroring

  • One principal (primary) server and a mirror instance per database
  • Transaction log records are shipped from the principal to the mirror
  • The failover process is manual
  • A separate 'witness' server can be deployed to automate the failover
  • The mirror server does not participate in any transaction and simply waits for a failover
  • Equivalent to Oracle's standby database technology

3. SQL Server 2008 Failover Clustering

  • Two servers running on shared storage
  • All data and log files reside on the SAN and are shared
  • One server performs all transactions while the other waits for a failover
  • Microsoft Cluster Server takes care of the failover
  • Both instances have separate instance names, plus there is one cluster instance name
  • Clients connect to the cluster IP address and cluster instance name (see the query sketch after this list)
  • Failover is transparent, but a delay (in minutes) is required to mount the database on the failover instance and start it
  • There is an application blackout during the failover process
  • Reference: http://msdn.microsoft.com/en-us/library/aa479364.aspx
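
A quick way to see which physical node is currently hosting a clustered instance is a T-SQL sketch like the following; it only uses standard SERVERPROPERTY values:

SELECT SERVERPROPERTY('ServerName')                  AS virtual_server_name,   -- the name clients connect to
       SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS current_physical_node, -- node hosting the instance right now
       SERVERPROPERTY('IsClustered')                 AS is_clustered;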

4. SQL Server 2008 Active/Active Failover Clustering

  • Two instances running on shared storage
  • Two different SQL Server databases are set up, one on each server
  • Active/Active clustering is effectively two separate failover clusters
  • Each node runs its own primary instance and stands by as the secondary for the other node's instance
  • Both clusters run a synchronized copy of the database
  • Replication is set up between the two clusters to keep them synchronized
  • Clients see two different databases available to connect to
  • In case of a failure, one server runs both database instances, which may cause performance overhead
  • Write cost may increase because of replication and database synchronization
  • The application blackout affects only the clients connected to the failed instance
  • Peer-to-peer replication has conflicts (see No. 1)

5. SQL Server 2008 Federated Database

  • Multiple instances running on a network, connected to each other
  • Each instance carries part of the database
  • The complete table is always assembled using views and distributed SQL
  • Each instance has a view of the table built with UNION ALL across all instances, called a Distributed Partitioned View (DPV); see the sketch after this list
  • Complex to scale up and manage
  • Complex to modify the schema because of the multiple databases
  • May suffer from a hot-node syndrome when one node carries the most-used data
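
As a rough illustration (not from the original findings), a DPV is simply a view that UNION ALLs the member tables held on the participating instances; the linked-server, database and table names below are made up:

CREATE VIEW dbo.Orders_All
AS
SELECT * FROM Node1.SalesDB.dbo.Orders_Part1   -- member table on instance Node1
UNION ALL
SELECT * FROM Node2.SalesDB.dbo.Orders_Part2;  -- member table on instance Node2

Each member table carries a CHECK constraint on the partitioning column so the optimizer can route a query to the node that actually holds the requested rows.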

6. Oracle 11g RAC

  • Multiple nodes running on shared storage
  • All nodes participate
  • Nodes are connected to each other over a private interconnect network
  • All nodes service the single database
  • Scalable because there is a single database
  • The entire cluster fails if the SAN fails
  • A higher-performance interconnect is required for Cache Fusion as the number of nodes increases
  • Virtual IP addresses are used to connect to the servers (see the tnsnames.ora sketch after this list)
  • In case of failure of one node, clients connect to the other nodes on subsequent requests
  • 30-60 seconds of delay is required for failover
  • The application blackout affects only the clients connected to the failed instance
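
For illustration only, a client-side tnsnames.ora entry that lists both node VIPs might look like this (host and service names are made up):

RACDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (FAILOVER = yes)
    (CONNECT_DATA =
      (SERVICE_NAME = racdb)
      (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC))
    )
  )

With an entry like this, a connection attempt to a failed node's VIP fails quickly and the client moves on to the next address in the list.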

 


Deadlock Resolution: Killing Oracle Sessions

 

We often face the situation where an Oracle session has gone dead and an UPDATE command freezes forever. Usually it is the fault of a programmer who has messed up his session. In such a deadlock situation, the problem session needs to be killed.

The following is a query (which I got from some other blog I no longer remember) that we use to find locked objects. It is used to make sure that the object we are trying to update is actually locked by another session:

select
   c.owner,
   c.object_name,
   c.object_type,
   b.sid,
   b.serial#,
   b.status,
   b.osuser,
   b.machine
from
   v$locked_object a ,
   v$session b,
   dba_objects c
where
   b.sid = a.session_id
and
   a.object_id = c.object_id;

And this query tells which are the problem sessions:

select l1.sid, ' IS BLOCKING ', l2.sid
from v$lock l1, v$lock l2
where l1.block = 1 and l2.request > 0
and l1.id1 = l2.id1
and l1.id2 = l2.id2;

So now we have the SID of the problem session and we only need the serial number to kill it.

SELECT serial# FROM v$session WHERE sid = <sid>;

And now kill it:

ALTER SYSTEM KILL SESSION '<sid>,<serial#>';

Easy!
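
The two lookups can also be combined into a single statement that prints ready-made kill commands. This is just a convenience sketch built on the same v$ views used above:

select 'ALTER SYSTEM KILL SESSION ''' || s.sid || ',' || s.serial# || ''';' as kill_command
from v$session s
where s.sid in (select l.sid from v$lock l where l.block = 1);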

 

 


Oracle: Drop Database Command

 

Oracle introduced a new command in 10g to drop a database. Before that, there was a set of steps to follow in order to get rid of a database: deleting all data files from the file system, deleting the control files, the parameter files, and so on. With this command everything is deleted by the RDBMS and there is no chance of forgetting a file.

The command is simple, “DROP DATABASE”.

And the procedure to use this command is:

sqlplus / as sysdba

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup mount exclusive restrict
ORACLE instance started.

SQL> exit

rman target /

RMAN> drop database;

The database has to be started in exclusive restrict mode.

If you are running RAC, you will have to shut down all instances and mount the database exclusively on one node only. The parameter file needs to be changed so that the parameter "cluster_database" is set to false. My suggestion is to create another parameter file from the SPFILE and mount the database using that file:

CREATE PFILE='/home/oracle/anotherpfile.ora' FROM SPFILE;

The file can then be modified and all RAC/node-specific parameters removed.
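
Put together for RAC, the whole sequence could look roughly like this. It is only a sketch; the database name MYDB and the pfile path are examples:

sqlplus / as sysdba
SQL> CREATE PFILE='/home/oracle/anotherpfile.ora' FROM SPFILE;
SQL> exit

(edit the pfile: set cluster_database=false and remove node-specific parameters)

srvctl stop database -d MYDB

sqlplus / as sysdba
SQL> STARTUP MOUNT EXCLUSIVE RESTRICT PFILE='/home/oracle/anotherpfile.ora';
SQL> exit

rman target /
RMAN> drop database;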

 


 


Oracle Parameter File: Order of lookup

 

Oracle has two types of parameter file for the initial parameter configuration: the PFILE and the SPFILE, i.e. the text parameter file and the server parameter file. They are used to specify parameter values at database startup. The SPFILE is typically used in a RAC environment and a PFILE in a stand-alone database environment.

The other day I had a problem starting up my RAC, and it was later discovered that the problem was caused by these startup files. The recommended way of using an SPFILE in a RAC environment is to specify the (shared) location of your SPFILE in the PFILE. All nodes have their own PFILEs, which point to the same SPFILE located on shared storage. I had set it all up correctly, but it was not working. This was because Oracle searches for the configuration files in that directory in the following order:

  • spfileSID.ora
  • spfile.ora
  • initSID.ora
  • init.ora

So my initSID.ora was set up, but it was not being used because there was already an spfileSID.ora in the same location, which always takes precedence.
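
For reference, the local initSID.ora on each node then contains nothing but a pointer to the shared SPFILE. A minimal sketch (the database name and ASM path are just examples):

SPFILE='+DATA/racdb/spfileracdb.ora'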

 

 


SQL: Generating Series of Numbers in Oracle

 

Generating a series of numbers is a very common practice in SQL environments. The objectives include, but are not limited to, generating test data and constants to use in SQL statements. Oracle provides a number of ways to accomplish this, but the simplest to me is the CONNECT BY clause.

SELECT ROWNUM N FROM dual
CONNECT BY LEVEL <= 20

The statement generates the integers from 1 to 20. It can be altered in many ways according to the requirements, e.g. by adding a WHERE clause to offset the series or by applying a mathematical expression to N to generate different kinds of series. For example:

SELECT (ROWNUM * 0.02 + 3) "N" FROM dual
WHERE LEVEL > 10
CONNECT BY LEVEL <= 20

Another use of a series is to provide constants in SQL statements. For example, I used the days of the week (1-7) and the hours of the day (0-23) to generate some statistics by hour and day of week.

SELECT DECODE (DN,
     2, 'MON',
     3, 'TUE',
     4, 'WED',
     5, 'THU',
     6, 'FRI',
     7, 'SAT',
     1, 'SUN') "Day of Week",
Hour,
(SELECT COUNT(*) FROM tblsupport
 WHERE tktopendate > SYSDATE - 90
 AND TO_CHAR(tktopendate, 'D') = DN        -- 'D' numbering is NLS-dependent; 1 = Sunday here
 AND TO_CHAR(tktopendate, 'HH24') = Hour) "Ticket Count"
FROM
(SELECT ROWNUM DN FROM dual
  CONNECT BY LEVEL <= 7) WeekDays,
(SELECT ROWNUM - 1 Hour FROM dual          -- 0-23 to match HH24
 CONNECT BY LEVEL <= 24) Hours

This statement generates statistics about the number of tickets received during the last three months, grouped by day of week and hour.

 

 


Using the Oracle DBMS_LOCK Package

As standard DBMS practice, Oracle provides all sorts of row-level and table-level locking mechanisms for data integrity. This built-in locking is strongly coupled with database transactions, e.g. a lock acquired through the LOCK TABLE command is released automatically at the end of the transaction. Sometimes you need explicit locking with full control, independent of transactions. This feature is provided by the DBMS_LOCK package.

The other day I was stuck with a legacy system where I had to provide a locking mechanism around calls to a stored procedure at the database level. The procedure is called from a web farm consisting of a number of web servers, and I wanted to make sure that only one web server accesses the procedure at any given time. Although I hated having such a procedure in my database, it looked like the only solution to the problem without changing some fundamentals. The DBMS_LOCK package came to my rescue, and here is what I learnt from it.

By default this package is available to the SYS user only (maybe because of its significance and impact). So if you try to access it from another user you will get the error "PLS-00201: identifier 'DBMS_LOCK' must be declared". The solution is to write procedures on top of the DBMS_LOCK package and grant execute on those procedures. The DBA can thus regulate which operations are allowed to the user and which are forbidden.

CREATE OR REPLACE PROCEDURE xmode_lock_procedure (lockname VARCHAR2, v_result OUT NUMBER) IS
  v_lockhandle VARCHAR2(200);
BEGIN
  -- turn the caller-supplied lock name into a unique lock handle
  dbms_lock.allocate_unique(lockname, v_lockhandle);
  -- request the lock in exclusive mode; the return code is passed back to the caller
  v_result := dbms_lock.request(v_lockhandle, dbms_lock.x_mode);
END;
/

Grant execute on this procedure to the user:

GRANT EXECUTE ON xmode_lock_procedure TO user1;

Granting execute to PUBLIC is never recommended due to the sensitivity of this procedure.

Similarly a procedure which releases this lock can be written as:

CREATE OR REPLACE PROCEDURE lock_release (lockname VARCHAR2, v_result OUT NUMBER) IS
  v_lockhandle VARCHAR2(200);
BEGIN
  -- look up the same unique handle for this lock name
  dbms_lock.allocate_unique(lockname, v_lockhandle);
  -- release the lock; the return code is passed back to the caller
  v_result := dbms_lock.release(v_lockhandle);
END;
/

This releases the lock. In the meantime, if another user requests the same lock, that session is blocked until the holder releases it. Note that lock identity is decided by the lock name passed to the procedures.

The result returned by these procedures can be:

0: Success
1: Timeout occurred
2: Deadlock occurred
3: Parameter error
4: Already owned
5: Illegal lock handle
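
A minimal usage sketch from a calling session (the lock name MY_APP_LOCK is just an example):

DECLARE
  v_result NUMBER;
BEGIN
  xmode_lock_procedure('MY_APP_LOCK', v_result);   -- request the exclusive lock
  IF v_result = 0 THEN
    -- only one session at a time gets past this point; do the protected work here
    lock_release('MY_APP_LOCK', v_result);         -- release when done
  END IF;
END;
/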

 

 

 


Empty String vs. NULL Value in Oracle

Today I was writing a query trying to eliminate NULL values and empty strings from a column. I found a strange behaviour in Oracle: it considers NULL and the empty string ('') to be the same, which is not ANSI-compliant. Other database systems that are more ANSI-compliant, such as PostgreSQL, differentiate between NULL and '' correctly.

Check this out:

SQL> create table tes (a varchar2(2));
Table created.
SQL> insert into tes values (''); -- inserted an empty string
1 row created.
SQL> select * from tes where a is not null;
no rows selected

It gives me no rows, although the table contains a string of zero length.

SQL> select length(a) from tes;
 LENGTH(A)
----------

Even the LENGTH function returns NULL; it should have said "0".

This brings another interesting scenario. What if you want to find out empty strings in your column?

SQL> select * from tes where a = '';

no rows selected

This does not return any rows because the empty string is stored as NULL, and NULL is never equal to anything, not even another NULL. The correct query would be:

SQL> select * from tes where trim(a) is null;
A
--

Now it returns the row, because the TRIM function returns NULL if no characters are left in the given string after trimming (and here the stored value is already NULL). A more ANSI-compliant database would have returned an empty string ('') instead.
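
Coming back to the original goal of eliminating both NULLs and empty strings, a single predicate is enough in Oracle, since '' is stored as NULL. A sketch using the same test table:

SELECT *
FROM tes
WHERE TRIM(a) IS NOT NULL;   -- drops NULLs, empty strings, and all-blank strings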