Adding TODOs to your C# code

Having TODO items in your code is inevitable. I have yet to see a project where we do not have to come back and alter a few things later. In a recent project I had to write an integration module for a financial transactions settlement system that was not completely ready. That settlement system was accessible to me through web services, but many of those web services were just not ready yet. I had to write a lot of code assuming that certain methods with certain parameters would be available to me some day, hence the use of TODO markers. Visual Studio makes this very convenient: it recognizes a few keywords like TODO, HACK or UNDONE which can be prepended to a comment to make it special.

For example a comment like:

// TODO: this needs to be done
// MySystem.Method1(parameters);

Once you have a comment like this in your code it becomes a part of your task list. The task list can be viewed at any time through the menu View -> Task List; by default it is displayed alongside the Error List. The Task List window has its own little drop-down menu with two options, User Tasks and Comments. Select the Comments option and every TODO, HACK or other special comment in your code will be displayed there. It is a perfect way to create reminders about what still needs to be done in your code so you don’t miss anything by mistake.

And yes, you can create your own keywords as well. This MSDN article (http://msdn.microsoft.com/en-us/library/zce12xx2(v=vs.80).aspx) shows how to do that.
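As a quick illustration, once a custom token (say REVIEW) has been registered under Tools -> Options -> Environment -> Task List as the article describes, a comment like the one below shows up in the Task List just like the built-in keywords:

// REVIEW: double-check the rounding rules with the settlement team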


Checking Exceptions in MsTest

So, we found out that MsTest does not have an Assert method to check if your method throws an exception. But we can take care of that ourselves.

/// <summary>
/// Extension methods for MsTest Assert
/// </summary>
public static class AssertEx
{
    /// <summary>
    /// Checks that the piece of code throws an exception
    /// of the expected type
    /// </summary>
    /// <typeparam name="T">The expected exception type</typeparam>
    /// <param name="action">The code expected to throw</param>
    /// <returns>The caught exception</returns>
    public static T Throws<T>(Action action) where T : Exception
    {
        try
        {
            action();
        }
        catch (T ex)
        {
            return ex;
        }
        catch (Exception ex)
        {
            Assert.Fail("Expected exception of type {0} but an exception of type {1} was thrown.", typeof(T), ex.GetType());
        }

        Assert.Fail("Expected exception of type {0} but no exception was thrown.", typeof(T));
        return null;
    }
}

This method can be used to run methods which are expected to throw exceptions.

AssertEx.Throws<MyException>(() => myObject.RunMethodRun());

This will make sure that RunMethodRun() throws a MyException, or your unit test will fail.
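For example, a test method using this helper might look like the following; MyService, its Process method and the message text are made-up names for illustration only:

[TestMethod]
public void Process_WithInvalidInput_ThrowsMyException()
{
    var service = new MyService();

    var ex = AssertEx.Throws<MyException>(() => service.Process(null));

    Assert.AreEqual("Input cannot be null", ex.Message);
}

Since Throws<T> returns the caught exception, you can keep asserting on its message or other properties after the call.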

Database Schema Versioning

Your database schema is an important part of your source code and you would like to release it along with your application, especially if it is a web application. The other reason to version your database is continuous integration: every time a team member makes a change in the code, the build server picks it up and rebuilds the whole application. If that change contains a database update, you want that update to happen on the build server automatically. This is where a database script runner tool comes in.

I had been looking for tools like that on the internet but could not find a suitable script runner: the one which does the job is not open source, and the one which is open source does not do the job right. So I decided to write my own.

This is a class library which takes a connection string, a file name format string and the path to the scripts directory. The connection string is of course the address of your database, and the file name format is a handy feature that lets you use any naming convention for your SQL scripts. For instance, the format "my_%.sql" will pick up all files named my_<sequence number>.sql, where the sequence number is a 4-digit integer.

Here is the code. I use it as a library and run it at startup in one of my services, but you could create a console application and reference the assembly in it to use it as a standalone tool. I might create that console application later and will share it here if I do.

    using System;
    using System.Data;
    using System.Data.SqlClient;
    using System.IO;

    public class SchemaUpdater
    {
        private readonly string _connectionString;
        private readonly string _nameformat;
        private readonly string _scriptdir;

        public SchemaUpdater(string connectionstring, string format, string scriptpath)
        {
            _connectionString = connectionstring;
            _nameformat = format;
            _scriptdir = scriptpath;
        }

        public void Update()
        {
            using (var connection = new SqlConnection(_connectionString))
            {
                connection.Open();
                var scriptseparator = new[] {"\nGO"};

                // Make sure we have a schema versions table
                var scriptfile = string.Format("{0}\\{1}", _scriptdir, "versions-table.sql");
                var transaction = connection.BeginTransaction(IsolationLevel.Serializable);
                try
                {
                    Array.ForEach(File.ReadAllText(scriptfile).Split(scriptseparator, StringSplitOptions.RemoveEmptyEntries),
                        (sql => new SqlCommand(sql, connection, transaction).ExecuteNonQuery()));
                    transaction.Commit();
                }
                catch(Exception)
                {
                    transaction.Rollback();
                    throw;
                }

                // Now run the baseline
                scriptfile = string.Format("{0}\\{1}", _scriptdir, "base.sql");
                transaction = connection.BeginTransaction(IsolationLevel.Serializable);
                try
                {
                    Array.ForEach(File.ReadAllText(scriptfile).Split(scriptseparator, StringSplitOptions.RemoveEmptyEntries),
                        (sql => new SqlCommand(sql, connection, transaction).ExecuteNonQuery()));
                    transaction.Commit();
                }
                catch (Exception)
                {
                    transaction.Rollback();
                    throw;
                }

                // Now run all the files
                var command = new SqlCommand("SELECT Version FROM SchemaVersion", connection);
                var version = command.ExecuteScalar();
                command.Dispose();

                var start = version == null ? 0 : Convert.ToInt32(version)+1;

                var filename = _nameformat.Replace("%", start.ToString("0000"));
                scriptfile = string.Format("{0}\\{1}", _scriptdir, filename);

                transaction = connection.BeginTransaction(IsolationLevel.Serializable);

                try
                {
                    while (File.Exists(scriptfile))
                    {
                        Array.ForEach(File.ReadAllText(scriptfile).Split(scriptseparator, StringSplitOptions.RemoveEmptyEntries),
                            (sql => new SqlCommand(sql, connection, transaction).ExecuteNonQuery()));

                        start++;
                        filename = _nameformat.Replace("%", start.ToString("0000"));
                        scriptfile = string.Format("{0}\\{1}", _scriptdir, filename);
                    }

                    new SqlCommand(string.Format("Update SchemaVersion SET Version={0}", start-1), connection, transaction).ExecuteNonQuery();

                    transaction.Commit();
                }
                catch(Exception)
                {
                    transaction.Rollback();
                    throw;
                }
            }
        }
    }
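
To use it, construct the updater and call Update(); the connection string and scripts path below are placeholders, and the file format matches the my_<sequence number>.sql convention described above:

    var updater = new SchemaUpdater(
        "Data Source=.;Initial Catalog=MyDatabase;Integrated Security=True",
        "my_%.sql",
        @"C:\MyApp\Scripts");
    updater.Update();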

This is a small piece of code but it works for me; it takes care of the GO separator in the SQL scripts and runs the commands inside transactions. It relies on my scripts “versions-table.sql”, which creates the schema version table in the database if it does not exist, and “base.sql”, which contains any SQL statements you want to run to create a baseline schema.
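
For reference, a minimal versions-table.sql could look something like the sketch below. This is only an illustration; the only thing the updater actually requires is a SchemaVersion table with a single Version row it can read and update:

    IF OBJECT_ID('SchemaVersion', 'U') IS NULL
    BEGIN
        -- Seed a row so the updater's final UPDATE has something to modify;
        -- numbered scripts will then start at my_0001.sql
        CREATE TABLE SchemaVersion (Version INT NOT NULL)
        INSERT INTO SchemaVersion (Version) VALUES (0)
    END
    GO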

ASP.NET MVC: Binding arrays in model

One of the great features of ASP.NET MVC is automatic model binding. This is one of those things which look trivial but are in fact very important; it reduces lines of code and complexity by miles. For instance, say you want to present a number of text inputs to the user and read multiple locations:

 

<input type="text" name="Location1" id="Location1" />
<input type="text" name="Location2" id="Location2" />
<input type="text" name="Location3" id="Location3" />

 

Now this would be tedious in your action:

 

public ActionResult UserData(string location1, string location2, string location3)
{
   // do something
   return View();
}

 

And what if you don’t know how many locations there will be? That is where MVC shows its magic. Bring it on:

public ActionResult UserData(string[] locations)
{
   // do something
   return View();
}

 

 

Hmmm… pretty simple. Now keep adding location inputs in your view and MVC will bind them all to the locations parameter. The key here is that the name of your input controls should be locations as well, with an index like a proper array.

 

<input type="text" name="location[0]" id="location[0]" />
<input type="text" name="location[1]" id="location[1]" />
<input type="text" name="location[2]" id="location[2]" />

 

The magic of model binding does not end there. If you have another array of postcodes along with the locations, the method signature of your action will start to look dirty again. The MVC solution is to bind everything into a model object and pass that object to the action. Now your model object will look like this:

 

public class HomeModel
{
     public IList<string> Locations { get; set; }
     public IList<string> PostCodes { get; set; }
}

 

 

And the action signature will accept only one parameter which is the model.

 

public ActionResult UserData(HomeModel model)

 

Yes, everything is in that object, pretty neat. Make sure your view inherits from System.Web.Mvc.ViewPage<HomeModel> so it knows about its model type.
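
In the view, the input names just need to follow the model's property names with an index, along the same lines as before, for example:

<input type="text" name="Locations[0]" id="Locations[0]" />
<input type="text" name="Locations[1]" id="Locations[1]" />
<input type="text" name="PostCodes[0]" id="PostCodes[0]" />
<input type="text" name="PostCodes[1]" id="PostCodes[1]" />

The default model binder will populate the Locations and PostCodes lists from these fields when the form is posted.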

 

 

 


SQL Server clustering vs. Oracle RAC

 

I did some research on SQL Server clustering a while ago to find out what high-availability and fail-over options SQL Server provides. Below are my findings. In short, SQL Server has fail-over but no high-availability comparable to Oracle RAC.

1. SQL Server 2008 Peer-to-Peer Replication

  • Multiple nodes running their own instances
  • Each node has its own copy of data
  • Every node is a publisher and subscriber at the same time
  • Not scalable because of complex architecture
  • Complex to modify schema
  • Conflicts may arise if two nodes update the same row
  • In case of conflict, the topology remains inconsistent until the conflict is resolved
  • Conflict is resolved manually using the method described in
    http://technet.microsoft.com/en-us/library/bb934199.aspx

2. SQL Server 2008 Mirroring

  • One primary server and one or more mirror instances
  • Periodic Log Shipping between primary and secondary servers
  • Failover process is manual
  • A separate ‘witness’ server can be deployed to automate the fail over
  • Secondary servers do not participate in any transaction and just wait for the failover
  • Equivalent to Oracle standby database technology

3. SQL Server 2008 Failover Clustering

  • Two servers running on a shared storage
  • All data and logs reside on the SAN and are shared
  • One server performs all transactions and the other waits for the failover
  • Microsoft Cluster Server takes care of the fail over
  • Both instances have separate instance names and one cluster instance name
  • Clients connect to the cluster IP address and cluster instance name
  • Failover is transparent but a delay (in minutes) is required to mount the database on the failover instance and start it
  • There is an application blackout during fail over process
  • Reference (http://msdn.microsoft.com/en-us/library/aa479364.aspx)

4. SQL Server 2008 Active/Active Failover Clustering

  • Two instances running on a shared storage
  • Two different SQL Server databases setup on both servers
  • Active/Active Clustering is effectively two different failover clusters
  • Each node in the cluster is running one primary instance and one secondary instance of the other node
  • Both clusters run a synchronized copy of the database
  • Replication is setup between both clusters to keep them synchronized
  • Clients see two different databases available to connect to
  • In case of failure, one server runs both database instances which may cause performance overhead
  • Write cost may increase because of replication and database synchronization
  • Application blackout will only be for the clients connected to the failed instance
  • Peer-to-Peer replication has conflicts (See No. 1)

5. SQL Server 2008 Federated Database

  • Multiple instances running in a network connected to each other
  • Each instance carries part of the database
  • The complete table is always formed using views and distributed SQL
  • Each instance has a view of the table, using UNION ALL across all instances, called a DPV (Distributed Partitioned View)
  • Complex to scale up and manage
  • Complex to modify the schema because of multiple databases
  • May suffer from a hot-node syndrome when one node carries the most used data

6. Oracle 11g RAC

  • Multiple Nodes running on a shared storage
  • All nodes are participating
  • Nodes are connected to each other over an interconnect network
  • All nodes service the single database
  • Scalable because of single database
  • Entire cluster fails if SAN fails
  • Higher performance inter-connect required for cache fusion as nodes increase
  • Virtual IP Address is used to connect to all servers
  • In case of failure of one node, clients will connect to other nodes on the same IP address on subsequent requests
  • 30-60 seconds of delay required for failover
  • Application blackout will only be for the clients connected to the failed instance

 


Deadlock Resolution: Killing Oracle Sessions

 

We often face a situation where an Oracle session is dead and an UPDATE command freezes forever. Usually it is the fault of a programmer who has messed up his session. In such a deadlock situation, the problem session needs to be killed.

Following is a query (picked up from another blog I no longer remember) that we use to find which objects are locked. It is used to make sure that the object we are trying to update is actually locked by another session:

select
   c.owner,
   c.object_name,
   c.object_type,
   b.sid,
   b.serial#,
   b.status,
   b.osuser,
   b.machine
from
   v$locked_object a ,
   v$session b,
   dba_objects c
where
   b.sid = a.session_id
and
   a.object_id = c.object_id;

And this query tells us which session is blocking which:

select l1.sid, ' IS BLOCKING ', l2.sid
from v$lock l1, v$lock l2
where l1.block =1 and l2.request > 0
and l1.id1=l2.id1
and l1.id2=l2.id2;

So now we have the SID of the problem session and we only need the serial number to kill it.

SELECT serial# FROM v$session WHERE sid=<sid>
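If you prefer, the SID and serial number of the blocking sessions can be picked up together with something like:

select s.sid, s.serial#, s.username, s.machine
from v$session s
where s.sid in (select l1.sid from v$lock l1 where l1.block = 1);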

And now kill it:

ALTER SYSTEM KILL SESSION '<sid>,<serial#>'

Easy!

 

 


Oracle: Drop Database Command

 

Oracle introduced a new command in 10g to drop a database. Before that, there was a set of procedures to follow in order to get rid of a database, which involved deleting all the data files from the file system, the control files, the parameter files, and so on. Now with this command everything is deleted by the RDBMS and there is no chance of forgetting a file.

The command is simple, “DROP DATABASE”.

And the procedure to use this command is:

sqlplus / as sysdba

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup mount exclusive restrict
ORACLE instance started.

SQL> exit

rman target /

drop database;

The database has to be started in exclusive restrict mode.

If you are running RAC you will have to shut down all instances and mount the database exclusively on one node only. The parameter file will need to be changed and the parameter “cluster_database” should be set to false. My suggestion is to create another parameter file from the spfile and mount the database using that file.

CREATE PFILE='/home/oracle/anotherpfile.ora' FROM SPFILE;

The file can be modified and all RAC/node specific parameters can be removed.
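After editing the file (removing the instance-specific parameters and setting cluster_database to false), the database can be mounted with that pfile before dropping it; something along these lines should work:

sqlplus / as sysdba

SQL> startup mount exclusive restrict pfile='/home/oracle/anotherpfile.ora'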

 


 
