2LeggedSpider

Getting specific with the Specification Pattern

Posted in C#, patterns by Sumit Thomas on March 23, 2011

The idea of the Specification pattern, according to Martin Fowler, is to "separate the statement of how to match a candidate, from the candidate object that it is matched against. As well as its usefulness in selection, it is also valuable for validation and for building to order."

In simple terms, this pattern helps us check whether an object satisfies certain criteria. Well, we do that all the time in our code, don't we? For instance, we check whether the data in an object that we send to a web service or database is properly validated against a business rule, or we check conditions on an object's properties to fetch a subset of objects from a collection using, say, LINQ. Since we already do these things, why do we need a separate pattern for it?

The biggest advantage of the Specification pattern is that we can package criteria definitions into separate, reusable units and check them against an object wherever the need arises. We then have the flexibility to change a criteria definition in one place as the business requirement changes, instead of changing it in all the places where the criteria would otherwise have been duplicated. OK, enough theory; let's see how we can implement the Specification pattern to understand it better.

We'll start by creating the core framework for the Specification. Let's create an interface ISpecification with the following definition:

public interface ISpecification<T>
{
     bool IsSatisfiedBy(T t);
}

Now, as I mentioned earlier, we can check whether an object satisfies a single condition or a set of conditions. To make composition easy, we'll create classes that perform the logical And, Or and Not operations on the available specifications.

    public class AndSpecification<T> : ISpecification<T>
    {
        private readonly ISpecification<T> spec1;
        private readonly ISpecification<T> spec2;
        public AndSpecification(ISpecification<T> s1, ISpecification<T> s2)
        {
            spec1 = s1;
            spec2 = s2;
        }

        public bool IsSatisfiedBy(T t)
        {
            return spec1.IsSatisfiedBy(t) && spec2.IsSatisfiedBy(t);
        }

    }

    public class OrSpecification<T> : ISpecification<T>
    {
        private readonly ISpecification<T> spec1;
        private readonly ISpecification<T> spec2;
        public OrSpecification(ISpecification<T> s1, ISpecification<T> s2)
        {
            spec1 = s1;
            spec2 = s2;
        }

        public bool IsSatisfiedBy(T t)
        {
            return spec1.IsSatisfiedBy(t) || spec2.IsSatisfiedBy(t);
        }
    }

    public class NotSpecification<T> : ISpecification<T>
    {
        private readonly ISpecification<T> spec;
        public NotSpecification(ISpecification<T> spec)
        {
            this.spec = spec;
        }

        public bool IsSatisfiedBy(T t)
        {
            return !spec.IsSatisfiedBy(t);
        }
    }

Next, we will create extension methods that let us chain the required specifications together.

    public static class SpecExtensions
    {
        public static ISpecification<T> And<T>(this ISpecification<T> s1, ISpecification<T> s2)
        {
            return new AndSpecification<T>(s1, s2);
        }
        public static ISpecification<T> Or<T>(this ISpecification<T> s1, ISpecification<T> s2)
        {
            return new OrSpecification<T>(s1, s2);
        }
        public static ISpecification<T> Not<T>(this ISpecification<T> s)
        {
            return new NotSpecification<T>(s);
        }
    }
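Before moving on, here is a quick self-contained sanity check of the core pieces. It repeats the interface, AndSpecification and the And extension from above, and adds a PredicateSpecification helper; that helper is my own invention for demo purposes, not part of the framework described here:

```csharp
using System;

public interface ISpecification<T>
{
    bool IsSatisfiedBy(T t);
}

public class AndSpecification<T> : ISpecification<T>
{
    private readonly ISpecification<T> spec1;
    private readonly ISpecification<T> spec2;

    public AndSpecification(ISpecification<T> s1, ISpecification<T> s2)
    {
        spec1 = s1;
        spec2 = s2;
    }

    public bool IsSatisfiedBy(T t)
    {
        return spec1.IsSatisfiedBy(t) && spec2.IsSatisfiedBy(t);
    }
}

public static class SpecExtensions
{
    public static ISpecification<T> And<T>(this ISpecification<T> s1, ISpecification<T> s2)
    {
        return new AndSpecification<T>(s1, s2);
    }
}

// Hypothetical helper (not from the framework above): wraps a lambda so we
// can build throwaway specifications for testing.
public class PredicateSpecification<T> : ISpecification<T>
{
    private readonly Func<T, bool> predicate;

    public PredicateSpecification(Func<T, bool> predicate)
    {
        this.predicate = predicate;
    }

    public bool IsSatisfiedBy(T t)
    {
        return predicate(t);
    }
}

public class Program
{
    public static void Main()
    {
        ISpecification<int> positive = new PredicateSpecification<int>(n => n > 0);
        ISpecification<int> even = new PredicateSpecification<int>(n => n % 2 == 0);

        // Composed specification: positive AND even.
        Console.WriteLine(positive.And(even).IsSatisfiedBy(4));   // True
        Console.WriteLine(positive.And(even).IsSatisfiedBy(3));   // False
        Console.WriteLine(positive.And(even).IsSatisfiedBy(-2));  // False
    }
}
```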

That pretty much forms the core implementation of the Specification pattern. Now let's get to the fun part, where we put the pattern to use.

Let's assume you are creating a module for a fictional employee management application to determine which employees qualify for promotion to manager. Let's start by creating an Employee class as follows:

    public class Employee
    {
        public int EmployeeId { get; set; }
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public int TotalExperience { get; set; }
        public ExcelCompetency ExcelCompetency { get; set; }
        public bool PotentialManager
        {
            get
            {
                return ExcelCompetency == Domain.ExcelCompetency.High || ExcelCompetency == Domain.ExcelCompetency.Medium;
            }
        }
    }

Here ExcelCompetency is an enum with the values High, Medium and Low. Assume the specification given to you is that only employees with High or Medium competency in Excel can become managers. The PotentialManager property checks this condition and returns a boolean based on the value assigned to ExcelCompetency.
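The enum itself isn't shown in the post; based on how it is used above (the property qualifies it as Domain.ExcelCompetency), it presumably looks something like this. The Domain namespace and member order are assumptions:

```csharp
using System;

// Assumed shape of the enum described in the post; namespace and ordering
// are guesses based on the surrounding code, not taken from the original.
namespace Domain
{
    public enum ExcelCompetency
    {
        Low,
        Medium,
        High
    }
}

public class Program
{
    public static void Main()
    {
        Console.WriteLine(Domain.ExcelCompetency.High);  // High
    }
}
```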

Though the PotentialManager property helps us fetch the subset of employees qualified to be managers from a list of employees, the condition inside it might also be used elsewhere in the application. In that case, if the company later decides that only employees with High ExcelCompetency can become managers, we would have to change the condition everywhere it is implemented. That is not ideal, and it is where the Specification pattern comes in.

Let's start by creating our own implementation on top of the Specification framework:

    public class PotentialManagerSpecification : ISpecification<Employee>
    {
        public bool IsSatisfiedBy(Employee employee)
        {
            return employee.ExcelCompetency == ExcelCompetency.High || employee.ExcelCompetency == ExcelCompetency.Medium;
        }
    }

Now our PotentialManager property could be changed to…

        public bool PotentialManager
        {
            get
            {
                var potentialManagerSpec = new PotentialManagerSpecification();
                return potentialManagerSpec.IsSatisfiedBy(this);
            }
        }

Now the condition for satisfying the requirement lives independently of the entity object and can be changed at any time without touching the entity.

Now let's assume the management has laid down a condition that managers should not only satisfy the Excel-competency requirement but also have more than 10 years of experience. To capture this condition, let's create another specification, ManagerRequiredExperienceSpecification:

    public class ManagerRequiredExperienceSpecification : ISpecification<Employee>
    {
        public bool IsSatisfiedBy(Employee employee)
        {
            return employee.TotalExperience > 10;
        }
    }

We'll create a test method to exercise these specifications:

[TestMethod]
public void ManagerSelection_Test()
{
Employee abc = new Employee(){ FirstName="ABC", ExcelCompetency = ExcelCompetency.Medium, TotalExperience = 12};
Employee def = new Employee(){ FirstName = "DEF", ExcelCompetency = ExcelCompetency.High, TotalExperience = 11};
Employee qrs = new Employee(){ FirstName = "QRS", ExcelCompetency = ExcelCompetency.High, TotalExperience = 8};
Employee xyz = new Employee(){FirstName = "XYZ", ExcelCompetency = ExcelCompetency.Low, TotalExperience = 10};

IList<Employee> employees = new List<Employee>()
{
  abc, def, qrs, xyz
};

PotentialManagerSpecification managerSpec = new PotentialManagerSpecification();
ManagerRequiredExperienceSpecification experienceSpec = new ManagerRequiredExperienceSpecification();

foreach(Employee e in employees){
    System.Diagnostics.Debug.WriteLine(
      "{0} is {1} to be a Manager",
       e.FirstName,
       managerSpec.And(experienceSpec).IsSatisfiedBy(e) ? "qualified" : "not qualified"
       );
}
}

If you run the above test, your Debug trace will show the following output

ABC is qualified to be a Manager
DEF is qualified to be a Manager
QRS is not qualified to be a Manager
XYZ is not qualified to be a Manager

You can see in the line managerSpec.And(experienceSpec).IsSatisfiedBy(e) how the ‘And’ extension method is used.

Based on the requirement we can use the extension methods as follows.

  • managerSpec.And(experienceSpec).IsSatisfiedBy(e) -> Employee satisfies both PotentialManagerSpecification and ManagerRequiredExperienceSpecification
  • managerSpec.Or(experienceSpec).IsSatisfiedBy(e) -> Employee satisfies either PotentialManagerSpecification or ManagerRequiredExperienceSpecification
  • managerSpec.And(experienceSpec.Not()).IsSatisfiedBy(e) -> Employee satisfies PotentialManagerSpecification but not ManagerRequiredExperienceSpecification

As you can see, the possibilities are endless. If you want a single specification that combines PotentialManagerSpecification and ManagerRequiredExperienceSpecification, you can do that too:

    public class ManagerSelectionSpecification : ISpecification<Employee>
    {
        public bool IsSatisfiedBy(Employee employee)
        {
            PotentialManagerSpecification managerSpec = new PotentialManagerSpecification();
            ManagerRequiredExperienceSpecification experienceSpec = new ManagerRequiredExperienceSpecification();

            return managerSpec.And(experienceSpec).IsSatisfiedBy(employee);
        }
    }

We can also use specifications to perform validation, for instance before saving information to the database:

var managerSelectionSpec = new ManagerSelectionSpecification();
if(managerSelectionSpec.IsSatisfiedBy(employee))
{
      PromoteToManager();
}

Or in LINQ as follows

var managers = from e in employees
               where managerSelectionSpec.IsSatisfiedBy(e)
               select e;
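Because IsSatisfiedBy has the same shape as Func<T, bool>, a specification can also be passed as a method group straight into LINQ's Where in method syntax. A small self-contained sketch; LongStringSpecification is an illustrative stand-in, not a class from this post:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public interface ISpecification<T>
{
    bool IsSatisfiedBy(T t);
}

// Illustrative stand-in for a real specification such as
// ManagerSelectionSpecification from the article.
public class LongStringSpecification : ISpecification<string>
{
    public bool IsSatisfiedBy(string s)
    {
        return s.Length > 3;
    }
}

public class Program
{
    public static void Main()
    {
        var spec = new LongStringSpecification();
        var names = new List<string> { "Ann", "Brandon", "Cy", "Dorothy" };

        // IsSatisfiedBy matches Func<string, bool>, so the method group
        // can be handed to Where directly.
        var matches = names.Where(spec.IsSatisfiedBy).ToList();

        Console.WriteLine(string.Join(", ", matches));  // Brandon, Dorothy
    }
}
```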

Pretty neat, isn't it? I hope you found it as useful as I did.

Some useful links…

Specifications by Eric Evans and Martin Fowler

Repository, Specification, Unit of Work, Persistence Ignorance POCO with Microsoft ADO.NET Entity Framework 4.0 Beta 2

Learning the Specification Pattern

StyleCop 4.5 Beta is out!

Posted in C#, Microsoft, Tips, Visual Studio by Sumit Thomas on March 21, 2011

You can download it @ http://stylecop.codeplex.com/releases/view/62209

If you are not aware of StyleCop, it is an open-source static code analysis tool from Microsoft that helps developers check their C# code for conformance to StyleCop's recommended coding styles, and it works at the source-code level.

For more information on this community driven project visit http://stylecop.codeplex.com/

Changing the default View Location in ASP.NET MVC

Posted in ASPNETMVC, C# by Sumit Thomas on June 10, 2009

After 8 hours of training in ASP.NET MVC by a guy from Microsoft, I started revisiting the ways I had implemented some of the functionality in my existing MVC project. The training was just a walkthrough of what I already knew about ASP.NET MVC from the internet. One of the questions I put to the trainer, which he termed interesting, was how to change the default view location in MVC. Apart from his "I'll get back to you on this" answer, one of my colleagues in the room was vociferous in declaring that it is not possible at all, since none of the MVC tutorials talk about it.

I googled and binged for answers and found a few…

I found the post Organize your views in ASP.Net MVC very useful in scenarios where more than one Controller needs to share the same View location.

I wanted to check if there were any other ways of doing the same, so I tweeted Scott Hanselman, the man himself, to see if he could give me any pointers, and he replied…

shanselman: @2leggedspider Derive from WebFormsViewEngine, override just FindView(). Look at the NerdDinner code on Codeplex at the MobileViewEngine.

He was talking about this piece of code in NerdDinner.

public class MobileCapableWebFormViewEngine : WebFormViewEngine
	{
		public override ViewEngineResult FindView(ControllerContext controllerContext, string viewName, string masterName, bool useCache)
		{
			ViewEngineResult result = null;
			var request = controllerContext.HttpContext.Request;

			//This could be replaced with a switch statement as other advanced / device specific views are created
			if (UserAgentIs(controllerContext, "iPhone"))	{
				result = base.FindView(controllerContext, "Mobile/iPhone/" + viewName, masterName, useCache);
			}

			// Avoid unnecessary checks if this device isn't suspected to be a mobile device
			if (request.Browser.IsMobileDevice)
			{
				//TODO: We are not doing any thing WinMobile SPECIAL yet!

				//if (UserAgentIs(controllerContext, "MSIEMobile 6"))	{
				//  result = base.FindView(controllerContext, "Mobile/MobileIE6/" + viewName, masterName, useCache);
				//}
				//else if (UserAgentIs(controllerContext, "PocketIE") && request.Browser.MajorVersion >= 4)
				//{
				//  result = base.FindView(controllerContext, "Mobile/PocketIE/" + viewName, masterName, useCache);
				//}

				//Fall back to default mobile view if no other mobile view has already been set
				if ((result == null || result.View == null) &&
								request.Browser.IsMobileDevice)
				{
					result = base.FindView(controllerContext, "Mobile/" + viewName, masterName, useCache);
				}
			}

			//Fall back to desktop view if no other view has been selected
			if (result == null || result.View == null)
			{
				result = base.FindView(controllerContext, viewName, masterName, useCache);
			}

			return result;
		}

		public bool UserAgentIs(ControllerContext controllerContext, string userAgentToTest)
		{
			return (controllerContext.HttpContext.Request.UserAgent.IndexOf(userAgentToTest,
							StringComparison.OrdinalIgnoreCase) > 0);
		}
	}

Though the above code is specifically about detecting whether the user is accessing the site from a mobile device and serving the request from a device-specific view location, you can customize it for your own needs.
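One detail not shown above: for MVC to pick up a custom view engine like this, it has to be registered when the application starts. The usual wiring, sketched here as a reminder rather than taken from NerdDinner verbatim, goes in Global.asax.cs:

```csharp
using System.Web.Mvc;

// Global.asax.cs -- swap the default Web Forms view engine for the
// mobile-aware engine shown above. MvcApplication is the standard
// ASP.NET application class generated by the MVC project template.
public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        ViewEngines.Engines.Clear();
        ViewEngines.Engines.Add(new MobileCapableWebFormViewEngine());

        // ...the rest of the standard Application_Start (route registration etc.)
    }
}
```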

Btw, if you have not checked out NerdDinner yet, I suggest you do. It is one of the best ways to learn MVC.

Another approach I found useful is from Phil Haack Grouping Controllers with ASP.NET MVC.

Let me know if you came across any other approach or best practice relevant to this.


Scott Hanselman talks about MVC 1.0 and NerdDinner.com at Mix09

Posted in ASP.NET, ASPNETMVC, C#, Microsoft, SQL by Sumit Thomas on March 22, 2009

A really great presentation by Scott Hanselman demonstrating how he created NerdDinner.com using MVC 1.0. In this 70-minute presentation, Scott demonstrates how we can build a real web site with ASP.NET, ASP.NET AJAX, Authentication, Authorization, MVC, Microsoft SQL Server and jQuery. Long video, but really worth it.

CodePlex Project: http://nerddinner.codeplex.com


Upload files to UNC share using ASP.NET

Posted in ASP.NET, C# by Sumit Thomas on May 28, 2007

SCENARIO 1: Your ASP.NET website should upload files to a file server accessible via a UNC share

SOLUTION

  1. Create a local user account on the web server with a username, say “TestUser”, and a password, say “Secret”.
  2. Create a local user account on the file server with the same username “TestUser” and password “Secret” as the one created on the web server.
  3. In Web.config, set impersonation to the above local user account as follows…

    <identity impersonate="true" userName="TestUser" password="Secret" />

    And your upload script will be something like this…

                    if ((FileUpload1.PostedFile != null) && (FileUpload1.PostedFile.ContentLength > 0))
                    {
                        string fileName = System.IO.Path.GetFileName(FileUpload1.PostedFile.FileName);
                        string folderPath = @"\\MyUNCShare\MyFolder\";
    
                        string locationToSave = System.IO.Path.Combine(folderPath, fileName);
                        try
                        {
                            FileUpload1.PostedFile.SaveAs(locationToSave );
                            Response.Write("The file has been uploaded.");
                        }
                        catch (Exception ex)
                        {
                            Response.Write("Error: " + ex.Message);
                        }
                    }
                    else
                    {
                        Response.Write("Please select a file to upload.");
                    }
    
    

    Run the code and test the upload and it should work.

    SCENARIO 2:
    Now this is fine for a demo. But what if I want to set up Windows authentication for my application, yet restrict the authenticated users from directly accessing the UNC share to copy files? Any uploads to the UNC share should be done only with the local user account created earlier.

    To solve this issue do the following…

    1) Change the identity impersonate tag in Web.config to

    <identity impersonate="true" />

    , assuming that you have enabled Windows authentication in IIS as well.

    2) Change the impersonation at runtime to the Local User Account, upload the file and then undo the impersonation. To do this use the following code…

    
    using System.Security.Principal;
    using System.Runtime.InteropServices;
    
    namespace FileUploadUNCShare
    {
        public partial class _Default : System.Web.UI.Page
        {
    
            public const int LOGON32_LOGON_INTERACTIVE = 2;
            public const int LOGON32_PROVIDER_DEFAULT = 0;
    
            WindowsImpersonationContext impersonationContext;
    
            [DllImport("advapi32.dll")]
            public static extern int LogonUserA(String lpszUserName,
                String lpszDomain,
                String lpszPassword,
                int dwLogonType,
                int dwLogonProvider,
                ref IntPtr phToken);
            [DllImport("advapi32.dll", CharSet = CharSet.Auto, SetLastError = true)]
            public static extern int DuplicateToken(IntPtr hToken,
                int impersonationLevel,
                ref IntPtr hNewToken);
    
            [DllImport("advapi32.dll", CharSet = CharSet.Auto, SetLastError = true)]
            public static extern bool RevertToSelf();
    
            [DllImport("kernel32.dll", CharSet = CharSet.Auto)]
            public static extern bool CloseHandle(IntPtr handle);
    
            private bool ImpersonateUser(String userName, String domain, String password)
            {
                WindowsIdentity tempWindowsIdentity;
                IntPtr token = IntPtr.Zero;
                IntPtr tokenDuplicate = IntPtr.Zero;
    
                if (RevertToSelf())
                {
                    if (LogonUserA(userName, domain, password, LOGON32_LOGON_INTERACTIVE,
                        LOGON32_PROVIDER_DEFAULT, ref token) != 0)
                    {
                        if (DuplicateToken(token, 2, ref tokenDuplicate) != 0)
                        {
                            tempWindowsIdentity = new WindowsIdentity(tokenDuplicate);
                            impersonationContext = tempWindowsIdentity.Impersonate();
                            if (impersonationContext != null)
                            {
                                CloseHandle(token);
                                CloseHandle(tokenDuplicate);
                                return true;
                            }
                        }
                    }
                }
                if (token != IntPtr.Zero)
                    CloseHandle(token);
                if (tokenDuplicate != IntPtr.Zero)
                    CloseHandle(tokenDuplicate);
                return false;
            }
    
            private void UndoImpersonation()
            {
                impersonationContext.Undo();
            }
    
            protected void Page_Load(object sender, EventArgs e)
            {
    
            }
    
            protected void Button1_Click(object sender, EventArgs e)
            {
                if (ImpersonateUser("TestUser", "", "Secret"))
                {
    
                    if ((FileUpload1.PostedFile != null) && (FileUpload1.PostedFile.ContentLength > 0))
                    {
                        string fileName = System.IO.Path.GetFileName(FileUpload1.PostedFile.FileName);
                        string folderPath = @"\\MyUNCShare\MyFolder\";
    
                        string locationToSave = System.IO.Path.Combine(folderPath, fileName);
                        try
                        {
                            FileUpload1.PostedFile.SaveAs(locationToSave);
                            Response.Write("The file has been uploaded.");
                        }
                        catch (Exception ex)
                        {
                            Response.Write("Error: " + ex.Message);
                        }
                    }
                    else
                    {
                        Response.Write("Please select a file to upload.");
                    }
    
                    UndoImpersonation();
                }
                else
                {
                    Response.Write("Failed");
                }
    
            }
        }
    }
    

    This will ensure that the impersonation you need in order to upload the file does not interfere with the application-level impersonation you may want to use.

    Hope it is useful!

    Cheers!

    ref: http://support.microsoft.com/kb/306158


Script#

Posted in AJAX, C#, technology by Sumit Thomas on August 18, 2006

I recently came across Nikhil Kothari’s blog and his pet project Script#. Nikhil is an architect on the Web Platform and Tools team at Microsoft. Check out his blog http://www.nikhilk.net/

Here is his video on Script#


Container-Managed Persistence

Posted in ASP.NET, C# by Sumit Thomas on February 28, 2006

The most fascinating part of developing ASP.NET applications, for me, is how easily you can leverage different design models compared to other platforms. Coming from an ASP background, I was excited when I created my first 3-tier application in ASP.NET, and I’ve been interested in architectures ever since. The n-tier architecture adopted in most of Microsoft’s sample applications, such as IBuySpy, is one of the commonly used models, and I’ve used it successfully in my applications. It is what we call a Component-Managed Persistence model, where the component needs to know how to make calls to the persistence layer, such as SQL Server, and what parameters a transaction with the persistence layer requires. So in a sense the business layer is tightly coupled with the underlying persistence layer.

I came across the book “ASP.NET E-commerce Programming: Problem – Design – Solution” by Kevin Hoffman a long time back. Though I bought the book, I really didn’t understand the Container-Managed Persistence model described in it. I had a tough time trying to download the code associated with the book, as the book was no longer supported on www.wrox.com; I managed to get it after several hours of surfing and googling the web. I then spent a lot of time trying to understand how the architecture works, and I finally realized how brilliant it is. I was, however, surprised to find that there aren’t many reviews of, or much talk about, this book anywhere. Anyway, I would like to thank Kevin Hoffman for this wonderful book.

So what is Container-Managed Persistence?

Quoting from the book: “Container-Managed Persistence is a design pattern whereby business objects have no direct knowledge of where their data came from and how it will be persisted.” The business objects are pure business objects and are never tightly coupled with any persistence layer.

After trying several sample applications with the CMPServices library and the associated libraries that form part of the architecture, there was one issue in the CMPServices library I wasn’t comfortable with. The PersistableObjectSet.cs class uses an internal DataSet for handling the result set from the database. Being a strong advocate of using DataReaders in ASP.NET applications, I was not really happy with that. So I decided to try my hand at tweaking the CMPServices library to make DataReader the default data-handling object, and also to give the developer the option to choose which data object (DataSet, DataTable or DataReader) to use in the application.

I made a few additions to the CMPServices library to achieve this. To start off, I created an enum, CMPDataObjectType, as follows:

public enum CMPDataObjectType
	{
		/// <summary>
		/// DataSet
		/// </summary>
		DataSet = 1,
		/// <summary>
		/// DataTable
		/// </summary>
		DataTable = 2,
		/// <summary>
		/// DataReader
		/// </summary>
		DataReader = 3
	}

The enum has three fields, representing the data object types we are going to support in the CMPServices library. The original PersistableObjectSet class in the CMPServices library is as follows:

public class PersistableObjectSet : PersistableObject
	{
		protected DataSet internalData;

		public PersistableObjectSet()
		{
			internalData = new DataSet();	
		}

		public virtual void FinalizeData()
		{
			PersistableObjectSet.FinalizeData( internalData );
		}

		public static void FinalizeData(DataSet scratchData )
		{

		}

		public DataSet ResultSet
		{
			get 
			{
				return internalData;
			}
			set 
			{
				internalData = value;
			}
		}
	}

As you can see, it has an internal DataSet used to hold the records returned from the database. Now we need to add support for DataReader and DataTable in this class, so I modified it as follows:

public class PersistableObjectSet : PersistableObject
	{

		/// <summary>
		/// The internal DataSet of the PersistableObjectSet
		/// </summary>
		protected DataSet objectDataSet;
		/// <summary>
		/// The internal DataReader of the PersistableObjectSet
		/// </summary>
		protected IDataReader objectDataReader;
		/// <summary>
		/// The internal DataTable of the PersistableObjectSet
		/// </summary>
		protected DataTable objectDataTable;

		/// <summary>
		/// CMPDataObjectType
		/// </summary>
		protected CMPDataObjectType cmpDataObjectType;

		/// <summary>
		/// Default constructor
		/// </summary>
		public PersistableObjectSet()
		{
			
		}

		/// <summary>
		/// Gets/Sets the value of the internal DataSet of the PersistableObjectSet
		/// </summary>
		public DataSet DataSet
		{
			get
			{
				return objectDataSet;
			}
			set
			{
				objectDataSet = value;
			}
		}

		/// <summary>
		/// Gets/Sets the value of the internal DataReader of the PersistableObjectSet
		/// </summary>
		public IDataReader DataReader
		{
			get
			{
				return objectDataReader;
			}
			set
			{
				objectDataReader = value;
			}
		}


		/// <summary>
		/// Gets/Sets the value of the internal DataTable of the PersistableObjectSet
		/// </summary>
		public DataTable DataTable
		{
			get
			{
				return objectDataTable;
			}
			set
			{
				objectDataTable = value;
			}
		}

		/// <summary>
		/// The CMPDataObjectType refers to the Datatype that will be used for Data retrieval.
		/// There are 3 CMPDataObjectTypes - DataSet, DataTable and DataReader.
		/// DataReader is the default one and it is recommended to use it for any kind of data retrieval.
		/// DataTable and DataSet must be used only for special data manipulations.
		/// DataSet should be used only for special requirements like serialization/deserialization, caching etc.
		/// </summary>
		public CMPDataObjectType DataObjectType
		{
			get
			{
				if( cmpDataObjectType == 0 )
				{
					cmpDataObjectType = CMPDataObjectType.DataReader;
				}
				return cmpDataObjectType;
			}
			set
			{
				cmpDataObjectType = value;
			}
		}

		/// <summary>
		/// This method provides a format for child classes to implement data manipulation methods.
		/// If any additional work needs to be done on the retrieved data, this method can be
		/// implemented as a standard place for data finalisation to take place.
		///
		/// If you are using a DataReader you SHOULD call this method in your business logic to
		/// close the Connection object.
		/// </summary>
		public virtual void FinaliseData()
		{
			if( cmpDataObjectType == CMPDataObjectType.DataSet )
			{
				PersistableObjectSet.FinaliseData( objectDataSet );
			}
			else if( cmpDataObjectType == CMPDataObjectType.DataReader )
			{
				PersistableObjectSet.FinaliseData( objectDataReader );
			}
			else if( cmpDataObjectType == CMPDataObjectType.DataTable )
			{
				PersistableObjectSet.FinaliseData( objectDataTable );
			}
		}

		/// <summary>
		/// This method is called by the FinaliseData() overload in the business logic to do
		/// any additional work with the DataSet.
		/// </summary>
		/// <param name="tempData">DataSet</param>
		public static void FinaliseData( DataSet tempData )
		{
			tempData.Dispose();
		}

		/// <summary>
		/// This method is called by the FinaliseData() overload in the business logic to do
		/// any additional work with the DataTable.
		/// </summary>
		/// <param name="tempData">DataTable</param>
		public static void FinaliseData( DataTable tempData )
		{
			tempData.Dispose();
		}

		/// <summary>
		/// This method is called by the FinaliseData() overload in the business logic to do
		/// any additional work with the DataReader.
		/// </summary>
		/// <param name="tempData">IDataReader/SqlDataReader</param>
		public static void FinaliseData( IDataReader tempData )
		{
			tempData.Close();
			tempData.Dispose();
		}

	}

So basically I have added a DataObjectType property that sets the type of data object to be used; by default it is DataReader. Each data type has its own FinaliseData overload. For the DataReader we make sure the object is closed, in order to close its underlying Connection object.

We also need to make a couple of changes to the SqlPersistenceContainer class, as we now have three data object types to handle.

Since the data object is used only while retrieving data, we modify the Select method as follows:

		/// <summary>
		/// This method performs the Select operation. It executes a stored procedure and places any return or output values
		/// back onto the instance of the PersistableObject that was provided. If the object instance can be cast to a PersistableObjectSet, then the
		/// container will attempt to assign the output or return value to a Data object within the PersistableObjectSet.
		/// </summary>
		/// <param name="selectObject">Persistable object</param>
		public override void Select( PersistableObject selectObject )
		{
			try
			{
				CommandMapping commandMap	= containerMap.SelectCommand;
				SqlCommand selectCommand	= BuildCommandFromMapping( commandMap );
				AssignValuesToParameters( commandMap, ref selectCommand, selectObject );
				if(selectCommand.Connection.State == ConnectionState.Closed)
					selectCommand.Connection.Open();

				if( selectObject is PersistableObjectSet )
				{
					PersistableObjectSet objectSet = (PersistableObjectSet)selectObject;
					
					AssignResultSetToObjectSet( commandMap, selectCommand, ref selectObject );
					AssignOutputValuesToInstance( commandMap, selectCommand, ref selectObject );
					//selectCommand.Connection.Close();
				}
				else
				{
					selectCommand.ExecuteNonQuery();
					selectCommand.Connection.Close();
					AssignOutputValuesToInstance( commandMap, selectCommand, ref selectObject );
				}
				//selectCommand.Connection.Dispose();
				//selectCommand.Dispose();
			}
			catch (Exception dbException)
			{
				throw new Exception("Persistence (Select) Failed for PersistableObject", dbException );
			}
		}

We have renamed AssignResultSetToDataSet to AssignResultSetToObjectSet to give it a more generic name. The AssignResultSetToObjectSet method is as follows:

private void AssignResultSetToObjectSet( CommandMapping commandMap, SqlCommand sqlCommand, ref PersistableObject persistObject )
		{
			SqlDataAdapter sqlDa = null;
			PersistableObjectSet objectSet = (PersistableObjectSet)persistObject;
			if( objectSet.DataObjectType == CMPDataObjectType.DataSet )
			{
				sqlDa = new SqlDataAdapter( sqlCommand );
				objectSet.DataSet = new DataSet();
				sqlDa.Fill ( objectSet.DataSet );
				sqlCommand.Connection.Close();
				sqlCommand.Connection.Dispose();
				sqlCommand.Dispose();
			}
			else if( objectSet.DataObjectType == CMPDataObjectType.DataReader )
			{
				SqlDataReader objReader = sqlCommand.ExecuteReader( CommandBehavior.CloseConnection );
				objectSet.DataReader = objReader;
			}
			else if( objectSet.DataObjectType == CMPDataObjectType.DataTable )
			{
				sqlDa = new SqlDataAdapter( sqlCommand );
				objectSet.DataTable = new DataTable();
				sqlDa.Fill ( objectSet.DataTable );
				sqlCommand.Connection.Close();
				sqlCommand.Connection.Dispose();
				sqlCommand.Dispose();
			}
		}

We create an instance of the selected data object type and fetch the data with it. That's all the changes we make to the CMPServices library.

Now let's see how we use it. I created a simple contacts management application, and here is a sample function from the business layer. Since I am sure I am not going to change the underlying data source, I populate the data into a strongly-typed collection object which I then use in my presentation layer. I have a function called GetContacts, with three versions to support each data object type under consideration.

For DataReader (default)

public static Contacts GetContacts()
            {
                  SqlPersistenceContainer spc = new SqlPersistenceContainer( CMPConfigurationHandler.ContainerMaps["SherstonContacts"] );
                  ContactSet contactSet         = new ContactSet();

                  spc.Select( contactSet );

                  Contacts contactList         = new Contacts();

                  while( contactSet.DataReader.Read() )
                  {
                        IDataReader row               = contactSet.DataReader;
                        Contact contact               = new Contact();
                        contact.ContactId             = Convert.ToInt32( row["ContactId"] );
                        contact.ContactName           = row["ContactName"].ToString();
                        contact.Email                 = row["Email"].ToString();

                        contactList.Add( contact );
                  }

                  ///It is imperative that you call the FinaliseData method for DataReader, as this will
                  ///automatically close its connection object. Note that it is called after the DataReader's values
                  ///are retrieved, as DataReaders require the connection object to be open, unlike DataSet/DataTable.

                  contactSet.FinaliseData();
                  return contactList;
            }

For DataSet

	    public static Contacts GetContacts()
            {
                  SqlPersistenceContainer spc = new SqlPersistenceContainer( CMPConfigurationHandler.ContainerMaps["SherstonContacts"] );
                  ContactSet contactSet         = new ContactSet();

                  //Setting the DataObjectType of the contactSet object to DataSet
                  contactSet.DataObjectType     = CMPDataObjectType.DataSet;

                  spc.Select( contactSet );

                  Contacts contactList         = new Contacts();

                  ///For DataSet and DataTable, the FinaliseData method should be called before extracting the values,
                  ///as any manipulation of the data would have occurred if there is an overridden FinaliseData method
                  ///in the ContactSet class

                  contactSet.FinaliseData();

                  foreach( DataRow row in contactSet.DataSet.Tables[0].Rows )
                  {
                        Contact contact               = new Contact();
                        contact.ContactId             = Convert.ToInt32( row["ContactId"] );
                        contact.ContactName           = row["ContactName"].ToString();
                        contact.Email                 = row["Email"].ToString();

	                contactList.Add( contact );
                  }

                  return contactList;
            }

For DataTable

	  public static Contacts GetContacts()
         {
                 SqlPersistenceContainer spc = new SqlPersistenceContainer( CMPConfigurationHandler.ContainerMaps["SherstonContacts"] );
                  ContactSet contactSet         = new ContactSet();

                  //Setting the DataObjectType of the contactSet object to DataTable
                  contactSet.DataObjectType     = CMPDataObjectType.DataTable;

                  spc.Select( contactSet );

                  Contacts contactList         = new Contacts();


                  ///For DataSet and DataTable, the FinaliseData method should be called before extracting the values,
                  ///as any manipulation of the data would have occurred if there is an overridden FinaliseData method
                  ///in the ContactSet class

                  contactSet.FinaliseData();

                  foreach( DataRow row in contactSet.DataTable.Rows )
                  {
                        Contact contact               = new Contact();
                        contact.ContactId             = Convert.ToInt32( row["ContactId"] );
                        contact.ContactName           = row["ContactName"].ToString();
                        contact.Email                 = row["Email"].ToString();

                        contactList.Add( contact );
                  }

                  return contactList;

            }
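Finally, for context, the presentation layer consumes the returned collection the same way regardless of which version was used. A minimal sketch (the ContactManager class name is made up for this example):

```csharp
// The caller never sees which underlying data object type was used
Contacts contacts = ContactManager.GetContacts();
foreach (Contact contact in contacts)
{
    Response.Write(contact.ContactName + " (" + contact.Email + ")<br/>");
}
```

This is the point of routing everything through the strongly-typed Contacts collection: switching the DataObjectType only changes the business layer, not its callers.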

So here it is: a modified CMPServices library with support for DataReader and DataTable. :)


Edit Web.config at run-time

Posted in ASP.NET, C# by Sumit Thomas on February 21, 2006

I had an argument with one of my friends recently over his statement that Web.config cannot be edited at run-time. My counter was that if it is an XML file and you have write permission on it, then you can. Here is the code to do it…

public class ConfigManager
{
 //The file path of Web.config
 static string configFilePath =
 HttpContext.Current.Server.MapPath("~/web.config");
  
 /// Returns the Web.config as XmlDocument
 private static XmlDocument GetWebConfig()
 {
 XmlDocument xmlDoc = new XmlDocument();
 xmlDoc.Load( configFilePath );
 return xmlDoc;
 }

 /// Checks for the appSettings node in Web.config
 /// Updates the value for the given key if the key
 /// already exists or creates a new node for the
 /// given key and value combination if it is not present
 public static void CreateAppSetting( string key, string val )
 {
 XmlDocument xmlDoc = GetWebConfig();
 XmlNode xmlNode = xmlDoc.SelectSingleNode("//appSettings");
   
 if( xmlNode == null )
 {
  throw new Exception("appSettings node not found!");
 }

 string nodeFormat = string.Format("//add[@key='{0}']", key);

 XmlElement xmlElement =
  (XmlElement)xmlNode.SelectSingleNode( nodeFormat );
 try
 {   
  if( xmlElement != null )
  {
   xmlElement.SetAttribute( "value", val );
  }
  else
  {
   xmlElement = xmlDoc.CreateElement( "add" );
   xmlElement.SetAttribute( "key", key );
   xmlElement.SetAttribute( "value", val );
   xmlNode.AppendChild( xmlElement );
  }
  SaveWebConfig( xmlDoc );
 }
 catch( Exception ex )
 {
  throw new Exception( ex.Message );
 }

 }

 /// Saves the changes to the Web.config file
 private static void SaveWebConfig( XmlDocument xmlDoc )
 {
  try
  {
  XmlTextWriter writer =
 new XmlTextWriter( configFilePath, null );
  writer.Formatting = Formatting.Indented;
  xmlDoc.WriteTo( writer );
  writer.Flush();
  writer.Close();
  }
  catch( Exception ex )
  {
  throw new Exception( ex.Message );
  }
 }
}//end of class

The application must have the required permissions on Web.config to edit it; otherwise the code will throw an access denied error.
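To illustrate, a minimal usage sketch (the setting key and value below are made up for the example):

```csharp
// Update (or create) an appSettings entry at run-time.
// Caveat: ASP.NET watches Web.config, so writing to it
// causes the application to restart; use this sparingly.
ConfigManager.CreateAppSetting("SiteTitle", "My Updated Site");
```

The application-restart side effect is worth keeping in mind: it flushes the cache and all session state, which is one reason run-time edits to Web.config are generally discouraged.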


Strongly-typed collections

Posted in C# by Sumit Thomas on May 23, 2005

I use DataReader as the data access object in my ASP.NET applications for obvious reasons. To improve the performance of my applications, I also need to utilise the data caching feature in .NET. A DataReader cannot be cached, so I create a custom class with public properties mirroring the structure of the table the data was fetched from, create an instance of this custom class, assign the values from the DataReader to the instance, and then add the instance to a collection object, which can later be cached.

There are many useful Collection classes such as ArrayList, SortedList, Queue, Stack and Hashtable available in System.Collections namespace. These are out-of-the-box classes that you can use for your data manipulation. More information on the .NET collection classes can be found here.

ArrayList is one of the most commonly used collection types in .NET, which is attributed to its flexibility in storing data and its useful methods for manipulating it. But it has its drawbacks. Since ArrayList is implemented internally as an array of type Object, it is resource intensive: value types added to it must be boxed at runtime, and we might receive unexpected exceptions while casting the data back out of the ArrayList. For instance, let's look at this code…

ArrayList list = new ArrayList();
list.Add("string value");
list.Add(1);
list.Add(true);

As you can see, we have created an object of type ArrayList and added three items of different data types, namely a string, an integer and a boolean. Now, let's try to retrieve the values from the ArrayList and cast them to the string type, as follows:

for( int i=0; i < list.Count; i++ )
{
	Console.WriteLine( (string)list[i] );
}

Now when we run the above code, we get an InvalidCastException when i=1, as a boxed integer cannot be cast directly to string. To avoid this problem we can write our own strongly-typed collection. In a strongly-typed collection we can store data of only one particular type, and there is no need to check the type on retrieval, as we know what kind of data resides in the collection. To create a strongly-typed collection we can create a custom class that inherits from the CollectionBase abstract class in the System.Collections namespace. The CollectionBase class implements three interfaces, namely IList, IEnumerable and ICollection, from the same System.Collections namespace.
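For completeness, here is a small sketch of the failure and a defensive workaround (a type check with `is`), before we build the strongly-typed alternative:

```csharp
using System;
using System.Collections;

class Program
{
    static void Main()
    {
        ArrayList list = new ArrayList();
        list.Add("string value");
        list.Add(1);        // the int is boxed on the way in
        list.Add(true);

        for (int i = 0; i < list.Count; i++)
        {
            // Guard against the InvalidCastException by checking the type first
            if (list[i] is string)
                Console.WriteLine((string)list[i]);
            else
                Console.WriteLine("Skipping non-string item: " + list[i]);
        }
    }
}
```

The type check works, but it has to be repeated at every call site; a strongly-typed collection moves that guarantee into the collection itself.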

So let's get started. We are going to create a custom collection called Staffs, which can hold data of type Staff. Here is the code for Staff.cs:

public class Staff
{
	private int staffId;
	private string firstName;
	private string lastName;
	private string department;

	public Staff(int _staffId, string _firstName, string _lastName, string _department)
	{
		staffId = _staffId;
		firstName	 = _firstName;
		lastName	 = _lastName;
		department = _department;
	}

	public int StaffId
	{
		get
		{
			return staffId;
		}
		set
		{
			staffId = value;
		}
	}

	public string FirstName
	{
		get
		{
			return firstName;
		}
		set
		{
			firstName = value;
		}
	}

	public string LastName
	{
		get
		{
			return lastName;
		}
		set
		{
			lastName = value;
		}
	}

	public string Department
	{
		get
		{
			return department;
		}
		set
		{
			department = value;
		}
	}

}//end of class

It is a simple class with four public properties.

Now we have to create our strongly-typed collection, which can hold data of type Staff. Here is the code for Staffs.cs:

	public class Staffs : CollectionBase
	{
		public Staffs()
		{
			
		}

		//An indexer of type Staff
		public Staff this[int index]
		{
			get
			{
				return (Staff)List[index];
			}
			set
			{
				List[index] = value;
			}
		}

		//Add object of type Staff to the List
		public int Add( Staff staff )
		{
			return List.Add( staff );
		}
	}

There are many other overridable methods within the CollectionBase class which you can explore.
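For instance, one hook worth knowing is `OnValidate`, which CollectionBase calls before any item is inserted or assigned. A sketch of a hypothetical extension of the Staffs class above:

```csharp
// Rejects anything that is not a Staff, even if a caller reaches
// the collection through the non-generic IList interface.
protected override void OnValidate(object value)
{
    base.OnValidate(value);
    if (!(value is Staff))
        throw new ArgumentException("Staffs can only contain Staff objects.");
}
```

Without this override, code that casts the collection to IList could still sneak a wrong-typed object in; OnValidate closes that gap.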

Let's write a console application to try our strongly-typed collection:

		[STAThread]
		static void Main(string[] args)
		{
			Staffs staffs = new Staffs();
			staffs.Add( new Staff(1, "Steven", "Tyler", "Rock" ) );
			staffs.Add( new Staff(2, "Michael", "Jackson", "Pop" ) );
			staffs.Add( new Staff(3, "Marshall", "Mathers", "Rap" ) );
			staffs.Add( new Staff(4, "George", "Bush", "Crap" ) );

			foreach(Staff staff in staffs)
			{
				Console.WriteLine( staff.FirstName + " " + staff.LastName + " is a " + staff.Department + " artist" );
				Console.ReadLine();
			}
		}

If we ignore the staff data we entered and focus on the collection object ;), we find that it is used just like an ArrayList, and we don't have to worry about casting the data, as we know the collection holds only objects of type Staff.

Even though there are several advantages to using strongly-typed collections, we have to create a collection for each custom class, which can greatly increase the amount of code we write. This will, however, be overcome with the release of generics in .NET 2.0.
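As a preview, the .NET 2.0 generic equivalent replaces the whole Staffs class with `List<Staff>`. A sketch, assuming the Staff class above:

```csharp
using System;
using System.Collections.Generic;

class Program
{
    static void Main()
    {
        // One generic class covers every element type; no custom collection needed
        List<Staff> staffs = new List<Staff>();
        staffs.Add(new Staff(1, "Steven", "Tyler", "Rock"));

        // No casts required: the compiler knows every element is a Staff
        foreach (Staff staff in staffs)
        {
            Console.WriteLine(staff.FirstName);
        }
    }
}
```

Generics give the same type safety as the hand-written collection, with none of the per-class boilerplate.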


DataSet or DataReader?

Posted in ASP.NET, C# by Sumit Thomas on February 14, 2005

I’ve been developing websites in ASP.NET for quite a while now. During my initial projects in ASP.NET I, like many others, liberally used DataSets whenever possible and hardly ever went for DataReaders. The reason I didn’t prefer DataReaders was that I often forgot to close their connection object, and would then receive an “all pooled connections were in use and max pool size was reached…” error from SQL Server after a few trial runs. Silly reason, huh? Now, I knew why the error occurred, but the lazy me wanted an option where I didn’t have to worry about these issues.

Then one fine day I felt an invisible slap on my head and began to seriously consider best practices, performance, robust architecture and the like. I guess my old programming habits from ASP were reluctant to let go of me. But something unusual happened: I won!

Now back to the real issue. DataSets are really powerful and easy to use. But when it comes to developing performant websites, DataSets should rarely be the first option. DataReaders are fast, and I mean really fast, compared to a DataSet or, for that matter, a DataTable. Even though DataReaders are fast, one fact worried me: a DataReader needs an open connection object while retrieving data. Would it be a problem to read a large number of records, say more than 10,000, using a DataReader, since it needs an open connection to execute? Well, yes, if you don’t close the connection object as soon as you are done with it.

I came across the article A Speed Freak’s Guide to Retrieving Data in ADO.NET by Craig Davis, which clearly explains the advantage of using a DataReader over a DataSet or DataTable. But are DataSets a complete no-no when it comes to ASP.NET? Not really. If you have looked at the IBuySpy portal’s architecture, its use of a strongly-typed DataSet makes a lot of sense. Since the data is cached most of the time and rarely updated, performance is not really a huge concern there. Moreover, it is much easier to handle the large XML file that defines the portal’s structure using a strongly-typed DataSet.

So does that mean that as long as you cache the DataSet it is fine to use one? I would still go for DataReaders. You cannot cache a DataReader object, but you can cache the data you receive from it. We can create a class with public properties that mirrors the table structure, iterate through the records in the DataReader, populate instances of the custom class with the relevant data, and then add them, preferably, to a strongly-typed collection or an ArrayList, which can be cached. Obviously we need to write a few extra lines of code to do this, but it is worth the effort with performance in mind.
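A minimal sketch of that pattern, reusing the Staff/Staffs classes from the earlier post; the `connectionString` and table name are assumptions for the example. Note that the collection, not the reader, goes into the cache:

```csharp
// Read once with a DataReader, copy into a cacheable collection
Staffs staffs = (Staffs)HttpContext.Current.Cache["Staffs"];
if (staffs == null)
{
    staffs = new Staffs();
    using (SqlConnection conn = new SqlConnection(connectionString))
    {
        SqlCommand cmd = new SqlCommand(
            "SELECT StaffId, FirstName, LastName, Department FROM Staff", conn);
        conn.Open();
        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                staffs.Add(new Staff(
                    Convert.ToInt32(reader["StaffId"]),
                    reader["FirstName"].ToString(),
                    reader["LastName"].ToString(),
                    reader["Department"].ToString()));
            }
        }
        // The using blocks guarantee the reader and connection are closed,
        // which avoids the pooled-connection exhaustion described above
    }
    HttpContext.Current.Cache["Staffs"] = staffs;
}
```

The `using` blocks are the fix for my old habit: the connection is returned to the pool even if an exception is thrown mid-read.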

DataReaders should therefore be the first choice for data access in ASP.NET. DataSets could be used if you are not worried about performance, or in a situation where a DataSet is the only option, which in my opinion is very unlikely.
