Get your foot in the door with Delegates and Events


Delegates and events are among the most used techniques in programming. In my opinion, people use them the most but write them the least. Recently one of my juniors asked me for an explanation of delegates and events, and that is why I am sitting here writing for the next generation of programmers.

Frankly speaking, delegates are a kind of advanced version of function pointers. Those who are familiar with C/C++ know that function pointers are a special type of pointer that stores the address of a function. This address can be passed freely throughout the program and later, when needed, the function can be called from anywhere in the program.
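The idea is easy to see in any language with first-class functions; a minimal Python sketch (the names are illustrative, not from the article):

```python
def greet(name):
    return "Hello " + name

# Store the function reference in a variable, pass it around, call it later --
# this is essentially what a function pointer (and a delegate) gives you.
handler = greet
print(handler("world"))  # Hello world
```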

On the other side, events are a kind of signal, as in C/C++. These signals can be raised by your program, just like a signal can be raised by the operating system when you plug in a USB stick; but someone out there needs to listen to the signal and handle it properly.

Well, all that is theoretical; now we can look at a practical use. Delegates and events are used in scenarios where we need the publish-subscribe pattern. What is the publish-subscribe pattern? Hit the wiki for a detailed description. In a nutshell, publishers raise a signal and subscribers listen to those signals and act accordingly. What I want to emphasize about the publish-subscribe pattern is that the publisher should not have any knowledge of who the subscribers are, and the subscribers should not have any interaction with the publisher, except one thing – listening to the publisher's events.

In Publisher.cs

So now we have a publisher class, where we need a delegate and a delegate-type variable. As discussed above, the delegate will point to a function, so it must specify a function signature.

public delegate bool NewEditionPublishHandler(object Publisher, string EditionName , int EditionNr);

This function signature is the type of the delegate-variable.

public NewEditionPublishHandler Publish;

Later, in the Publisher, this delegate-variable will point to a function in the Subscriber class so that the publisher can call that function through the delegate-variable.


public delegate bool NewEditionPublishHandler(object Publisher, string EditionName , int EditionNr);

public NewEditionPublishHandler Publish;

… … …

Publish(this, "Harry Potter" , i++);

Notice that the Publisher is calling a function of a Subscriber without any knowledge of it. No reference, no variable, no knowledge at all, except the fact that the subscriber must listen to the signal sent by the publisher, i.e. the Subscriber must subscribe a function to that signal, and of course that function's signature must match the delegate type.

In the subscriber class this is done with the following code.

In Subscriber.cs


public void SubscriberToPublisher(Publisher publisher)
{
   publisher.Publish += new Publisher.NewEditionPublishHandler(ShowPublicationDetail);

   //or (event subscription with the delegate keyword)

   publisher.Publish += delegate(object SubscribedPublisher, string publicationName, int publicationNr) { … return true; };

   //or (event subscription with a lambda expression)

   publisher.Publish += (SubscribedPublisher, publicationName, publicationNr) => { … return true; };
}

The above three statements do the same task with three different syntaxes.

Here two interesting things are happening.

  • The Subscriber is subscribing itself to a signal of the publisher – it binds a function to the publisher's signal, so that whenever the signal is raised, this function will be executed.
  • The Publisher is delegating its task to the Subscribers – whenever the publisher calls the delegate-variable, it is the subscriber's function that gets executed.
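The mechanics described in these two points can be sketched outside C#; below is a minimal Python version of the publisher/subscriber wiring, with hypothetical class and method names:

```python
class Publisher:
    def __init__(self):
        self.publish = []          # list of subscriber callbacks (the "delegate-variable")

    def publish_edition(self, name, nr):
        # Fire the signal only for whoever is listening -- the publisher
        # never learns who the subscribers are.
        return [handler(self, name, nr) for handler in self.publish]

class Subscriber:
    def subscribe_to(self, publisher):
        # The subscriber only hands the publisher a callback.
        publisher.publish.append(self.show_publication_detail)

    def show_publication_detail(self, pub, name, nr):
        return "%s (nr %d)" % (name, nr)

p = Publisher()
s1, s2 = Subscriber(), Subscriber()
s1.subscribe_to(p)
s2.subscribe_to(p)
print(p.publish_edition("Harry Potter", 1))  # ['Harry Potter (nr 1)', 'Harry Potter (nr 1)']
```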

 

In Program.cs

In our program somewhere we need to create a publisher and a subscriber and subscribe to the publisher.


Publisher P = new Publisher();
Subscriber S = new Subscriber();
S.SubscriberToPublisher(P);
Subscriber2 S2 = new Subscriber2();

S2.SubscriberToPublisher(P);
 
Finally, run the publisher's PublishRegularly().
 
P.PublishRegularly();

At a regular interval, this PublishRegularly() function will call the delegate-variable with the appropriate parameters. The delegate-variable will in turn delegate the task, along with the parameters, to the subscriber's function and get the job done by the subscriber.


public void PublishRegularly()
{
   while (true)
   {
      Thread.Sleep(2000);
      if (Publish != null)
      {
         Publish(this, "Harry Potter (part " + i.ToString() + ") ", i++);
      }
   }
}

Notice the check (Publish != null). This is done because, if no subscriber has subscribed to the publisher, the publisher can still run the function, i.e. the publisher completely ignores whether any subscriber is subscribed or not.

Even if more than one subscriber is subscribed, it does not matter to the publisher either. It is the subscribers' responsibility to listen to the publisher's signal and act accordingly with the help of their own functions.

Event

Now, in the Publisher.cs file, change the declaration of the delegate-type variable like this:


   // public NewEditionPublishHandler Publish;
   public event NewEditionPublishHandler Publish;

And in Subscriber.cs change the subscription to the publisher's signal (the event subscription) like this:


   // publisher.Publish += (SubscribedPublisher, publicationName, publicationNr) => {…};
   publisher.Publish = (SubscribedPublisher, publicationName, publicationNr) => {…};

With this you will get an error.


Error 1: The event 'Delegate_Event_Test.Publisher.Publish' can only appear on the left hand side of += or -= (except when used from within the type 'Delegate_Event_Test.Publisher') – Subscriber.cs, line 23

We should thank the 'event' keyword for this error, because with the statement


   publisher.Publish = (SubscribedPublisher, publicationName, publicationNr) => {…};

you are not subscribing to a signal through the delegate-variable; rather, you are assigning a wrong value to it. This is wrong and your program will not work as intended. Without the 'event' keyword you would not get a compilation error, and thus you would be planting a bug in your code. So with the 'event' keyword, even if you mistype += as =, it shows up at compile time.

Bird’s eye view

So with a delegate we prepare an object (i.e. the Publisher) to emit a signal to another set of objects (i.e. the Subscribers) that have subscribed to the publisher's signal with functions of their own. On receiving the signal, the subscribers execute their own functions. So simple.

Download source code
Publisher.cs
Subscriber.cs
Program.cs

Hack around with PECL libs for php in Lubuntu


From my very childhood I dreamed about many things to be when I grew up – from soldier to sailor, from pilot to postman, from engineer to innovator – but never, by any chance, did I dream about being a writer. Now I am writing.

In the same way, in your life you may need certain things you never thought about, as when I needed to install the PECL id3 library for PHP on an Ubuntu machine. How to do it is a pretty straightforward set of steps and not very interesting – what is interesting is my experience with the installation.

Well, I needed a PHP function called 'id3_get_tag', which needs a package called id3. This package can be found in a repository called PECL. The id3 package is maintained by Stephan Schmidt and Carsten Lucke – thanks to them. I, and many others like me, am always thankful to the people who maintain this kind of open source library.

Anyway, you can download the id3 extension of the PECL package from here. Download it to your Ubuntu machine – for me it was a Lubuntu machine. Unzip it, then follow these commands:


cd ./Downloads/Temp/id3-0.2
phpize
./configure
make

Ignore the errors from the phpize command. The phpize command prepares the build environment for a PHP extension – in our case, the id3 extension from the PECL package. After make you will find a shared library called id3.so, most likely in the modules directory.

Now all you have to do is place this library in a location where php5 can reach it. To find out this location you have to hack a little. Find other libraries of this kind in php.ini, mysql.ini or gd.ini. For example:

In the gd.ini you will find

extension=gd.so

In the mysql.ini you’ll find

extension=mysql.so

Next we have to find the location of these files with the command


find / -type f -name 'gd.so'
find / -type f -name 'mysql.so'

If you compare the paths you will see a common location for these two files (../php5/…/gd.so). Bang!! That is the location php5 loads them from. In our case the location was /usr/lib/php5/20090626+lfs.
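This "common parent" comparison can also be done programmatically; a minimal Python sketch using the two paths from this example:

```python
import os.path

# Paths as returned by the two find commands above
paths = ["/usr/lib/php5/20090626+lfs/gd.so",
         "/usr/lib/php5/20090626+lfs/mysql.so"]

# Their common parent directory is where php5 loads extensions from
ext_dir = os.path.commonpath(paths)
print(ext_dir)  # /usr/lib/php5/20090626+lfs
```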

Now all we have to do is copy our shared library id3.so to that location. Then create an id3.ini file in /etc/php5/conf.d with the line 'extension=id3.so' in it.

cd  /etc/php5/conf.d

echo "extension=id3.so" | sudo tee id3.ini

(Note that sudo echo "extension=id3.so" > id3.ini would not work: the > redirection runs with your own privileges, not root's, so tee is used instead.)

Restart the apache.

sudo /etc/init.d/apache2 restart

Now test a page with the function 'id3_get_tag'; it will execute successfully. Voilà!! You have successfully installed the id3 library from the PECL package.

You can do exactly the same for the other libraries in the PECL package.

Have faith in your detective mind while dealing with php.ini and apache2

Look for the problem

 

More or less we all know that solving a problem in the programming world needs a lot of detective work. And in my experience (mostly from Hollywood movies), detective operations follow some rules of thumb.

  1. Follow the trail up to a reasonable ending
  2. Try to link the points

Last night, however, I was caught up in a tedious problem with my apache server and php5. I needed to upload large files to the server – more than 10 MB. By default, apache2 with php5 won't allow you to upload files larger than 2 MB.

To make it do the task, you have to modify the php.ini file, which normally resides in /etc/php5/apache2/. In this file there is a flag setting upload_max_filesize = 2M. I needed to set it to upload_max_filesize = 20M. So I did, and tried to upload a file using a PHP script; a typical script of this kind is easy to find. To find the reason an upload fails, check the $_FILES['userfilefield']['error'] value. If the value is 1, then it is the file size causing the problem. More of these error values are described in the PHP manual.
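A quick way to sanity-check what value an ini file actually carries is to parse it directly; a minimal Python sketch over a hypothetical php.ini fragment:

```python
import configparser

# A hypothetical fragment of php.ini (the real file has many more settings)
ini_text = """
[PHP]
upload_max_filesize = 20M
post_max_size = 22M
"""

cfg = configparser.ConfigParser()
cfg.read_string(ini_text)
print(cfg["PHP"]["upload_max_filesize"])  # 20M
```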

But to my surprise, even after the change, the value of $_FILES['userfilefield']['error'] was still 1, which means the file size was causing the problem. I scratched my head for a little while, then started to think about what other problems there could be. I triggered my detective mind and rolled out a list of reasons that could cause this failure.

Detective question 1

Is apache loading this php.ini file properly?

To find this out I need to run a script, say testup.php, containing <?php phpinfo(); ?>. There I will be able to see the list of flags loaded by apache2 and their values. To find out whether the php.ini is loaded or not, check this line:

Loaded Configuration File /etc/php5/apache2/php.ini

So it is loading, or at least has started loading, the file.

So the next question is –

Detective question 2

Has the upload_max_filesize flag value been loaded in apache?

Check that flag in testup.php:

upload_max_filesize 2M 2M

Bang!! It is not loading the flag value properly, and that is why it is failing to upload my large file.

But why is php.ini not loading the value when I changed it myself and saved the file properly? So the big question is –

Detective question 3

Why is the upload_max_filesize flag not loaded when the php.ini file has started loading?

Here, at this point, I derailed from my detective rules and became impatient. I started looking for a patch to fix it with lots of googling, rather than finding the answer to the question above. With this mistake I started a lot of suffering that I could easily have avoided if I had stuck to the detective rules.

Likely solution 1:

One solution seemed most likely to solve my problem – using an .htaccess file in the directory where testup.php resides. In this .htaccess file you just have to put this:

php_value upload_max_filesize 20M
php_value post_max_size 22M

Then, while serving testup.php, apache2 will automatically change the values of those flags. But, to my ignorance, this only works when apache2 has successfully loaded the php.ini file and apache has been explicitly configured to allow such overrides. So, very reasonably, it didn't help me either.

Likely solution 2:

The other solution that seemed suitable was to use "php -i | grep php.ini" to see where php5 loads its php.ini from, which is "/etc/php5/cli/php.ini". But, again to my ignorance, this php.ini has nothing to do with apache2 – it belongs to the CLI. So, very reasonably, changing the flag value in this file had no effect on apache.

At the edge of my patience, I went back to the detective question:

Why is the upload_max_filesize flag not loaded when the php.ini file has started loading?

Finally I came back to my detective question and started following my detective mind. Now I had to check whether apache2 had successfully finished loading the php.ini file. To check this I had to look at the apache2 log, which is "/var/log/apache2/error.log" on my Ubuntu machine.

Solution comes automatically

To my surprise I found lots of errors loading the php.ini, as below.

PHP: syntax error, unexpected '&' in /etc/php5/apache2/php.ini on line 110
PHP: syntax error, unexpected '&' in /etc/php5/apache2/php.ini on line 110

Then I understood: even though apache2 had started loading the php.ini file, these errors did not let it finish loading successfully.

Now life became easy: fix those errors in the php.ini file and restart apache2. Then my large files uploaded successfully with my testup.php script. The problem in my php.ini file was the value "Default Value: E_ALL & ~E_NOTICE"; somehow apache could not parse the "&". So I used "Default Value: E_ALL" instead of the previous value. But this new flag setting introduces a new problem: it will not show you errors on the page. Set the display_errors flag to On, i.e. "display_errors = On". Finally the problem was solved the way I wanted. Sweet, isn't it?

Bottom line: stick to your detective mind no matter how complex the question becomes.

Simple shell script to kill an application

Shell scripts are awesome. One can do whatever with a shell script – and by whatever I mean anything, deadly or creative.

In this article, however, I will try to show how a simple script can be used to kill an application – whatever application you start from a command prompt. Now, what is the difference between an application and a process? Well, a process is a simple entity that runs in memory with its own stack and resources. Advanced processes can have multiple threads running simultaneously. An application, on the other hand, is a combination of processes running together and maintaining inter-process communication to handle many tasks at the same time. Typically an application is a large project under the hood of some group of processes.

Now let's invoke a process for Gnome-Commander with the command

nohup gnome-commander &

nohup is a special command to run a process from a terminal with its standard I/O isolated from the terminal (and immune to hangups). The & after the command runs it in the background so you can use the terminal for other tasks.

Now let's see the process in the process list. Hit the command

ps

or

ps -Al

With -Al you get all the processes currently running under your privileges. Without the -Al argument you get only the processes invoked from the current terminal.

Now it is time to kill the process. To kill a process, simply invoke kill -9 [process_id], where [process_id] comes from the process list returned by ps.

The kill command basically sends various types of signals to a process. You can see all the signal types with kill -l, and you will find SIGKILL as number 9. So when you invoke kill -9 [process_id], the kill command actually sends a SIGKILL signal to the process with id [process_id]. SIGKILL cannot be caught or ignored, so the process is terminated immediately.
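The same signal can be sent programmatically; a minimal Python sketch (POSIX only) that spawns a throwaway process and kills it the way kill -9 does:

```python
import os
import signal
import subprocess

# Start a long-running child process, then send it SIGKILL (what `kill -9` does)
proc = subprocess.Popen(["sleep", "60"])
os.kill(proc.pid, signal.SIGKILL)

# A negative return code means the child was terminated by that signal
print(proc.wait())  # -9
```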

Up to now this is very simple. Now let's use this knowledge to write a script that kills an application. To find out how an application runs, let's start the 'Chromium-Browser' application with the command
nohup chromium-browser &
Now watch how many processes support this application with
ps -Al | grep chromium
You will find a bundle of processes running behind the application.

Now let's write a script that will close an application (i.e. Chromium-Browser). Create a file, name it 'killproc', and write #!/bin/bash at the very first position of the file. Note that when the kernel finds '#!' at the start of the file, it treats the file as a script and runs it with the named interpreter.

So our first step is to take an application name, or a name fragment, as the command line argument.


#!/bin/bash
#This script will kill a particular process by name expression

if [ $# -lt 1 ]; then
	echo "Process name missing !!"
	exit 1
...

Here $# indicates the number of command line arguments, and $1 or $2 holds the first or second argument, and so on.
The next step is to get the process list with the ps command. To filter out our desired processes we pipe the output of ps to another command, awk, which is a nice tool for filtering rows and columns.


#!/bin/bash
#This script will kill a particular process by name expression

if [ $# -lt 1 ]; then
	echo "Process name missing !!"
	exit 1
else	
	# List all the process ID in a line
	list=($(ps | awk '{if((index($4,"'$1'")>0) && ($1!="PID")) {print $1}}'))
	echo "Process count: "${#list[*]}
	max=${#list[*]}

	# If no process found exit
	if [ ${#list[*]} -eq 0 ]
	then
		echo "No process found of this name."		
		exit 0
	fi

	...	

Our filtering command is ps | awk '{if((index($4,"'$1'")>0) && ($1!="PID")) {print $1}}'. With this command we get a list of process ids associated with our application. Then we fill an array with this list by list=($(...)), where … is some command that produces a list.
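To see exactly what the awk filter selects, the same logic can be mirrored in Python over a hypothetical ps listing:

```python
# Python mirror of: ps | awk '{if((index($4,NAME)>0) && ($1!="PID")) {print $1}}'
# Column 1 is the PID and column 4 the command name in default `ps` output.
def pids_matching(name, ps_output):
    pids = []
    for line in ps_output.splitlines():
        cols = line.split()
        if len(cols) >= 4 and cols[0] != "PID" and name in cols[3]:
            pids.append(cols[0])
    return pids

sample = """  PID TTY          TIME CMD
 1234 pts/0    00:00:00 bash
 5678 pts/0    00:00:02 chromium
"""
print(pids_matching("chromium", sample))  # ['5678']
```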

The next step is to loop through this list of process ids and kill them one by one.


#!/bin/bash
#This script will kill a particular process by name expression

if [ $# -lt 1 ]; then
	echo "Process name missing !!"
	exit 1
else	
	# List all the process ID in a line
	list=($(ps | awk '{if((index($4,"'$1'")>0) && ($1!="PID")) {print $1}}'))
	echo "Process count: "${#list[*]}
	max=${#list[*]}

	# If no process found exit
	if [ ${#list[*]} -eq 0 ]
	then
		echo "No process found of this name."		
		exit 0
	fi

	# Loop through all the process and kill one by one
	for ((i=0; i<$max; ++i )); 
	do
		echo "Killing process..."${list[$i]}
		kill -9 ${list[$i]}    		
	done

	#Final Message
	if [ $? -eq 0 ]; then
		echo "All processes killed successfully."
	else
		echo "Some processes could not be killed."
	fi

fi

With a for loop we traverse the process ids and kill them one by one with kill -9 ${list[$i]}. Finally we print a friendly message.

Now, to kill a 'Chromium-Browser' application, we simply make the script executable (chmod +x killproc) and invoke


     killproc chromium

This will kill all the processes associated with Chromium-Browser.


Try it yourself and have fun.

How to open an existing Autotools project in Eclipse CDT

This is a very basic article about how to open an existing Autotools project in Eclipse CDT. Yet it can waste some of our time, especially if we don't know that we have to convert an ordinary makefile project to an Autotools project, even when we have all the Autotools files (configure.in, Makefile.am etc.) in our project.

There is also a serious problem in Ubuntu 12.0.1 while configuring an Autotools project with Eclipse CDT; but don't worry about that for now, because we have a workaround for it.

Now let's open an existing open source project built with Autotools support. In our case we will open the Ices client project; to know more about ices, browse its project page. Download the compressed source code and untar it to decompress.

Next we rename the extracted folder to ices20121121.

Now let's go to Eclipse and open a Makefile project from existing code.

Locate our code folder ices20121121 and set the toolchain to GNU Autotools Toolchain.

Up to now you have opened a Makefile project from existing source code. Select our project node (ices20121121) in the project explorer and notice that our ices20121121 project has the autotools files (configure.in, Makefile.am etc.), but in our 'Project' menu there is no 'Reconfigure Project' item. So we can't configure the project, which is very important for an Autotools project.

To be able to configure the project we have to convert it to a C/C++ Autotools project. In the File menu you can find the 'Convert to a C/C++ Autotools Project' item. If you don't find it there, that means you do not have the Autotools plugin installed in your Eclipse.

The next steps are simple, just follow the wizard.

Step 1

Step 2

Step 3

Now select the project node (ices20121121), check the 'Project' menu, and you will find the 'Reconfigure Project' item to configure the project.

Now it's time to configure: hit 'Reconfigure Project' and the configuration will start; if it ends successfully you will find a success message in the 'CDT Global Build Console' or the 'Configure' console.

Next build the project: Select the project node(ices20121121) in the ‘Project Explorer’, then in the ‘Project’ menu hit ‘Build All’.

If the build is successful we will get the message in our ‘CDT Global Build Console’

Then run the project by right-clicking the project node (ices20121121) and selecting Run As -> Local C/C++ Application.

If the project runs successfully you will see the Ices version info and usage options.

Thus we can run an Autotools-based open source project with an Eclipse CDT Autotools project in Ubuntu 12.

Eclipse-AutoTools project in Ubuntu and a small head scratch

In this article I will try to demonstrate a very simple way to start an autotools project with Eclipse in Ubuntu. This is very simple, except for a peculiar problem which may waste half of your day.

While working with autotools projects in Ubuntu 12.0.1, I recently faced a strange problem. In short: in Eclipse, auto-configuration with the autoreconf tool fails with the message: sh: 0: Can't open autoreconf. This happens because of an internal configuration problem in Eclipse. It can easily be resolved with a workaround, which will be demonstrated later in this article.

First we proceed to create a C project with Autotools in Eclipse. I will describe less and use lots of screenshots; after all, "A picture is worth a thousand words".

Then expand the "GNU Autotools" node in the project types and select the basic Hello World Autotools project. Remember, the toolchain should be "GNU Autotools Toolchain"; then name the project (e.g. HelloAuto).

A simple HelloAuto project will be created along with other supporting files, and HelloAuto.c will be created with bare-minimum test code.

At this point you can configure the project from Project -> Reconfigure Project.

This configuration should be successful, except on some versions of Ubuntu. On the versions of Ubuntu where dash is the default shell instead of bash, you are likely to end up with an error with the message: sh: 0: Can't open autoreconf.

But fortunately there is a workaround to get rid of this problem.

Work around

Go to your personal home (in Ubuntu, cd ~). Create a bin folder in your home (mkdir bin). Finally, in this bin directory create a link to /bin/bash named sh (ln -s /bin/bash sh).

Then go back to your home (cd ~) and edit the .bashrc file, adding the line export PATH=/home/rizvi/bin:$PATH at the end of the file. Notice that I have rizvi as my personal home, which you have to change to your own home directory name.

After all this fixing, reconfigure the project again from Project -> Reconfigure Project.

If all has gone well, the project should autoreconfigure successfully. After configuration, notice that a couple of new files have been generated for you as part of configuration.

In case you don't find the success message in the console, try selecting the "CDT Global Build Console".

Next, time to build. Build from Project -> Build All.

If the build is successful you should get a successful build message.

Now it's time to run the project: right-click on the project folder -> Run As -> Local C/C++ Application.

If you are lucky you will see the program running; otherwise you may see an error message like this.

But don't worry about this message. It just means your last build could not create the binary to run.

Build the project again from Project -> Build Project. Now run the project again and you should see it running in the "CDT Global Build Console".

Now, if with this article you can run an autotools project in Ubuntu, I would be glad to hear from you. And if you face any new issue or find any improvement – feel free to let me know or comment.

Using unity for IoC and DI – Part 2 (further decoupling)

This article is an extension of my previous article, Using unity for IoC and DI; I recommend reading that one first. In the previous article I demonstrated how to install the Unity container in a .NET project and how we use that container to decouple our layers, for example the Data Access Layer (DAL), Business Layer objects (BAL) and presentation layer, from each other. Moreover, we also used Dependency Injection (DI) to decouple the DAL from the BAL so that one can easily replace one DAL (i.e. a SQL DAL) with another (i.e. an Oracle DAL).

Fig 1: Objects in the container

In the figure above we can see a partition labelled "Decoupling Point". On one side of the partition lie the Service Layer (BAL) objects and interfaces (i.e. IBookService, BookService), and on the other side lie the Data Access Layer (DAL) objects and interfaces (i.e. IBookRepository, BookRepository, OracleBookRepository, IBook and Book). These two layers are decoupled by making the BookService object depend on IBookRepository through its parameterized constructor.

In the example above we have just four objects and three interfaces, as this is merely an example application. But in a real-life project you will find hundreds of objects and interfaces, which further increases the demand for replacing and changing objects more frequently – and that is why we need more decoupling.

So now it is time to further decouple the set of objects in our example. But the question is: where will we introduce another decoupling line? Notice in our example that the Model objects (Book and IBook) are used directly in the repository objects (i.e. BookRepository and OracleBookRepository). Instead of this direct use we can introduce a dependency between them. This dependency injection will give you the flexibility to replace or modify the Model objects without breaking the Repository objects.
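The constructor injection being proposed here can be sketched in a language-neutral way; a minimal Python version with hypothetical names (in the article, the Unity container performs this wiring):

```python
class Book:
    def __init__(self):
        self.book_id, self.name, self.author = 0, "", ""

class BookRepository:
    def __init__(self, book):
        # The repository receives its model object from outside
        # instead of constructing a concrete Book itself.
        self._book = book

    def get_book_by_id(self, book_id):
        self._book.book_id = book_id   # the real values would come from the DB
        return self._book

repo = BookRepository(Book())   # a container would normally do this wiring
print(repo.get_book_by_id(7).book_id)  # 7
```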

Fig2: Objects in the container

To introduce this new dependency injection we modify the constructors of BookRepository and OracleBookRepository so that they depend on a Book interface (i.e. IBook).


public class BookRepository : IBookRepository
{
        private IBook book;
        private List<IBook> listOfBooks;
        private HomeFinDBEntities db;

        public BookRepository(IBook bk)
        {
            this.book = bk;
            this.db = new HomeFinDBEntities();
        }

        public IBook getBookById(int id)
        {
            var result = (from p in db.Books
                         where p.Id == id
                         select p).FirstOrDefault();

            if (result == null)
            {
                return null;
            }
            else
            {
                this.book.BookId = result.Id;
                this.book.BookName = result.Name;
                this.book.BookAuthor = result.Author;
            }
            return this.book;
        }

        … … …
}

This way we can get a book by id from the DB (i.e. the function getBookById(int id)). But what about a list of books? How can we get a list of type IBook (i.e. List<IBook>)? My solution in this case is cloning. We can clone the object we got from the constructor, make as many clones as we like, change their individual property values, and finally put them in a list; simple, isn't it?

Since cloning is involved, the question is which cloning we should use: deep cloning or shallow cloning? From my investigation, shallow cloning is enough – and it doesn't involve much performance overhead.

To make our Book object clone-able we have to implement the ICloneable interface and its Clone function.


public interface IBook  : ICloneable
    {      
        string BookAuthor { get; set; }
        int BookId { get; set; }
        string BookName { get; set; }
    }

public class Book : IBook
    {

        public int BookId { get; set; }
        public string BookName { get; set; }
        public string BookAuthor { get; set; }

        public object Clone()
        {
            return (Book)this.MemberwiseClone();            
        }
    }

Now the question is: with this shallow cloning, will we be able to maintain nested objects in the Book object? For example, how are we going to manage an available-in-library list (i.e. List<ILibrary> AvailableLibs { get; set; }) in a particular book object, given that a library is a distinct object with its own properties?


public interface ILibrary: ICloneable
    {
        string LibName { get; set; }
        string Location { get; set; }
    }

public class Library : ILibrary
    {
        public string LibName { get; set; }
        public string Location { get; set; }

        public object Clone()
        {
            return this.MemberwiseClone();
        }
    }

public interface IBook  : ICloneable
    {      

        string BookAuthor { get; set; }
        int BookId { get; set; }
        string BookName { get; set; }
        List<ILibrary> AvailableLibs { get; set; }
    }

public class Book :  IBook
    {

        public int BookId { get; set; }
        public string BookName { get; set; }
        public string BookAuthor { get; set; }
        public List<ILibrary> AvailableLibs { get; set; }

        public object Clone()
        {
            return (Book)this.MemberwiseClone();            
        }
    }

Simply by shallow cloning we can resolve this problem too. Just implement ICloneable in the Library object as in the code above then make a list of ILibrary in the Book class.
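The prototype-and-clone approach can be sketched with Python's copy.copy (a shallow copy), using hypothetical Book/Library classes:

```python
import copy

class Library:
    def __init__(self, name=""):
        self.name = name

class Book:
    def __init__(self):
        self.book_id, self.title, self.libs = 0, "", []

prototype = Book()
books = []
for i, title in enumerate(["A", "B"]):
    b = copy.copy(prototype)            # shallow clone of the injected prototype
    b.book_id, b.title = i, title
    b.libs = [Library("Lib%d" % i)]     # a fresh list per clone, so clones don't share it
    books.append(b)

print([b.title for b in books])  # ['A', 'B']
```

Note that the shallow copy alone would share the `libs` list between clones; assigning a fresh list per clone, as the article's `getBookList()` does, is what keeps them independent.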

Now create a constructor dependency of the BookRepository object on the Library object, i.e. pass a Library object referenced as ILibrary when instantiating BookRepository. Then, in the List<IBook> getBookList() function of the BookRepository class, start cloning the Library object that was passed as a dependency. Notice the modified constructor and the List<IBook> getBookList() function in the BookRepository class below.


public class BookRepository : IBookRepository
    {
        IBook book;
        ILibrary library;
        private List<IBook> listOfBooks;
        HomeFinDBEntities db;        
        public BookRepository(IBook bk,ILibrary lib)
        {
            this.book = bk;
            this.library = lib;
            this.db = new HomeFinDBEntities();
        }
        
        public IBook getBookById(int id)
        {

            var result = (from p in db.Books
                         where p.Id == id
                         select p).FirstOrDefault();

            if (result == null)
            {
                return null;
            }
            else
            {
                this.book.BookId = result.Id;
                this.book.BookName = result.Name;
                this.book.BookAuthor = result.Author;
                this.library.LibName = result.LibName;
                this.library.Location = result.LibLoc;
                this.book.AvailableLibs = new List<ILibrary>();
                this.book.AvailableLibs.Add((ILibrary)this.library);
            }           

            return this.book;
        }

        public List<IBook> getBookList()
        {
            this.listOfBooks = new List<IBook>();
            var rsltBoks = from p in db.Books select p;

            foreach (var bok in rsltBoks)
            {
                listOfBooks.Add((IBook)this.book.Clone());
                listOfBooks.Last().BookId = bok.Id;
                listOfBooks.Last().BookName = bok.Name;
                listOfBooks.Last().BookAuthor = bok.Author;
                
                var _libList = new List<ILibrary>();
                _libList.Add((ILibrary)this.library.Clone());
                _libList.Last().LibName = bok.LibName;
                _libList.Last().Location = bok.LibLoc;
                listOfBooks.Last().AvailableLibs = _libList;
            }
            return this.listOfBooks;
            
        }

    }

Observe that shallow cloning is enough to build up a list of objects that themselves contain another list of objects.
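The point can be demonstrated in isolation. The sketch below is a standalone console snippet with simplified stand-ins for the Book and Library classes above: the shallow clone initially shares the AvailableLibs reference, until a fresh list is assigned to the clone, which is exactly what getBookList() does for every row.

```csharp
using System;
using System.Collections.Generic;

// Simplified stand-ins for the Library and Book classes above.
class Library
{
    public string LibName { get; set; }
}

class Book : ICloneable
{
    public string BookName { get; set; }
    public List<Library> AvailableLibs { get; set; }

    public object Clone()
    {
        return this.MemberwiseClone();
    }
}

class ShallowCloneDemo
{
    static void Main()
    {
        Book original = new Book { BookName = "A", AvailableLibs = new List<Library>() };
        Book copy = (Book)original.Clone();

        // MemberwiseClone copies the list *reference*: both books still
        // point at the very same List<Library> instance.
        Console.WriteLine(ReferenceEquals(original.AvailableLibs, copy.AvailableLibs)); // True

        // Assigning a fresh list to the clone (as getBookList does per row)
        // removes the sharing; the original's list is untouched.
        copy.AvailableLibs = new List<Library> { new Library { LibName = "Central" } };
        Console.WriteLine(original.AvailableLibs.Count); // 0
    }
}
```

This is why the repository can get away with shallow clones: every cloned Book immediately receives its own AvailableLibs list, so nothing is ever mutated through a shared reference.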

Up to now we have successfully decoupled the Model objects (i.e. Book and Library) from the Repository objects (i.e. BookRepository and OracleBookRepository) in code. Now we have to configure these dependencies in our Unity configuration file unity.config (which we have been following since the previous article, Using Unity for IoC and DI).

To introduce the dependency of the BookRepository class on the Book and Library objects, configure the BookRepository registration as follows.


<register type="IBookRepository" mapTo="BookRepository" name="SQLrepo" >
  <constructor>
    <param name="bk" dependencyName="BookModel" />
    <param name="lib" dependencyName="LibModel" />
  </constructor>
</register>

So the final unity.config file will look like the following (if you are following along from the article Using Unity for IoC and DI).



<?xml version="1.0" encoding="utf-8"?>
<unity xmlns="http://schemas.microsoft.com/practices/2010/unity">
  <typeAliases>
    <!-- Models-->
    <typeAlias alias="IBook" type="BusinessBackend.IBook, BusinessBackend" />
    <typeAlias alias="Book" type="BusinessBackend.Book, BusinessBackend" />
    <typeAlias alias="ILibrary" type="BusinessBackend.ILibrary, BusinessBackend" />
    <typeAlias alias="Library" type="BusinessBackend.Library, BusinessBackend" />
    <!-- Services -->
    <typeAlias alias="IBookService" type="BusinessBackend.IBookService, BusinessBackend" />
    <typeAlias alias="BookService" type="BusinessBackend.BookService, BusinessBackend" />
    <!-- Repositories -->
    <typeAlias alias="IBookRepository" type="BusinessBackend.IBookRepository, BusinessBackend" />
    <typeAlias alias="BookRepository" type="BusinessBackend.BookRepository, BusinessBackend" />
    <typeAlias alias="OracleBookRepository" type="BusinessBackend.OracleBookRepository, BusinessBackend" />
  </typeAliases>
  <container>
    <register type="ILibrary" mapTo="Library" name="LibModel" />
    <register type="IBook" mapTo="Book" name="BookModel" />
    <register type="IBookRepository" mapTo="BookRepository" name="SQLrepo" >
      <constructor>
        <param name="bk" dependencyName="BookModel" />
        <param name="lib" dependencyName="LibModel" />
      </constructor>
    </register>
    <register type="IBookRepository" mapTo="OracleBookRepository" name="ORACLErepo" >
      <constructor>
        <param name="bk" dependencyName="BookModel" />
      </constructor>
    </register>
    <register type="IBookService" mapTo="BookService" >
      <constructor>
        <param name="br" dependencyName="SQLrepo">
        <!--<param name="br" dependencyType="BookRepository">-->
        <!--<dependency type="BookRepository" />-->
        <!--<dependency name="SQLrepo" />-->
        </param>
      </constructor>
    </register>
  </container>
</unity>

Finally, let us list what we have done so far in this article:

  1. Decoupled the Model objects from the Repository objects.
  2. Used shallow cloning while creating a list of Model objects.
  3. Configured this decoupling in the Unity configuration file.

Comments are appreciated.

Happy coding…

Using Unity for IoC and DI

In this article I will try to describe how to use Unity in our .NET projects and thereby how to implement IoC (Inversion of Control) and DI (Dependency Injection).

Description of the Objects

Let us go straight to a .NET class library project which we will follow throughout this article. In the project we have a Book class which is considered our model class, a BookService class which is considered the business layer class, and a BookRepository class which is considered the data access layer class. Each of these classes has its interface: IBook, IBookService and IBookRepository.

Fig:1 Objects in the container

Here in this project all of these objects will be contained in a container (i.e. Unity in our case). There are lots of containers out there on the web, but we have chosen Unity because it is a moderate container in all respects. Find more about other containers here.

Next we create a .NET MVC3 website project as our view layer. We have to reference our class library project from this project because we are going to create our container in this MVC3 website project. There are many ways to install the Unity container, but I would prefer to install it from the Package Manager Console. In the Package Manager Console just type PM> Install-Package Unity. It will install Unity in your project and will automatically add references for Microsoft.Practices.Unity, Microsoft.Practices.Unity.Configuration and other necessary packages. Up to here you are ready with Unity, and now you will have to use the Unity container.

Unity container can be configured in two ways - run-time configuration and design-time configuration.

Run-time configuration

Run-time configuration is easy, just put the following code in the Application_Start() function of the web application’s global.asax.cs file.


   // Container initialization by code
   MvcApplication._myContainer = new UnityContainer();
   MvcApplication._myContainer.RegisterType<IBook, Book>();
   MvcApplication._myContainer.RegisterType<IBookService, BookService>();
   MvcApplication._myContainer.RegisterType<IBookRepository, BookRepository>();

After instantiating the UnityContainer you just need to register all the model objects (Book with IBook), business layer objects (BookService with IBookService) and repository objects (BookRepository with IBookRepository). After registration you can use them by calling the Resolve function anywhere in your application, like below.


IBook _b = _cc.Resolve<IBook>();

There are also some advanced functions for DI through constructors, properties or interfaces. You can find more about them here.
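As one hedged example of such advanced run-time configuration (a sketch against the Unity 2.x API using InjectionConstructor and ResolvedParameter, with the IBookRepository/BookService types from this article), a constructor dependency can be wired up in code in Application_Start() as well:

```csharp
// Sketch (Unity 2.x API): tell Unity explicitly which registration
// should satisfy BookService's IBookRepository constructor parameter.
var container = new UnityContainer();
container.RegisterType<IBookRepository, BookRepository>("SQLrepo");
container.RegisterType<IBookService, BookService>(
    new InjectionConstructor(new ResolvedParameter<IBookRepository>("SQLrepo")));

// BookService now receives the "SQLrepo" repository automatically.
IBookService service = container.Resolve<IBookService>();
```

This is the run-time mirror of the <constructor>/<param> configuration we will see in the design-time section below.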

Design-time-configuration

In this article we will concentrate more on the design-time configuration, because it has certain advantages over run-time configuration. For example, you can switch the DAL layer from SQL Server to Oracle just by changing a bit in the configuration file, without recompiling a single line of code.

To configure unity at design time – put the following code in the <configuration> element of the web.config file.


<configSections>
  <section name="unity"
           type="Microsoft.Practices.Unity.Configuration.UnityConfigurationSection,
                 Microsoft.Practices.Unity.Configuration"/>
</configSections>
<unity configSource="unity.config"/>

With this configuration we actually relocate our main container configuration into a separate file, unity.config. Now let's see how we have configured all the objects of the Model, BAL and DAL layers in this file.


<?xml version="1.0" encoding="utf-8"?>
<unity xmlns="http://schemas.microsoft.com/practices/2010/unity">
  <typeAliases>
    <!-- Models-->
    <typeAlias alias="IBook" type="BusinessBackend.IBook, BusinessBackend" />
    <typeAlias alias="Book" type="BusinessBackend.Book, BusinessBackend" />
    <!-- Services -->
    <typeAlias alias="IBookService" type="BusinessBackend.IBookService, BusinessBackend" />
    <typeAlias alias="BookService" type="BusinessBackend.BookService, BusinessBackend" />
    <!-- Repositories -->
    <typeAlias alias="IBookRepository" type="BusinessBackend.IBookRepository, BusinessBackend" />
    <typeAlias alias="BookRepository" type="BusinessBackend.BookRepository, BusinessBackend" />
  </typeAliases>
  <container>
    <register type="IBook" mapTo="Book" />
    <register type="IBookRepository" mapTo="BookRepository" name="SQLrepo" />
    <register type="IBookService" mapTo="BookService" >
      <constructor>
        <param name="br" dependencyName="SQLrepo">
        <!--<param name="br" dependencyType="BookRepository">-->
        <!--<dependency type="BookRepository" />-->
        <!--<dependency name="SQLrepo" />-->
        </param>
      </constructor>
    </register>
  </container>
</unity>

If you look at the unity.config file in detail you will see that in the <typeAliases> section I have just given a shorthand name to each particular class.


  <typeAlias alias="[short hand name]" type="[namespace].[class], [assembly name]" />

Then in the <container> section I have registered each object with its interface. Optionally you can give a registration a name to use it further down in the configuration file. For example, I have registered BookRepository with IBookRepository and named it “SQLrepo”.


    <register type="[interface]" mapTo="[class]" name="[name of the registration]" />

Notice our BookService class, which has a parameterized constructor taking an object of type IBookRepository.


public class BookService : IBookService
    {
        IBookRepository BookRepo;
        public BookService(IBookRepository br)
        {
            BookRepo = br;
        }

        public IBook getBookById(int id)
        {
            return BookRepo.getBookById(id);
        }
    }

This is how we make the BookService object depend on the BookRepository object: while creating a BookService object you have to provide a BookRepository object referenced by IBookRepository. Unity will do this for us if we properly configure the constructor of BookService with the proper dependency. To configure the constructor, add a <constructor> element to the registration of BookService, then set the parameters with <param> elements.

You can set the dependencyName attribute of the <param> element to any named registration.

        <param name="[name of the param]" dependencyName="[name of a registration]">

Or set the dependencyType attribute of the <param> element to the object type on which the constructor depends.

 <param name="[name of the param]" dependencyType="[type of the param]">

Or you can put a <dependency> in the <param> element and set the name attribute or type attribute.


        <dependency type="[type of the param]" />
        <dependency name="[name of a registration]" />

Up to now we are done with the configuration. Now we need to create a Unity container in code from this configuration. We will create the container in Application_Start() with the following code.


  // Container initialization by web.config and unity.config          
  var section = (UnityConfigurationSection)ConfigurationManager.GetSection("unity");
  IUnityContainer container = new UnityContainer().LoadConfiguration(section);
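
Once the configuration is loaded, you resolve from this container the same way as with run-time registration; a small sketch assuming the unity.config above:

```csharp
// Default (unnamed) registration of the service
IBookService service = container.Resolve<IBookService>();

// A named registration can be requested explicitly
IBookRepository sqlRepo = container.Resolve<IBookRepository>("SQLrepo");
```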

Using the container

The best place to use this container is in a controller factory of an MVC3 web application. Here is my controller factory class MyControllerFactory, which is derived from DefaultControllerFactory.


public class MyControllerFactory: DefaultControllerFactory
    {
        IUnityContainer _container;
        public MyControllerFactory(IUnityContainer c)
        {
            _container = c;
        }

        protected override IController GetControllerInstance(RequestContext requestContext, Type controllerType)
        {
            if (controllerType == null)
                throw new System.Web.HttpException(404, "Page not found: " + requestContext.HttpContext.Request.Path);
            if (!typeof(IController).IsAssignableFrom(controllerType))
                throw new System.ArgumentException("Type does not subclass IController", "controllerType");

            object[] parameters = null;

            ConstructorInfo constructor = controllerType.GetConstructors().FirstOrDefault(c => c.GetParameters().Length > 0);
            if (constructor != null)
            {
                ParameterInfo[] parametersInfo = constructor.GetParameters();
                parameters = new object[parametersInfo.Length];

                for (int i = 0; i < parametersInfo.Length; i++)
                {
                    ParameterInfo p = parametersInfo[i];

                    if (!_container.IsRegistered(p.ParameterType))
                        throw new ApplicationException("Can't instantiate controller '" + controllerType.Name + "', one of its parameters is unknown to the IoC Container");

                    parameters[i] = _container.Resolve(p.ParameterType);
                }
            }

            try
            {
                return (IController)Activator.CreateInstance(controllerType, parameters);
            }
            catch (Exception ex)
            {
                throw new InvalidOperationException(String.Format(CultureInfo.CurrentUICulture, "Error creating controller {0}", controllerType), ex);
            }
        }
    }

Observe that the constructor MyControllerFactory(IUnityContainer c) takes the container and preserves it in a local field. Then, while creating a new controller, this preserved container is used to supply the appropriate service-level object (BAL object) to the controller through its parameterized constructor.

So each controller takes a service-level object (BAL object) as a constructor parameter. For example, see how HomeController takes IBookService as a constructor parameter, then consumes this service-level object to get the book information.


    public class HomeController : Controller
    {
        IBookService _bookSrv;
        public HomeController(IBookService bs)
        {
            _bookSrv = bs;
        }

        public ActionResult Index()
        {         
            IBookService _bks = _bookSrv;
            ViewBag.Message = _bks.getBookById(2).BookName;

            return View();
        }     
    }

Extending the container with another Repository

At this point we want to add another repository class, which will handle an Oracle database for example. So we add another class, OracleBookRepository, implementing the same interface IBookRepository in our class library project.

Then we also have to register this new class in our unity.config.


<?xml version="1.0" encoding="utf-8"?>
<unity xmlns="http://schemas.microsoft.com/practices/2010/unity">
  <typeAliases>
    <!-- Models-->
    <typeAlias alias="IBook" type="BusinessBackend.IBook, BusinessBackend" />
    <typeAlias alias="Book" type="BusinessBackend.Book, BusinessBackend" />
    <!-- Services -->
    <typeAlias alias="IBookService" type="BusinessBackend.IBookService, BusinessBackend" />
    <typeAlias alias="BookService" type="BusinessBackend.BookService, BusinessBackend" />
    <!-- Repositories -->
    <typeAlias alias="IBookRepository" type="BusinessBackend.IBookRepository, BusinessBackend" />
    <typeAlias alias="BookRepository" type="BusinessBackend.BookRepository, BusinessBackend" />
    <typeAlias alias="OracleBookRepository" type="BusinessBackend.OracleBookRepository, BusinessBackend" />
  </typeAliases>
  <container>
    <register type="IBook" mapTo="Book" />
    <register type="IBookRepository" mapTo="BookRepository" name="SQLrepo" />
    <register type="IBookRepository" mapTo="OracleBookRepository" name="ORACLErepo" />
    <register type="IBookService" mapTo="BookService" >
      <constructor>
        <param name="br" dependencyName="ORACLErepo">
        <!--<param name="br" dependencyType="OracleBookRepository">-->
        <!--<dependency type="OracleBookRepository" />-->
        <!--<dependency name="ORACLErepo" />-->
        </param>
      </constructor>
    </register>
  </container>
</unity>

First we have to make a type alias for the class OracleBookRepository, then register OracleBookRepository against IBookRepository. Next, to use this new DAL object OracleBookRepository in our application, we have to edit the constructor configuration of the BookService registration so that it depends on OracleBookRepository instead of BookRepository. Observe the changed lines in the unity.config above. Now the new object OracleBookRepository is available in our container.

The fun part is this: if you want to go back to using BookRepository instead of OracleBookRepository, just change back the constructor parameter of BookService in the configuration file; no recompile is needed. This is the sweet part of decoupling with DI.

This is how you can create a Unity container and use it in a .NET MVC web application. You have implemented IoC by introducing Unity in your custom controller factory, and you have implemented DI by injecting BookRepository into the BookService class. Now there is less coupling between the container and the consumer, so you can easily test the controllers with custom service objects in a test project.

Comments are appreciated.

Cygwin vs MinGW – what to prefer when

Sometimes we get confused between Cygwin and MinGW when developing open source applications. Of course they are not the same, but which one to prefer, and when, is a big question.

Differences

MSYS (in MinGW) by itself does not contain a compiler or a C library; therefore it does not give the ability to magically port UNIX programs over to Windows, nor does it provide any UNIX-specific functionality like case-sensitive filenames. Users looking for such functionality should look to Cygwin.

Cygwin applications are, by design, not considered “native Win32 applications” because they rely on the Cygwin POSIX emulation DLL (cygwin1.dll) for POSIX functions and do not use Win32 functions directly. MinGW, on the other hand, provides functions supplied by the Win32 API. While porting applications to MinGW, functions not native to Win32 such as fork(), mmap() or ioctl() need to be reimplemented with Win32 equivalents for the application to function properly.

Preference

In MinGW, MSYS is a collection of GNU utilities such as bash, make, gawk and grep that allows building applications. It is a command prompt where users run “./configure” and then “make” to build programs. The problem is that there is no /usr directory physically: the root (/) is treated as the /usr path, so you cannot create one either. The problem arises when a program depends on a third-party library – there is no place to put the third-party library so that the default search path can find its library file. Usually on Linux /usr/local/lib is the default library search path. So the client program cannot be configured with “./configure”. You will need special modification of the LIBRARY_PATH environment variable, which is tedious and cumbersome.

So to build a program which has lots of dependencies on other libraries, I would prefer Cygwin over MinGW.

References

http://www.mingw.org/wiki/HOWTO_Specify_the_Location_of_External_Libraries_for_use_with_MinGW#comment-278

Needless needs – IP

Sometimes we all need some quick information, such as the global IP address of our machine. Hit any of these sites to get your global IP.

Please report on any broken link.

A simple container used in mvc web applications with DI(Dependency Injection)

If you have a good understanding of IoC (Inversion of Control), DI (Dependency Injection) and multi-layer architecture then continue reading this article; otherwise have a look at this article first – An idea of a three tier application using IoC(Inversion of Control), DI(Dependency Injection) and perfectly decoupled layers

Container

A ‘Container’, as the name implies, will contain some objects for further use in your project. These stored objects are mainly the Business Layer Objects (BLL) and the Data Access Layer Objects (DAL). We store them and then retrieve them as needed in the application.

Sometimes a BLL object may depend on a DAL object. The Container will load (instantiate) them separately. The problem, then, is how we inform the Container about their interdependency. Here comes the role of Dependency Injection: the Container will use DI to inject the dependency into the dependent object, mostly through the constructor. I will not go into the details of DI as there are lots of articles about it on the web.

I will stick with the Container. A Container mainly implements IoC (Inversion of Control), another software engineering term which I am not going to discuss here, but you can find a clear explanation in my post here. In short, with IoC we centralize control over the loading of all the objects in the Container. Basically this is the concept behind all the containers like Unity, Castle Windsor or StructureMap.

Dependency Injection and the use of IoC containers are getting popular day by day. But have we ever been curious about why they are useful? It is all about improving the performance and maintainability of code. Using an IoC container gives you these two benefits:

  1. Your code is easy to maintain
    • Your code is easy to maintain because all your BLL and DAL objects are loaded in a centralized place called the Composition Root. So if you want to replace a particular object, just do it in the Composition Root.
  2. It gives you extra performance
    • From the memory allocation perspective, every new heap allocation takes significant time and becomes work for the garbage collector. Hence uncontrolled object creation and release affects performance. An IoC container helps us minimize scattered object instantiation, and thus performance improves.

In this article I am going to implement a very basic Container in a three tier web application. The design is as below.

Fig: A three tier application architecture

The detail of this design has been described in the post – An idea of a three tier application using IoC(Inversion of Control), DI(Dependency Injection) and perfectly decoupled layers.

I am not going to describe this design here; rather I am going to modify the design and introduce a Container in our Composition Root (i.e. CRoot) layer. We will call our container object ControlsContainer. It will be a singleton. There will be another object, Root, that will use this ControlsContainer object.

A three tier application with Container

This ControlsContainer will maintain a dictionary of objects where all the BLL and DAL layer objects will be registered and indexed. These objects will be fetched as needed throughout the project.

Let's start with a very preliminary skeleton of our ControlsContainer class.


public sealed class ControlsContainer
{
        // Variable for the singleton instance
        private static readonly ControlsContainer _cc = new ControlsContainer();
        // Returns the singleton instance
        public static ControlsContainer CContainerInstance
        {
            get {
                return _cc;
            }
        }

        // This is the delegate that will
        // be used to create objects
        // (don't mind the reason for the input parameter for now).
        // This parameter will be used to inject a dependency.
        public delegate object CreateInstance(ControlsContainer container);

        // Our dictionary that will hold the couples:
        // (type, function that creates "type")
        Dictionary<Type, CreateInstance> Factories;

        // Default private constructor - to make it a singleton
        private ControlsContainer()
        {
            // Create an empty dictionary
            Factories = new Dictionary<Type, CreateInstance>();
        }

        // Other code coming...
}

The code is self-descriptive with the comments. First we create the object as a singleton, so that we don't have to instantiate it over and over again. Then we declare a delegate to hold the pointer to a function that will create a new object (i.e. a BLL or DAL object). Then we create a dictionary, ‘Factories’, that will store all the object factories. In the constructor the dictionary is instantiated. This dictionary will be indexed by the Type of the objects, and the objects will be created with the help of the delegate ‘CreateInstance’. For more understanding of delegates you can read the article ‘Ways of Delegation in .NET (C# and VB)‘.

Now it is time to create a function ‘RegisterInterface’ in the ControlsContainer object. It registers an object by adding the type of the object, and the delegate that will create that object, to the dictionary. Note that it does not instantiate the object.



        // Add or register a delegate and a return type in the dictionary
        public void RegisterInterface<T>(CreateInstance ci)
        {
            if (ci == null)
                throw new ArgumentNullException("ci");

            if (Factories.ContainsKey(typeof(T)))
                throw new ArgumentException("Type already registered");
            // Adding the Type and delegate-function (function pointer)
            Factories.Add(typeof(T), ci);
        }

We need another function, ‘Resolve’, that will call the appropriate delegate function according to the Type, instantiate the object and return the instance.


        // Drag an item from the dictionary, call the delegate function
        // and return the object to the client.
        public T Resolve<T>()
        {
            if (!Factories.ContainsKey(typeof(T)))
                throw new ArgumentException("Type not registered");

            // retrieve the function that creates
            // the object from the dictionary
            CreateInstance creator = (CreateInstance)Factories[typeof(T)];

            // call it!
            return (T)creator(this);
        }

        // We provide an overload that doesn't use generics, to be more
        // flexible when the client doesn't know the type he wants to
        // retrieve at compile time.
        public object Resolve(Type type)
        {
            if (type == null)
                throw new ArgumentNullException("type");

            if (!Factories.ContainsKey(type))
                throw new ArgumentException("Type not registered");

            CreateInstance creator = (CreateInstance)Factories[type];
            return creator(this);
        }

We also have utility functions that check the dictionary for a Type.


        // Utility function that checks for already registered Types
        public bool IsInterfaceRegistered<T>()
        {
            return Factories.ContainsKey(typeof(T));
        }

        // Utility function that checks for already registered Types
        public bool IsInterfaceRegistered(Type type)
        {
            if (type == null)
                throw new ArgumentNullException("type");

            return Factories.ContainsKey(type);
        }

Finally our ControlsContainer class will look like this


    public sealed class ControlsContainer
    {

        private static readonly ControlsContainer _cc = new ControlsContainer();
        // Returns the singleton instance
        public static ControlsContainer CContainerInstance
        {
            get {
                return _cc;
            }
        }

        // This is the delegate that will
        // be used to create objects
        // (don't mind the reason for the input parameter for now).
        // This parameter will be used to inject a dependency.
        public delegate object CreateInstance(ControlsContainer container);

        // Our dictionary that will hold the couples:
        // (type, function that creates "type")
        Dictionary<Type, CreateInstance> Factories;

        // Default private constructor - to make it a singleton
        private ControlsContainer()
        {
            // Create an empty dictionary
            Factories = new Dictionary<Type, CreateInstance>();
        }

        // Add or register a delegate and a return type in the dictionary
        public void RegisterInterface<T>(CreateInstance ci)
        {
            if (ci == null)
                throw new ArgumentNullException("ci");

            if (Factories.ContainsKey(typeof(T)))
                throw new ArgumentException("Type already registered");

            Factories.Add(typeof(T), ci);
        }

        // Drag an item from the dictionary, call the delegate function
        // and return the object to the client.
        public T Resolve<T>()
        {
            if (!Factories.ContainsKey(typeof(T)))
                throw new ArgumentException("Type not registered");

            // retrieve the function that creates
            // the object from the dictionary
            CreateInstance creator = (CreateInstance)Factories[typeof(T)];

            // call it!
            return (T)creator(this);
        }

        // We provide an overload that doesn't use generics, to be more
        // flexible when the client doesn't know the type he wants to
        // retrieve at compile time.
        public object Resolve(Type type)
        {
            if (type == null)
                throw new ArgumentNullException("type");

            if (!Factories.ContainsKey(type))
                throw new ArgumentException("Type not registered");

            CreateInstance creator = (CreateInstance)Factories[type];
            return creator(this);
        }

        // Utility function that checks for already registered Types
        public bool IsInterfaceRegistered<T>()
        {
            return Factories.ContainsKey(typeof(T));
        }

        // Utility function that checks for already registered Types
        public bool IsInterfaceRegistered(Type type)
        {
            if (type == null)
                throw new ArgumentNullException("type");

            return Factories.ContainsKey(type);
        }

    }
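
Before wiring the container into the web project, it can be exercised in isolation. Below is a small, self-contained sketch: the IGreeter/Greeter types are made up purely for illustration, and the container is a trimmed copy of the class above, just enough to demonstrate register and resolve.

```csharp
using System;
using System.Collections.Generic;

// Trimmed copy of the ControlsContainer above, enough to demo register/resolve.
sealed class ControlsContainer
{
    private static readonly ControlsContainer _cc = new ControlsContainer();
    public static ControlsContainer CContainerInstance { get { return _cc; } }

    public delegate object CreateInstance(ControlsContainer container);
    private readonly Dictionary<Type, CreateInstance> Factories =
        new Dictionary<Type, CreateInstance>();

    private ControlsContainer() { }

    public void RegisterInterface<T>(CreateInstance ci)
    {
        Factories.Add(typeof(T), ci);
    }

    public T Resolve<T>()
    {
        // Invoke the stored factory delegate and cast the result.
        return (T)Factories[typeof(T)](this);
    }
}

// Toy service types, made up for this demo.
interface IGreeter { string Greet(string name); }

class Greeter : IGreeter
{
    public string Greet(string name) { return "Hello " + name; }
}

class ContainerDemo
{
    static void Main()
    {
        ControlsContainer cc = ControlsContainer.CContainerInstance;

        // Register a factory delegate; nothing is instantiated yet.
        cc.RegisterInterface<IGreeter>(delegate(ControlsContainer c) { return new Greeter(); });

        // Resolve<T> invokes the delegate and returns the new instance.
        Console.WriteLine(cc.Resolve<IGreeter>().Greet("world")); // Hello world
    }
}
```

Note that instantiation is deferred: the Greeter object is created only when Resolve<IGreeter>() fires the delegate, not at registration time.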

Root

Now let's look at the Root class that will use the ControlsContainer object. The Root object basically obtains the singleton ControlsContainer and then registers all the BLL and DAL objects in the dictionary.


public class Root
    {
        // Container property
        public ControlsContainer MyContainer { get; set; }

        public Root()
        {

            // Get the container singleton
            ControlsContainer _container = ControlsContainer.CContainerInstance;
            // Set the container property
            this.MyContainer = _container;

            // Register SqlProductRepository (DAL layer)
            _container.RegisterInterface<ISqlProductRepository>((ControlsContainer _c) => new SqlProductRepository());
            // Register ProductService (BLL layer) with DI of the DAL (ISqlProductRepository)
            _container.RegisterInterface<IProductService>((ControlsContainer _c) => new ProductService(_c.Resolve<ISqlProductRepository>()));

        }
    }

Notice that the BLL and DAL objects are registered against their interface types, not the concrete object types. This actually decouples the view layer from the BLL and DAL entirely. If any object in the BLL or DAL layer changes, nothing changes in the view layer as long as the BLL and DAL still implement the interfaces.
Also notice the last line of the class, where we register IProductService in the container: there we also pass the dependency ISqlProductRepository into the constructor of the ProductService class. Thus we use dependency injection to decouple the BLL and DAL layers.

View Layer

Now it's time to look at our view. We are implementing an MVC web application in .NET as our view. In our view layer we only have to use the Root object and the interfaces.

At the starting point of our web application (the Application_Start event in the global.asax.cs file) we have to create our custom controller factory and pass the Root object into it. If you are not familiar with custom controller factories, hit here – it's very simple. Our custom controller factory – the MyControllarFactory object – will get the container (the ControlsContainer object) from the property of the Root object. Then on every request our controller factory will do the following jobs:

  1. Create appropriate controller instance.
  2. Query the container to get the BLL object (i.e. ProductService) for that controller.
  3. Pass this BLL object to the constructor of the controller as dependency injection.

Note that this BLL object will be referenced by an interface (i.e. IProductService).

Below is the MyControllarFactory class


public class MyControllarFactory : DefaultControllerFactory
    {
        // Container that will be used through out this application
        private ControlsContainer _container { get; set; }

        private MyControllarFactory()
        {
        }
        // Constructor
        public MyControllarFactory(Root _root)
        {
            if (_root == null)
                throw new ArgumentNullException("root");

            _container = _root.MyContainer;
        }

        // The function that will be called at every Controller instance creation
        protected override IController GetControllerInstance(RequestContext requestContext, Type controllerType)
        {
            if (controllerType == null)
                return null;
                //throw new System.Web.HttpException(404, "Page not found: " + requestContext.HttpContext.Request.Path);

            if (!typeof(IController).IsAssignableFrom(controllerType))
                throw new System.ArgumentException("Type does not subclass IController", "controllerType");

            object[] parameters = null;

            ConstructorInfo constructor = controllerType.GetConstructors().FirstOrDefault(c => c.GetParameters().Length > 0);
            if (constructor != null)
            {
                ParameterInfo[] parametersInfo = constructor.GetParameters();
                parameters = new object[parametersInfo.Length];

                for (int i = 0; i < parametersInfo.Length; i++)
                {
                    ParameterInfo p = parametersInfo[i];

                    if (!_container.IsInterfaceRegistered(p.ParameterType))
                        throw new ApplicationException("Can't instantiate controller '" + controllerType.Name + "', one of its parameters is unknown to the IoC container");
                    // Assign appropriate objects from container to the controllers constructor parameter
                    parameters[i] = _container.Resolve(p.ParameterType);
                }
            }

            try
            {
                // Create the controller instance and return
                return (IController)Activator.CreateInstance(controllerType, parameters);
            }
            catch (Exception ex)
            {
                throw new InvalidOperationException(String.Format(CultureInfo.CurrentUICulture, "Error creating controller '{0}'", controllerType), ex);
            }
        }
    }

Then the controller will be able to use that service instance to query data from the DB.


public class HomeController : Controller
    {
        IProductService _ps;
        public HomeController(IProductService prSrv)
        {
            _ps = prSrv;
        }

        public ActionResult Index()
        {

            ViewBag.Message = "Total Product: " + _ps.GetSqlProductList().Count().ToString();

            return View();
        }

        public ActionResult About()
        {
            return View();
        }
    }

Notice that the HomeController uses the interface to call the service-layer (BLL) functionality.

Up to now we have successfully implemented a container – ControlsContainer – that registers all our back-end objects and serves them to the view layer as required.

Here I have used a very simple container to make it clear how a container helps us in our application; in practical projects we can use more feature-rich containers like Unity, Castle Windsor or StructureMap.

Please comment :)

Reference:

“Dependency Injection in .NET” by Mark Seemann.

http://blog.mikecouturier.com/2010/03/ioc-containers-with-net-mvc-understand.html

An idea of a three tier application using IoC(Inversion of Control), DI(Dependency Injection) and perfectly decoupled layers

Introduction:

Let's start with a very basic concept in developing a three tier application. These kinds of applications normally have three layers: the Data access layer (DAL), the Business logic layer (BLL) and the View or Presentation layer. The organization of the layers is like the picture below.

Fig: Old Three tier

In this organization the BLL is directly dependent on the DAL, and the View layer is directly dependent on the DAL and BLL layers. Here, for example, a ProductService class in the BLL will directly use the ProductRepository class in the DAL layer like this:


public class ProductService 
{
    ProductRepository pr;

    int getProductCount()
    {
        pr = new ProductRepository();
        return pr.getSqlProductCount();
    }
}

This is a problem, because the direct instantiation of ProductRepository inside the ProductService class makes them tightly coupled. Suppose at some point you need to change the repository to OracleProductRepository: you would then have to rewrite the ProductService class and compile it again. Moreover, it is not possible to test the ProductService object without the ProductRepository object.

The same happens to the objects of the View layer, which directly use the objects of the BLL and sometimes the DAL. This makes the View layer tightly coupled to the BLL and DAL layers, which in turn makes it impossible to test the View layer without exactly those BLL and DAL objects.

Actually our problem is this: instantiations of DAL objects are scattered among all the objects of the BLL layer, and instantiations of BLL objects are scattered among all the objects of the View layer. To solve this, we somehow have to control object instantiation from one center.

The remedy:

The remedy lies in IoC and DI, so let's get familiar with those.

IoC:

First, to get rid of this problem we have to use IoC – Inversion of Control. We have to move the control of object instantiation to a separate entity – a separate object which we will call the Composition Root.

DI:

Secondly, notice the Fig: Old Three tier. There the BLL depends on the DAL because objects of the BLL need to instantiate objects of the DAL layer. If we can remove this dependency then we can restrict the objects of the BLL from directly accessing the objects of the DAL. But the objects of the BLL still need access to the DAL objects – how will they get it? The answer is: by Dependency Injection and interfaces.

Notice the picture below.

Fig: New relation between BLL and DAL

Here the BLL and DAL have their own interfaces, and the BLL does not depend on the DAL; rather, the DAL depends on the BLL. Later we will see the advantage of that.

Notice also that there is a separate object, CRoot, that instantiates all the objects in the DAL and BLL. Here we have moved the control of instantiation of all the BLL and DAL objects, thus implementing Inversion of Control (IoC).

CRoot will also inject the DAL objects into the BLL objects during instantiation. This is called Dependency Injection (DI). Interfaces are used to do that.

Have a look at the ProductService class of BLL and follow the comments.


public class ProductService : IProductService
{
  // Dependency of the DAL object will be injected here
  ISqlProductRepository sqlPrdRepo;

  // Constructor
  public ProductService(ISqlProductRepository _r)
  {
    // Injecting Dependency
    sqlPrdRepo = _r;
  }

  // Other functions coming...
}

Now the DAL object that will be injected simply implements the interface:


public class SqlProductRepository : ISqlProductRepository 
{
    // other functions for database query
}

Notice in the Fig: New relation between BLL and DAL that the DAL depends on the BLL – this is because sometimes the DAL needs to populate the POCOs (plain old CLR objects), or domain objects, of the BLL. The DAL will fetch data via the Repository objects or Data layer objects and pass these data to the BLL POCOs.

A sample POCO looks like this and resides in the BLL:


    public class Product : IProduct
    {
        public string Name { get; set; }
        public int Id { get; set; }
    }

And this POCO is populated in the DAL layer by the SqlProductRepository object like this:


public class SqlProductRepository : ISqlProductRepository 
    {
        public IEnumerable<IProduct> getSqlProductList()
        {
            // Making a list of POCOs
            var _l = new List<IProduct>()
            {
                // In practical projects this data will come from the DB
                new Product(){ Name = "Xeon Processor" , Id = 21},
                new Product(){ Name = "Ci7 Processor" , Id = 20},
                new Product(){ Name = "Celeron Processor" , Id = 13},
                new Product(){ Name = "Ci5 Processor" , Id = 17}
             };
            // Returning the list to the BLL
            return _l;
        }
    }

Finally this Product list is passed to the layer above (i.e. the View layer) by the ProductService object in the BLL like this:


public class ProductService : IProductService
{
  // Dependency of the DAL object will be injected here
  ISqlProductRepository sqlPrdRepo;

  // Constructor
  public ProductService(ISqlProductRepository _r)
  {
    // Injecting Dependency
    sqlPrdRepo = _r;
  }

  public IEnumerable<IProduct> GetSqlProductList()
  {
    // Using the dependency, get the Product list and
    // return it to the layer above
    return sqlPrdRepo.getSqlProductList();
  }
}

All the interfaces like IProduct, IProductService and ISqlProductRepository reside in the Interface layer. This layer is kept separate so that all the other layers (e.g. the View layer) can use the interfaces rather than the concrete objects.
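The interface definitions themselves are not shown above, so here is a minimal sketch of what the Interface layer could contain – the member signatures are inferred from how the classes use them, not copied from the original project:

```csharp
using System.Collections.Generic;

// Hypothetical Interface-layer definitions, inferred from the usage above
public interface IProduct
{
    string Name { get; set; }
    int Id { get; set; }
}

public interface ISqlProductRepository
{
    // The DAL returns a list of BLL POCOs through this member
    IEnumerable<IProduct> getSqlProductList();
}

public interface IProductService
{
    // The BLL passes the product list upwards through this member
    IEnumerable<IProduct> GetSqlProductList();
}
```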

Finally comes the Composition Root – the CRoot object (the Root class below) – where the BLL object ProductService is instantiated and the DAL object SqlProductRepository is injected into it.


    public class Root
    {
      // A property as IProductService holding the ProductService  
      public IProductService ProuctService { get; set; }

      // Constructor
      public Root()
      {

         // Instantiating the DAL object
         ISqlProductRepository _repo = new SqlProductRepository();

         // Instantiating the BLL object and 
         // injecting the DAL object into it
         this.ProuctService = new ProductService(_repo);

         // ... other BLL and DAL objects are instantiated here       
      }
    }

Now it is time to integrate the View layer, or Presentation layer. This layer should depend only on the Composition Root (i.e. CRoot) and should be perfectly decoupled from the BLL and DAL.

See the final diagram

Fig: Final View,BLL and DAL

As you can see, the View layer depends on CRoot, and from there it gets all the necessary BLL and DAL objects. The View layer will not use these objects directly; rather, it will use the interfaces of those objects. Thus it is perfectly decoupled from the DAL and BLL layers.

The view layer could be anything from a console application to a web, desktop or mobile application; for simplicity I am showing a console application as the view. This view layer asks the CRoot for an instance of IProductService, then uses this IProductService to get the list of IProduct.

Code from the View layer


static void Main(string[] args)
{
  Root _root = new Root();
  IProductService _prdsrv = _root.ProuctService;
  List<IProduct> _l = (List<IProduct>)_prdsrv.GetSqlProductList();

  Console.WriteLine("The processor list is as below");
  foreach (IProduct pr in _l)
  {
     Console.WriteLine("The product ID: " + 
       pr.Id.ToString() + " --- Name: " + pr.Name);
  }

  Console.Read();
}

There are significant advantages we will get from this design:

1. We have centralized our object instantiation into the CRoot object. So you can neither create DAL objects in the BLL layer nor BLL objects in the View layer. Hence our code is highly maintainable and reusable.

2. We have injected the DAL objects into the BLL objects through constructor injection from the CRoot object. Hence the DAL layer is less coupled with the BLL layer. Someday, if we want to replace the SQL DAL with an Oracle DAL, we just have to modify the CRoot object; the new Oracle DAL only has to implement the interfaces.

3. Our View layer is also loosely coupled with the BLL and DAL because it uses the Interface layer to handle BLL and DAL objects.

4. The application is easy to test. For example, we can easily make a dummy DAL layer, use it in the application and test.
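To illustrate point 4, a dummy DAL could be a small in-memory fake that implements the repository interface. The sketch below repeats minimal copies of the interfaces and the ProductService class so that it is self-contained; the FakeProductRepository name is my own, not part of the article:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Minimal copies of the interfaces and BLL class from the article,
// repeated here to keep the sketch self-contained
public interface IProduct { string Name { get; set; } int Id { get; set; } }
public interface ISqlProductRepository { IEnumerable<IProduct> getSqlProductList(); }

public class Product : IProduct
{
    public string Name { get; set; }
    public int Id { get; set; }
}

public class ProductService
{
    ISqlProductRepository sqlPrdRepo;
    public ProductService(ISqlProductRepository _r) { sqlPrdRepo = _r; }
    public IEnumerable<IProduct> GetSqlProductList() { return sqlPrdRepo.getSqlProductList(); }
}

// The dummy DAL: an in-memory fake that never touches a database
public class FakeProductRepository : ISqlProductRepository
{
    public IEnumerable<IProduct> getSqlProductList()
    {
        return new List<IProduct> { new Product { Name = "Test Product", Id = 1 } };
    }
}

public static class ProductServiceTest
{
    public static void Main()
    {
        // Inject the fake instead of SqlProductRepository
        var service = new ProductService(new FakeProductRepository());
        Console.WriteLine(service.GetSqlProductList().Count()); // prints 1
    }
}
```

Because ProductService only knows about ISqlProductRepository, swapping the real repository for the fake needs no change to the BLL at all.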

Possible Upgrade:

Up to now we have successfully created an application where the code is highly maintainable and reusable, the layers are perfectly decoupled, and each layer is easy to test individually. For a further improvement we can introduce a Container in the CRoot layer. The container can be of any sort – from Unity, Castle Windsor and StructureMap to Spring.NET.
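As a stepping stone before adopting one of those containers, the idea can be sketched with a tiny hand-rolled container inside CRoot. Names like SimpleContainer below are my own assumptions, not part of the article:

```csharp
using System;
using System.Collections.Generic;

// A minimal IoC container: maps an interface type to a factory delegate
public class SimpleContainer
{
    private readonly Dictionary<Type, Func<object>> _factories =
        new Dictionary<Type, Func<object>>();

    // Register a factory for interface type T
    public void Register<T>(Func<T> factory)
    {
        _factories[typeof(T)] = () => factory();
    }

    // Build an instance of T using the registered factory
    public T Resolve<T>()
    {
        if (!_factories.ContainsKey(typeof(T)))
            throw new InvalidOperationException(typeof(T).Name + " is not registered");
        return (T)_factories[typeof(T)]();
    }
}
```

CRoot would then call Register<ISqlProductRepository>(...) once and Resolve<ISqlProductRepository>() wherever the dependency is needed, instead of newing objects all over its constructor.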

Final Words:

The design concept is totally my own – so I will appreciate comments and criticism.

References:

http://msdn.microsoft.com/en-us/vs2010trainingcourse_aspnetmvcdependencyinjection.aspx

http://www.devtrends.co.uk/blog/how-not-to-do-dependency-injection-the-static-or-singleton-container

http://www.dotnetcurry.com/ShowArticle.aspx?ID=786

Lubuntu or Ubuntu fails to update on a VirtualBox virtual machine

Sometimes I feel I have reached the end of my patience – just like the day I got stuck on a Lubuntu update. I had set up Lubuntu in a VirtualBox virtual machine. Right after the installation I tried to update it – the update failed. Then I tried to install GNOME-Commander, and that failed too.

The error was like this:


 Failed to fetch http://archive.ubuntu.com/ubuntu/pool/main/libe/liberror-perl/liberror-perl_0.17-1_all.deb Size mismatch
 Failed to fetch http://archive.ubuntu.com/ubuntu/pool/main/g/git/git-man_1.7.5.4-1_all.deb Bad header line [IP: 91.189.92.181 80]
 Failed to fetch http://archive.ubuntu.com/ubuntu/pool/main/g/git/git_1.7.5.4-1_i386.deb Bad header line [IP: 91.189.92.181 80]
 Failed to fetch http://archive.ubuntu.com/ubuntu/pool/main/p/patch/patch_2.6.1-2_i386.deb Bad header line [IP: 91.189.92.181 80]
 E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

It seemed that it could not connect to the server very well. But when I checked the internet connection, it worked fine in all browsers. I searched for this topic on Google but could not find any way to fix it, not even in the Ubuntu Forum. I was super confused and never thought that it could be an issue with VirtualBox – because all the browsers were working fine. Fortunately, the next day a post from Nicolas saved me.

The network connection for a VirtualBox VM offers a choice of 8 virtual network adapters and 6 networking modes. Check out here for details. Among the 8 adapters, the most commonly used is the AMD PCNet FAST III (Am79C973, the default) – no problem with that. But among the 6 networking modes, the following 3 are the most used.

  • Not attached mode (the guest OS considers that the network cable is unplugged)
  • Network Address Translation (NAT) mode (uses NAT between the guest and host OS – therefore it has some limitations)
  • Bridged networking mode (VirtualBox connects to one of your installed network cards and exchanges network packets directly)

When Lubuntu or Ubuntu is installed in VirtualBox, the networking mode defaults to NAT, which is OK for browsing but has certain limitations with some protocols (e.g. NFS) – and that leads to our problem: the Update Manager cannot communicate with the servers properly.

So to fix this issue, just change the mode from NAT to Bridged networking. To do this in VirtualBox, go to Settings -> Network and, under the first adapter tab, change the 'Attached to' drop-down from 'NAT' to 'Bridged Adapter'.
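If you prefer the command line, the same change can be made with VBoxManage while the VM is powered off. The VM name and host interface below are examples only – adjust them to your setup:

```shell
# Switch the VM's first network adapter from NAT to bridged mode.
# "Lubuntu" is an example VM name; eth0 is an example host interface.
VBoxManage modifyvm "Lubuntu" --nic1 bridged --bridgeadapter1 eth0
```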

Hope this post will save a day of yours.