Saturday, February 6, 2010

Scrum Methodology

The Origins of Scrum

The Scrum software development process arose from shared concerns between Advanced Development Methods (ADM) and VMARK Software (VMARK). ADM produces process automation software; VMARK produces object-oriented software development environments. Both companies were concerned about the lack of breakthrough productivity being reported in object-oriented development projects: both ADM's and VMARK's products are built using OO, and breakthrough productivity had been experienced in both companies.

Scrum Empirical Development Process

The Scrum software development process uses an iterative, incremental approach. Interaction with the environment (technical, competitive, and user) is allowed, and may change the project's scope, technology, functionality, cost, and schedule whenever required. Controls are used to measure and manage the impact.
Scrum accepts that the development process is unpredictable. The product is the best possible software, factoring in cost, functionality, timing, and quality. This concept has been discussed by James Bach of Software Testing Laboratories in various articles, including "The Challenge of Good Enough Software".
Scrum formalizes the empirical "do what it takes" software development process used today by many successful ISVs. The empirical approach has been used by these ISVs to cope with the otherwise overwhelming degree of complexity and uncertainty (chaos) in which they develop products. The chaos exists not only in the marketplace where they hope to sell the products, but in the technology that they employ to design and construct these products.
These ISVs succeed and thrive amidst chaos. How? Is their development approach applicable to IS organizations? Is it predictable and controlled? Can it be used in large and small projects? Is it scalable? Does it work for all types of development?

Scrum - Characteristics and Rules
Scrum encapsulates what works at the best ISVs. Several of the characteristics that guide Microsoft's controlled-chaos approach to the development process are inherent in Scrum:
• "It breaks down large products into manageable chunks - a few product features that small teams can create in a few months.
• It enables projects to proceed systematically even when team members cannot determine a complete and stable product design at the project's beginning.
• It allows large teams to work like small teams by dividing work into pieces, proceeding in parallel but synchronizing continuously, stabilizing in increments, and continuously finding and fixing problems.

• It facilitates competition based on customer feedback, product features, and short development times by providing a mechanism to incorporate customer inputs, set priorities, complete the most important parts first, and change or cut less important features."
Scrum follows common ISV rules:
• Always have a product you can theoretically ship
• Speak a common language on a single development site
• Continuously test the product as you build it

The key principles of the Scrum development process are:
• Small working teams that maximize communication, minimize overhead, and maximize sharing of tacit, informal knowledge
• Adaptability to technical or marketplace (user/customer) changes to ensure the best possible product is produced
• Frequent "builds", or construction of executables, that can be inspected, adjusted, tested, documented, and built on
• Partitioning of work and team assignments into clean, low coupling partitions, or packets
• Constant testing and documentation of a product as it is built
• Ability to declare a product "done" whenever required (because the competition just shipped, because the company needs the cash, because the user/customer needs the functions, because that was when it was promised...).

Scrum and empirical development are recommended as enabling processes for developing new software or software that already is object-oriented or has "clean interfaces". Work objects or subsystems with high cohesion and low coupling are grouped into packets. Teams are assigned one or more packets, maximizing a team's ability to work independently. Scrum is applicable for any size project.

The Roles of Scrum

Scrum has three fundamental roles: Product Owner, ScrumMaster, and Team Member.
• Product Owner: In Scrum, the Product Owner is responsible for communicating the vision of the product to the development team. He or she must also represent the customer’s interests through requirements and prioritization. Because the Product Owner has the most authority of the three roles, it’s also the role with the most responsibility. In other words, the Product Owner is the single individual who must face the music when a project goes awry.
The tension between authority and responsibility means that it’s hard for Product Owners to strike the right balance of involvement. Because Scrum values self-organization among teams, a Product Owner must fight the urge to micro-manage. At the same time, Product Owners must be available to answer questions from the team.
• ScrumMaster: The ScrumMaster acts as a liaison between the Product Owner and the team. The ScrumMaster does not manage the team. Instead, he or she works to remove any impediments that are obstructing the team from achieving its sprint goals. In short, this role helps the team remain creative and productive, while making sure its successes are visible to the Product Owner. The ScrumMaster also works to advise the Product Owner about how to maximize ROI for the team.
• Team Member: In the Scrum methodology, the team is responsible for completing work. Ideally, teams consist of seven cross-functional members, plus or minus two individuals. For software projects, a typical team includes a mix of software engineers, architects, programmers, analysts, QA experts, testers, and UI designers. Each sprint, the team is responsible for determining how it will accomplish the work to be completed. This grants teams a great deal of autonomy, but, similar to the Product Owner’s situation, that freedom is accompanied by a responsibility to meet the goals of the sprint.


Applicable to Small Simple and Large Complex Software Systems
Prior to Scrum, large complex systems had to be built using either a waterfall or a modified waterfall process. A defined, stepwise approach provided task-level controls for managing the process. However, many of these projects failed because they were unresponsive to changing technology and user requirements, and because the adequacy of the deliverables from the tasks couldn't be determined.
A Scrum software project is controlled by establishing, maintaining, and monitoring key measurements as controls. These controls replace task and time-reporting controls used in the waterfall and other defined development processes. These controls are critical when a
software development project is viewed as an empirical process that encompasses an unknown quantity of instability and unpredictability. Use of these controls is the backbone of the Scrum development process.
Through the formalized controls of Scrum, the empirical development process becomes formally scalable. Using an initial systems architecture and design, work can be partitioned and assigned to as many teams as required and managed at the team and rollup levels. Overall and team risk, workload, problems, and other measurements are always known, assessed, and managed.
During the design and system architecture phase of Scrum, the designers and architects divide the project into packets. Packets are assigned to teams based on priority and scheduling constraints. Based on the degree of coupling between packets, groups of up to six teams are organized into a management control cluster. This continues upwards until a top level cluster has been created.
Empirical software development has been used to build systems as large as Windows NT (approximately 4 million lines of C and C++ code) and the Norfolk Southern Railway freight tracking system. Empirical software development has also been used to rapidly get sophisticated, functionally-rich products to market, such as Borland's Quattro Pro for Windows and Advanced Development Method's process management software, MATE (approximately 4,100 function points). Scrum has also been used for short, simple development projects.

Scrum Productivity

Compared to other development processes, Scrum delivers. Scrum was compared to other popular development processes by Capers Jones from Software Productivity Group: "Scrum methodology - similar to the iterative methodology, but assumes that all requirements are not known in advance, and that the fastest path to surfacing and implementing all requirements will be discovered empirically during the development process. Careful control mechanisms are used to assure on-time delivery of a high quality product, while allowing maximum flexibility of small, tightly coupled, development teams. Requires a well motivated team and good leadership to implement effectively. Productivity gains of 600% have been seen repeatedly in well executed projects. "

The Inner Workings of Scrum

Scrum consists of development processes and measurements that are used to control the development processes.
The key to the success of Scrum is using measurements to maximize flexibility while maintaining control. Most projects try to avoid risk. Yet risk is an inherent part of software development. Scrum embraces risk by identifying and managing it, so that the best possible product can be built.
A Scrum software project is controlled by establishing, maintaining, and monitoring key control parameters. These controls are critical when a software development project encompasses an unknown quantity of uncertainty, unpredictable behavior, and chaos. Use of these controls is the backbone of the Scrum development process.
The variables in the systems development project are risk, functionality, cost, time, and quality. These variables can be roughly estimated at the start of a project. Each variable will start changing from the moment the project starts. Variables are traded off against each other as the project progresses (improved functionality for later time and more money, etc.).
The controls used in Scrum are:
• Backlog - an identification of all requirements that should be fulfilled in the completed product
• Objects/Components - self-contained reusable things
• Packets - a group of objects within which a backlog item will be implemented. Coupling between the objects in a packet is high. Coupling between packets is low
• Problems - what must be solved by a team member to implement a backlog item within an object(s) (includes bugs)
• Issues - Concerns that must be resolved prior to a backlog item being assigned to a packet or a problem being solved by a change to a packet
• Solutions - the resolution of an issue or problem
• Changes - the activities that are performed to resolve a problem
• Risks - the risk associated with a problem, issue, or backlog item

These controls are measured, correlated, and tracked. The main controls are backlog and risk. Backlog should start relatively high, get higher during planning, and then be whittled away as the project proceeds - either by being solved or removed - until the software is completed. Risk will rise with the identification of backlog, issues, and problems, and fall to acceptable levels when the software is complete and delivered.
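The backlog-and-risk bookkeeping described above can be sketched in code. The C# types below are purely illustrative (none of these class or member names come from Scrum itself): backlog items carry a risk value, and resolving or removing items whittles both controls down over the life of the project.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch of Scrum's two main controls: backlog and risk.
public class BacklogItem
{
    public string Description { get; set; }
    public int Risk { get; set; }      // risk associated with this item
    public bool Resolved { get; set; } // solved or removed
}

public class ScrumControls
{
    private readonly List<BacklogItem> backlog = new List<BacklogItem>();

    public void Identify(string description, int risk)
    {
        // Identifying new backlog raises total risk.
        backlog.Add(new BacklogItem { Description = description, Risk = risk });
    }

    public void Resolve(string description)
    {
        // Resolving (or removing) an item whittles backlog and risk away.
        foreach (var item in backlog)
            if (item.Description == description) item.Resolved = true;
    }

    // Open backlog should trend toward zero as the project proceeds.
    public int OpenBacklog { get { return backlog.Count(i => !i.Resolved); } }

    // Total open risk falls to acceptable levels as items are resolved.
    public int OpenRisk { get { return backlog.Where(i => !i.Resolved).Sum(i => i.Risk); } }
}
```

Tracking these two numbers over time gives the rising-then-falling backlog and risk curves the text describes.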
The Scrum methodology consists of three distinct processes.
Planning and System Architecture
Product functionality and estimated delivery requirements are planned based on current backlog and assessed risk. The result of this process is the optimal, most elegant design that meets product performance and architectural requirements.
The definition of a new release is based on currently known and uncovered backlog, along with an estimate of its schedule and cost. If a new system is being developed, conceptualization and analysis are performed. If this project is the next release of an existing system, the analysis is more limited.
A baseline product is established. Changes in the user, customer, competitive, and technological environment will require changes to this baseline definition as the project proceeds. Increased productivity through good tools or uncovered components may open the opportunity for adding more backlog to the product, or for releasing the product earlier.
Backlog and risk management will allow the product release to be planned and managed to optimize the product content and its chance of success - given the environment and resources available. In other words, the very best product possible will be built. Development will occur in a controlled environment, as close to the edge of chaos as possible.
The key is to pin down the date at which the application should be released, prioritize functionality requirements, identify the resources available for the development effort, envision the application architecture, and establish the target operating environment(s). Compared with other methodologies, the planning phase is conducted relatively quickly because Scrum assumes that pragmatic managers and the course of events will require these initial parameters to change later.

What differentiates the Scrum Planning and System Architecture process from other methodologies is:
• Controls are established: backlog (requirements) for this project is established and prioritized, risks are defined, objects for implementing backlog are identified, and problems are stated for implementing the backlog into the related objects.
• Team assignments: backlog is assigned to teams of no more than 6 developers, maximizing communication bandwidth and productivity.
• Prioritization: backlog items are prioritized for teams to work on, starting with infrastructure, then the most important functionality down to the least important.

Sprint (consisting of multiple sprints)
Visualize a large pressure cooker. Scrum development work is done in it. Gauges sticking out of the pressure cooker provide detailed information on the inner workings, including backlog, risks, problems, changes, and issues. The pressure cooker is where Scrum sprints occur, iteratively producing incrementally more functional product.
A sprint is a set of development activities conducted over a pre-defined period resulting in a demonstrable executable. The interval is based on product complexity, risk assessment, and the degree of oversight desired. Sprint speed and intensity are driven by the selected duration of the sprint. The duration also depends on where the sprint occurs in the sequence of sprints (initial sprints are usually given a longer duration, later sprints a shorter one), the degree of control desired, and the level of domain expertise of the various teams. Durations range from one to six weeks.
This is where Scrum radically differs from traditional enterprise application methodologies because the planned product (backlog, risk) can be changed at the end of any sprint, in response to the environment (competition, new technology, development tool flaws, etc.). This ensures that the delivered product is a usable product.
Also different, the duration of the Sprint phase is unknown up front. The Sprint duration was initially planned; however, as the sprint iterations occur and the product is incrementally developed, the product may be deemed worth delivering at the end of any sprint ... because of market announcements, competitive pressures, or because the product is just ready.
The project manager establishes sprint teams consisting of between 1 and 7 members (a fully staffed team should include a developer, quality assurance person, and documentation member). Each sprint consists of one or more teams working on their assigned packets to solve the problems posed for implementing backlog in the objects. Each team is given its assignment(s) and all teams are told to sprint to achieve their objectives on the same day between 1 and 6 weeks from the start of the sprint. However, this process is not as undisciplined as it may seem: each team must deliver executable code to successfully end a sprint.
The project remains open to environmental complexity, including competitive, time, quality, and financial pressures, throughout the sprints. The definition of the planned product can be changed during any review meeting of a Sprint. The project and product remain flexible; the delivered product is the best possible and most relevant software possible.
Closure and Consolidation
When the management team determines that the product is ready for delivery (based on the competition, requirements, cost, and quality), they end the Sprint phase and declare the release "closed". Closure is performed when a build is considered to have adequately reduced risk and to have resolved and implemented the required backlog.
Closure consists of finishing system and regression testing, developing training materials, and completing final documentation. Developed products are prepared for general release. Integration, system test, user documentation, training material preparation, and marketing material preparation are among the closure tasks.
The Scrum process asserts that a product is never complete; after the initial construction, it is constantly under development (otherwise known as maintenance and enhancements). Consolidation prepares for the next development cycle. The purpose of consolidation is to clean up the products and artifacts of this development cycle for a clean start on the
next development cycle. This provides an opportunity to clean up all of the loose ends that were let slip during the pressure of getting the release out the door.

Summary

Scrum produces breakthrough productivity, enabling teams to build the best systems possible in complex, unpredictable environments.
We believe that Scrum captures the best practices of ISVs, making them available to IS organizations. Scrum's controls and flexibility put the teams back in charge.
References
The following sources contain reference material on Scrum, "good enough" software, and empirical processes.
World Wide Web
ADM's home page: http://www.tiac.net/users/virman/
Jeff Sutherland's page on Scrum: http://www.tiac.net/users/jsuth/scrum/index.html
James Bach's views: http://www.stlabs.com/real_01.htm
http://www.codeproject.com/KB/architecture/scrum.aspx
Books
Cusumano, M. and Selby, R. Microsoft Secrets. The Free Press, 1995
DeGrace, P. and Stahl, L.H. Wicked Problems, Righteous Solutions. Yourdon Press, 1990
Gleick, J. Chaos: Making a New Science. Penguin Books, 1987
Ogunnaike, B. and Ray, W.H. Process Dynamics, Modeling, and Control. Oxford University Press, 1994
Nonaka, I. and Takeuchi, H. The Knowledge-Creating Company: How Japanese Companies Create the Dynamics of Innovation. Oxford University Press, 1995
Articles
Bach, J. The Challenge of "Good Enough" Software. American Programmer, October 1995
Curtis, B. A Mature View of the CMM. American Programmer, September 1994
Raccoon, L.B.S. The Chaos Model and the Chaos Life Cycle. Software Engineering Notes, vol. 20, no. 1, January 1995
Rumbaugh, J. What Is A Method? Journal of Object Oriented Programming, October 1995

Wednesday, July 29, 2009

C# Constraints on Generic Types

C# Constraints on Generic Types

When you define a generic class, you can apply restrictions to the kinds of types that client code can use for type arguments when it instantiates your class. If client code tries to instantiate your class by using a type that is not allowed by a constraint, the result is a compile-time error. These restrictions are called constraints. Constraints are specified by using the where contextual keyword.

Why Use Generic Constraints

If you want to examine an item in a generic list to determine whether it is valid or to compare it to some other item, the compiler must have some guarantee that the operator or method it has to call will be supported by any type argument that might be specified by client code. This guarantee is obtained by applying one or more constraints to your generic class definition. For example, the base class constraint tells the compiler that only objects of this type or derived from this type will be used as type arguments. Once the compiler has this guarantee, it can allow methods of that type to be called in the generic class. Constraints are applied by using the contextual keyword where.

You can also set up constraints on generic classes. What if you wanted to create a generic list of objects that derive from a certain base class? This would allow you to call certain functions that exist in that class.
By constraining the type, you increase the number of operations you can perform on the type.
// File: Constraints.cs
using System;

public class Employee
{
    private string name;
    private int id;

    public Employee(string name, int id)
    {
        this.name = name;
        this.id = id;
    }

    public string Name
    {
        get { return name; }
        set { name = value; }
    }

    public int Id
    {
        get { return id; }
        set { id = value; }
    }
}

// The constraint guarantees that every T has Employee's members, such as Id.
class MyList<T> where T : Employee
{
    T[] list;
    int count;

    public MyList()
    {
        list = new T[10];
        count = 0;
    }

    public void InsertSorted(T t)
    {
        int index = 0;
        bool added = false;
        while (index < count)
        {
            // t.Id is available here only because of the base class constraint.
            if (list[index].Id > t.Id)
            {
                for (int last = count; last > index; last--)
                {
                    list[last] = list[last - 1];
                }
                list[index] = t;
                added = true;
                break;
            }
            index++;
        }
        if (!added)
        {
            list[index] = t;
        }
        count++;
    }
}

class Program
{
    static void Main()
    {
        MyList<Employee> myList = new MyList<Employee>();
        myList.InsertSorted(new Employee("dan", 200));
        myList.InsertSorted(new Employee("sabet", 100));
        myList.InsertSorted(new Employee("mike", 150));
        myList.InsertSorted(new Employee("richard", 120));
    }
}
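The example above uses a base class constraint. C# also supports interface, reference-type, value-type, and constructor constraints. A brief sketch (the ValueHolder, Factory, and Utility types below are made up for illustration):

```csharp
using System;

// where T : struct -- T must be a value type
public class ValueHolder<T> where T : struct
{
    public T Value;
}

// where T : class -- T must be a reference type
// where T : new() -- T must have a public parameterless constructor
public class Factory<T> where T : class, new()
{
    public T Create() { return new T(); }
}

// An interface constraint lets the generic code call interface members,
// such as CompareTo below.
public static class Utility
{
    public static T Max<T>(T a, T b) where T : IComparable<T>
    {
        return a.CompareTo(b) >= 0 ? a : b;
    }
}
```

For example, Utility.Max(3, 7) compiles because int implements IComparable&lt;int&gt;; a type argument without that interface would produce a compile-time error.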

Friday, July 24, 2009

Abstract Factory Design Pattern

Design Patterns:

Design patterns make it easier to reuse successful designs and architectures. Design patterns help you choose design alternatives that make a system reusable and avoid alternatives that compromise reusability. They help make a system independent of how its objects are created, composed, and represented.

Abstract Factory Design Pattern:
An abstract factory provides an interface for creating families of related objects without specifying their concrete classes.

Sometimes one wants to construct an instance of one of a suite of classes, deciding between the classes at the time of instantiation. In order to avoid duplicating the decision making everywhere an instance is created, we need a mechanism for creating instances of related classes without necessarily knowing which will be instantiated.

Create an Abstract Factory class to return instances of concrete classes (usually subclasses). The class of the resultant instance is unknown to the client of the Abstract Factory.

There are two types of Abstract Factory:

A Simple Abstract Factory is an abstract class defining Factory methods that return instances of concrete subclasses. The choice of which subclass to instantiate is completely determined by which method is used, and is unknown to the client.

The second form of Abstract Factory is an abstract class defining a common protocol of Factory methods. Concrete subclasses of the abstract factory implement this protocol to return instances of the appropriate suite of classes.

Use an Abstract Factory when you need to abstract from the details of product implementation:

1. The system shall be independent of how its constituent pieces are created, composed, and represented.
2. Need to have multiple families of products - The system shall be configured with one of multiple families of products.
3. Need to enforce families of products that must be used together - A family of related product objects is designed to be used together, and you need to enforce this constraint.
4. Need to hide product implementations and just present interfaces - You want to provide a class library of products, and you want to reveal just their interfaces, not their implementations.

Characteristics:

1. An abstract factory is an object maker.
2. It typically can produce more than one type of object.
3. Each object that it produces is known to the receiver of the created object only by that object's interface, not by the object's actual concrete implementation.
4. The different types of objects that the abstract factory can produce are related—they are from a common family.
5. An abstract factory isolates concrete classes.
6. It makes exchanging product families easy.
7. It promotes consistency among products.
8. It supports adding new kinds of products and their families.

Example:

using System;
using System.Text;

public interface IComputerFactory
{
    ICPU createCPU();
    IMemory createMemory();
}

public interface ICPU
{
    string GetCPUString();
}

public interface IMemory
{
    string GetMemoryString();
}

// Concrete CPUA
public class CPUA : ICPU
{
    public string GetCPUString()
    {
        return "CPUA";
    }
}

// Concrete MemoryA
public class MemoryA : IMemory
{
    public string GetMemoryString()
    {
        return "MemoryA";
    }
}

public class ComputerFactoryA : IComputerFactory
{
    public ICPU createCPU()
    {
        return new CPUA();
    }

    public IMemory createMemory()
    {
        return new MemoryA();
    }
}

public class Client
{
    // This is a template method; it does not depend on the concrete factory
    // or the concrete product classes.
    public static string BuildComputer(IComputerFactory factory)
    {
        ICPU cpu = factory.createCPU();
        IMemory memory = factory.createMemory();
        StringBuilder sb = new StringBuilder();
        sb.Append(string.Format("CPU:{0}", cpu.GetCPUString()));
        sb.Append(Environment.NewLine);
        sb.Append(string.Format("Memory:{0}", memory.GetMemoryString()));
        return sb.ToString();
    }
}

Calling Client

private void button2_Click(object sender, EventArgs e)
{
    Abstract_Factory.IComputerFactory factory = new Abstract_Factory.ComputerFactoryA();
    MessageBox.Show(Abstract_Factory.Client.BuildComputer(factory));
}
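To illustrate how the pattern makes exchanging product families easy (characteristic 6 above), here is a hypothetical second family. The interfaces are repeated so the sketch compiles standalone; only the factory handed to the client would change, never the client's template method:

```csharp
using System;

// Interfaces repeated from the example above so this sketch is self-contained.
public interface ICPU { string GetCPUString(); }
public interface IMemory { string GetMemoryString(); }
public interface IComputerFactory
{
    ICPU createCPU();
    IMemory createMemory();
}

// A hypothetical second product family: swap the factory, not the client.
public class CPUB : ICPU
{
    public string GetCPUString() { return "CPUB"; }
}

public class MemoryB : IMemory
{
    public string GetMemoryString() { return "MemoryB"; }
}

public class ComputerFactoryB : IComputerFactory
{
    public ICPU createCPU() { return new CPUB(); }
    public IMemory createMemory() { return new MemoryB(); }
}
```

Passing a ComputerFactoryB instead of a ComputerFactoryA to the client yields a CPUB/MemoryB machine with no change to the client code.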

Wednesday, July 1, 2009

Introduction to .Net 4.0

In the past, programming languages were designed to be either object oriented or functional. Today, languages are designed with several paradigms in mind, combining the best object-oriented and functional capabilities.

The .net languages, and especially C#, are no exception. The C# language has seen many changes and enhancements in each version to let developers harness the real power of the language. Ever since its debut in 2000 as a .net language with the goal of being simple, modern, object oriented, and type safe, C# has gained enhancements in each release of the .net framework.


The 2.0 version of the language introduced support for the following:



1) Generics

2) Anonymous methods

3) Iterators

4) Partial types

5) Nullable types

The 3.0 version predominantly concentrated on LINQ (Language Integrated Query); the additional features that facilitate LINQ include the following:

1) Implicitly Typed Local Variables.

2) Extension Methods.

3) Lambda Expressions.

4) Object and Collection Initializers.

5) Anonymous Types.

6) Implicitly Typed Arrays.

7) Query Expressions and Expression Trees.

The upcoming version of C#, 4.0, is more inspired by dynamic languages like Perl, Python, and Ruby. There are advantages and disadvantages to both statically and dynamically typed languages. Some of the new features that we are going to see in the upcoming release of the .net Framework with respect to C# are described in the following sections.

C# 4.0 language innovations include:

Dynamically Typed Objects.
Optional and Named Parameters.
Improved COM Interoperability.
Safe Co- and Contra-variance.


1. Let us consider this simple statically typed .NET Calculator class; we call its Add method to get the sum of two integers:

Calculator calc = GetCalculator();
int sum = calc.Add(10, 20);

Our code gets more interesting if the Calculator class is not statically typed but rather is written in COM, Ruby, Python, or even JavaScript. Even if we knew that the Calculator class is a .NET object, if we don't know specifically which type it is then we would have to use reflection to discover attributes about the type at runtime and then dynamically invoke the Add method.

object calc = GetCalculator();
Type type = calc.GetType();
object result = type.InvokeMember("Add",
    BindingFlags.InvokeMethod, null, calc,
    new object[] { 10, 20 });
int sum = Convert.ToInt32(result);

If the Calculator class were written in JavaScript, our code would look somewhat like the following.

ScriptObject calc = GetCalculator();
object result = calc.InvokeMember("Add", 10, 20);
int sum = Convert.ToInt32(result);

With the C# 4.0 we would simply write the following code:

dynamic calc = GetCalculator();
int result = calc.Add(10, 20);

In the above example we are declaring a variable, calc, whose static type is dynamic. Yes, you read that correctly, we've statically typed our object to be dynamic. We'll then be using dynamic method invocation to call the Add method and then dynamic conversion to convert the result of the dynamic invocation to a statically typed integer.

You're still encouraged to use static typing wherever possible because of the benefits that statically typed languages afford us. Using C# 4.0, however, it should be less painful on those occasions when you have to interact with dynamically typed objects.


2. Another major benefit of using C# 4.0 is that the language now supports optional and named parameters and so we'll now take a look at how this feature will change the way you design and write your code.

One design pattern you'll often see is that a particular method is overloaded because the method needs to be called with a variable number of parameters.

Let's assume that we have the following OpenTextFile method along with three overloads of the method with different signatures. Overloads of the primary method then call the primary method passing default values in place of those parameters for which a value was not specified within the call to the overloaded method.

public StreamReader OpenTextFile(
string path,
Encoding encoding,
bool detectEncoding,
int bufferSize) { }

public StreamReader OpenTextFile(
string path,
Encoding encoding,
bool detectEncoding) { }

public StreamReader OpenTextFile(
string path,
Encoding encoding) { }

public StreamReader OpenTextFile(string path) { }

In C# 4.0 the primary method can be refactored to use optional parameters as the following example shows:

public StreamReader OpenTextFile(
string path,
Encoding encoding = null,
bool detectEncoding = false,
int bufferSize = 1024) { }

Given this declaration it is now possible to call the OpenTextFile method omitting one or more of the optional parameters.

OpenTextFile("foo.txt", Encoding.UTF8);

It is also possible to use the C# 4.0 support for named parameters and as such the OpenTextFile method can be called omitting one or more of the optional parameters while also specifying another parameter by name.

OpenTextFile("foo.txt", Encoding.UTF8, bufferSize: 4098);

Named arguments must come after any positional arguments, but among themselves they can be provided in any order.
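These rules can be seen in a small sketch. The Files class below is hypothetical; it stubs the OpenTextFile signature from above and returns a string describing the effective arguments, so each call's result is easy to inspect:

```csharp
using System;
using System.Text;

public class Files
{
    // Stubbed version of the OpenTextFile signature above; instead of a
    // StreamReader it returns the arguments it actually received, so the
    // effect of optional and named parameters is visible.
    public static string OpenTextFile(
        string path,
        Encoding encoding = null,
        bool detectEncoding = false,
        int bufferSize = 1024)
    {
        return string.Format("{0}|{1}|{2}", path, detectEncoding, bufferSize);
    }
}
```

Calling Files.OpenTextFile("foo.txt") fills in all the defaults, while Files.OpenTextFile("foo.txt", bufferSize: 4096, detectEncoding: true) shows named arguments appearing after the positional path argument, in any order.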


3. If you have ever written any code that performs some degree of COM interoperability you have probably seen code such as the following.

object filename = "test.docx";
object missing = System.Reflection.Missing.Value;
doc.SaveAs(ref filename,
ref missing, ref missing, ref missing,
ref missing, ref missing, ref missing,
ref missing, ref missing, ref missing,
ref missing, ref missing, ref missing,
ref missing, ref missing, ref missing);

With optional and named parameters the C# 4.0 language provides significant improvements in COM interoperability and so the above code can now be refactored such that the call is merely:

doc.SaveAs("test.docx");

When performing COM interoperability you'll notice that you are able to omit the ref modifier, although the use of the ref modifier is still required when not performing COM interoperability.



With previous versions of these technologies it was also necessary to ship a Primary Interop Assembly (PIA) along with your managed application. This is no longer necessary with C# 4.0 because the compiler can instead embed the interop types directly into the assemblies of your managed application, and it embeds only the types you actually use, not all of the types found within the PIA.
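In Visual Studio 2010 this behavior is controlled per reference via the Embed Interop Types property (the /link compiler option). A sketch of how this appears in the project file (the Word interop assembly name is just an example):

```xml
<!-- EmbedInteropTypes tells the compiler to embed only the interop types
     that the application actually uses, instead of referencing the PIA. -->
<Reference Include="Microsoft.Office.Interop.Word">
  <EmbedInteropTypes>True</EmbedInteropTypes>
</Reference>
```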


4. The final language improvement that we will explore is co-variance and contra-variance and we'll begin by exploring co-variance with .NET arrays.

string[] names = new string[] {
    "Anders Hejlsberg",
    "Mads Torgersen",
    "Scott Wiltamuth",
    "Peter Golde" };

Write(names);

Since version 1.0, arrays in the .NET Framework have been co-variant, meaning that an array of strings, for example, can be passed to a method that expects an array of objects. As such, the above array can be passed to the following Write method, which expects an array of objects.

private void Write(object[] objects)
{
}

Unfortunately, arrays in .NET are not safely co-variant, as we can see in the following code. Assuming that the objects variable references the array of strings, the following will succeed.

objects[0] = "Hello World";

If, however, an attempt is made to assign an integer to the underlying array of strings, an ArrayTypeMismatchException is thrown at run time.

objects[0] = 1024;
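A minimal sketch combining both outcomes (assuming the names array declared earlier):

```csharp
object[] objects = names;       // legal: .NET arrays are co-variant
objects[0] = "Hello World";     // succeeds: a string is stored in a string[]

try
{
    objects[0] = 1024;          // the runtime store check rejects a boxed int
}
catch (ArrayTypeMismatchException)
{
    // thrown because the underlying array is really a string[]
}
```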

In both C# 2.0 and C# 3.0 generics are invariant and so a compiler error would result from the following code:

List<string> names = new List<string>();

Write(names);

Where the Write method is defined as:

private void Write(IEnumerable<object> objects)
{
}

Generics in C# 4.0 now support safe co-variance and contra-variance through the use of the in and out contextual keywords. Let's take a look at how this changes the definitions of the IEnumerable<T> and IEnumerator<T> interfaces.

public interface IEnumerable<out T>
{
    IEnumerator<T> GetEnumerator();
}

public interface IEnumerator<out T>
{
    T Current { get; }
    bool MoveNext();
}

You'll notice that the type parameter T of the IEnumerable<T> interface has been prefixed with the out contextual keyword. Because the IEnumerable<T> interface is read-only (there is no way through the interface to insert new elements into the list), it is safe to treat something more derived as something less derived. With the out contextual keyword we are contractually affirming that IEnumerable<T> is safely co-variant. Given that IEnumerable<T> is safely co-variant, we can now write the following code:

List<string> names = new List<string>();
IEnumerable<object> objects = names;

Because the IEnumerable<T> interface uses the out contextual keyword, the compiler can reason that the above assignment is safe.

Using the in contextual keyword we can achieve safe contra-variance, that is, treating something less derived as something more derived.

public interface IComparer<in T>
{
    int Compare(T x, T y);
}

Given that IComparer<T> is safely contra-variant, we can now write the following code:

IComparer<object> objectComparer = GetComparer();
IComparer<string> stringComparer = objectComparer;

Although the current CTP build of Visual Studio 2010 and the .NET Framework 4.0 has limited support for the variance improvements in C# 4.0, the forthcoming beta will allow you to use the new in and out contextual keywords in types such as IComparer<T>. The .NET Framework team is updating the types within the framework to be safely co- and contra-variant.

Friday, June 19, 2009

How to make Windows services in .NET


Definition: Microsoft Windows services enable you to create long-running executable applications that run in their own Windows sessions. These services (e.g. the SQL Server service) can be started automatically when the computer boots, can be paused and restarted, and do not show any user interface. This makes services ideal whenever you need long-running functionality that does not interfere with other users working on the same computer.


We can start, stop, or pause a Windows service from the Services management console. This MMC (Microsoft Management Console) snap-in can be opened from the Administrative Tools program group, and it lists all the services currently installed on the system.

Windows Services in .NET
To make a Windows service, we need three types of operational programs. They are as follows:
• Service Program
• Service Control Program
• Service Configuration program
Service Program
A Service Program is the main program where our code resides; here we write the actual logic of the service. One point to note is that a single service program can host more than one service, but the program must contain an entry point, a Main() method, just like any other C# application.
To write a service class, we inherit from the ServiceBase class, which is part of the System.ServiceProcess namespace.
Service Control Program
A service control program is part of the operating system; it is responsible for starting, stopping, and pausing Windows services. To control a Windows service from our own code, we use the ServiceController class from the System.ServiceProcess namespace.
Service Configuration Program
Once a service is built, it needs to be installed and configured. Configuration determines, for example, whether the service starts manually or automatically at boot time. To write an installer for a service, we inherit from the Installer class and use the ServiceProcessInstaller and ServiceInstaller classes.

To create a new Windows service, select a Windows service project. Type a project name and click OK. VS.NET will create the basic code for us. We will just have to write the logic of the code.
There is one important thing to consider in the Main() method:
Program.cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.ServiceProcess;
using System.Text;

namespace MonitoringService
{
    static class Program
    {
        /// <summary>
        /// The main entry point for the application.
        /// </summary>
        static void Main()
        {
            ServiceBase[] ServicesToRun = new ServiceBase[]
            {
                new MonitoringWinService()
            };
            ServiceBase.Run(ServicesToRun);
        }
    }
}


MonitoringService.config

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <appSettings>
    <!-- Keys read by MonitoringWinService; values shown match the defaults used in the code. -->
    <add key="FolderPath" value="c:\Image_Folder\" />
    <add key="WinService_log" value="C:\MonitoringService_LogFile.log" />
  </appSettings>
</configuration>

MonitoringWinService.cs

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Diagnostics;
using System.Linq;
using System.ServiceProcess;
using System.Text;
using System.Configuration;
using System.IO;

namespace MonitoringService
{
public partial class MonitoringWinService : ServiceBase
{
    readonly EventLog eventLog1 = new EventLog();

    readonly System.Timers.Timer timer1 = new System.Timers.Timer();

    public MonitoringWinService()
    {
        InitializeComponent();

        timer1.Interval = 60000; // One minute
        // Wire up the timer callback; without this the timer1_Tick handler would never run.
        timer1.Elapsed += (sender, e) => timer1_Tick(sender, e);
        timer1.Enabled = true;
        timer1.Start();

        if (!System.Diagnostics.EventLog.SourceExists("MonitoringWinService"))
            System.Diagnostics.EventLog.CreateEventSource("MonitoringWinService",
                "MonitoringWinService_Logging");

        eventLog1.Source = "MonitoringWinService";

        eventLog1.Log = "MonitoringWinService_Logging";
    }

#region Properties

private string _MonitoringFolder = string.Empty;
public string MonitoringFolder
{
    get
    {
        // Read the configured path on access; fall back to a default when the key is absent.
        // (Reading into a local string also avoids the NullReferenceException that calling
        // ToString() on a missing key would throw.)
        string configured = ConfigurationSettings.AppSettings["FolderPath"];
        if (!string.IsNullOrEmpty(configured))
        {
            _MonitoringFolder = mCheckFolderPath(configured);
        }
        else
        {
            _MonitoringFolder = @"c:\Image_Folder\"; // Default value
        }
        return _MonitoringFolder;
    }
}

private string _LogFile = string.Empty;
public string LogFile
{
    get
    {
        string configured = ConfigurationSettings.AppSettings["WinService_log"];
        if (!string.IsNullOrEmpty(configured))
        {
            _LogFile = configured;
        }
        else
        {
            _LogFile = @"C:\MonitoringService_LogFile.log"; // Default value
        }
        return _LogFile;
    }
}

#endregion


#region CustomMethods

/// <summary>
/// Gets the total size of all files in a folder, in GB.
/// </summary>
/// <param name="strDirectory">The folder to measure.</param>
/// <returns>The folder size in GB, or 0 on error.</returns>
public int mGetDirectoryLength(string strDirectory)
{
    long iTotalSize = 0;
    try
    {
        foreach (string d in Directory.GetFiles(strDirectory))
        {
            iTotalSize = iTotalSize + mGetFileLength(d);
        }

        return mConvertBytesIntoGB(iTotalSize);
    }
    catch (Exception)
    {
        return 0;
    }
}
/// <summary>
/// Converts bytes into MB.
/// </summary>
public int mConvertBytesIntoMB(long bytes)
{
    return Convert.ToInt32(Math.Round(bytes / 1024.0 / 1024.0, 2));
}

/// <summary>
/// Converts bytes into GB.
/// </summary>
public int mConvertBytesIntoGB(long bytes)
{
    return Convert.ToInt32(Math.Round(bytes / 1024.0 / 1024.0 / 1024.0, 2));
}
/// <summary>
/// Gets the length of a file in bytes.
/// </summary>
public long mGetFileLength(string strFilePath)
{
    if (!string.IsNullOrEmpty(strFilePath))
    {
        FileInfo info = new FileInfo(strFilePath);
        return info.Length;
    }
    return 0;
}
/// <summary>
/// Checks the folder path, appending a trailing backslash if needed to make it a valid path.
/// </summary>
public static string mCheckFolderPath(string strpath)
{
    if (!string.IsNullOrEmpty(strpath))
    {
        if (!strpath.EndsWith(@"\"))
        {
            return strpath + @"\";
        }

        return strpath;
    }

    return strpath;
}
/// <summary>
/// Appends a message to the log file.
/// </summary>
public void mWriteLogFile(string LogMessage)
{
    if (LogMessage == null) throw new ArgumentNullException("LogMessage");

    // StreamWriter with append:true creates the file if it does not exist and
    // appends otherwise. The original File.Create() call left an undisposed
    // handle that locked the file, and FileMode.Open overwrote earlier entries.
    using (StreamWriter writer = new StreamWriter(LogFile, true, Encoding.Default))
    {
        writer.WriteLine(LogMessage);
    }
}

#endregion

protected override void OnStart(string[] args)
{
    mWriteLogFile("Service Started ............." + DateTime.Now);
    eventLog1.WriteEntry("Service Started " + DateTime.Now);
}

protected override void OnStop()
{
    mWriteLogFile("Service Stopped ............." + DateTime.Now);
    eventLog1.WriteEntry("Service Stopped " + DateTime.Now);
}

protected override void OnContinue()
{
    mWriteLogFile("Image_Folder monitoring in Process " + DateTime.Now);
    eventLog1.WriteEntry("Image_Folder monitoring in Process " + DateTime.Now);
}

private void timer1_Tick(object sender, EventArgs e)
{
    mWriteLogFile(string.Format("Folder Size [GB] {0} - DateTime {1}", mGetDirectoryLength(MonitoringFolder), DateTime.Now));
    eventLog1.WriteEntry(string.Format("Folder Size [GB] {0} - DateTime {1}", mGetDirectoryLength(MonitoringFolder), DateTime.Now));
}
}
}

InstallerClass.cs

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.ComponentModel;
using System.ServiceProcess;



namespace MonitoringService
{
    [RunInstallerAttribute(true)]
    public class InstallerClass : System.Configuration.Install.Installer
    {
        public InstallerClass()
        {
            ServiceInstaller si = new ServiceInstaller();
            ServiceProcessInstaller spi = new ServiceProcessInstaller();

            si.ServiceName = "MonitoringImageFolder";
            si.DisplayName = "MonitoringImageFolder";
            si.StartType = ServiceStartMode.Automatic;
            this.Installers.Add(si);

            spi.Account = ServiceAccount.LocalSystem;
            spi.Username = null;
            spi.Password = null;
            this.Installers.Add(spi);
        }
    }
}


Deployment Process
To add a deployment project
1. On the File menu, point to Add, and then click New Project.
2. In the Add New Project dialog box's Project Type pane, open the Other Project Types node, select Setup and Deployment Projects. In the Templates pane, choose Setup Project. In the Name box, type MonitoringService_Setup.
The project is added to Solution Explorer and the File System Editor is displayed.
3. In the File System Editor, select Application Folder in the left pane. On the Action menu, point to Add, and then choose Project Output.
4. In the Add Project Output Group dialog box, MonitoringService will be displayed in the Project list. Select Primary Output.
Primary Output from MonitoringService (Active) appears in the Application Folder.
To add the custom action
1. Select the MonitoringService_Setup project in Solution Explorer. On the View menu, point to Editor, and then choose Custom Actions.
The Custom Actions Editor is displayed.
2. In the Custom Actions Editor, select the Commit node. On the Action menu, choose Add Custom Action.
3. In the Select Item in Project dialog box, double-click the Application Folder. Then select Primary output from MonitoringService.
Primary output from MonitoringService appears under the Commit node in the Custom Actions Editor.
4. In the Properties window, make sure that the InstallerClass property is set to True (this is the default).
5. In the Custom Actions Editor, select the Install node and add Primary output from MonitoringService to this node as you did for the Commit node.
6. On the Build menu, choose Build MonitoringService_Setup.
InstallUtil could also be used to install the Windows service:
1. Open a Visual Studio .NET command prompt.
2. Change to the bin\Debug directory of your project location (bin\Release if you compiled in release mode).
3. Issue the command InstallUtil.exe MonitoringService.exe to register the service and have it create the appropriate registry entries.
4. Open the Computer Management console by right-clicking My Computer on the desktop and selecting Manage.
5. In the Services section underneath Services and Applications you should now see your Windows service included in the list of services.
6. Start your service by right-clicking it and selecting Start.
Each time you change your Windows service you will need to uninstall and reinstall it. Before uninstalling, make sure the Services management console is closed; otherwise you may have difficulty uninstalling and reinstalling the service. To uninstall the service, reissue the same InstallUtil command used to register it and add the /u switch.
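The install and uninstall commands from a Visual Studio command prompt, as a sketch (the executable name and path assume the MonitoringService project from this walkthrough):

```shell
cd bin\Debug
InstallUtil.exe MonitoringService.exe     :: register the service
InstallUtil.exe /u MonitoringService.exe  :: uninstall the service
```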

Monday, June 8, 2009

What is Silverlight?


Silverlight is a new cross-browser, cross-platform implementation of the .NET Framework for building and delivering the next generation of media experiences and Rich Interactive Applications (RIA) for the web. It runs in all popular browsers, including Microsoft Internet Explorer, Mozilla Firefox, Apple Safari, and Opera. The plug-in required to run Silverlight is very small, so it installs very quickly.
It is a combination of different technologies in a single development platform that allows you to select the tools and the programming language you want to use. Silverlight integrates seamlessly with your existing JavaScript and ASP.NET AJAX code to complement functionality you have already created.
Silverlight aims to compete with Adobe Flash and the presentation components of Ajax. It also competes with Sun Microsystems' JavaFX, which was launched a few days after Silverlight.
Currently there are 2 versions of Silverlight:

Silverlight 1.0 :

Silverlight 1.0 consists of the core presentation framework, which is responsible for UI, interactivity and user input, basic UI controls, graphics and animation, media playback, DRM support, and DOM integration.
Main features of Silverlight 1.0 :
1. Built-in codec support for playing VC-1 and WMV video, and MP3 and WMA audio within a browser.
2. Silverlight supports the ability to progressively download and play media content from any web-server.
3. Silverlight also optionally supports built-in media streaming.
4. Silverlight enables you to create rich UI and animations, and blend vector graphics with HTML to create compelling content experiences.
5. Silverlight makes it easy to build rich video player interactive experiences.

Silverlight 1.1 :

Silverlight 1.1 includes a version of the .NET Framework, with the same full Common Language Runtime as .NET Framework 3.0, so it can execute code written in any .NET language, including VB.NET and C#. Unlike the CLR included with the .NET Framework, multiple instances of the CoreCLR included in Silverlight can be hosted in one process. With this, the XAML layout markup file (.xaml file) can be augmented by code-behind, written in any .NET language, which contains the programming logic.
Main features of Silverlight 1.1 :
1. A built-in CLR engine that delivers a super high performance execution environment for the browser. Silverlight uses the same core CLR engine that we ship with the full .NET Framework.
2. Silverlight includes a rich framework library of built-in classes that you can use to develop browser-based applications.
3. Silverlight includes support for a WPF UI programming model. The Silverlight 1.1 Alpha enables you to program your UI with managed code/event handlers, and supports the ability to define and use encapsulated UI controls.
4. Silverlight provides a managed HTML DOM API that enables you to program the HTML of a browser using any .NET language.
5. Silverlight doesn't require ASP.NET to be used on the backend web-server (meaning you could use Silverlight with PHP on Linux if you wanted to).
How Silverlight would change the Web:
1. Highest Quality Video Experience : prepare to see some of the best quality videos you have seen in your life, all embedded in highly graphical websites. The same research and technology that was used for VC-1, the codec that powers BluRay and HD DVD, is used by Microsoft today with its streaming media technologies.
2. Cross-Platform, Cross-Browser : Finally build web applications that work on any browser, and on any operating system. At release, Silverlight will work with Mac as well as Windows! The Mono project has also already promised support for Linux!
3. Developers and Graphic Designers can play together! : Developers familiar with Visual Studio and Microsoft .NET will be able to develop amazing Silverlight applications very quickly, and they will work on Macs and Windows. Developers will finally be able to focus strictly on the back end of the application core, while leaving the visuals to the graphic design team using the power of XAML.
4. Cheaper : Silverlight is now the most inexpensive way to stream video files over the internet at the best quality possible. Licensing is dead simple, all you need is IIS in Windows Server, and you’re done.
5. Support for 3rd Party Languages : Using the power of the new Dynamic Language Runtime, developers will now be able to use Ruby, Python, and EcmaScript! This means a Ruby developer can develop Silverlight applications and leverage the .NET Framework!
6. Cross-Platform, Cross-Browser Remote Debugging : If you are in the need to debug an application running on a Mac, no problem! You can now set breakpoints, step into/over code, have immediate windows, and all that other good stuff that Visual Studio provides.
7. The best development environment on the planet : Visual Studio is an award winning development platform! As it continues to constantly evolve, so will Silverlight!
8. Silverlight offers copy protection : Have you noticed how easy it is to download YouTube videos to your computer and save them for later viewing? Silverlight will finally have the features that give content providers complete control over their rich media content! Streaming television, new indie broadcast stations, all will now be possible!
9. Extreme Speed : There is a dramatic improvement in speed for AJAX-enabled websites that begin to use Silverlight, leveraging the Microsoft .NET Framework.
Getting Started With Silverlight :
In order to create Silverlight applications, you will need the following:
Runtime :
Microsoft Silverlight 1.1 (currently alpha) : The runtime required to view Silverlight applications created with Microsoft .NET.
Developer Tools :
Microsoft Visual Studio codename "Orcas" (currently Beta 1) : The next generation development tool.
Microsoft Silverlight Tools Alpha for Visual Studio codename "Orcas" : The add-on to create Silverlight applications using .NET.
Designer Tools :
Download the Expression designer tools to start designing Silverlight application.
Expression Blend 2
Software Development Kit:
Microsoft Silverlight 1.1 Alpha Software Development Kit (SDK) : Download this SDK to create Silverlight Web experiences that target Silverlight 1.1 Alpha. The SDK contains documentation and samples.

Thursday, May 28, 2009

OracleHelper.cs

This OracleHelper class is intended to encapsulate high-performance, scalable best practices for common uses of OracleClient.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Data.OracleClient;
using System.Data;

namespace Trans829115.DAL
{
public class OracleHelper
{
private string strConnectionString;
private OracleConnection oConnection;
private OracleCommand oCommand;
private int iTimeOut = 30;

public enum ExpectedType
{

StringType = 0,
NumberType = 1,
DateType = 2,
BooleanType = 3,
ImageType = 4
}

public OracleHelper()
{
try
{

strConnectionString = System.Configuration.ConfigurationManager.ConnectionStrings["ConnectionString"].ConnectionString;

oConnection = new OracleConnection(strConnectionString);
oCommand = new OracleCommand();
oCommand.CommandTimeout = iTimeOut;
oCommand.Connection = oConnection;

}
catch (Exception ex)
{
    // Preserve the original exception as the inner exception.
    throw new Exception("Error initializing OracleHelper class.", ex);
}
}


public OracleHelper(string MyConnectionString)
{
try
{

strConnectionString = MyConnectionString;

oConnection = new OracleConnection(strConnectionString);
oCommand = new OracleCommand();
oCommand.CommandTimeout = iTimeOut;
oCommand.Connection = oConnection;

}
catch (Exception ex)
{
    throw new Exception("Error initializing OracleHelper class.", ex);
}
}

public void Dispose()
{
try
{
//Dispose off connection object
if (oConnection != null)
{
if (oConnection.State != ConnectionState.Closed)
{
oConnection.Close();
}
oConnection.Dispose();
}

//Clean Up Command Object
if (oCommand != null)
{
oCommand.Dispose();
}

}

catch (Exception ex)
{
    throw new Exception("Error disposing OracleHelper class.", ex);
}

}

public void CloseConnection()
{
if (oConnection.State != ConnectionState.Closed) oConnection.Close();
}

public int GetExecuteScalarByCommand(string Command)
{

object identity = 0;
try
{
oCommand.CommandText = Command;
oCommand.CommandTimeout = iTimeOut;
oCommand.CommandType = CommandType.StoredProcedure;

oConnection.Open();

oCommand.Connection = oConnection;
identity = oCommand.ExecuteScalar();
CloseConnection();
}
catch
{
    CloseConnection();
    throw; // rethrow without resetting the stack trace
}
return Convert.ToInt32(identity);
}

public void GetExecuteNonQueryByCommand(string Command)
{
try
{
oCommand.CommandText = Command;
oCommand.CommandTimeout = iTimeOut;
oCommand.CommandType = CommandType.StoredProcedure;

oConnection.Open();

oCommand.Connection = oConnection;
oCommand.ExecuteNonQuery();

CloseConnection();
}
catch
{
    CloseConnection();
    throw;
}
}

public void GetExecuteNonQueryBySQL(string strSQL)
{
try
{
oCommand.CommandText = strSQL;
oCommand.CommandTimeout = iTimeOut;
oCommand.CommandType = CommandType.Text;

oConnection.Open();

oCommand.Connection = oConnection;
oCommand.ExecuteNonQuery();

CloseConnection();
}
catch
{
    CloseConnection();
    throw;
}
}

public DataSet GetDatasetByCommand(string Command)
{
try
{
oCommand.CommandText = Command;
oCommand.CommandTimeout = iTimeOut;
oCommand.CommandType = CommandType.StoredProcedure;

oConnection.Open();

OracleDataAdapter adpt = new OracleDataAdapter(oCommand);
DataSet ds = new DataSet();
adpt.Fill(ds);
return ds;
}
finally
{
CloseConnection();
}
}

public DataSet GetDatasetBySQL(string strSQL)
{
try
{
oCommand.CommandText = strSQL;
oCommand.CommandTimeout = iTimeOut;
oCommand.CommandType = CommandType.Text;

oConnection.Open();

OracleDataAdapter Adapter = new OracleDataAdapter(strSQL, oConnection);
DataSet ds = new DataSet();
Adapter.Fill(ds);
return ds;
}
finally
{
CloseConnection();
}
}


public OracleDataReader GetReaderBySQL(string strSQL)
{
    oConnection.Open();
    try
    {
        OracleCommand myCommand = new OracleCommand(strSQL, oConnection);
        // CommandBehavior.CloseConnection closes the connection when the caller closes the reader.
        return myCommand.ExecuteReader(CommandBehavior.CloseConnection);
    }
    catch
    {
        CloseConnection();
        throw;
    }
}

public OracleDataReader GetReaderByCmd(string Command)
{
    try
    {
        oCommand.CommandText = Command;
        oCommand.CommandType = CommandType.StoredProcedure;
        oCommand.CommandTimeout = iTimeOut;

        oConnection.Open();
        oCommand.Connection = oConnection;

        // CommandBehavior.CloseConnection closes the connection when the caller closes the reader.
        return oCommand.ExecuteReader(CommandBehavior.CloseConnection);
    }
    catch
    {
        CloseConnection();
        throw;
    }
}

public void AddParameterToSQLCommand(string ParameterName, OracleType ParameterType)
{
    oCommand.Parameters.Add(new OracleParameter(ParameterName, ParameterType));
}

public void AddParameterToSQLCommand(string ParameterName, OracleType ParameterType, int ParameterSize)
{
    oCommand.Parameters.Add(new OracleParameter(ParameterName, ParameterType, ParameterSize));
}

public void SetSQLCommandParameterValue(string ParameterName, object Value)
{
    oCommand.Parameters[ParameterName].Value = Value;
}
}

}
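As a sketch of how this helper might be used, consider calling a stored procedure that returns a scalar (the procedure name GET_EMPLOYEE_COUNT and parameter P_DEPT_ID below are hypothetical, purely for illustration):

```csharp
// Hypothetical usage sketch; the stored procedure name and parameter are illustrative only.
OracleHelper helper = new OracleHelper();
try
{
    helper.AddParameterToSQLCommand("P_DEPT_ID", OracleType.Number);
    helper.SetSQLCommandParameterValue("P_DEPT_ID", 10);
    int count = helper.GetExecuteScalarByCommand("GET_EMPLOYEE_COUNT");
}
finally
{
    helper.Dispose();
}
```

Wrapping the helper in try/finally ensures the connection and command are disposed even if a call throws.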