
Monday, November 29, 2010

Microsoft LightSwitch

Microsoft has released LightSwitch, a rapid application development tool for building applications for the desktop, the web, and the cloud. Microsoft is promoting LightSwitch as the simplest way for developers of all skill levels to build business applications, but it seems to be targeted more at less experienced developers.

You can download the beta version and look at some demonstration videos here.

Sunday, November 28, 2010

WCF Data Services

It often becomes difficult (if not painful) to share data beyond its original intent. As systems continue to become more interconnected, the need to reuse information also grows and the value of any given data becomes greater the more it can be shared and accessed by other systems.

The Open Data Protocol, referred to as OData, is a new data-sharing standard that breaks down silos and fosters an interoperable ecosystem for data consumers (clients) and producers (services) that is far more powerful than what is currently possible. WCF Data Services is the Microsoft technology that supports the Open Data Protocol. Microsoft now also supports OData in SQL Server 2008 R2, Windows Azure Storage, Excel 2010 (through PowerPivot), and SharePoint 2010.


In addition to client libraries that simplify working with OData, the Data Services framework builds on the general WCF capabilities to provide a solution for creating OData services for the web. Data Services enable you to expose data models to the Web in a RESTful way, with rich built-in data access capabilities such as flexible querying, paging, and association traversal.

The Data Services framework facilitates the creation of flexible data services that are naturally integrated with the web. WCF Data Services use URIs to point to pieces of data and use simple, well-known formats to represent that data, such as JSON and Atom (an XML-based feed format). As a result, the data service is surfaced as a REST-style resource collection that is addressable with URIs and that agents can interact with using standard HTTP verbs such as GET, POST, PUT, or DELETE.
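To make this concrete, here is a minimal sketch of a service definition, assuming a hypothetical NorthwindEntities Entity Framework model hosted as Northwind.svc; a Data Service is just a class derived from DataService<T> with access rules configured at initialization:

using System.Data.Services;
using System.Data.Services.Common;

// Hypothetical OData service over an Entity Framework model
// named NorthwindEntities.
public class NorthwindService : DataService<NorthwindEntities>
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        // Expose only the Customers entity set, read-only.
        config.SetEntitySetAccessRule("Customers", EntitySetRights.AllRead);
        config.DataServiceBehavior.MaxProtocolVersion =
            DataServiceProtocolVersion.V2;
    }
}

A GET request to /Northwind.svc/Customers would then return the customer feed as Atom, and individual rows are addressable by key directly in the URI.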

For examples and quick-start guides on WCF Data Services, go to this link. Also, read more about the OData protocol here.

Thursday, November 25, 2010

Difference between XmlSerializer and DataContractSerializer (WCF)

Let's start with: what is serialization?
It is the process of converting an object instance into a format that can be persisted or transported. Objects can be serialized into all sorts of formats (XML, binary, JSON, etc.). Serializing to XML is most often used for its interoperability. Serializing to binary is useful when you want to send an object from one .NET application to another. .NET even provides the interfaces and base classes to build your own serialization format.

Deserialization is basically the reverse of serialization. It's the process of taking some data (XML, binary, JSON, etc.) and converting it back into an object.
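As a quick sketch of a roundtrip through XmlSerializer (the Person class here is hypothetical):

using System;
using System.IO;
using System.Xml.Serialization;

public class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
}

class Program
{
    static void Main()
    {
        var serializer = new XmlSerializer(typeof(Person));

        // Serialize: object instance -> XML string.
        var writer = new StringWriter();
        serializer.Serialize(writer, new Person { Name = "Joe", Age = 30 });
        string xml = writer.ToString();

        // Deserialize: XML string -> object instance.
        var person = (Person)serializer.Deserialize(new StringReader(xml));
        Console.WriteLine("{0}, {1}", person.Name, person.Age);
    }
}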

I want to point out that the differences between XmlSerializer and DataContractSerializer discussed below are in the context of WCF.
XmlSerializer
Advantages:
  • Opt-out rather than opt-in properties to serialize. This means you don’t have to specify each and every property to serialize, only those you don’t want to serialize
  • Full control over how a property is serialized including it being a node or an attribute
  • Supports more of the XSD standard
Disadvantages:
  • Can only serialize properties
  • Properties must be public
  • Properties must have a get and a set, which can result in some awkward design
  • Supports a narrower set of types
  • Cannot understand the DataContractAttribute; marking a type as a data contract has no effect on how XmlSerializer serializes it
DataContractSerializer
Advantages:
  • Opt-in rather than opt-out properties to serialize. This means you explicitly specify what you want to serialize
  • Because it is an opt-in model, you can serialize not only properties but also fields. You can even serialize non-public members. And a property doesn't need a set accessor either (without a setter you can serialize, but not deserialize)
  • Is faster than XmlSerializer: because you don’t have full control over how the data is serialized, there is a lot the framework can do to optimize the serialization/deserialization process
  • Can understand the SerializableAttribute and knows that the type needs to be serialized
  • More options and control over KnownTypes
Disadvantages:
  • No control over how the object is serialized outside of setting the name and the order

For WCF, prefer the DataContractSerializer. But if you need full control over how the XML looks, use XmlSerializer.
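As a minimal sketch of the opt-in model (the Customer type and its members are hypothetical):

using System.Runtime.Serialization;

[DataContract]
public class Customer
{
    public Customer(int creditLimit) { _creditLimit = creditLimit; }

    // Opt-in: only members marked with [DataMember] are serialized.
    [DataMember(Name = "CustomerName", Order = 1)]
    public string Name { get; set; }

    // Non-public fields can be serialized too.
    [DataMember(Order = 2)]
    private int _creditLimit;

    // Not marked, so the DataContractSerializer ignores it.
    public string InternalNotes { get; set; }
}

Note how the Name and Order settings are about the extent of the control you get over the shape of the XML; anything fancier calls for XmlSerializer.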

Friday, November 19, 2010

ASP.NET Page Life Cycle Overview

It is important to understand how the ASP.NET page life cycle works, as it provides insight into the series of processing steps a page goes through, so that you can write code at the appropriate life-cycle stage. Especially when you are developing custom controls, you need to be familiar with the page life cycle in order to correctly initialize controls, populate control properties with view-state data, and run behavior code.

I am not going to explain all the stages and events of the page life cycle, as they are well documented on MSDN and other blogs. But I found a really good image which gives an overview of the page life cycle. The following image shows some of the most important methods of the Page class that you can override in order to add code that executes at specific points in the page life cycle. The image also shows how these methods relate to page events and to control events. The sequence of methods and events in the illustration is from top to bottom, and within each row from left to right.


The image above is relevant to .NET Framework 4.0. I am going to print it out and stick it somewhere near my desk. BTW, an easy way to remember the important page life-cycle stages is SILVER.

S – Start
I – Initialize
L – Load
V – Validate
E – Event Handling
R – Render
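To make this concrete, here is a minimal sketch of overriding a few of these methods in a page (the page name and the logic inside are hypothetical):

using System;
using System.Web.UI;

public partial class OrdersPage : Page
{
    protected override void OnInit(EventArgs e)
    {
        base.OnInit(e);
        // Initialize: controls exist, but view state is not loaded yet.
    }

    protected override void OnLoad(EventArgs e)
    {
        base.OnLoad(e);
        // Load: view state and postback data have been applied.
        if (!IsPostBack)
        {
            // One-time setup, e.g. binding a grid to its data source.
        }
    }

    protected override void OnPreRender(EventArgs e)
    {
        base.OnPreRender(e);
        // PreRender: last chance to change the page before
        // view state is saved and the output is rendered.
    }
}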

Thursday, November 18, 2010

Update T4 POCO Template for inheritance

Lately I have been working on an application which uses Entity Framework 4. After playing around with the default code generation, I found the option of using the ADO.NET POCO template generator. The entities it generates are very light compared to the entities generated by the default code generator.

But I wanted to customize the template further to suit my requirements. One of the requirements was that all the entity classes should inherit from a base class. To edit the template, first you need to install the T4 editor extension (download it from here). This editor provides IntelliSense and syntax highlighting.

Currently the entities generated from the model inherit from other entities only if that inheritance is defined in the model. For example, Employee inherits from Person. This is how the entity declaration looks in the original template:

partial class <#=code.Escape(entity)#><#=code.StringBefore(" : ", code.Escape(entity.BaseType))#>

This ensures that the Employee class inherits from Person, or that any derived entity inherits from its base, in the generated code. But we now want every entity to inherit from the new base class (BaseModel) unless it already derives from another base entity. In other words, Person should inherit directly from BaseModel, while Employee continues to inherit from Person (and therefore inherits BaseModel indirectly).

First, you’ll need to add a method, BaseTypeName, as shown below. Look for the IsReadWriteAccessibleProperty method and add the following method above it.
string BaseTypeName(EntityType entity, CodeGenerationTools code)
{
    return entity.BaseType == null
        ? "Namespace.BusinessEntities.BaseModel"
        : code.Escape(entity.BaseType);
}

Now you can modify the entity declaration shown before so that it calls the BaseTypeName method. As a result, each entity will inherit from either BaseModel or its base type as defined in the model.

partial class <#=code.Escape(entity)#><#=code.StringBefore(" : ", BaseTypeName(entity, code))#>

Ensure the generated entities can locate the BaseModel class. Once you save the template, the entities should be regenerated based on our customised template.
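For completeness, a minimal sketch of what the base class itself might look like; the namespace matches the one used in the template above, and the members are purely hypothetical:

namespace Namespace.BusinessEntities
{
    // Common base class for all generated POCO entities.
    public abstract class BaseModel
    {
        // Shared, cross-cutting members go here, for example:
        public bool IsDirty { get; set; }
    }
}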

Wednesday, November 17, 2010

SQL OVER Clause with aggregate functions

In one of my previous posts, we saw the usage of the OVER() clause with ranking functions. We can also use the OVER() clause to simplify aggregate calculations: we can now add aggregate functions to any SELECT (even without a GROUP BY clause) by specifying an OVER() partition for each function. Consider the following table:

StudentID | Question Reference No | Section   | Required Score
----------|-----------------------|-----------|---------------
1         | Q-M100                | Maths     | 10
1         | Q-M200                | Maths     | 15
1         | Q-M300                | Maths     | 20
1         | Q-M400                | Maths     | 10
1         | Q-P100                | Physics   | 10
1         | Q-P200                | Physics   | 30
1         | Q-P300                | Physics   | 10
1         | Q-C100                | Chemistry | 50
1         | Q-C200                | Chemistry | 10
1         | Q-C300                | Chemistry | 15

Say there is a requirement where you want to find the weightage of each question within a section, and also the weightage of each section, for a student. Normally, you would write sub-queries to retrieve the summary values for the calculations, as shown below:

SELECT [StudentID], [Question Reference No], [Section], [Required Score],
    ((([Required Score] * 1.0) /
        (SELECT SUM(A.[Required Score]) FROM Answers A
         WHERE A.[Section] = Answers.[Section])) * 100)
    AS QuestionWeightageInSection,
    ((((SELECT SUM(A.[Required Score]) FROM Answers A
        WHERE A.[Section] = Answers.[Section]) * 1.0) /
        (SELECT SUM(A.[Required Score]) FROM Answers A
         WHERE A.[StudentID] = 1)) * 100)
    AS SectionWeightage
FROM Answers
WHERE Answers.[StudentID] = 1

But with the OVER() clause this becomes much simpler and more efficient. This is how the query looks:

SELECT [StudentID], [Question Reference No], [Section], [Required Score],
    ((([Required Score] * 1.0) /
        (SUM([Required Score]) OVER (PARTITION BY [Section]))) * 100)
    AS QuestionWeightageInSection,
    ((((SUM([Required Score]) OVER (PARTITION BY [Section])) * 1.0) /
        (SUM([Required Score]) OVER (PARTITION BY [StudentID]))) * 100)
    AS SectionWeightage
FROM Answers
WHERE Answers.[StudentID] = 1

StudentID | Question Reference No | Section   | Required Score | Question Weightage In Section | Section Weightage
----------|-----------------------|-----------|----------------|-------------------------------|------------------
1         | Q-C100                | Chemistry | 50             | 66.66666667                   | 41.66666667
1         | Q-C200                | Chemistry | 10             | 13.33333333                   | 41.66666667
1         | Q-C300                | Chemistry | 15             | 20                            | 41.66666667
1         | Q-M400                | Maths     | 10             | 18.18181818                   | 30.55555556
1         | Q-M100                | Maths     | 10             | 18.18181818                   | 30.55555556
1         | Q-M300                | Maths     | 20             | 36.36363636                   | 30.55555556
1         | Q-M200                | Maths     | 15             | 27.27272727                   | 30.55555556
1         | Q-P100                | Physics   | 10             | 20                            | 27.77777778
1         | Q-P200                | Physics   | 30             | 60                            | 27.77777778
1         | Q-P300                | Physics   | 10             | 20                            | 27.77777778

The way it works is similar to joining an aggregated copy of the SELECT to itself. In my experience it is 20% or more faster than correlated sub-queries; you can always look at the execution plans to see the performance difference. You can use the OVER() clause with all the other aggregate functions in the same way. Read more about it here.

Sunday, November 14, 2010

Entity Framework 4 and WCF Services

Recently I have been reading about using Entity Framework 4 with WCF services. There is an array of options for building services, and each option serves a different purpose. As I have played around with the various options, I want to talk about which solutions apply to which requirements.

POCO entities or EntityObjects?
EntityObjects are the “out of the box” option. While they provide a lot of useful automated change tracking and relationship management, it is challenging to work with services that depend on EntityObjects and transfer them across tiers. POCO (Plain Old CLR Object) entities remove a lot of the extra layers concerned with state management when you are creating services, making them easy for end users to consume.

Custom service, data service, or RIA service?
There are three paths to choose from for WCF services. The first is to write your own service, where you have full control over the service operations and the other logic, including handling features such as security.

WCF Data Services is a more lightweight solution and allows you to provide your data to a wide range of consumers. Unlike custom WCF services, which use SOAP to move data across the wire, WCF Data Services exposes your data for access through URIs over REST (i.e., directly over HTTP). There are also convenient client APIs for .NET, Silverlight, PHP, AJAX, and other consumers. This approach might appear to be putting your database on the Internet, but that is not the case: what is exposed is first defined by your model and further refined by settings in your service. You do have some control over securing the data, but it is not the same control you can exercise with your custom services.
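As an illustration of the client experience, here is a sketch of a LINQ query against an OData feed using the .NET client library; the Northwind.svc URI and the NorthwindEntities proxy (generated with DataSvcUtil.exe) are hypothetical:

using System;
using System.Linq;

class Program
{
    static void Main()
    {
        var context = new NorthwindEntities(
            new Uri("http://localhost/Northwind.svc"));

        // Translated by the client library into a query URI such as:
        // /Northwind.svc/Customers?$filter=Country eq 'Australia'
        var customers = from c in context.Customers
                        where c.Country == "Australia"
                        select c;

        foreach (var customer in customers)
            Console.WriteLine(customer.CompanyName);
    }
}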

WCF RIA Services attempts to bridge the gap between custom services and WCF Data Services. Although WCF RIA Services was originally designed to help get data into and out of Silverlight applications, you can consume RIA services from other applications as well, because in the end, unlike WCF Data Services, a WCF RIA service is still a WCF service. RIA services encapsulate some of the most commonly desired CRUD functionality. This still does not give you complete control, but if you want to leverage a boxed solution that is customizable, WCF RIA Services could be a good candidate.

Self-tracking entities?
Self-tracking entities are not lightweight, and to get their true benefits, the consuming application must be a .NET 3.5 or 4 application that contains the self-tracking entities. They are the simplest path for using entities in WCF services, but do not mistake them for a great solution for all of your applications. They are written specifically to be used with custom WCF services; they will not work with WCF Data Services or WCF RIA Services, nor can you use them to solve n-tier issues in other applications.

Self-tracking entities are very different from other POCO classes, whether your POCO classes are generated from the provided template, a customized version of that template, or your own template.

It is very important to understand the different options that are available, so we can make the right choice when exposing a business model through WCF.

Reference: Programming Entity Framework, Second Edition, by Julia Lerman.

Thursday, November 11, 2010

SQL Ranking functions

Ranking functions return a ranking value for each row in your result set. Many tasks, like generating sequential numbers, finding ranks, and so on, which in pre-2005 versions of SQL Server required many lines of code, can now be implemented much more easily and quickly.

Look at the following example:

SELECT FirstName, LastName, PostalCode
       ,ROW_NUMBER() OVER (ORDER BY PostalCode) AS 'Row Number'
       ,RANK() OVER (ORDER BY PostalCode) AS 'Rank'
       ,DENSE_RANK() OVER (ORDER BY PostalCode) AS 'Dense Rank'
       ,NTILE(4) OVER (ORDER BY PostalCode) AS 'Quartile'
FROM   Persons

Result Set:
FirstName | LastName     | PostalCode | Row Number | Rank | Dense Rank | Quartile
----------|--------------|------------|------------|------|------------|---------
Joe       | Harley       | 3007       | 1          | 1    | 1          | 1
David     | Bigel        | 3007       | 2          | 1    | 1          | 1
Joel      | Friedlaender | 3007       | 3          | 1    | 1          | 1
Kenny     | Lam          | 3128       | 4          | 4    | 2          | 2
Nirmal    | Parrera      | 3128       | 5          | 4    | 2          | 2
Gurnam    | Madan        | 3128       | 6          | 4    | 2          | 3
Troy      | Parker       | 3150       | 7          | 7    | 3          | 3
Melanie   | ????         | 3150       | 8          | 7    | 3          | 4
Jeremy    | Pickhaver    | 3150       | 9          | 7    | 3          | 4

Let's look at each of these ranking functions:

1. ROW_NUMBER()
ROW_NUMBER() OVER ( [ <partition_by_clause> ] <order_by_clause> )
This function returns the sequential number of a row within a partition of a result set, starting at 1 for the first row in each partition. The ORDER BY clause determines the sequence in which the rows are assigned their unique ROW_NUMBER within a specified partition. In the above example we have not specified any partition columns, so the ROW_NUMBER is assigned to each row based on the ordering of the PostalCode column.

2. RANK()
RANK() OVER ( [ <partition_by_clause> ] <order_by_clause> )
This function returns the rank of each row within the partition of a result set. The rank of a row is one plus the number of ranks that come before the row in question. You can also group the rankings by using the PARTITION BY clause. In the above example the rankings are the same for rows having the same value in the PostalCode column. But the ranking value keeps incrementing for each row, so when a new postal code value is encountered, the ranking value on that new row will be one more than the number of preceding rows.

3. DENSE_RANK()
DENSE_RANK() OVER ( [ <partition_by_clause> ] <order_by_clause> )
This function returns the rank of rows within the partition of a result set, without any gaps in the ranking. The rank of a row is one plus the number of distinct ranks that come before the row in question. In the above example the rankings are incremented only as the value in the PostalCode column changes.

4. NTILE()
NTILE(integer_expression) OVER ( [ <partition_by_clause> ] <order_by_clause> )
This function distributes the rows in an ordered partition into a specified number of groups. The groups are numbered, starting at one. For each row, NTILE returns the number of the group to which the row belongs. The integer constant expression specifies the number of groups into which each partition must be divided. In the above example the rows are divided into 4 groups. Because the total number of rows (9) is not divisible by the number of groups, the first group has three rows and the remaining groups have two rows each.

These functions can be really handy for complex sorting and for generating sequential record sets. These windowed functions can only be used in the SELECT and ORDER BY clauses. You can read more about them here.

LINQPad

I have been playing around with Entity Framework 4 and WCF Data Services lately. Whilst reading through some articles on these topics, I found this cool tool to play around with LINQ.

LINQPad lets you interactively query data sources in a modern query language. It supports everything in .NET Framework 4.0:

  • LINQ to Objects
  • LINQ to SQL
  • LINQ to Entity Framework
  • LINQ to XML
  • OData/WCF Data Services
It also comes loaded with lots of examples which can help you learn LINQ. You can also execute C#/VB expressions, statement blocks or programs, with rich output formatting.
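For instance, a one-line LINQ-to-Objects expression you might evaluate in LINQPad (the numbers are arbitrary):

new[] { 5, 1, 4, 2, 3 }.Where(n => n > 2).OrderBy(n => n)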

LINQPad standard edition is free to download and use, but you can also get the premium edition with support for auto-completion and some other features. So check it out: http://www.linqpad.net/

Thursday, October 14, 2010

Convert string to enum

A number of times I have come across the situation where I wanted to convert a string into an enum. Using switch-case statements to return the enum becomes quite tedious when there are many items. Recently I found a static method on the Enum class which replaces these switch statements and is a big time saver.
public static object Parse(Type enumType, string value);
Look at the following code:
enum Days
{
  Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, Sunday
}
//........

string val = "Tuesday";
//Enum.Parse returns object, so a cast is required.
Days day = (Days)Enum.Parse(typeof(Days), val);
Console.WriteLine("Day Value: {0}", day.ToString());

//You can also ignore the case of the string being parsed 
//by using the following method
public static object Parse(Type enumType, string value, bool ignoreCase);
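As a quick sketch of that overload in use (the lower-case input value is hypothetical):

string input = "tuesday";
Days parsedDay = (Days)Enum.Parse(typeof(Days), input, true);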
Parsing an invalid day will throw an ArgumentException. So it is always a good idea to check whether the value is defined in the enum; look at the following code:
string val = "Tuesday";
Days day;

if (Enum.IsDefined(typeof(Days), val))
  day = (Days)Enum.Parse(typeof(Days), val);
Unfortunately, Enum.IsDefined() does not provide an ignore-case option.