Chapter 4. Building the Data and Business Layers Using .NET 3.5
The data and business layers of the Dropthings portal use two of the hottest features of the .NET 3.0 and 3.5 frameworks: Windows Workflow Foundation (WF) and LINQ. The data layer makes good use of LINQ to SQL, a feature of .NET 3.5. The business layer is built largely with the WF released with .NET 3.0, using new language extensions in C# 3.0.
Introducing LINQ to SQL
LINQ, or Language Integrated Query, is a set of C# 3.0 language and .NET 3.5 framework features for writing structured queries over local object collections and remote data sources. With LINQ, you can query any collection that implements IEnumerable<T>, including tables in a database.
LINQ to SQL is a lot of fun to work with and makes the task of writing a data access layer that generates highly optimized SQL amazingly simple. If you haven’t used LINQ to SQL before, brace yourself.
With LINQ to SQL, you design the database that the application will use and then use the Visual Studio 2008 Object/Relational Designer (sqlmetal.exe in LINQ Preview) to generate a class that represents the database with an appropriate object model. This is a giant step beyond having to handcode the entity and data access classes. Formerly, whenever the database design changed, you had to modify the entity classes and modify the insert, update, delete, and get methods in the data access layer. Of course, you could use third-party object-relational mapping (ORM) tools or some kind of code generator that generates entity classes from database schema and data access layer code. But now, LINQ to SQL does it all for you!
A great thing about LINQ to SQL is that it can generate objects known as projections that contain only the fields you want to receive from a specific query, not the entire row. There’s no ORM tool or object-oriented database library that can do this today, because the operation requires a custom compiler to support it. The benefit of projection is pure performance: you select only the fields you need, and you don’t have to build a jumbo-sized object with every field from the tables you query. LINQ to SQL selects only the required fields and creates objects that contain only the selected fields.
Let’s take a look at some example queries used in the business layer. Example 4-1 shows how easy it is to create a new Page object in a database.
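Creating a new Page boils down to constructing the entity and submitting it through the DataContext. Here is a minimal sketch of that pattern, using the designer-generated Page and DashboardDataContext classes; the property names here are assumptions for illustration:

```csharp
// Hedged sketch of inserting a new Page; Title and CreatedDate
// are assumed column names on the generated Page entity.
var db = new DashboardDataContext( );

var page = new Page( );
page.Title = "My First Page";
page.CreatedDate = DateTime.Now;

db.Pages.Add(page);   // queue the INSERT (pre-RTM LINQ to SQL API, as used throughout)
db.SubmitChanges( );  // execute the generated SQL
```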
Here, DashboardDataContext is the DataContext generated by the Visual Studio 2008 Object Relational Designer. It contains all the database access methods and entities for the tables in the database. DataContext takes care of generating queries for objects that are requested from the database, executing the queries, and populating objects from the database. It also keeps track of changes made to the objects as they are updated, and knows exactly which fields to update in the tables. DataContext completely encapsulates database access and provides a nice, clean, object-oriented way of working with data that is persisted in a database. Moreover, DataContext allows you to run arbitrary queries as well; you can use regular stored procedures to read and write rows in database tables.
Example 4-2 shows how to get a Page and change its name. You can use lambda expressions, similar to those you saw in Chapter 3, to define the condition for the where clause.
var page = db.Pages.Single( p => p.ID == PageId );
page.Title = PageName;
db.SubmitChanges( );
Another option is to select only a scalar value from the database. Reading scalar values directly from a database is faster than reading a row and then converting it to an object repeatedly. Example 4-3 shows how to do it.
var UserGuid = (from u in db.AspnetUsers
                where u.LoweredUserName == UserName
                select u.UserId).Single( );
You can also read specific fields and create an object on the fly that contains only the specific fields. This is called projection and is shown in Example 4-4.
var users = from u in db.AspnetUsers
            select new { UserId = u.UserId, UserName = u.LoweredUserName };

foreach( var user in users )
{
    Debug.WriteLine( user.UserName );
}
In Example 4-4, only two fields from the Aspnet_Users table are selected. LINQ to SQL returns an object that has only these two fields, and not all the fields in the table.
Database paging is very easy in LINQ to SQL. For example, if you want to select 20 rows starting at the 100th row, just use the Skip and Take functions as shown in Example 4-5.
var users = (from u in db.AspnetUsers
             select new { UserId = u.UserId, UserName = u.LoweredUserName })
            .Skip(100).Take(20);

foreach( var user in users )
{
    Debug.WriteLine( user.UserName );
}
It’s easy to provide transaction support in LINQ to SQL. You just write code inside a using block, and the code inside it falls into a transaction scope (see Example 4-6).
using( var ts = new TransactionScope( ) )
{
    List<Page> pages = db.Pages.Where( p => p.UserId == oldGuid ).ToList( );
    foreach( Page page in pages )
        page.UserId = newGuid;

    // Change setting ownership
    UserSetting setting = db.UserSettings.Single( u => u.UserId == oldGuid );
    db.UserSettings.Remove(setting);
    setting.UserId = newGuid;
    db.UserSettings.Add(setting);

    db.SubmitChanges( );
    ts.Complete( );
}
When there’s any exception, the using block will call the Dispose method on ts, and the transaction will abort unless it has already completed. But if the code reaches the end of the block, it calls ts.Complete( ) and the transaction commits.
Building the Data Access Layer Using LINQ to SQL
The first step to using LINQ to SQL is to build a DataContext, which contains all entity classes and their data access methods. You will generate one entity class per table, e.g., the Page entity class for the Page table in the database. Visual Studio 2008’s ORM designer allows you to easily drag and drop tables onto a designer surface and generate entity classes. The next step will be to create a simple helper class that makes working with LINQ to SQL even easier. Let’s start with designing the ORM in Visual Studio 2008.
Generating a Data Model Using the Visual Studio 2008 Designer
Visual Studio 2008 comes with an object relational mapping designer, which allows you to create a LINQ to SQL classes file and then drag and drop tables from the Server Explorer onto the designer surface. Visual Studio will then automatically generate classes from those tables. You can further modify the associations, turn lazy loading of properties on or off, add validation, and much more. Nothing special was done to generate the data model shown in Figure 4-1, besides putting all the tables from the database onto the designer surface.
After you create the designer and build the project, the data access and entity classes will be ready for use. DashboardData is the data context class that’s included in the project.
Manipulating Data with a Database Helper
Example 4-7 shows the code for a DatabaseHelper class that makes working with LINQ to SQL a lot easier.
public static class DatabaseHelper
{
    public const string ConnectionStringName = "DashboardConnectionString";
    public const string ApplicationID = "fd639154-299a-4a9d-b273-69dc28eb6388";
    public readonly static Guid ApplicationGuid = new Guid(ApplicationID);

    public static DashboardDataContext GetDashboardData( )
    {
        var db = new DashboardDataContext(ConfigurationManager.
            ConnectionStrings[ConnectionStringName].ConnectionString);
        return db;
    }

    // The Insert<>, Update<>, and Delete<> shortcut methods shown in
    // Examples 4-11 through 4-15 also live in this class.
}
DatabaseHelper also takes care of configuration management and initialization of the DataContext class. It has a GetDashboardData function that returns a reference to the DashboardData instance already configured with the connection string. The Insert, Update, and Delete methods offer shortcuts for performing common database operations.
DatabaseHelper reads the connection string from the <connectionStrings> block in the web.config or app.config file. It also stores the ApplicationId for the ASP.NET membership provider.
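For reference, the <connectionStrings> entry that DatabaseHelper expects might look like the following in web.config. The server and database names here are illustrative assumptions; only the name attribute must match the ConnectionStringName constant:

```xml
<connectionStrings>
  <add name="DashboardConnectionString"
       connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=Dropthings;Integrated Security=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```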
Tip
Although it would be sensible to place the ApplicationID in web.config, I’ve placed it in DatabaseHelper just to simplify this discussion.
Whenever you create a new application that uses ASP.NET membership, ASP.NET creates an entry in the aspnet_Applications table. This ApplicationID is also used in the aspnet_Users table to identify which application a user belongs to. The membership provider is built for hosting multiple applications in the same database, so all the root objects, such as users and personalization data, belong to an application.
Here’s a problem with LINQ to SQL: if an entity travels through multiple tiers, it gets detached from the DataContext that created it. This means that as soon as an entity leaves the data access layer and enters the business or web layer, it is detached from the DataContext, because the DataContext is only available within the data access layer. So, when you try to update an entity again using a different DataContext, you first need to attach the entity instance to the new data context, then make the changes and call SubmitChanges.
Now, the problem is that the business layer does not have access to the DataContext, which is created by the data access layer when the entity object is being updated. The business layer just sends the entity object to the data access component, and that component performs the update by creating a new DataContext. LINQ to SQL requires that you attach the entity object before making changes to it so that it can track which properties are being changed and determine what kind of UPDATE or INSERT statement to generate. However, a typical business layer will make the object modifications first and then send them to the data access component. So, a traditional attempt like Example 4-8 will fail.
Page p = DashboardData.GetSomePage( );
...
...
// A long time later, maybe after a page postback
p.Title = "New Title";
DashboardData.UpdatePage( p );
Somehow you need to do what is shown in Example 4-9.
Page p = DashboardData.GetSomePage( );
...
...
// A long time later, maybe after a page postback
DashboardData.AttachPage( p );
p.Title = "New Title";
DashboardData.UpdatePage( p );
However, Example 4-9 is not possible, because it would make DashboardData stateful: you would need to create the DataContext inside one method and then store the reference to that DataContext between function calls. This would be fine in a single-user scenario, but it is not an acceptable solution for multiuser web sites. So, I made a workaround (see Example 4-10).
// Load the object from database
Page p = DashboardData.GetSomePage( );
...
...
// A long time later, maybe after a page postback
DashboardData.Update<Page>( p, delegate( Page p1 )
{
    p1.Title = "New Title";
});
Here, the Update<> method first attaches the page object to a DataContext and then calls the delegate, passing the reference to the attached object. Inside the delegate, you can modify the passed object as if you were modifying the original object. Once the delegate completes, the object is updated using DataContext.SubmitChanges( ).
The implementation of the Update<> method is shown in Example 4-11.
public static void Update<T>(T obj, Action<T> update) where T : class
{
    var db = GetDashboardData( );
    db.GetTable<T>( ).Attach(obj);
    update(obj);
    db.SubmitChanges( );
}
The widget container uses DatabaseHelper to update objects in the database, as shown in Example 4-12.
WidgetInstance widgetInstance = DatabaseHelper.GetDashboardData( )
    .WidgetInstances.Single( wi => wi.Id == WidgetInstanceId );

DatabaseHelper.Update<WidgetInstance>( widgetInstance, delegate( WidgetInstance wi )
{
    wi.ColumnNo = ColumnNo;
    wi.OrderNo = RowNo;
});
The delegate in Update<T> allows you to remain in the context of the business layer, or the caller. So, you can access the UI elements or other functions/properties that you need in order to update the entity’s properties.
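Because the second parameter of Update<T> is just an Action<T>, the C# 3.0 lambda syntax works equally well in place of the anonymous delegate. A sketch, using the same names as Example 4-12:

```csharp
// Same update as Example 4-12, written with a C# 3.0 lambda
// instead of an anonymous delegate.
DatabaseHelper.Update<WidgetInstance>( widgetInstance, wi =>
{
    wi.ColumnNo = ColumnNo;
    wi.OrderNo = RowNo;
} );
```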
Similarly, there’s an UpdateAll<> function that updates a list of objects in the database (see Example 4-13).
public static void UpdateAll<T>(List<T> items, Action<T> update) where T : class
{
    var db = GetDashboardData( );
    foreach( T item in items )
    {
        db.GetTable<T>( ).Attach(item);
        update(item);
    }
    db.SubmitChanges( );
}
For convenience, I have made Insert<> and Delete<> methods as well. They are not strictly required, because inserts and deletes do not have the “attach first, modify later” requirement (see Example 4-14).
public static void Delete<T>(Action<T> makeTemplate) where T : class, new( )
{
    var db = GetDashboardData( );
    T template = new T( );
    makeTemplate(template);
    db.GetTable<T>( ).Remove(template);
    db.SubmitChanges( );
}

public static void Insert<T>(T obj) where T : class
{
    var db = GetDashboardData( );
    db.GetTable<T>( ).Add(obj);
    db.SubmitChanges( );
}
The Delete<> method is a tricky one. First you need to attach the object to the Table, and then call the table’s Remove function. This means you need to get the object before you can call Delete, which adds a read overhead to deleting an object (see Example 4-15).
public static void Delete<T>(T entity) where T : class, new( )
{
    using (var db = GetDashboardData( ))
    {
        db.GetTable<T>( ).Attach(entity);
        db.GetTable<T>( ).Remove(entity);
        db.SubmitChanges( );
    }
}
Now that you have learned how to build the data access layer, let’s address some of the challenges you’ll face while running the portal project in a production environment.
Cleaning Up Inactive User and Related Data
An Ajax web portal has a unique challenge when it comes to cleaning up unused data that is generated by anonymous users who never return. Every first visit creates one anonymous user, a page setup, widgets, etc. If the user doesn’t come back, that information remains in the database permanently. It is possible that the user might come back within a day, or a week or a month, but there’s no guarantee. Generally, sticky users—users who return to your site frequently—make up 30 to 50 percent of the total users who come to an Ajax web portal. So, you end up with 50 to 70 percent unused data. Dropthings requires daily data cleanup to keep the database size down—user accounts expire, RSS feeds get old, anonymous sessions expire, and users never come back.
This is a huge cleanup operation once a web portal becomes popular and starts receiving thousands of users every day. Think about deleting millions of rows from 20 or 30 tables, one after another, while maintaining foreign key constraints. Also, the cleanup operation needs to run while the site is running, without hampering its overall performance. The whole operation leaves heavily fragmented indexes and unused space in the MDF file. The log file also becomes enormous keeping track of the large transactions. Hard drives get really hot and sweat furiously. Although the CPU keeps going, it’s really painful to watch SQL Server go through this every day. But there is no alternative if you want to keep up with SQL Server’s RAM and disk I/O requirements. Most importantly, the cleanup keeps invalid users from being counted in monthly reports.
When a user visits the site, the ASP.NET membership provider updates the LastActivityDate field of the aspnet_Users table. From this field, you can find out how long the user has been idle. The IsAnonymous bit field shows whether the user account is anonymous or registered. If it is registered, there is no need to worry. But if it is anonymous and more than 30 days old, you can be sure that the user will never come back, because the cookie has already expired. However, you can’t avoid creating an anonymous user for each new visitor, because the user might want a fresh start (see the “Implementing Authentication and Authorization” section in Chapter 3). Another scenario is a user logging out on a shared computer (e.g., at a cyber café) and the next person using it as an anonymous user.
Here’s how the whole cleanup process works:
- Find the users that are anonymous and old enough to be discarded
- Find the pages those users have
- Delete all of the widget instances on those pages, then the pages themselves
- Remove rows from child tables related to aspnet_Users, like aspnet_Profile, aspnet_UsersInRoles, and aspnet_PersonalizationPerUser
- Remove the UserSetting rows for the users to be deleted
- Remove the users from aspnet_Users
Example 4-16 is the giant DB script that does it all. I have added inline comments to explain what the script is doing.
-- Number of days after which we give users the 'bye bye'
DECLARE @Days int
SET @Days = 29

-- Number of users to delete per run. If it's too high, the database will get stuck
-- for a long time. If it's too low, you will end up having more trash than
-- you can clean up. Decide this number based on how many anonymous users are
-- created per day and how frequently you run this query. The correct formula
-- for this number is: @NoOfUsersToDelete > AnonUsersPerDay / FrequencyOfRun
DECLARE @NoOfUsersToDelete int
SET @NoOfUsersToDelete = 1000

-- Create temporary tables that hold the users and pages to delete as the
-- cleanup proceeds. Having them in a temp table is better than repeatedly
-- running SELECT ID FROM ...
IF EXISTS (SELECT * FROM sys.objects WHERE object_id =
    OBJECT_ID(N'[dbo].[PagesToDelete]') AND type in (N'U'))
    DROP TABLE [dbo].[PagesToDelete]
IF EXISTS (SELECT * FROM sys.objects WHERE object_id =
    OBJECT_ID(N'[dbo].[AspnetUsersToDelete]') AND type in (N'U'))
    DROP TABLE [dbo].[AspnetUsersToDelete]

create table PagesToDelete (PageID int NOT NULL PRIMARY KEY)
create table AspnetUsersToDelete (UserID uniqueidentifier NOT NULL PRIMARY KEY)

-- Find inactive anonymous users and store the UserID in the temporary table
insert into AspnetUsersToDelete
select top(@NoOfUsersToDelete) UserID from aspnet_Users
where (isAnonymous = 1) and (LastActivityDate < (getDate( )-@Days))
order by UserID -- Saves SQL Server from sorting in clustered index again

print 'Users to delete: ' + convert(varchar(255),@@ROWCOUNT)
GO

-- Get the user pages that will be deleted
insert into PagesToDelete
select ID from Page where UserID in
( select UserID from AspnetUsersToDelete )
print 'Pages to delete: ' + convert(varchar(255),@@ROWCOUNT)
GO

-- Delete all widget instances on the pages to be deleted
delete from WidgetInstance where PageID IN
( SELECT PageID FROM PagesToDelete )
print 'Widget Instances deleted: ' + convert(varchar(255), @@ROWCOUNT)
GO

-- Delete the pages
delete from Page where ID IN
( SELECT PageID FROM PagesToDelete )
GO

-- Delete UserSetting
delete from UserSetting WHERE UserID IN
( SELECT UserID FROM AspnetUsersToDelete )
GO

-- Delete profile of users
delete from aspnet_Profile WHERE UserID IN
( SELECT UserID FROM AspnetUsersToDelete )
GO

-- Delete from aspnet_UsersInRoles
delete from aspnet_UsersInRoles WHERE UserID IN
( SELECT UserID FROM AspnetUsersToDelete )
GO

-- Delete from aspnet_PersonalizationPerUser
delete from aspnet_PersonalizationPerUser WHERE UserID IN
( SELECT UserID FROM AspnetUsersToDelete )
GO

-- Delete the users
delete from aspnet_Users where UserID IN
( SELECT UserID FROM AspnetUsersToDelete )
PRINT 'Users deleted: ' + convert(varchar(255), @@ROWCOUNT)
GO

drop table PagesToDelete
drop table AspnetUsersToDelete
GO
Now the question is, when should you run this script? The answer depends on several factors:
The period of lowest traffic on your site. For example, in the U.S., most users are asleep at midnight. Of course, that works only if the majority of your users are from the U.S.
The time when other maintenance tasks, such as index defrag or database backup, are least likely to be running. If any other maintenance task conflicts with this enormous delete operation, SQL Server is dead.
The time it takes to run the script. The operation will take anywhere from 10 minutes to a number of hours depending on the volume of trash to clean up. So, consider the amount of time it will take to run this script and plan other maintenance jobs accordingly.
When you typically run index defrag. It’s best to run the script 30 minutes before the index defrag jobs run, because after the script completes, the tables will be heavily fragmented. So, you need to defrag the indexes.
Before running this script, first:
Turn off auto shrink in the database properties. With auto shrink on, the database size will automatically be reduced after the cleanup; shrinking a database requires a lot of disk I/O activity and slows the database down. Turn auto shrink off because the database will eventually grow again anyway.
Ensure that the log file’s initial size is big enough to hold such enormous transactions. You can specify one-third of the MDF size as the LDF’s initial size. Also make sure the log file does not get shrunk; let it occupy hard drive space. This saves SQL Server from repeatedly expanding and shrinking the file, both of which require heavy disk I/O.
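For example, assuming the database is named Dropthings and the log file’s logical name is Dropthings_log (both names are assumptions here; check sys.database_files for yours), the settings above can be applied with:

```sql
-- Turn off auto shrink, as recommended above
ALTER DATABASE Dropthings SET AUTO_SHRINK OFF;

-- Give the log file a generous initial size (roughly one-third of the MDF)
ALTER DATABASE Dropthings
MODIFY FILE (NAME = Dropthings_log, SIZE = 500MB);
```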
Once the cleanup job is done and the index defrag has run, the performance of your database will improve significantly. Because the tables are now smaller, the indexes are smaller, and SQL Server doesn’t have to scan through large indexes anymore. Future index defrags take less time because there’s not much data left to optimize. SQL Server also takes less RAM because it has to work with a much smaller amount of data. Database backups are smaller, too, because the MDF size does not keep increasing indefinitely. As a result, the significant overhead of this cleanup operation is quite acceptable compared to all the benefits.
Introducing Windows Workflow Foundation
Windows Workflow Foundation (WF), included in .NET 3.0, provides the programming model, engine, and tools for quickly building workflow-enabled applications. It gives developers the ability to model business processes in a visual designer by drawing flow chart diagrams. Complex business operations can be modeled as a workflow in the visual workflow designer included in Visual Studio 2008, and coded using any .NET programming language. WF consists of the following parts:
- Activity model
Activities are the building blocks of a workflow; think of them as units of work that need to be executed. Activities are easy to create, either by writing code or by composing them from other activities. Out of the box, WF ships with a set of activities that mostly provide structure, such as parallel execution, if/else branching, and calling a web service.
- Workflow designer
This is the design surface in Visual Studio, which allows for the graphical composition of workflow, by placing activities within the workflow model.
- Workflow runtime
The workflow runtime is a lightweight and extensible engine that executes the activities that make up a workflow. The runtime can be hosted within any .NET process, enabling developers to bring workflow to anything from a Windows Forms application to an ASP.NET web site or a Windows service.
- Rules engine
WF has a rules engine that enables declarative, rule-based development, which workflows and any .NET application can use. Using the rules engine, you can eliminate hardcoded rules and move them from the code into a more maintainable, declarative format on the workflow diagram.
Although workflows are mostly used in applications that have workflow-type business processes, you can use a workflow in almost any application, as long as the application performs complex operations. In this Start page application, some operations, like the first visit, are complex multistep activities involving decisions, so the application benefits from a workflow implementation.
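The “hosted within any .NET process” point can be made concrete with a minimal hosting sketch. This is a hedged illustration, not code from the portal; MyWorkflow stands in for any workflow class you have defined:

```csharp
using System;
using System.Threading;
using System.Workflow.Runtime;

class Program
{
    static void Main()
    {
        using (WorkflowRuntime runtime = new WorkflowRuntime())
        {
            AutoResetEvent done = new AutoResetEvent(false);
            runtime.WorkflowCompleted += delegate { done.Set(); };
            runtime.WorkflowTerminated += delegate { done.Set(); };

            // MyWorkflow is a placeholder for any workflow class
            WorkflowInstance instance = runtime.CreateWorkflow(typeof(MyWorkflow));
            instance.Start();

            done.WaitOne(); // wait for the workflow to finish
        }
    }
}
```

Because workflows execute asynchronously by default, the wait handle keeps the host process alive until the workflow completes or terminates.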
Building the Business Layer Using WF
The entire business layer is developed using WF. Each of the methods in the DashboardFacade
do nothing but call individual workflows.
There’s absolutely no business code that is not part of any workflow.
“This is insane!” you are thinking. I know. But hear me out on why I went for this approach: architects can “design” business layer operations in terms of activities, and developers can just fill in a small amount of unit code to implement each activity.
This is actually a really good reason, because architects can save time by not having to produce Word documents describing how things should work. They can go directly into the workflow designer, design the activities, connect them, design the data flow, and verify whether all inputs and outputs are properly mapped. This is a lot better than drawing flow charts, writing pseudocode, and explaining in stylish text how an operation should work. It’s also helpful for developers, because they can see the workflow and easily understand how to craft the whole operation. They just open up each activity and write a small amount of very specific, reusable code inside it. They know what the activity’s input will be (like function parameters) and what it must produce (like a function’s return value). This makes the activities reusable, and architects can reuse an activity in many workflows.
Workflows can be debugged right in the Visual Studio workflow designer, so developers can easily find defects in their implementation by stepping through the workflow. Architects can enforce many standards, like validation, input/output checks, and fault handling, on the workflow. Developers have no choice but to comply and, therefore, produce really good code. Another great benefit for both architects and developers is that there’s no need to keep a separate technical specification document up to date, because the workflow is always current and speaks for itself. If someone wants to study how a particular operation works, they can just print out the workflow and read it through.
Mapping User Actions to a Workflow
Each user action can be mapped to a workflow that responds to that action. For example, when a user wants to add a new widget, a workflow can take care of creating the widget, positioning it properly on the page, and configuring the widget with the default value. The first visit of a brand new user to the site is a complex operation, so it is a good candidate to become a workflow. This makes the architecture quite simple on the web layer—just call a workflow on various scenarios and render the UI accordingly, as illustrated in Figure 4-2.
Instead of using complex diagrams and lines of documentation to explain how to handle a particular user or system action, you can draw a workflow and write code inside it. This serves both as a document and a functional component that does the job. The next sections show scenarios that can easily be done in a workflow.
Dealing with First Visit by a New User (NewUserSetupWorkflow)
Handling the first visit of a brand new user is the most complex operation your web site will handle. It’s a good candidate for becoming a workflow. Figure 4-3 shows a workflow that does all the business layer work for the first-time visit and returns a complete page setup. The Default.aspx just creates the widgets as it receives them from the workflow and is not required to perform any other logic.
The operations involved in creating the first-visit experience for a new user are as follows:
Create a new anonymous user
Create two default pages
Put some default widgets on the first page
Construct an object model that contains user data, the user’s page collection, and the widgets for the first page
If you put these operations in a workflow, you get the workflow shown in Figure 4-3.
The workflow takes the UserName generated by the ASP.NET anonymous identification provider as its input from the Default.aspx page.
After this input parameter is passed in, the first step in the workflow is to call GetUserGuidActivity, which gets the UserId from the aspnet_Users table for that user (see Example 4-17).
protected override ActivityExecutionStatus Execute(ActivityExecutionContext executionContext)
{
    using( new TimedLog(UserName, "Activity: Get User Guid") )
    {
        var db = DatabaseHelper.GetDashboardData( );

        this.UserGuid = (from u in db.AspnetUsers
                         where u.LoweredUserName == UserName
                            && u.ApplicationId == DatabaseHelper.ApplicationGuid
                         select u.UserId).Single( );

        return ActivityExecutionStatus.Closed;
    }
}
This activity is used in many places, because getting the UserId from the username found in the ASP.NET Context object is a common requirement. All the tables have a foreign key on the UserId column, but ASP.NET provides only the UserName. So, in almost all operations, the UserName is passed in from the web layer, and the business layer converts it to a UserId and does its work.
Tip
The using(TimedLog) block records the execution time of the code inside the using block. It prints the execution time in the debug window, as you read earlier in the “Performance Concerns with WF” section.
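Factored out of the activity, the UserName-to-UserId conversion is a single LINQ query. A hedged sketch of such a helper, reusing the names from Example 4-17 (the helper itself is not part of the project):

```csharp
public static Guid GetUserGuidFromUserName(string userName)
{
    var db = DatabaseHelper.GetDashboardData( );

    // LoweredUserName stores the lowercase form, so normalize the input
    return (from u in db.AspnetUsers
            where u.LoweredUserName == userName.ToLower( )
               && u.ApplicationId == DatabaseHelper.ApplicationGuid
            select u.UserId).Single( );
}
```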
The next step is to create the first page for the user using CreateNewPageActivity, shown in Example 4-18.
protected override ActivityExecutionStatus Execute(ActivityExecutionContext executionContext)
{
    DashboardData db = DatabaseHelper.GetDashboardData( );

    var newPage = new Page( );
    newPage.UserId = UserId;
    newPage.Title = Title;
    newPage.CreatedDate = DateTime.Now;
    newPage.LastUpdate = DateTime.Now;

    db.Pages.Add(newPage);
    db.SubmitChanges(ConflictMode.FailOnFirstConflict);

    NewPageId = newPage.ID;
    return ActivityExecutionStatus.Closed;
}
This activity takes the UserId as input and produces the NewPageId property as output. It creates a new page, and default widgets are then added on that page. CreateDefaultWidgetActivity creates the default widgets on this page, as shown in Example 4-19.
protected override ActivityExecutionStatus Execute(ActivityExecutionContext executionContext)
{
    var db = DatabaseHelper.GetDashboardData( );
    var defaultWidgets = db.Widgets.Where( w => w.IsDefault == true ).ToList( );
    var widgetsPerColumn = (int)Math.Ceiling((float)defaultWidgets.Count/3.0);

    var row = 0;
    var col = 0;
    foreach( Widget w in defaultWidgets )
    {
        var newWidget = new WidgetInstance( );
        newWidget.PageId = this.PageId;
        newWidget.ColumnNo = col;
        newWidget.OrderNo = row;
        newWidget.CreatedDate = newWidget.LastUpdate = DateTime.Now;
        newWidget.Expanded = true;
        newWidget.Title = w.Name;
        newWidget.VersionNo = 1;
        newWidget.WidgetId = w.ID;
        newWidget.State = w.DefaultState;

        db.WidgetInstances.Add(newWidget);

        row ++;
        if( row >= widgetsPerColumn )
        {
            row = 0;
            col ++;
        }
    }
    db.SubmitChanges( );
    return ActivityExecutionStatus.Closed;
}
Here’s what this activity needs to do:

- Decide how many widgets to add per column: compute the number of widgets to put in each column so that the default widgets found in the database are evenly distributed.
- Run the foreach loop through each default widget and create widget instances.

For example, with seven default widgets, widgetsPerColumn is Ceiling(7/3) = 3, so the three columns receive three, three, and one widget, respectively.

The workflow then calls another workflow named UserVisitWorkflow to load the page setup for the user. This workflow is used on both the first visit and subsequent visits, because loading a user’s page setup is the same in both cases.
The InvokeWorkflow activity that comes with WF executes a workflow asynchronously. So, if you call a workflow from ASP.NET that in turn calls another workflow, the second workflow will be terminated prematurely instead of executing completely. This is because the workflow runtime executes the first workflow synchronously, finishes that workflow’s execution, and returns. If you use the InvokeWorkflow activity to run another workflow from the first workflow, the second workflow starts on another thread, and it does not get enough time to execute completely before the parent workflow ends, as shown in Figure 4-4.
So, InvokeWorkflow could not be used to execute UserVisitWorkflow from NewUserSetupWorkflow. Instead, it is executed using the CallWorkflow activity, which takes a workflow and executes it synchronously. It’s a handy activity I found on Jon Flanders’ blog (http://www.masteringbiztalk.com/blogs/jon/PermaLink,guid,7be9fb53-0ddf-4633-b358-01c3e9999088.aspx). The beauty of this activity is that it properly maps both the inbound and outbound properties of the workflow that it calls, as shown in Figure 4-5.
The UserName property is passed in from NewUserSetupWorkflow, and UserVisitWorkflow returns the UserPageSetup object, which contains everything needed to render the page for the user.
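Independently of the CallWorkflow activity, WF itself offers the ManualWorkflowSchedulerService for running a workflow synchronously on the caller’s thread, which is the usual approach when hosting the runtime inside ASP.NET. A minimal hedged sketch (it assumes the runtime is created once at application startup, not per request):

```csharp
// Run a workflow synchronously on the calling thread using
// ManualWorkflowSchedulerService (System.Workflow.Runtime.Hosting).
WorkflowRuntime runtime = new WorkflowRuntime();
runtime.AddService(new ManualWorkflowSchedulerService());
runtime.StartRuntime();

ManualWorkflowSchedulerService scheduler =
    runtime.GetService<ManualWorkflowSchedulerService>();

WorkflowInstance instance = runtime.CreateWorkflow(typeof(UserVisitWorkflow));
instance.Start();
scheduler.RunWorkflow(instance.InstanceId); // blocks until the workflow completes
```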
Dealing with the Return Visit of an Existing User (UserVisitWorkflow)
UserVisitWorkflow creates a composite object named UserPageSetup that holds the user’s settings, pages, and the widgets on the current page. The Default.aspx page gets everything it needs to render the whole page from UserPageSetup, as shown in Figure 4-6.
Just like the previous workflow, UserVisitWorkflow takes UserName and converts it to UserGuid. It then calls GetUserPagesActivity, which loads the pages of the user (see Example 4-20).
protected override ActivityExecutionStatus Execute(ActivityExecutionContext executionContext)
{
    using( new TimedLog(UserGuid.ToString( ), "Activity: Get User Pages") )
    {
        var db = DatabaseHelper.GetDashboardData( );
        this.Pages = (from page in db.Pages
                      where page.UserId == UserGuid
                      select page).ToList( );

        return ActivityExecutionStatus.Closed;
    }
}
After that, it calls GetUserSettingActivity, which gets or creates the user's setting. The UserSetting object contains the user's current page, which is used by GetUserSettingActivity to load the widgets of the current page.
The code in GetUserSettingActivity is not straightforward (see Example 4-21). It first checks whether a UserSetting has been created for the user and, if not, GetUserSettingActivity creates it.
protected override ActivityExecutionStatus Execute(ActivityExecutionContext executionContext)
{
    using( new TimedLog(UserGuid.ToString( ), "Activity: Get User Setting") )
    {
        DashboardData db = DatabaseHelper.GetDashboardData( );
        var query = from u in db.UserSettings
                    where u.UserId == UserGuid
                    select u;

        IEnumerator<UserSetting> e = query.GetEnumerator( );
        if( e.MoveNext( ) )
        {
            this.UserSetting = e.Current;
        }
        else
        {
            // No setting saved before. Create default setting
            UserSetting newSetting = new UserSetting( );
            newSetting.UserId = UserGuid;
            newSetting.CurrentPageId = (from page in db.Pages
                                        where page.UserId == UserGuid
                                        select page.ID).First( );

            db.UserSettings.Add(newSetting);
            db.SubmitChanges( );

            this.UserSetting = newSetting;
        }

        this.CurrentPage = db.Pages.Single(page => page.ID == this.UserSetting.CurrentPageId);

        return ActivityExecutionStatus.Closed;
    }
}
Loading the existing user's settings is optimized by getting only the CurrentPageId instead of the whole UserSetting object. This results in a very small query that does a scalar selection, which is a bit faster than a row selection because it doesn't involve constructing a row object or transferring unnecessary fields for a row.
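The difference between a scalar projection and a full-row selection can be demonstrated with LINQ to Objects. The UserSettingRow class below is an in-memory stand-in for the UserSettings table (LINQ to SQL would translate the same projection into a SELECT of a single column):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class UserSettingRow
{
    public Guid UserId;
    public int CurrentPageId;
    public string Theme;   // an extra field we do NOT want to transfer
}

public static class ScalarSelectDemo
{
    // Project only the CurrentPageId instead of materializing whole rows.
    public static int GetCurrentPageId(IEnumerable<UserSettingRow> settings, Guid userId)
    {
        return (from s in settings
                where s.UserId == userId
                select s.CurrentPageId).First( );
    }
}
```

The query yields a plain int, so no UserSettingRow object is constructed on behalf of the caller.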
The final activity loads the widgets on the current page (see Example 4-22). It takes the PageId and loads the widget instances on the page, including the widget definition for each instance.
protected override ActivityExecutionStatus Execute(ActivityExecutionContext executionContext)
{
    using( new TimedLog(UserGuid.ToString( ), "Activity: Get Widgets in page: " + PageId) )
    {
        var db = DatabaseHelper.GetDashboardData( );

        // Load widget instances along with the Widget definition
        // for the specified page
        this.WidgetInstances = (from widgetInstance in db.WidgetInstances
                                where widgetInstance.PageId == this.PageId
                                orderby widgetInstance.ColumnNo, widgetInstance.OrderNo
                                select widgetInstance)
                               .Including(widgetInstance => widgetInstance.Widget)
                               .ToList( );

        return ActivityExecutionStatus.Closed;
    }
}
The LINQ query that loads the widget instances performs two important actions:
Loads the widget instances on the page and orders them by column, and then by row. As a result, you get widget instances from left to right and in the proper order within each column.
Fetches the widget object by producing an INNER JOIN between the Widget and WidgetInstance tables.
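Both actions can be mimicked with LINQ to Objects. The Widget and WidgetInstanceRow classes below are in-memory stand-ins for the two tables, and the join clause is the in-memory analogue of the INNER JOIN that the .Including() query produces in LINQ to SQL:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Widget { public int Id; public string Name; }
public class WidgetInstanceRow { public int Id; public int WidgetId; public int ColumnNo; public int OrderNo; }

public static class WidgetJoinDemo
{
    // Order instances by column and then row, and join each instance
    // to its widget definition.
    public static List<(WidgetInstanceRow Instance, Widget Definition)> Load(
        IEnumerable<WidgetInstanceRow> instances, IEnumerable<Widget> widgets)
    {
        return (from wi in instances
                join w in widgets on wi.WidgetId equals w.Id
                orderby wi.ColumnNo, wi.OrderNo
                select (wi, w)).ToList( );
    }
}
```

The result comes back left to right: all of column 0's instances first, each paired with its definition.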
The collection of widget instances is mapped to the WidgetInstances property of the activity. The final code block—ReturnUserPageSetup—populates the UserPageSetup property of the workflow with the loaded data (see Example 4-23).
private void ReturnUserPageSetup_ExecuteCode(object sender, EventArgs e)
{
    this.UserPageSetup.Pages = this.GetUserPages.Pages;
    this.UserPageSetup.UserSetting = this.GetUserSetting.UserSetting;
    this.UserPageSetup.WidgetInstances = this.GetWidgetsInCurrentPage.WidgetInstances;
}
The workflow takes an empty UserPageSetup object and, when it completes, populates that object with the loaded data. So, from ASP.NET, the UserPageSetup object is passed in empty, and once the workflow completes, the instance is fully populated.
Adding a New Tab (AddNewTabWorkflow)
Adding a new tab is quite simple, requiring only two steps, after the GUID is assigned (see Figure 4-7):
Create a new blank page.
Update the user settings and set the new page as the current page.
Moving Widgets (MoveWidgetInstanceWorkflow)
To move a widget, you must do the following (see Figure 4-8):
Ensure the current user who is calling the workflow owns the widget instance.
Fetch the widget instance and put it in the workflow context so that other activities can use it.
Pull the widget out of its previous position, which means all the widgets below it are shifted up.
Push the widget into its new position so that the widgets below it on the new column move down.
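The push-down and pull-up bookkeeping from the steps above can be sketched on in-memory lists. The Item class and MoveDemo method are illustrative only; the real workflow does this through activities and LINQ to SQL updates:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Item { public int Id; public int ColumnNo; public int OrderNo; }

public static class MoveDemo
{
    public static void Move(List<Item> items, int id, int toColumn, int toRow)
    {
        var moving = items.Single(i => i.Id == id);
        int fromColumn = moving.ColumnNo;

        // Push down: make room in the target column at the drop position.
        foreach (var i in items.Where(i => i.ColumnNo == toColumn && i.OrderNo >= toRow).ToList())
            i.OrderNo++;

        moving.ColumnNo = toColumn;
        moving.OrderNo = toRow;

        // Pull up: close the gap in the old column by renumbering from zero.
        int orderNo = 0;
        foreach (var i in items.Where(i => i.ColumnNo == fromColumn).OrderBy(i => i.OrderNo).ToList())
            i.OrderNo = orderNo++;
    }
}
```

Moving a widget out of one column renumbers the remaining widgets there from zero, while widgets at or below the drop position in the target column slide down one row.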
MoveWidgetInstanceWorkflow verifies whether the widget being moved is really the current user's widget. This is necessary to prevent malicious web service hacking (see the "Implementing Authentication and Authorization" section in Chapter 3). The EnsureOwnerActivity can check both page and widget ownership (see Example 4-24).
protected override ActivityExecutionStatus Execute(ActivityExecutionContext executionContext)
{
    var db = DatabaseHelper.GetDashboardData( );

    if( this.PageId == 0 && this.WidgetInstanceId == 0 )
    {
        throw new ApplicationException("No valid object specified to check");
    }

    if( this.WidgetInstanceId > 0 )
    {
        // Gets the user who is the owner of the widget. Then sees if the
        // current user is the same.
        var ownerName = (from wi in db.WidgetInstances
                         where wi.Id == this.WidgetInstanceId
                         select wi.Page.AspnetUser.LoweredUserName).First( );

        if( !this.UserName.ToLower( ).Equals( ownerName ) )
            throw new ApplicationException(string.Format(
                "User {0} is not the owner of the widget instance {1}",
                this.UserName, this.WidgetInstanceId));
    }

    if( this.PageId > 0 )
    {
        // Gets the user who is the owner of the page. Then sees if the
        // current user is the same.
        var ownerName = (from p in db.Pages
                         where p.ID == this.PageId
                         select p.AspnetUser.LoweredUserName).First( );

        if( !this.UserName.ToLower( ).Equals( ownerName ) )
            throw new ApplicationException(string.Format(
                "User {0} is not the owner of the page {1}",
                this.UserName, this.PageId));
    }

    return ActivityExecutionStatus.Closed;
}
EnsureOwnerActivity takes UserName and either WidgetInstanceId or PageId and verifies the user's ownership. It climbs through the hierarchy from WidgetInstance to Page and then to AspnetUser to check whether the username matches. If the username is different from the one specified, the owner is different and it's a malicious attempt.
Checking Page ownership requires going just one level up to AspnetUser. But checking WidgetInstance ownership requires going up to the containing page and then checking the ownership of that page. This check needs to happen very fast because it is called on almost every operation performed on a Page or WidgetInstance. This is why you want to make sure it does a scalar select only, which is faster than selecting a full row.
Once the owner has been verified, the widget can be placed in the right column. The next activity, PutWidgetInstanceInWorkflow, does nothing but put the WidgetInstance object, looked up by its ID, into a public property so the object can be manipulated directly. The other activities in the workflow work with the object's ColumnNo and OrderNo properties. The next step, PushWidgetsDownInNewColumn, calls PushDownWidgetsOnColumnActivity, which pushes widgets down one row so there's room for the new widget to be dropped (see Example 4-25).
protected override ActivityExecutionStatus Execute(ActivityExecutionContext executionContext)
{
    var db = DatabaseHelper.GetDashboardData( );
    var query = from wi in db.WidgetInstances
                where wi.PageId == PageId
                    && wi.ColumnNo == ColumnNo
                    && wi.OrderNo >= Position
                orderby wi.OrderNo
                select wi;

    List<WidgetInstance> list = query.ToList( );

    int orderNo = Position + 1;
    foreach( WidgetInstance wi in list )
    {
        wi.OrderNo = orderNo ++;
    }

    db.SubmitChanges( );

    return ActivityExecutionStatus.Closed;
}
The idea is to take all the widgets at or below the position where the widget is being dropped and push them down one position. Next, we have to update the position of the dropped widget using the ChangeWidgetInstancePositionActivity (see Example 4-26).
protected override ActivityExecutionStatus Execute(ActivityExecutionContext executionContext)
{
    WidgetInstance widgetInstance = DatabaseHelper.GetDashboardData( )
        .WidgetInstances.Single( wi => wi.Id == WidgetInstanceId );

    DatabaseHelper.Update<WidgetInstance>( widgetInstance,
        delegate( WidgetInstance wi )
        {
            wi.ColumnNo = ColumnNo;
            wi.OrderNo = RowNo;
        });

    return ActivityExecutionStatus.Closed;
}
The widget is now placed in its new column, but the old column has a vacant place, so we need to pull the widgets in the old column up one row. ReorderWidgetInstanceOnColumnActivity fixes the row order in a column, eliminating the gaps between rows (see Example 4-27). The gap in the column is fixed by recalculating the row number for each widget in that column, starting from zero.
protected override ActivityExecutionStatus Execute(ActivityExecutionContext executionContext)
{
    var db = DatabaseHelper.GetDashboardData( );
    var query = from wi in db.WidgetInstances
                where wi.PageId == PageId
                    && wi.ColumnNo == ColumnNo
                orderby wi.OrderNo
                select wi;

    List<WidgetInstance> list = query.ToList( );

    int orderNo = 0;
    foreach( WidgetInstance wi in list )
    {
        wi.OrderNo = orderNo ++;
    }

    db.SubmitChanges( );

    return ActivityExecutionStatus.Closed;
}
That’s all that is required for a simple drag-and-drop operation.
Implementing the DashboardFacade
DashboardFacade provides a single entry point to the entire business layer. It provides easy-to-call methods that run workflows. For example, the NewUserVisit function executes the NewUserSetupWorkflow (see Example 4-28).
public UserPageSetup NewUserVisit( )
{
    using( new TimedLog(this._UserName, "New user visit") )
    {
        var properties = new Dictionary<string,object>( );
        properties.Add("UserName", this._UserName);

        var userSetup = new UserPageSetup( );
        properties.Add("UserPageSetup", userSetup);

        WorkflowHelper.ExecuteWorkflow( typeof( NewUserSetupWorkflow ), properties );

        return userSetup;
    }
}
Here the input parameter to the workflow is UserName. Although the UserPageSetup object is passed as if it were an input parameter, it's not an input. You are passing an empty object, which the workflow will populate with the loaded data. It's like an out parameter in a function call: the workflow populates this parameter's value once it completes its execution.
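The out-parameter pattern can be shown without WF at all. In this sketch, RunWorkflow is a hypothetical stand-in for WorkflowHelper.ExecuteWorkflow, and PageSetup stands in for UserPageSetup: the caller hands over an empty, pre-created object through the property dictionary, and the callee fills it in:

```csharp
using System;
using System.Collections.Generic;

public class PageSetup { public List<string> Pages = new List<string>(); }

public static class OutParamDemo
{
    // Stand-in for WorkflowHelper.ExecuteWorkflow: the callee populates
    // the pre-created object it finds in the property dictionary.
    public static void RunWorkflow(Dictionary<string, object> properties)
    {
        var setup = (PageSetup)properties["UserPageSetup"];
        setup.Pages.Add("Page of " + properties["UserName"]);
    }

    public static PageSetup NewUserVisit(string userName)
    {
        var properties = new Dictionary<string, object>();
        properties.Add("UserName", userName);

        var setup = new PageSetup();            // empty, to be populated
        properties.Add("UserPageSetup", setup);

        RunWorkflow(properties);
        return setup;                           // now populated
    }
}
```

Because both sides share the same object reference, the caller never needs a real return value from the workflow.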
Other methods, like LoadUserSetup, DeleteWidgetInstance, and MoveWidgetInstance, behave the same way. They take the necessary parameters as input and pass them to their own workflows, e.g., the MoveWidgetInstance function (see Example 4-29).
public void MoveWidgetInstance( int widgetInstanceId, int toColumn, int toRow )
{
    using( new TimedLog(this._UserName, "Move Widget:" + widgetInstanceId) )
    {
        var properties = new Dictionary<string,object>( );
        properties.Add("UserName", this._UserName);
        properties.Add("WidgetInstanceId", widgetInstanceId);
        properties.Add("ColumnNo", toColumn);
        properties.Add("RowNo", toRow);

        WorkflowHelper.ExecuteWorkflow( typeof( MoveWidgetInstanceWorkflow ), properties );
    }
}
However, getting a return object from a workflow is quite complicated. The AddWidget
function in the façade needs to get the newly added
widget instance out of the workflow (see Example 4-30).
public WidgetInstance AddWidget(int widgetId)
{
    using( new TimedLog(this._UserName, "Add Widget" + widgetId) )
    {
        var properties = new Dictionary<string,object>( );
        properties.Add("UserName", this._UserName);
        properties.Add("WidgetId", widgetId);

        // New Widget instance will be returned after the workflow completes
        properties.Add("NewWidget", null);

        WorkflowHelper.ExecuteWorkflow( typeof( AddWidgetWorkflow ), properties );

        return properties["NewWidget"] as WidgetInstance;
    }
}
A null object is passed here to the NewWidget property of the workflow. AddWidgetWorkflow populates this property with a new WidgetInstance when it completes. Once the workflow completes, the object can be taken from the dictionary.
Implementing the WorkflowHelper Class
WorkflowHelper is a handy class that makes implementing a workflow a breeze, especially when used with ASP.NET. In the business layer, the workflow needs to execute synchronously, but the default behavior of WF is to work asynchronously. Also, you need return values from workflows after their execution completes, which is not easily supported due to the asynchronous nature of workflows. Both of these tasks require some tweaking of the workflow runtime to run successfully in the ASP.NET environment.
The WorkflowHelper.Init function initializes the workflow runtime for the ASP.NET environment. It makes sure there's only one workflow runtime per application domain. Because the workflow runtime cannot be created twice in the same application domain, the function stores the runtime's reference in the application context. Example 4-31 shows its partial code.
public static WorkflowRuntime Init( )
{
    WorkflowRuntime workflowRuntime;

    // Running in console/winforms mode, create a new runtime and return it
    if( HttpContext.Current == null )
        workflowRuntime = new WorkflowRuntime( );
    else
    {
        // Running in web mode, runtime is initialized only once per
        // application
        if( HttpContext.Current.Application["WorkflowRuntime"] == null )
            workflowRuntime = new WorkflowRuntime( );
        else
            return HttpContext.Current.Application["WorkflowRuntime"]
                as WorkflowRuntime;
    }
The initialization takes care of both the ASP.NET and the console/WinForms modes. You will need the console/WinForms mode when you test the workflows from a console application or from unit tests. After the initialization, Init registers the ManualWorkflowSchedulerService, which takes care of the synchronous execution of the workflow. The CallWorkflow activity, which was explained in NewUserSetupWorkflow, uses the Activities.CallWorkflowService to run another workflow synchronously within a workflow. These two services make WF usable from the ASP.NET environment (see Example 4-32).
    var manualService = new ManualWorkflowSchedulerService( );
    workflowRuntime.AddService(manualService);

    var syncCallService = new Activities.CallWorkflowService( );
    workflowRuntime.AddService(syncCallService);

    workflowRuntime.StartRuntime( );

    // On web mode, store the runtime in application context so that
    // it is initialized only once. On console/winforms mode, e.g., from
    // unit tests, ignore
    if( null != HttpContext.Current )
        HttpContext.Current.Application["WorkflowRuntime"] = workflowRuntime;

    return workflowRuntime;
}
The workflow runtime is initialized from the Application_Start event in Global.asax. This ensures the initialization happens only once per application domain (see Example 4-33).
void Application_Start(object sender, EventArgs e)
{
    // Code that runs on application startup
    DashboardBusiness.WorkflowHelper.Init( );
}
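The once-per-application-domain guarantee that Init gets from the ASP.NET Application dictionary can be sketched as an ordinary lazy singleton. This RuntimeHolder class is an illustrative stand-in (a plain object replaces WorkflowRuntime, and a counter is added purely for demonstration):

```csharp
using System;

public static class RuntimeHolder
{
    private static object _runtime;
    private static readonly object _lock = new object();
    public static int CreateCount;   // for demonstration only

    // Return the single shared instance, creating it on first use --
    // the same "initialize only once per application domain" behavior.
    public static object Init()
    {
        if (_runtime == null)
        {
            lock (_lock)
            {
                if (_runtime == null)
                {
                    CreateCount++;
                    _runtime = new object();   // stand-in for WorkflowRuntime
                }
            }
        }
        return _runtime;
    }
}
```

Every caller gets the same reference, and the expensive construction happens exactly once no matter how many times Init is called.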
The runtime is disposed from the Application_End
event in Global.asax (see Example 4-34).
void Application_End(object sender, EventArgs e)
{
    // Code that runs on application shutdown
    DashboardBusiness.WorkflowHelper.Terminate( );
}
Inside WorkflowHelper, most of the work is done in the ExecuteWorkflow function. DashboardFacade calls this function to run a workflow, which:
Executes the workflow synchronously
Passes parameters to the workflow
Gets output parameters from the workflow and returns them
Handles exceptions raised in the workflow and passes them to the ASP.NET exception handler
In the first step, ExecuteWorkflow creates an instance of the workflow and passes the input parameters to it, as shown in Example 4-35.
public static void ExecuteWorkflow( Type workflowType,
    Dictionary<string,object> properties)
{
    WorkflowRuntime workflowRuntime = Init( );
    ManualWorkflowSchedulerService manualScheduler =
        workflowRuntime.GetService<ManualWorkflowSchedulerService>( );

    WorkflowInstance instance =
        workflowRuntime.CreateWorkflow(workflowType, properties);
    instance.Start( );
Then the ManualWorkflowSchedulerService service executes the workflow synchronously. Next, hook the workflow runtime's WorkflowCompleted and WorkflowTerminated events to capture output parameters and exceptions and handle them properly, as shown in Example 4-36.
EventHandler<WorkflowCompletedEventArgs> completedHandler = null;
completedHandler = delegate(object o, WorkflowCompletedEventArgs e)
{
    if (e.WorkflowInstance.InstanceId == instance.InstanceId)
    {
        workflowRuntime.WorkflowCompleted -= completedHandler;

        // copy the output parameters in the specified properties dictionary
        Dictionary<string,object>.Enumerator enumerator =
            e.OutputParameters.GetEnumerator( );
        while( enumerator.MoveNext( ) )
        {
            KeyValuePair<string,object> pair = enumerator.Current;
            if( properties.ContainsKey(pair.Key) )
            {
                properties[pair.Key] = pair.Value;
            }
        }
    }
};
When the workflow completes, WorkflowCompletedEventArgs provides the OutputParameters dictionary, which contains all of the workflow's public properties. Next, read all of the entries in OutputParameters and update the input properties dictionary with the new values. This is required in the AddWidget function of DashboardFacade, where you need to know the widget instance created by the workflow.
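The copy-back step can be isolated into a small sketch. CopyOutputs below is a hypothetical helper, not part of WorkflowHelper; it mirrors the loop in Example 4-36 that writes output values only into keys the caller already placed in the dictionary:

```csharp
using System;
using System.Collections.Generic;

public static class OutputCopyDemo
{
    // Copy only the keys the caller asked about (i.e., those already
    // present in `properties`) from the workflow's output parameters
    // back into that same dictionary.
    public static void CopyOutputs(Dictionary<string, object> properties,
                                   Dictionary<string, object> outputParameters)
    {
        foreach (var pair in outputParameters)
            if (properties.ContainsKey(pair.Key))
                properties[pair.Key] = pair.Value;
    }
}
```

A placeholder like the null NewWidget entry gets overwritten with the workflow's result, while output properties the caller never asked for are ignored.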
WorkflowTerminated fires when there's an exception. When any activity inside the workflow raises an exception, this event fires and the workflow execution aborts. The exception is captured and thrown again so that ASP.NET can trap it using its default exception handler, as shown in Example 4-37.
Exception x = null;
EventHandler<WorkflowTerminatedEventArgs> terminatedHandler = null;
terminatedHandler = delegate(object o, WorkflowTerminatedEventArgs e)
{
    if (e.WorkflowInstance.InstanceId == instance.InstanceId)
    {
        workflowRuntime.WorkflowTerminated -= terminatedHandler;
        Debug.WriteLine( e.Exception );

        x = e.Exception;
    }
};

workflowRuntime.WorkflowCompleted += completedHandler;
workflowRuntime.WorkflowTerminated += terminatedHandler;

manualScheduler.RunWorkflow(instance.InstanceId);

if (null != x)
    throw new WorkflowException(x);
Exceptions thrown from workflow instances are thus captured and rethrown on the calling thread. As a result, they surface in the ASP.NET default exception handler, and you see the "yellow page of death" on your local computer (see Figure 4-9).
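The capture-and-rethrow pattern in Example 4-37 can be stripped down to plain C#. Here the callback-based `job` is a stand-in for a synchronous workflow run that reports failure through a handler (as WorkflowTerminated does), and InvalidOperationException stands in for the book's WorkflowException:

```csharp
using System;

public static class RethrowDemo
{
    // Run a synchronous job whose failure is reported through a callback,
    // capture the reported exception in a local variable, and rethrow it
    // on the caller's thread after the run completes.
    public static void Run(Action<Action<Exception>> job)
    {
        Exception captured = null;
        job(ex => captured = ex);   // terminated-style handler

        if (captured != null)
            throw new InvalidOperationException("Workflow terminated", captured);
    }
}
```

The caller sees an ordinary exception with the original failure preserved as InnerException, which is exactly what lets a default exception handler display it.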
Summary
In this chapter, you learned how to harness the power of LINQ to SQL to build a data access layer. You used Workflow Foundation to create a well-designed and well-implemented business layer. WF makes it easy for both architects and developers to stay in sync during the design and implementation of an application, which leaves little room for developers to do anything outside the scope and functional requirements of the project. This saves time for architects, developers, and unit testers. In the next chapter, we will make some cool widgets that put the core to practical use and deliver rich features to users.