Microsoft.NET

Expertise in .NET Technologies

Transactions

Posted by Ravi Varma Thumati on November 12, 2009

Many aspects of computing adhere to the old adage “the more things change, the more they stay the same.” Conceptually, this is a pretty good description of transaction processing in both ADO.NET 2.0 and Yukon. Don’t get us wrong, there are some tremendous enhancements and improvements in both areas, but fundamentally, transaction processing is very similar to what it used to be like in the past. This time, though, the devil is in the details.

With respect to the enhancements in the 2.0 Framework, we’ll be blunt: Transactions pre–ADO.NET 2.0 left a little to be desired. Or, as Wintellect Member and .NET Guru Dino Esposito put it, most of the new features under the 2.0 Framework are evolutionary, not revolutionary.

Establishing client-side transactions was not that difficult, and the average developer could pick it up in a few hours. Furthermore, a ton of resources were available and even Microsoft’s Data Access Application Block included support for them. However, they were limited in many regards. You had basic features such as savepoints, rollbacks, and other traditional features, but transactions often occur on the database side, rather than the client side. That doesn’t change in the 2.0 Framework per se. The really impressive changes are related to distributed transactions. Previously, if you wanted to use distributed transactions (transactions that span more than one database), you had some work to do. You had to dive deep into COM+, an area in which many .NET developers feared to tread. In the 2.0 Framework, the ADO.NET team has given us the System.Transactions namespace. At the time of this writing, there are approximately 20 classes in this namespace, and several enumerations, delegates, and interfaces. It has a companion namespace, System.Transactions.Configuration, which is small (it contains only three classes in total), but it helps with configuration issues (something that was ostensibly the most difficult part of dealing with distributed transactions under COM+).

Basic Concepts

Before we continue, you should familiarize yourself with a whole slew of concepts. The first is the basic notion of a transaction. Instead of giving you a textbook definition, let’s run through an example. Suppose you’re updating a record composed of both parent and child information. Maybe you decide to add a new account at your bank and at the same time change your address. The address information would be stored in the parent record and the new account information would be stored as a new child record. Essentially, four scenarios are possible here:

  • The parent record commits but the child doesn’t.
  • The child record commits but the parent doesn’t.
  • They both commit.
  • Neither of them commits.

If you built your tables with the proper constraints, the second scenario is off the table. However, there will be situations in which tables have not been constructed the way you want them. Ideally, we need the third or fourth scenario to occur, but how do you accomplish this? If you’re the “roll your own” type, you start writing code and you keep on writing. And writing and writing. At the end of the day, your solution will work a lot less well than if you did it the “right” way—using transactions. As such, let’s define a transaction as a group of actions that we want to either succeed or fail together.
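The bank example above can be sketched as a single client-side transaction. This is a minimal, hypothetical sketch: the `Customers`/`Accounts` schema, column names, and values are ours, not from the example.

```vb
Imports System.Data.SqlClient

'Commit the parent (address) change and child (new account) insert as one unit.
Sub UpdateAddressAndAddAccount(ByVal connectionString As String)
    Using cn As New SqlConnection(connectionString)
        cn.Open()
        Dim tn As SqlTransaction = cn.BeginTransaction()
        Try
            Dim updateParent As New SqlCommand( _
                "UPDATE Customers SET Address = @Address WHERE CustomerID = @ID", cn, tn)
            updateParent.Parameters.AddWithValue("@Address", "123 New Street")
            updateParent.Parameters.AddWithValue("@ID", 42)
            updateParent.ExecuteNonQuery()

            Dim insertChild As New SqlCommand( _
                "INSERT INTO Accounts(CustomerID, AccountType) VALUES (@ID, @Type)", cn, tn)
            insertChild.Parameters.AddWithValue("@ID", 42)
            insertChild.Parameters.AddWithValue("@Type", "Savings")
            insertChild.ExecuteNonQuery()

            tn.Commit()   'both statements take effect together...
        Catch ex As SqlException
            tn.Rollback() '...or neither does
        End Try
    End Using
End Sub
```

Either both the parent and child rows commit (scenario three) or neither does (scenario four); the two failure scenarios are ruled out.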

A.C.I.D

You won’t get very far into any transaction discussion before the acronym A.C.I.D. comes up. Here’s what the different parts of the acronym stand for:

  • Atomicity: Everything is considered to be part of the same unit. In the preceding example, the new account and the address update belong to the same unit, not two separate units (although they are physically different).
  • Consistency: This means simply that the state of the database is left unchanged if the transaction does not complete. In other words, there are no half-committed transactions, orphaned records, or the like.
  • Isolation: This is not the same as isolation levels, although the two are related. This essentially means that what goes on in this transaction doesn’t affect other transactions or processing.
  • Durability: This is the notion that once the transaction has completed, it will be saved to the database correctly.

When you hear about transactions, the most common example is how ATMs work. You insert your card and make a request. Money is debited from one account and either credited to another or dispensed. If anything fails, you don’t get your money. If transaction processing didn’t occur in this scenario, you could have some serious problems (or benefits, depending on the error). Suppose your account was debited but the other account wasn’t credited. You’d lose the withdrawal amount. If the reverse happened, the other parties (the banks) would lose the withdrawal amount. Either way, if this kind of mistake happened often, confidence in ATMs would plummet and no one would use them.

Let’s talk now about how to enlist a transaction. You can do this in two ways: through volatile enlistment and through durable enlistment. These are fancy-sounding concepts, but they’re actually pretty straightforward.

Volatile enlistment is for resources that do not need to be recovered after a failure. Durable enlistment, conversely, is for resources that must be recovered.

Conceptually, the difference is about the same as between Notepad and a newer version of Word. If you are typing a Notepad document and the power goes out, you lose everything you’ve typed since the last time you saved. With newer versions of Word, when the system comes back up, it informs you that there was a problem and recovers what you were doing to the state it was in when the power went out.

Remember A.C.I.D.? Durability is the “D” in that acronym. If you can risk the durability, then volatile enlistment is a good choice. If you’re wondering why you would want any risk, consider the fact that true durability can be expensive in terms of hardware and resources, and in many situations you don’t need ultradurable transactions.

On the other hand, if you need absolute assurance as far as durability, then your only option is durable enlistment. One good example is a transactional file system, but you can probably think of others. In most instances, volatile enlistment will suffice, but the distinction is worth mentioning. To have truly durable transactions, well, as the old expression goes: “There’s no such thing as a free lunch.” Still, you can get pretty close on the cheap.
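In System.Transactions, volatile enlistment is done by implementing IEnlistmentNotification and calling EnlistVolatile on the transaction. A minimal sketch follows; the class name and the trivial bodies are ours, a real resource manager would do actual work in each phase.

```vb
Imports System.Transactions

'A minimal volatile resource manager: it participates in the two-phase
'commit but keeps no durable log, so nothing is recovered after a crash.
Public Class VolatileParticipant
    Implements IEnlistmentNotification

    Public Sub Prepare(ByVal preparingEnlistment As PreparingEnlistment) _
        Implements IEnlistmentNotification.Prepare
        preparingEnlistment.Prepared() 'vote "yes" in phase one
    End Sub

    Public Sub Commit(ByVal enlistment As Enlistment) _
        Implements IEnlistmentNotification.Commit
        enlistment.Done()
    End Sub

    Public Sub Rollback(ByVal enlistment As Enlistment) _
        Implements IEnlistmentNotification.Rollback
        enlistment.Done()
    End Sub

    Public Sub InDoubt(ByVal enlistment As Enlistment) _
        Implements IEnlistmentNotification.InDoubt
        enlistment.Done()
    End Sub
End Class

'Inside an ambient transaction, you would enlist it like so:
'Transaction.Current.EnlistVolatile(New VolatileParticipant(), EnlistmentOptions.None)
```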

Transaction Types

The next thing to understand in terms of background information is local (or simple) transactions versus distributed transactions (transactions that span more than one data store). You can have transactional processing in areas other than traditional RDBMS systems, but that’s where you see it the most.

Currently, Microsoft has added transaction support for Microsoft Message Queuing (MSMQ) and may well expand it to include file system support. Depending on where you work, you probably deal with more local transactions than distributed ones. The distinction is pretty simple: if your transaction spans only one database or data store (message queues, for example, a cornerstone of Service Oriented Architecture, are now often involved in transactions), then it’s a simple transaction. If it spans more than one database, be it on one server or many, then it’s a distributed transaction.

In terms of complexity, local transactions are infinitely easier to work with than distributed transactions.

The reason should be readily evident. With one database, you are dealing with a minimal set of variables, whereas distributed transactions span many more. In its simplest form, a simple transaction could be run against a database that resides on the same computer as the application, with only one database file involved. Because everything is on the same computer, the application and database may not even be on a network. Conversely, in a distributed scenario, your application could be running on x number of computers spanning y number of databases residing on z number of networks.

In a distributed transaction, you may hit x number of databases, where security, network connectivity, and everything else can cause problems. If you use distributed transactions, you’ll likely be amazed that they actually work at all, let alone work as superbly as they do. (Think about it: Compare the number of times the ATM has worked correctly from a transactional point of view—even if it didn’t give you any money—compared with the number of times it hasn’t. Sure, glitches occur, but it’s very rare to have problems with accounts being debited or credited.)

Isolation Levels

Stated simply, an isolation level is the degree to which a given transaction is isolated from every other transaction. There is an inverse relationship between isolation level and concurrency. A lower isolation level minimizes the problems associated with concurrency (namely, locking), but at the expense of data correctness (phantom and nonrepeatable reads). The higher the isolation level, the more accurate your data can be assumed to be, but the greater the risk you take on in regard to concurrency.

One of the really interesting features of IsolationLevel is that once you set it, you can change it at any time. With a ConnectionString, for instance, once you have an open connection, you can’t change it midstream and have the change take effect. That is not the case with IsolationLevel. Moreover, once the change has been made, it doesn’t apply to anything that’s already been executed, but it will apply to everything that happens afterward. To be perfectly honest, we’ve worked quite a bit with transactions and have never needed this functionality. Still, it’s cool that it’s there.

At the time of this writing, six isolation levels are defined in the Transaction object, as specified in the MSDN documentation:

[Figure: table of IsolationLevel values and their descriptions from the MSDN documentation]

Remember that a given provider—OleDb, for example—doesn’t know what data source you’re connecting to. Therefore, you can use OleDb, for instance, to connect to a SQL Server or Oracle database, and because those back ends support transactions, you can specify transactions and everything will be fine. Comma-separated value files can also be connected via OleDb, and while Notepad is a very useful tool in many instances, transactional support has never been one of its strong points. In other words, client-side transaction processing is only as accessible and useful as the underlying data source allows. We don’t think we’ve ever used a relational database that didn’t support transactions, but they may be out there, so look into this before getting too deep into transactions.
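Because the provider interfaces are uniform, the same transaction pattern drives any back end that supports it. A sketch against the generic interfaces (the table and column in the UPDATE statement are hypothetical):

```vb
Imports System.Data

'Works with SqlConnection, OleDbConnection, OracleConnection, and so on,
'provided the underlying data source actually supports transactions.
Sub DoWork(ByVal cn As IDbConnection)
    cn.Open()
    Dim tn As IDbTransaction = cn.BeginTransaction()
    Dim cmd As IDbCommand = cn.CreateCommand()
    cmd.Transaction = tn
    cmd.CommandText = "UPDATE SomeTable SET SomeCol = 1" 'hypothetical SQL
    Try
        cmd.ExecuteNonQuery()
        tn.Commit()
    Catch
        tn.Rollback()
        Throw
    Finally
        cn.Close()
    End Try
End Sub
```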

Creating a Local Transaction

In its simplest form, creating and using a transaction is straightforward. The following example shows how:

Imports System.Data
Imports System.Data.SqlClient

Dim tn As SqlTransaction
Dim sql As String = "INSERT INTO Employees1(EmpID) VALUES (@UserID)"
Dim cn As New SqlConnection(CONNECTION_STRING)

Try
    If cn.State <> ConnectionState.Open Then
        cn.Open()
    End If
Catch ex As SqlException
    Debug.Assert(False, ex.ToString())
End Try

tn = cn.BeginTransaction()

Dim cmd As New SqlCommand(sql, cn, tn)
cmd.Parameters.Add("@UserID", SqlDbType.Int)
cmd.Parameters(0).Value = 314

Try
    Dim i As Integer = cmd.ExecuteNonQuery()
    For x As Integer = 0 To 10
        cmd.Parameters("@UserID").Value = 315 + x
        cmd.ExecuteNonQuery()
    Next
    tn.Commit()
Catch ex As SqlException
    tn.Rollback()
End Try

Another question that comes up a lot has to do with save points. Save points are mechanisms by which you can mark points in a transaction so that you can roll back to them. This is particularly helpful for very long transactions where you have a specific failure and you don’t want to roll back everything, just a portion of it. Using a save point couldn’t be easier:

tx.Save("SomeName")

Now let’s look at a slightly more complex scenario. Suppose you had a dataset to update that you were very sure would update without any problems. Afterward, you had another dataset that you weren’t so sure about. In case of an exception, though, you only want to roll back the second update, not the first (admittedly, we can’t come up with a real scenario like this, but it works well for illustrative purposes):

Imports System.Data
Imports System.Data.SqlClient

Try
    tx = cn.BeginTransaction()
    tx.Save("FirstSave")
    da.AcceptChangesDuringUpdate = False
    da.Update(ds)
    tx.Save("SecondSave")
    da.Update(SomeOtherDataSet)
    tx.Commit()
Catch ex As SqlException
    tx.Rollback("SecondSave") 'Undo only the work done after the save point
    tx.Commit()               'Commit the first update, which we were sure about
    System.Diagnostics.Debug.Assert(False, ex.ToString())
End Try

The preceding example is not the only way to accomplish local transactions though. SQL Server, for instance, provides its own flavor of SQL with control statements included. This is known as T-SQL. Most major RDBMS implementations have a similar language available. As such, you can set the SqlCommand object’s CommandType property to Text and set the command text to the following to accomplish the same result:

BEGIN TRANSACTION

--Perform everything here

IF @@ERROR <> 0
BEGIN
    ROLLBACK
END
ELSE
BEGIN
    COMMIT
END

Functionally, you get to the exact same place. However, in the preceding example, all transaction logic is handled on the server. Personally, we have a strong preference for the latter. It’s more force of habit than anything else (along with some superstitious fear) because to date, we have yet to run into any real problems using the client-side transaction object. However, as a matter of comfort, it just seems like there’s a lot (i.e., the Network) in between the client application and the database, and handling it on the database is the safest way to go. Should you avoid client-side transaction processing? Hardly. David Sceppa, author of the Microsoft ADO.NET (Core Reference), which is unquestionably the best book on the subject, quotes Stevie Wonder in Chapter 10: “When you believe in things that you don’t understand, then you suffer. Superstition ain’t the way.” Take Stevie’s advice.

Before we delve much further into the subject, we’ll give you one last warning: Things are winding down with the 2.0 Framework and at the time of this writing, we don’t think any functional changes are on the table. However, if changes do occur, they will likely be rather pronounced, due to the complexity of the subject matter. Bill Ryan, for example, wrote an article for “15 Seconds” in September 2004 (www.15seconds.com/issue/040914.htm) and as of this writing, it is completely outdated.

In order to show you how much easier things have gotten recently, we need a comparison point. Therein lies a little bit of the problem. In order to do anything with distributed transactions in the previous version of the Framework, we’d need to spend a good bit of time explaining what is happening and why. Because much of what is available in the 2.0 Framework was put there precisely to address (alleviate is probably a more fitting word) the issues related to distributed transactions in the 1.0/1.1 Frameworks, we don’t cover it here. Believe us, you aren’t missing much. If you’re a glutton for punishment, they’re still available, but use them for a few hours and you’ll be a convert to the newer way of handling things.

Distributed Transactions

According to MSDN, distributed transaction processing systems are “designed to facilitate transactions that span heterogeneous, transaction-aware resources in a distributed environment.” In practice, what that typically means is that distributed transactions are transactions that span more than one data source.

Until recently, they were for the most part exclusive to database systems, but the functionality has been expanded to include other resources such as Microsoft Message Queues (MSMQ). Back in the days when networks weren’t common and every computer in an office was effectively an isolated island, distributed transactions were unknown to most developers (and we’re not talking about all that long ago; at least one of the authors of this book worked in a large office 10 years ago where the majority of computers weren’t linked to a network). We’re not suggesting that no developers dealt with transactions (simple or distributed) back then, but it was definitely more the exception than the norm. However, that changed quickly as desktop databases and spreadsheets became major bottlenecks to productivity.

Network-available databases became a lot more affordable and an increasing number of developers/companies started creating applications with networked databases.

Previously, for example, you might have had a few different software packages used by different departments. Sales and Human Resources, for instance, might have used two totally different primary software packages, each with its own back-end database, but data in one might have been dependent on data in the other. For example, if a salesperson leaves the company and notifications aren’t sent to the sales department, someone in that department might assign a task to that person or give them credit for something. Although this is a fictitious example, it’s not hard to recall a scenario from your own career where one database was dependent on another.

As such, in a very short period of time, distributed transactions went from being something only a few, very sophisticated developers had to deal with, to something that was pretty commonplace at most mid to large-size companies.

In the 1.x Framework, using a client-side transaction looks something like the following:

Imports System.Data
Imports System.Data.SqlClient
Imports System.Transactions

Private Function OldSchool() As Boolean
    Dim isConsistent As Boolean = False
    Dim oldSchoolTrans As New CommittableTransaction()
    Using cn As New SqlConnection(CONNECTION_STRING)
        Dim sql As String = "DELETE FROM Categories"
        Dim cmd As New SqlCommand(sql, cn)
        cn.Open()
        cn.EnlistTransaction(oldSchoolTrans)
        Try
            cmd.ExecuteNonQuery()
            isConsistent = True
            Return True
        Catch ex As SqlException
            'You can specify additional error handling here
            'This is where you'd roll back your transaction
            Return False
        End Try
    End Using
End Function

This is an example of a “simple” transaction but it’s pretty much the standard approach you use with simple transactions under the 1.x Framework. Why didn’t we show a distributed transaction example? There’s a lot more involved in getting one working and there’s nothing simple about them, and because this book is about ADO.NET 2.0, well, keep reading.

Distributed Transactions in ADO.NET 2.0

In most of the examples so far, we’ve referenced the System.Data.SqlClient library, which is the native library for SQL Server. However, that library isn’t of much use in distributed transaction scenarios because you probably aren’t working with SQL Server across the board. Do you think that if you called an Oracle database it would have any clue what @@Error is? As such, the starting point here is the System.Transactions namespace. Out of the box, this isn’t a referenced assembly, so you’ll need to add a reference to it in order to use it.

[Figure: the Add Reference dialog]

You can do this by selecting the Project menu item, then Add Reference, and then System.Transactions under the .NET tab, as shown in the figure.

Note that at the time of this writing, Oracle, SQL Server, and MSMQ were the only data sources supported under the TransactionScope object. If you need to use another database, then it’s COM+ for you, although it’s virtually assured that other vendors will provide support in time.

In order for distributed transactions to work correctly, remember to adhere to the following order:

  1. Create Transaction.
  2. Create Connection.
  3. Dispose Connection.
  4. Call TransactionScope’s Complete() method.
  5. Dispose Transaction.
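The five steps map onto Using blocks like this. A sketch: the UPDATE statement is hypothetical, and CONNECTION_STRING is a placeholder as in the other examples.

```vb
Imports System.Transactions
Imports System.Data.SqlClient

'1. Create the transaction; 2-3. create and dispose the connection;
'4. call Complete(); 5. dispose the transaction (End Using).
Using scopeObject As New TransactionScope()
    Using cn As New SqlConnection(CONNECTION_STRING)
        cn.Open() 'the connection auto-enlists in the ambient transaction
        Dim cmd As New SqlCommand("UPDATE SomeTable SET SomeCol = 1", cn)
        cmd.ExecuteNonQuery()
    End Using                 'connection disposed before Complete()
    scopeObject.Complete()    'vote to commit
End Using                     'scope disposed; the transaction commits here
```

If Complete() is never called before the scope is disposed, the transaction rolls back.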

Here’s how you would create a simple transaction:

string connectString = @"Data Source=.\SQLExpress;Integrated Security=True;AttachDBFilename=C:\xxxxxxxxxxxxxxxx.mdf";

private void btnTryTransaction_Click(object sender, EventArgs e)
{
    string sql = "INSERT INTO tb_Customers(Customer_ID, Customer_FirstName, " +
                 "Customer_LastName) VALUES (@CustID, @FirstName, @LastName)";
    using (TransactionScope scopeObject = new TransactionScope())
    {
        using (SqlConnection cn = new SqlConnection(connectString))
        {
            SqlCommand cmd = new SqlCommand(sql, cn);
            cmd.Parameters.Add("@CustID", SqlDbType.Int, 4).Value = 8;
            cmd.Parameters.Add("@FirstName", SqlDbType.VarChar, 50).Value = "William";
            cmd.Parameters.Add("@LastName", SqlDbType.VarChar, 50).Value = "Gates";
            cn.Open();
            cmd.ExecuteNonQuery();

            cmd.Parameters.Clear();
            cmd.CommandText = "SELECT COUNT(*) FROM tb_Customers";
            int result = (int)cmd.ExecuteScalar();

            cn.Close();

            //Open a connection to a different SQL Server database, MSMQ,
            //Oracle, etc. and do something there.
        }
        scopeObject.Complete(); //At this point, the transaction is committed
    }
    MessageBox.Show(GetTotalCount().ToString());
}

Private Function GetTotalCount() As Integer
    Dim cn As New SqlConnection(CONNECTION_STRING)
    Dim cmd As New SqlCommand("SELECT COUNT(*) FROM tb_Customers", cn)
    cn.Open()
    Dim i As Integer = CType(cmd.ExecuteScalar(), Integer)
    cn.Close()
    Return i
End Function

Regarding the defaults: the default isolation level is Serializable, and the default transaction timeout is 60 seconds. However, you will probably come across scenarios in which you want completely different settings. For this, the TransactionOptions class comes to save the day:

Dim transactionOption As New TransactionOptions()
transactionOption.IsolationLevel = System.Transactions.IsolationLevel.Snapshot

'Set the transaction timeout to 30 seconds
'In reality, you'd probably want to get this from a .config setting
'or a resource file
transactionOption.Timeout = New TimeSpan(0, 0, 30)

Dim ts As New TransactionScope(TransactionScopeOption.Required, transactionOption)

Other than Timeout and IsolationLevel, there isn’t much you can do with this, but it is a straightforward way to manipulate these settings.

Back when we first loaded the Alpha bits of Whidbey, things were a little more complex (hats off to Angel Saenz-Badillos and the whole ADO.NET team for making it ever easier). Previously, there was a property named Consistent (actually, the property is still there, but you don’t have to constantly set it).

At each pass through your code, you’d set it to false if something failed. At the end, when the code exited the block and the scope was disposed, if the Consistent property was set to true, everything would commit. If it was false, it would roll back. Compared to what you had to previously do, it was a walk in the park, but it was still a little short on elegance. Now, when you are done and you are sure you want to commit everything, you simply call the Complete method and voilà, everything is committed.

Presently, you can and should call the Complete method of the TransactionScope object to finish off the transaction. You can still set the Consistent property, but the latest Whidbey build indicates that it has already been deprecated.

In the following example, only one data store is being used. As such, this transaction is operating as a local transaction. However, suppose we make a slight modification to this code, such that another data store were used:

Public Sub Test()
    Dim cn As New SqlConnection(CONNECTION_STRING)
    Dim sql As String = "SELECT COUNT(*) FROM SomeTable"
    Dim cmd As New SqlCommand(sql, cn)
    cn.Open()
    Dim i As Integer = CType(cmd.ExecuteScalar(), Integer)
    'Open a second connection to another data store here
    cn.Close()
End Sub

What would happen is very interesting. At first, a local transaction would be created. When the second connection was created and opened, it would be automatically enlisted into a distributed transaction. If you don’t want to use the TransactionScope and you want to do things manually, a great new feature simplifies things:

DbConnection.EnlistTransaction

Therefore, each derivation of this—SqlConnection, OracleConnection, and so on—has the capability of manually enlisting the transaction, although as far as we know, System.Data.SqlClient is the only provider in beta that has actually implemented it.

Monitoring Transactions and Their Performance

As mentioned previously, there’s an inverse relationship between performance and accuracy in respect to isolation level. Another thing to remember is that distributed transactions require a lot more monitoring, so there is obviously more overhead associated with them. At any rate, you will no doubt want to monitor them at some point.

The easiest way to accomplish this is visually. Select Start➪Control Panel➪Administrative Tools➪Component Services➪Component Services (Under Console Root)➪Computers➪My Computer➪Distributed Transaction Coordinator. You should see something like what is shown in Figure 6-2.

[Figure 6-2: the Distributed Transaction Coordinator node in Component Services]

From there, you can select either the Transaction List (which will show you all currently running distributed transactions), or Transaction Statistics (which will show you the performance statistics of any given transaction). In most instances, the latter will be much more useful, as shown in Figure 6-3.

[Figure 6-3: the Transaction Statistics view of the Distributed Transaction Coordinator]

Keep in mind, however, that what you are viewing here are distributed transactions, not local ones. If a local transaction has been promoted/enlisted, then it will become visible. Remember that this is the “Distributed Transaction Coordinator”—hence, its use is for monitoring distributed (not local) transactions.

How Does This Affect Local Data?

As a regular in most of the .NET-related newsgroups, author Bill Ryan sees questions about using System.Data.DataSet objects and transactions just about every day. Many developers new to ADO.NET create a transaction, call Update on an IDbDataAdapter object, and somewhere before completion it fails. The transaction in the database is rolled back, but the changes that were successfully updated in the DataSet are lost.

To understand why this is a problem, you need to understand how IDbDataAdapter objects work and what the AcceptChanges method does to a given DataRow. More problems in ADO.NET have been caused by not understanding this than by anything else we’ve seen, so we’re going to cover it in detail here.

When you call Fill on an IDbDataAdapter object, the RowState of every row is Unchanged unless you set AcceptChangesDuringFill to false. The default is true, which makes sense. If you aren’t familiar with RowState or AcceptChanges, this may not be clear yet, but bear with us. When the Update method of an adapter is called, it walks through each row and examines the row’s RowState. Currently, the DataRowState enumeration is defined as follows in the MSDN documentation:

  • Added: The row has been added to a rows collection, and AcceptChanges has not been called.
  • Deleted: The row was deleted using the Delete method.
  • Detached: The row has been created but is not part of any rows collection.
  • Modified: The row has been modified, and AcceptChanges has not been called.
  • Unchanged: The row has not changed since AcceptChanges was last called.

As such, the default RowState of a row after calling Fill is Unchanged. If you delete it, it will change to Deleted.

Many newcomers to ADO.NET get confused by the Remove and Delete methods. Remove physically takes the row out of the collection. If you have 10 rows and call Remove on all 10, you now have 0 rows. If you have 10 rows and call Delete on all 10, you still have 10 rows until AcceptChanges is called. We’ve seen at least 50 people run into major problems by removing rows when they meant to delete them and then wondering why their updates didn’t work.
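The difference is easy to see with an in-memory DataTable (no database needed); this small sketch is ours:

```vb
Imports System
Imports System.Data

Dim dt As New DataTable()
dt.Columns.Add("ID", GetType(Integer))
dt.Rows.Add(1)
dt.Rows.Add(2)
dt.AcceptChanges()                     'both rows are now Unchanged

dt.Rows(0).Delete()                    'row stays in the collection, marked Deleted
Console.WriteLine(dt.Rows.Count)       '2
Console.WriteLine(dt.Rows(0).RowState) 'Deleted: an adapter would issue a DELETE

dt.Rows.Remove(dt.Rows(1))             'row physically gone; no DELETE is ever sent
Console.WriteLine(dt.Rows.Count)       '1
```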

If you create a new row and add it to a DataTable object, its RowState will be Added. If you change an existing row, its RowState will be set to Modified. If you create a new row but forget to add it to a table’s Rows collection, then its RowState is Detached, and as such, it isn’t part of the DataTable’s Rows collection.

Moving on, when you call Update, the adapter iterates through the given table you specify. If you pass in a DataSet to the adapter and don’t specify a DataTable, it will default to the one with the 0th index. If you pass in just a DataRow, only it will be used. If you pass in an array of DataRows, then it will behave just like a DataTable’s Rows collection.

At each iteration, the RowState is examined. For each RowState of Added, the InsertCommand, if one is specified, is used; for Modified, the UpdateCommand; and for Deleted, the DeleteCommand. Note that you don’t have to specify a valid Update, Delete, or Insert command. However, if you don’t have a valid command but a row in the collection has a RowState of Added, Modified, or Deleted, then an exception will be thrown. No exceptions are thrown if you omit one of those commands as long as you never call Update, or, when you do, the missing command isn’t needed. That said, each row is examined, the corresponding command is fired using the row provided, and upon successful completion, the AcceptChanges method of the DataRow is called, which effectively resets its RowState to Unchanged.

If the RowState was Deleted, the row is then removed from the collection. If it was Modified, the original values become what the proposed values were just prior to the update. By default, if an exception is thrown, everything stops and no more update attempts are made.

However, there is a ContinueUpdateOnError property on each adapter implementation which, if an error occurs, just proceeds with the next row. For example, if you had 100 rows with a changed RowState and row 98 caused an error, you’d have 97 successful changes in the database and only three rows in your DataSet still marked as changed (assuming you were not inside a transaction). If you specified ContinueUpdateOnError, you’d have 99 successfully updated records, but only 1 row with changes in it. However, if this occurred inside a transaction, you’d have a serious problem. From the client side, nothing would look different, but in the first example you’d have 97 rows with AcceptChanges called on them, which would reset their RowState to Unchanged; when the transaction rolled back, no changes would be present in the database, so those changes would be effectively lost.

So how do you get around this? First, use the GetChanges() method of the DataSet and pass the result to your IDbDataAdapter. If successful, commit your transaction and then call AcceptChanges on the entire DataSet. This has one potential problem: the power could go out between the commit statement and the AcceptChanges call, but fortunately this all happens so fast that such a situation is highly unlikely. Here is an example of how to do this:

Private Sub Test()
    Dim cn As New SqlConnection(CONNECTION_STRING)
    Dim da As New SqlDataAdapter("SELECT * FROM SomeTable", cn)
    Dim ds As New DataSet()
    Dim tx As SqlTransaction
    Try
        cn.Open()
        tx = cn.BeginTransaction()
        da.Update(ds.GetChanges())
        tx.Commit()
        ds.AcceptChanges()
    Catch ex As SqlException
        tx.Rollback()
        System.Diagnostics.Debug.Assert(False, ex.ToString())
    Finally
        cn.Close()
    End Try
End Sub

Now you can do away with the intermediate step of getting the changes and instead set the adapter’s AcceptChangesDuringUpdate property to False. This prevents the adapter from calling AcceptChanges on each row as it updates it. Here’s an example of how this works:

Private Sub Test()
    Dim cn As New SqlConnection(CONNECTION_STRING)
    Dim da As New SqlDataAdapter("SELECT * FROM SOMETABLE", cn)
    Dim cb As New SqlCommandBuilder(da)
    Dim ds As New DataSet
    da.Fill(ds)
    ' ... rows in ds are modified here ...
    Dim tx As SqlTransaction = Nothing
    Try
        cn.Open()
        tx = cn.BeginTransaction()
        da.SelectCommand.Transaction = tx
        da.AcceptChangesDuringUpdate = False ' a property, not a method
        da.Update(ds)                        ' no GetChanges() call needed now
        tx.Commit()
        ds.AcceptChanges()
    Catch ex As SqlException
        If tx IsNot Nothing Then tx.Rollback()
        System.Diagnostics.Debug.Assert(False, ex.ToString())
    Finally
        cn.Close()
    End Try
End Sub

Nested Transactions

You can nest TransactionScope objects if you need to. There are two ways to do this. For lack of better terminology, we’ll call these the “obvious” and the “less obvious” ways. The obvious way entails simply creating another TransactionScope object inside the first one:

Using transactionScope As New System.Transactions.TransactionScope()
    Try
        da.Update(ds.Tables(0))
        da2.OrdersTableAdapter.Update(ds)
        Using transactionScope2 As New System.Transactions.TransactionScope()
            Try
                da3.Update(ds.Tables(0))
                da4.OrdersTableAdapter.Update(ds)
                transactionScope2.Complete()
            Finally
                ds.Tables(0).SuspendValidation = False
            End Try
        End Using
        transactionScope.Complete() ' complete the outer scope last
    Finally
        ds.Tables(0).SuspendValidation = False
    End Try
End Using

As you can see, a single level of nesting is straightforward, but you can take it as deep as you want:

Using transactionScope As New System.Transactions.TransactionScope()
    Try
        da.Update(ds.Tables(0))
        da2.OrdersTableAdapter.Update(ds)
        Using transactionScope2 As New System.Transactions.TransactionScope()
            Try
                da3.Update(ds.Tables(0))
                da4.OrdersTableAdapter.Update(ds)
                Using transactionScope3 As New System.Transactions.TransactionScope()
                    Try
                        da5.Update(ds.Tables(0))
                        da6.OrdersTableAdapter.Update(ds)
                        transactionScope3.Complete()
                    Finally
                        ds.Tables(0).SuspendValidation = False
                    End Try
                End Using
                transactionScope2.Complete()
            Finally
                ds.Tables(0).SuspendValidation = False
            End Try
        End Using
        transactionScope.Complete()
    Finally
        ds.Tables(0).SuspendValidation = False
    End Try
End Using

The less obvious way entails using one scope and, inside it, calling a function that itself contains another scope:

Using transactionScope As New System.Transactions.TransactionScope()
    Try
        da.Update(ds.Tables(0))
        da2.OrdersTableAdapter.Update(ds)
        ScopeExample()
        transactionScope.Complete() ' complete after the nested call returns
    Finally
        ds.Tables(0).SuspendValidation = False
    End Try
End Using

Public Sub ScopeExample()
    Using transactionScope As New System.Transactions.TransactionScope()
        Try
            da.Update(ds.Tables(0))
            da2.OrdersTableAdapter.Update(ds)
            transactionScope.Complete()
        Finally
            ds.Tables(0).SuspendValidation = False
        End Try
    End Using
End Sub
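One detail worth noting here (an aside, not shown in the listings above): by default a nested TransactionScope joins the ambient transaction, because the constructor defaults to TransactionScopeOption.Required. If you want the inner scope to run in its own independent transaction, pass TransactionScopeOption.RequiresNew to the constructor. A minimal sketch:

```vb
' Sketch: the inner scope gets its OWN transaction (RequiresNew), so its work
' commits or rolls back independently of the outer scope's transaction.
Using outer As New System.Transactions.TransactionScope()
    ' ... work enlisted in the outer (ambient) transaction ...
    Using inner As New System.Transactions.TransactionScope( _
            System.Transactions.TransactionScopeOption.RequiresNew)
        ' ... work enlisted in a brand-new, independent transaction ...
        inner.Complete()
    End Using
    outer.Complete()
End Using
```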

Transactions in Web Services

Without a doubt, one cool feature of transaction processing is the wrapping of Web Services as serviced components. It makes sense if you think about it. We use Web Services to update data and retrieve it, so why wouldn’t the requirements of transactions apply here if we were to use Web Services as a fully functioning communications layer? We begin with an example of a Web Service and explain what it does:

<%@ WebService Language="VB" Class="TransactionSamples" %>
<%@ Assembly Name="System.EnterpriseServices" %>

Imports System
Imports System.Data
Imports System.Data.SqlClient
Imports System.Web.Services
Imports System.Web.Util
Imports System.EnterpriseServices

Public Class TransactionSamples
    Inherits WebService

    Private Const CONNECTION_STRING As String = _
        "Integrated Security=SSPI;database=xxxxxxx;server=xxxxxxxxxx"

    <WebMethod(TransactionOption:=TransactionOption.RequiresNew)> _
    Public Function InsertRecord(ByVal keyValue As Integer) As Integer
        Dim sql As String = "INSERT INTO TestTable(KeyField) VALUES (@KeyValue)"
        Dim cn As New SqlConnection(CONNECTION_STRING)
        Dim cmd As New SqlCommand(sql, cn)
        cmd.Parameters.AddWithValue("@KeyValue", keyValue)
        cmd.Connection.Open()
        Return cmd.ExecuteNonQuery()
    End Function

    <WebMethod(TransactionOption:=TransactionOption.RequiresNew)> _
    Public Function DeleteRecord(ByVal keyValue As Integer) As Integer
        Dim sql As String = "DELETE FROM TestTable WHERE KeyField = @KeyValue"
        Dim cn As New SqlConnection(CONNECTION_STRING)
        Dim cmd As New SqlCommand(sql, cn)
        cmd.Parameters.AddWithValue("@KeyValue", keyValue)
        cmd.Connection.Open()
        Return cmd.ExecuteNonQuery()
    End Function

    <WebMethod(TransactionOption:=TransactionOption.RequiresNew)> _
    Public Function UpdateRecord(ByVal keyValue As Integer) As Integer
        Dim sql As String = "UPDATE TestTable SET LastUpdated = GetDate() WHERE KeyField = @KeyValue"
        Dim cn As New SqlConnection(CONNECTION_STRING)
        Dim cmd As New SqlCommand(sql, cn)
        cmd.Parameters.AddWithValue("@KeyValue", keyValue)
        cmd.Connection.Open()
        Return cmd.ExecuteNonQuery()
    End Function

End Class

It’s hoped that you’re thinking this looks really simple. It is. Basically, you just need to add a reference

to EnterpriseServices and then use some minor attributes. All you need in order to make a method

participate in a transaction is to add the <WebMethod(TransactionOption := TransactionOption

.RequiresNew)> attribute. Actually, you can use any of the TransactionOptions, but in this case,

RequiresNew makes the most sense.

Flow-Through Transactions

One of the newest features we've come across is what's known as flow-through transactions. A thorough discussion of these could span multiple chapters on its own, but they're worth a brief mention. With the advent of Yukon, database logic can now be hosted inside the database server and run through the Common Language Runtime (CLR). Essentially, that means anything you can write in VB.NET or C# can be stored and used inside Yukon, so you could conceivably have a client-side transaction that updates a database table and calls a CLR stored procedure, which in turn updates a totally different database. CLR stored procedures run in a different process, so previously, even if you rolled back your client-side transaction, the work done by the CLR procedure was unaffected, and vice versa. Although the mechanism is fairly complex, ADO.NET 2.0 provides a way for transactions to flow through, so that a rollback or a commit spans the entire contents of the transaction, both the client-side code and anything executed by the CLR routine.

Getting System.Transactions to Work Correctly

Along the way, we’ve had some problems getting the new distributed transactions to work, most of which had little to do with the code itself and a lot to do with operating systems and service packs. The following table provides a quick rundown of what is necessary as of the time of this writing:

OS Requirements

- Windows 2000 (Professional): Service Pack 4 is required.
- Windows XP: Distributed transactions are disabled by default and must be enabled in order to use System.Transactions fully.
- Windows XP SP2: Install Hotfix Q828758.
- Windows 2003 (Server): Both COM+ and DTC are supported.

Summary

The ADO.NET team has made life a lot easier for a lot of people. In the past, distributed transactions were often reserved for the realm of "1337 k0d3rz" and the like. (This spelling, known as leetspeak, is often used by hackers to slip past filtering software; "1337 k0d3rz" is hacker-speak for "elite coders.") Many developers weren't even aware that distributed transactions were possible. Some tried rolling their own solutions, which often created more problems than they solved. Others buried their heads in the sand and pretended the problems didn't exist. Prior to COM+, distributed transactions were not much fun to work with. COM+ made things a lot easier. Then .NET simplified the process somewhat, but it also complicated it in the sense that you now had to learn the behavior of ADO.NET objects such as the DataSet and IDbDataAdapter.

Therefore, while many of the ADO.NET 2.0 features are evolutionary, the System.Transactions namespace packs a lot of bang for the buck. Not only does it make a lot of previously excruciatingly difficult functionality accessible to the average developer, it also makes that functionality pretty fun to use. With a consistent and intuitive interface for dealing with transactions and distributed transactions, there will no doubt be a lot more people using them. Furthermore, Microsoft has expanded support to providers such as Oracle and MSMQ, and support for the Windows file system is rumored to be in the pipeline.

There are similar rumors of other major players, such as IBM, extending this functionality to its popular DB2 database. As such, dealing with transactions and distributed transactions is about to get a whole lot more interesting.
