Channel: Niels Berglund

New T-SQL Features in SQL 11 / Denali – Error Handling


A couple of days ago I wrote my wish-list to Santa of what I wanted to see in the next version of SQL Server (SQL 11 / Denali). I was pleasantly surprised that I could find out for myself shortly after, as SQL Server Denali CTP1 was released during the PASS Summit.

I have literally just finished installing the next version of SQL Server (Denali / SQL 11) on a new VM – like 10 minutes ago – and I have done a quick check of the new features of SQL Server Denali (what I could find at least) against my wish-list.

So it seems that my autonomous transactions have not been implemented. That does not necessarily mean that they won’t be there in later releases, but for now it is a downer :(. In my list I also mentioned finally blocks. From what I can see that has not been implemented either, BUT something else has…

RAISERROR

No, RAISERROR is not anything new. We have used RAISERROR since the beginning of time to throw an error in SQL Server. When using RAISERROR we indicate either an error number or a message. If we raise based on a number, that error number has to exist in sys.messages. If we use a message instead, the error number we receive back is 50000, i.e. something like so:

RAISERROR('An error happened', 16, 1)

produced this:

Msg 50000, Level 16, State 1, Line 1
An error happened
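And if we instead want to raise by number, the number first has to be added to sys.messages. A minimal sketch (the message number 50001 and its text below are arbitrary examples of mine):

```sql
-- add a user-defined message to sys.messages first
EXEC sp_addmessage @msgnum = 50001,
                   @severity = 16,
                   @msgtext = 'Something bad happened';

-- now raising by number works, and we get 50001 back rather than 50000
RAISERROR(50001, 16, 1);
```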

TRY … CATCH

In SQL 2005 proper structured error handling was introduced using TRY … CATCH blocks. So instead of having to “litter” our code with SELECT @@ERROR statements, we could enclose our code in BEGIN TRY … END TRY followed by BEGIN CATCH … END CATCH. Something like so:

BEGIN TRY
  SELECT 'hello';
  SELECT 1 / 0;
  SELECT 'world'
END TRY
BEGIN CATCH
  --handle the error
END CATCH

… and life was good (IMHO the structured exception handling was one of the greatest new features in SQL Server 2005). However, we are never completely satisfied; we always want more. And what we wanted was to be able to handle the error, and then perhaps re-throw it (like we can do in other modern development languages). Up until SQL Server Denali / SQL 11 the only way to do that was to use RAISERROR. That would not have been so bad, apart from the fact that we are not allowed to raise an error with a system-defined error number, i.e. RAISERROR(8134 …). So instead we had to resort to various “hacks” to achieve what we wanted.

THROW

This has now been fixed in SQL Server Denali / SQL 11 by the introduction of THROW. THROW does not require that the error number being thrown exists in sys.messages, so you can raise an exception like so: “THROW 50001, ‘OOPS – something happened’, 1”. Notice how you do not define a severity when using THROW; all exceptions raised by THROW have a severity of 16.
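A quick sketch of the syntax (the error number and message are arbitrary examples of mine). One thing I noticed: the statement preceding THROW must be terminated with a semicolon:

```sql
PRINT 'before we throw';  -- the preceding statement must end with a semicolon
THROW 50001, 'OOPS - something happened', 1;
```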

The really great thing with THROW, however, is how you can use it like you would use THROW in other languages. In other words, you use it to re-throw an exception:

BEGIN TRY
  SELECT 'hello';
  SELECT 1 / 0;
  SELECT 'world'
END TRY
BEGIN CATCH
  --handle the error
  PRINT 'here we are handling the error'
  THROW
END CATCH

The above code snippet produces this output:

here we are handling the error
Msg 8134, Level 16, State 1, Line 3
Divide by zero error encountered.

I do not know about you, but I think this is fairly cool. I do still want finally blocks and autonomous transactions, but right now I take what I can get.

As I mentioned in the beginning of this post; I have just installed SQL Server Denali, and have not had time to do much “spelunking”. Stay tuned for more posts in the coming days. You should also check Simon Sabin’s blog, where he has quite a lot of SQL Server Denali coverage.


More T-SQL Error Functionality in Denali / SQL 11


In my previous post I wrote about the new THROW keyword in Denali / SQL 11. Having played around a bit more with Denali, I wanted to write some additional things about THROW and its relation to RAISERROR.

RAISERROR

First some background / overview of RAISERROR:

  • RAISERROR allows you to throw an error based on either an error number or a message, and you can define the severity level and state of that error:
     RAISERROR(50001, 16, 1);
     --or
     RAISERROR('Ooops', 16, 1);
     
  • If you call RAISERROR with an error number, that error number has to exist in sys.messages.
  • You can use error numbers between 13001 and 2147483647 (it cannot be 50000) with RAISERROR.

As I mentioned in my previous post, RAISERROR has been around since forever – and it works fairly well. One of the major drawbacks with RAISERROR – as I also wrote in my previous post – is that it cannot be used to re-throw an error we might have trapped in a structured error handling block. Or rather, this may not be so much a RAISERROR issue as the fact that SQL Server has not previously supported the notion of re-throwing an error. Be that as it may, there are other drawbacks with RAISERROR which I will mention later in this post.

THROW

In Denali / SQL 11 Microsoft introduces the THROW keyword, which allows us to re-throw an exception caught in an exception handling block. Some characteristics of THROW:

  • Using THROW you can throw a specific error number as well as message:
    THROW 50000, 'Ooops', 1;
  • When using THROW you have to define an error number, a message and a state, unless you re-throw an exception.
  • The error number does not have to exist in sys.messages, but it has to be between 50000 and 2147483647.
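Putting the two side by side (the numbers and messages below are arbitrary examples of mine):

```sql
-- RAISERROR: severity is explicit; raising by message comes back as error 50000
RAISERROR('Ooops', 16, 1);

-- THROW: number (>= 50000), message and state are all required,
-- severity is always 16, and 50001 need not exist in sys.messages
THROW 50001, 'Ooops', 1;
```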

So, THROW looks fairly cool, but what are the drawbacks with RAISERROR I mentioned above? Well, for one – beginning with Denali / SQL 11 RAISERROR is being deprecated, i.e. it will eventually be removed from SQL Server. Another reason has to do with transactions and error handling.

UPDATE:  According to Aaron Bertrand, in his post here, it is only some very old RAISERROR syntax that is being deprecated. 

XACT_ABORT

As every T-SQL programmer worth his (or her) salt should know, an exception does not roll back a transaction by default (ok, ok, it does depend on severity level to an extent – but a “normal” exception does not roll back a tran). I.e. the following code would cause two rows to be inserted in the table t1:

 --first create a test table which we will use throughout the code samples
 CREATE TABLE t1 (id int primary key, col1 nvarchar(15));
 --now onto the 'meat'
 BEGIN TRAN
 INSERT INTO t1 VALUES(1, 'row1');
 --emulate some error, this will indeed cause an exception to happen,
 --but the processing will continue
 SELECT 1 / 0
 INSERT INTO t1 VALUES(2, 'row2')
 COMMIT
 

We can indicate to SQL Server that we want “automatic” rollback of transactions when an exception happens by setting XACT_ABORT ON. This will cause a rollback to happen if a system exception happens. So, based on the example above, no rows will be inserted when the code below executes:

 SET XACT_ABORT ON
 BEGIN TRAN
 INSERT INTO t1 VALUES(3, 'row3');
 SELECT 1 / 0
 INSERT INTO T1 VALUES(4, 'row4')
 COMMIT
 

However, what happens if the user throws an exception using RAISERROR? In that case no rollback happens, i.e. RAISERROR does not honor the XACT_ABORT setting:

 SET XACT_ABORT ON
 BEGIN TRAN
 INSERT INTO t1 VALUES(5, 'row5');
 --the user raises an error, but the tx will not roll back
 RAISERROR('Oooops', 16, 1)
 INSERT INTO t1 VALUES(6, 'row6')
 COMMIT
 

This can catch developers out and is in my opinion a fairly severe drawback. So with the introduction of Denali / SQL 11 and the THROW keyword, Microsoft has tried to fix this by making THROW honor XACT_ABORT:

 SET XACT_ABORT ON
 BEGIN TRAN
 INSERT INTO t1 VALUES(7, 'row7');
 --the user raises an error, and the tx will roll back
 THROW 50000, 'Ooops', 1
 INSERT INTO t1 VALUES(8, 'row8')
 COMMIT
 

When you run the code above, you will see that the transaction is indeed rolled back and no rows are inserted.

So developers, “go forth” and THROW exceptions in SQL Server Denali / SQL 11.

 


You can subscribe to my RSS feed at: http://feeds.feedburner.com/manageddata.

 

I am also at twitter as @nielsberglund

Beginners F# Resources


This post is more a reminder to myself of where to find online resources when learning F#. If anyone else finds it useful, so much the better. And if anyone out there has other online resources, please leave a comment and I will include them. So, in no particular order:

Finally, a list like this would be incomplete without the link to the Man himself: Don Syme; http://blogs.msdn.com/b/dsyme/

UPDATE: added Brian McNamara to the list November 26 (I don’t know how I missed him initially)

 



 

SQL Server Denali CTP 1 SUX ….


.. from a relational developer's perspective!!

Well, the title may be a bit harsh, but at least it grabbed your attention – did it not?! :)

A week ago, or so, I wrote a wish list to Santa for Denali from a relational developer's perspective. In that wish list I wrote that there has been fairly little love for relational SQL developers in the recent versions of SQL Server, and that I hoped in this version (i.e. Denali) Microsoft would “go back to the roots” and give us developers some new stuff.

So I downloaded the CTP when it became available, and have been playing around with it for a bit, in order to see what new “stuff” I could find and how it stacked up against my wish list:

  • Autonomous transactions – not a whiff of it :(
  • Enhancements to SQLCLR – Denali is still loading version 2.0.50727 of the runtime (i.e. the original – SQL 2005 – version). So nothing here either, and they have not even added Microsoft.SqlServer.Types (for the geo and hierarchy types) to the blessed list. This (lack of SQLCLR enhancements) is probably the one thing that saddens me the most – it seems that after all the initial hoopla and fanfare about SQLCLR when it was introduced in SQL Server 2005, Microsoft has decided not to fulfil its potential. :( :(
  • Finally blocks – well, we do not have finally blocks but we now have a proper way of throwing and re-throwing exceptions; the THROW keyword. I wrote about it here and here. So at least this is something.
  • Other T-SQL enhancements – this is an area where there are at least a couple of new things: SEQUENCE and OFFSET. Those are cool and useful, and Aaron B wrote about them here and here. But this is still not very much, and there is no evidence that Microsoft wants to continue to enhance T-SQL as a first-class development language (as they have stated in the past).

So, the report card does not look that good and that’s the reason for the title of this post. Granted, there are things that are in the cards but not included in this CTP; things like:

  • Column storage – however, that is more a BI feature, but it will be usable in the OLTP world as well.
  • FileTable – a way of storing files in SQL Server. It looks like FileStream v.NEXT or (do I dare say it) WinFS (now I have most certainly condemned this to death). It looks interesting, but – as I said – not in this CTP.

As you can gather from the above, I am not that stoked about Denali. I hope later CTP’s will bring more things, but somehow I doubt it.

What is your take on this? Are you happy with what Denali gives you (from a relational developer's perspective), and if not – what would you like to see included? Answers in the comments please.

Using F# in SQLCLR


Recently I have become very interested in F# and I am at the moment trying to get to grips with it. It is definitely a different beast than C#, but so far I like it – a lot!

Anyway, I am a SQL nerd, and many moons ago I was very heavily involved in SQLCLR (for those of you who don’t know what that is: it is the ability to run .NET code inside the SQL Server engine, first introduced with SQL Server 2005). So I thought it would be a “giggle” to see if I could get some F# code running inside SQL Server.

I created the simplest of the simple F# dlls. SQLCLR requires that you have a public class and that your publicly exposed SQLCLR methods are static, so my F# code looked like so:

namespace ManagedData.Samples.FSharp
  type SqlClr =
    static member Adder a b = a + b
    static member Factorial n =
      match n with
      | 0  -> 1
      | _ -> n * (SqlClr.Factorial( n - 1))

As you can see my class is extremely advanced (not); it has two methods:

  • The canonical Adder method (every SQLCLR dll has to have an Adder method, it’s the law – nah, I’m just kidding :) ), which takes two integers and returns an integer.
  • A factorial method, which takes an integer and calculates the factorial from that.

By the way, any pointers about how to write efficient F# code are very welcome.

Having written and compiled the code, it was time to deploy! When running .NET code in SQL Server, you need to deploy your assembly to the database you want to execute your code in, and SQL Server will actually load the assembly from the database. In fact most assemblies are loaded from the database, even quite a few of Microsoft’s own system assemblies which normally are loaded from the GAC. There are only about 13 system assemblies that are allowed to be loaded from the GAC – these are known as the “blessed list”. You also need to create T-SQL wrapper objects (procedures, functions, triggers, etc.) around the methods you want to publicly expose.

In my SQL Server 2008 R2 instance I created a database into which I wanted to deploy my F# assembly, and then it was time to deploy. You can deploy in several ways; the easiest is something like this (in the database you want to use):

CREATE ASSEMBLY fsasm
FROM 'c:\repos\F#\testcode\fssqlclr\fslib\bin\debug\fslib.dll'
WITH permission_set = SAFE;
GO

The problem with the code above is that F# projects have a dependency on the assembly FSharp.Core.dll, so when I tried to deploy my assembly as per above, I got an exception. What I had to do was to deploy FSharp.Core.dll to my database first:

CREATE ASSEMBLY FSharpCore --this needs a different name than our own assembly
FROM 'C:\path to ...\FSharp.Core.dll'
WITH permission_set = UNSAFE;
GO

Notice the use of permission_set = UNSAFE; this is to tell SQL Server that I know what I am doing :) and that SQL Server should keep from doing a lot of validation. When I had catalogued the FSharp.Core.dll assembly I had no problems deploying my assembly to the database.

All that remained to do now was to create the T-SQL wrapper object(s) around my F# methods. This is done with “normal” CREATE ... syntax. The code for my factorial looks like so:

CREATE FUNCTION FsFactorial(@x int)
RETURNS int
EXTERNAL NAME fsasm.[ManagedData.Samples.FSharp.SqlClr].Factorial;
GO

This also went without problems, so now it is “crunch-time”. Can I execute an F# method in SQLCLR?

SELECT dbo.FsFactorial(4);

Lo and behold, it executed and I received 24 back! I had just now executed F# running inside SQL Server!!

So, what does this prove? Nothing really :) , it was just an exercise from me to see if it could be done. However, F# is really suitable for quite a few tasks you would want to use SQLCLR for, so it now gives a database developer another tool in his tool-belt.

If anyone is interested in the full code for this, please drop me a comment and I’ll email it to you.

TPL Dataflow, Axum v.NEXT?


At PDC 2010 Microsoft showed the new Async features of coming C# (and VB.NET) versions, and quite a lot has been written about it already. Part of the Async CTP is TPL Dataflow, and this has gone somewhat un-noticed.

TPL Dataflow is a library for building concurrent applications. It utilises an actor/agent-oriented design via primitives for in-process message passing, dataflow, and pipelining. It looks and feels a bit like Axum, and one can wonder if TPL Dataflow will be the productization (is this a word?) of Axum, especially as Axum’s future seems a bit unclear at the moment.

I am at the moment writing some test-code for TPL Dataflow, which I will post as soon as I have tidied it up a bit. In the meantime, Matt Davey has quite a few posts about TPL Dataflow on his blog. So if you are interested, go and have a look.



F#, Mono and Mac


This is a first post about my experiences with running F# and Mono on a Mac. 

In a previous post I wrote about how I have started to play with F#. As that post also covered SQLCLR it was obvious I was on Windows. Even though I make my living from development in a Windows environment, my main machine is a MacBook, and I run OSX as my main OS. I have previously also been running Linux (ArchLinux) on this machine as my main OS. Naturally I have heard about Mono (and also installed it a couple of times – and quickly un-installed again :) ), but I have not really done anything with it. I have always run Windows in a VM on my MacBook for development etc. However after the announcement that F# was going Open Source, and Tomas P posted about his F# MonoDevelop plug-in, I decided that I should have a look at what it would be like to do F# “stuff” in OSX.

This is what I did:

  1. Downloaded Mono from here.
  2. Downloaded F# from here. You want to download the zip file.

Having downloaded what I thought was necessary (I decided to hold off with MonoDevelop until I had everything running), I started the installation process. Installing Mono was straightforward: just mount the .dmg and then run the .pkg file. The only slight issue after installation was where it had been installed. Mostly for my own reference for later installations; Mono is located at: /Library/Frameworks/Mono.framework.

After I had installed Mono, I copied the bin directory from the unzipped F# file to a directory I created in the same root folder as where Mono was: /Library/Frameworks/FSharp. I copied the install-mono.sh file to the FSharp directory and was ready to start the installation. Fortunately before I executed the install-mono.sh file, I read the comments in the file. At this stage I realised I had not downloaded everything necessary.

One of the F# dlls, FSharp.Core.dll, needs to be installed in the GAC. In order to do that, the dll needs to be re-signed with the mono.snk key. The installation file mentions how you can download the key file using wget. As I did not have wget, I found a link to it and downloaded it by right-clicking on the link and choosing “Save Link As …”. Once again, mostly for my future reference, the file can be found at: http://github.com/mono/mono/raw/master/mcs/class/mono.snk (just right-click and choose “Save Link As …”). I saved it into the F# root folder (the same folder where install-mono.sh is).

So, now everything should be ready to go. I executed the install file and promptly got an error saying that the FSharp.Core.dll could not be installed in the gac. Hmm, not good! Fortunately the error message mentioned something about a possible permission error, so I looked at the permissions on the gac folder (../Mono.framework/Versions/2.8/lib/mono/gac), and sure enough – I did not have write permissions. I gave myself write permissions, and re-ran the installation and everything went OK. Cool!!

After this it was time to test it out. From the F# bin directory I ran the following from a terminal window to execute the compiler: mono fsc.exe. It seemed to work, as I got an error back from the compiler (it complained about missing input – expected, as I gave it none).

I then tried the interactive window: mono fsi.exe, and wrote some simple test code. That worked as well!! So I am now well on the way to running (and learning) F# on Mono. Next step is to install MonoDevelop and Tomas P’s plugin for F#. Stay tuned ….

F#, Mono and Mac – Take II


So yesterday I wrote about how I have started using F# and Mono on my MacBook.

I wrote about how I downloaded the F# bits, unzipped and put them in a specific directory I had created. Today after having browsed around a bit more I realized I had done it the hard way. To install the required bits for F# for Mac, you only have to download a zip file with an install package for Mac from the F# Cross Platform site on CodePlex. The actual zip-file for the November 2010 CTP is here.

After you have downloaded the file you unzip it and run the .pkg file. This takes care of everything; no re-signing with the .snk file etc. The added benefit of installing from the .pkg file is that a couple of F# compiler dll’s are automatically gac:ed (they are needed if you want to run the F# plugin for MonoDevelop), and aliases are created for the F# compiler and the F# interactive window.


Transactions in SQL Server (take 2956)


Transactions in SQL Server seem to be a difficult topic to grasp. This weekend I came across a blog-post where the poster showed a “solution” to the “The ROLLBACK TRANSACTION request has no corresponding BEGIN TRANSACTION” error we sometimes see when various stored procedures call each other. The solution (even though it masked the error in question) did not get it quite right. So I thought I would make a post about the subject.

Nested Transactions in SQL Server and the Evil @@TRANCOUNT

In SQL Server we have the @@TRANCOUNT variable, which gives us the number of active transactions in the session – or at least that’s what we might believe. Take this extremely simple code:

SET NOCOUNT ON
CREATE TABLE #t (col1 varchar(15))
PRINT @@TRANCOUNT
BEGIN TRAN
PRINT @@TRANCOUNT
INSERT INTO #t VALUES('HELLO')
BEGIN TRAN
PRINT @@TRANCOUNT
INSERT INTO #t VALUES('WORLD')
COMMIT
PRINT @@TRANCOUNT
COMMIT
PRINT @@TRANCOUNT

You should see something like this:

0
1
2
1
0

I.e. it seems like the transaction count increases with each BEGIN TRAN and decreases with each COMMIT. And if you were to SELECT * FROM #t you would see two rows returned. So far so good; so what is wrong with @@TRANCOUNT then? Well, let us change the code slightly (don’t forget to drop #t if you copy and paste this code):

SET NOCOUNT ON
CREATE TABLE #t (col1 varchar(15))
PRINT @@TRANCOUNT
BEGIN TRAN
PRINT @@TRANCOUNT
INSERT INTO #t VALUES('HELLO')
BEGIN TRAN
PRINT @@TRANCOUNT
INSERT INTO #t VALUES('WORLD')
COMMIT
PRINT @@TRANCOUNT
ROLLBACK
PRINT @@TRANCOUNT

If you now were to (don’t do it immediately) SELECT * FROM #t, how many rows would you get back – 0, 1, or 2? Seeing how the @@TRANCOUNT is increasing with every BEGIN TRAN and decreasing with COMMIT / ROLLBACK, it is understandable if your answer is 1:

  • we start a transaction and insert a row
  • we then start another transaction and insert a second row
  • we call commit after the second insert (the inner transaction)
  • finally we do a rollback, on the “outer” transaction

As we after the second BEGIN TRAN can see @@TRANCOUNT being 2, we could assume that the commit would commit the second insert. However, we all know what happens when we assume (now would be a good time to do the SELECT) …

Right, the SELECT did not return any rows at all, so it is probably fair to say that we did not have multiple transactions, even though @@TRANCOUNT showed us more than one. So we might then assume (keep in mind what I’ve said about assume) that the reason we rolled back was that ROLLBACK was the last statement. Let us switch the COMMIT on line 10 with the ROLLBACK on line 12 (we now have ROLLBACK on line 10 and COMMIT on line 12) and execute. WHOA – we got a big fat exception; what happened here? To answer that, let us look a bit closer at the main parts of transaction control in your code.

BEGIN TRAN, COMMIT and ROLLBACK

When you execute BEGIN TRAN in T-SQL, SQL will look around in the execution context of your session and see if there already exists a transactional context. If not, SQL will start a new transaction. If there is a transaction already, SQL will enlist in this transaction. However in both cases SQL will increase the @@TRANCOUNT variable.

Then, when you execute a COMMIT, SQL will not immediately commit the transaction but will decrease the transaction count by 1. If the transaction count has reached 0 due to the commit, a real commit will take place. OK, so far so good, but this does not explain the error we received when switching the COMMIT and ROLLBACK statements; if it works as described, we should have committed?

Ah yes – however, a ROLLBACK does not just decrement the transaction count; it sets it to 0 immediately, and as the transaction count is now 0, a rollback will happen. So in our second example we are seeing something similar to when we – in stored procs – get the “The ROLLBACK TRANSACTION request has no corresponding BEGIN TRANSACTION” error.
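This is easy to verify for yourself; the snippet below shows ROLLBACK taking the transaction count straight from 2 to 0:

```sql
BEGIN TRAN
BEGIN TRAN
PRINT @@TRANCOUNT  --2
ROLLBACK           --does not decrement by one...
PRINT @@TRANCOUNT  --...but goes straight to 0
```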

Stored Procedures and Transactions

It is quite common to write procs something like so:

CREATE PROC sp2
AS
SET NOCOUNT ON
BEGIN TRAN
BEGIN TRY
  -- do some stuff
  -- then if all is OK we commit
  COMMIT TRAN
  RETURN 0;
END TRY
BEGIN CATCH
  DECLARE @errMSg varchar(max);
  SELECT @errMSg = ERROR_MESSAGE()
  ROLLBACK TRAN
  RETURN 999; --things have gone very wrong
END CATCH

Then we have a similar proc, looking almost the same, but one which, in addition, calls into sp2:

CREATE PROC sp1
AS
SET NOCOUNT ON
BEGIN TRAN
BEGIN TRY
  -- do some stuff
  -- do some more stuff by calling into sp2
  EXEC sp2;
  -- then if all is OK we commit
  COMMIT TRAN
  RETURN 0;
END TRY
BEGIN CATCH
  DECLARE @errMSg varchar(max);
  SELECT @errMSg = ERROR_MESSAGE()
  ROLLBACK TRAN
  RETURN 999; --things have gone very wrong
END CATCH

This is when we will potentially see the error mentioned before. We call sp1; when sp1 is called there is no transactional context around, so SQL creates a new transaction. Then we go on to call sp2 from sp1. At the BEGIN TRAN call in sp2, there exists a transactional context, so SQL enlists us in that context.

If all now goes well and we call COMMIT in sp2, the commit causes the transaction count to be decreased to 1 – but no “real” commit happens. So when we subsequently call COMMIT in sp1, we decrement the transaction count to 0, and we are committed.

In the case when things go wrong in sp2 and we call rollback, the transaction count is immediately set to 0, and a rollback happens. When we come back to sp1, SQL sees that we had a transaction in sp1, but there are no transactions around, and we get the error discussed. If we then go on and do a rollback (as in our code) – we get additional errors.

Solution

A solution to the problem is to use the “evil” @@TRANCOUNT to see if there are any transactions around. If there aren’t any, we start a transaction. If there is a transaction already, we don’t do anything, and we let the existing transaction handle everything:

CREATE PROC sp2
AS
DECLARE @tranCount int = @@TRANCOUNT; --I'm using SQL2008 here
SET NOCOUNT ON
IF(@tranCount = 0) --no tx's around, we can start a new
  BEGIN TRAN
BEGIN TRY
  -- do some stuff
  -- then if all is OK we commit
  --if the variable @tranCount is 0,
  -- we have started the tx ourselves, and can commit
  IF(@tranCount = 0 AND XACT_STATE() = 1) --XACT_STATE - just to be on the safe side
    COMMIT TRAN;

  RETURN 0;
END TRY
BEGIN CATCH
  DECLARE @errMSg varchar(max);
  SELECT @errMSg = ERROR_MESSAGE()
  --if the variable @tranCount is 0,
  -- we have started the tx ourselves, and can rollback
  IF(@tranCount = 0 AND XACT_STATE() <> 0) --XACT_STATE - just to be on the safe side
    ROLLBACK TRAN;

  --tell an eventual calling proc that things have gone wrong
  --and the calling proc should rollback
  RETURN 999;
END CATCH

Obviously the calling proc would have similar code to decide if to start a tran or not.
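For completeness, this is roughly what that calling proc could look like (a sketch of mine using the same pattern; note that since sp2 traps its own errors and returns 999, the caller has to check the return value):

```sql
CREATE PROC sp1
AS
DECLARE @tranCount int = @@TRANCOUNT;
DECLARE @rc int;
SET NOCOUNT ON
IF(@tranCount = 0) --no tx's around, we can start a new
  BEGIN TRAN
BEGIN TRY
  -- do some stuff
  EXEC @rc = sp2; --sp2 returns 999 if things went wrong
  IF(@rc <> 0)    --bail out to the CATCH block
    RAISERROR('sp2 failed', 16, 1);

  IF(@tranCount = 0 AND XACT_STATE() = 1)
    COMMIT TRAN;

  RETURN 0;
END TRY
BEGIN CATCH
  IF(@tranCount = 0 AND XACT_STATE() <> 0)
    ROLLBACK TRAN;

  RETURN 999;
END CATCH
```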

In the above scenario we let the “outer” proc handle all the transactional control. Sometimes you are in a situation where – if things go wrong in the “inner” proc (sp2 in our case) – you do not want to roll back everything done, but only what was done in the inner proc. For such a scenario, you can use named savepoints:

CREATE PROC sp2
AS
DECLARE @tranCount int = @@TRANCOUNT; --I'm using SQL2008 here
SET NOCOUNT ON
IF(@tranCount = 0) --no tx's around, we can start a new
  BEGIN TRAN
ELSE --we are already in a tx, take a savepoint here
  SAVE TRANSACTION sp2 --this is just a name

BEGIN TRY
  -- do some stuff
  -- then if all is OK we commit
  --if the variable @tranCount is 0,
  -- we have started the tx ourselves, and can commit
  IF(@tranCount = 0 AND XACT_STATE() = 1) --XACT_STATE - just to be on the safe side
    COMMIT TRAN;

  RETURN 0;
END TRY
BEGIN CATCH
  DECLARE @errMSg varchar(max);
  SELECT @errMSg = ERROR_MESSAGE()
  --if the variable @tranCount is 0,
  -- we have started the tx ourselves, and can rollback
  IF(@tranCount = 0 AND XACT_STATE() != 0) --XACT_STATE - just to be on the safe side
    ROLLBACK TRAN;
  ELSE IF (@tranCount > 0 AND XACT_STATE() <> -1)
    ROLLBACK TRANSACTION sp2 --we are rolling back to the save-point

  --tell an eventual calling proc that things have gone wrong
  --and let the calling proc decide what to do with its parts
  RETURN 999;
END CATCH

Personally, I do not use named save-points that much as they cannot be used together with linked servers, and we – unfortunately – are using linked servers a lot.

A final note about named save-points; they are not the same thing as beginning / committing / rolling back a transaction with a name:

SET NOCOUNT ON
CREATE TABLE #t (col1 varchar(15))
BEGIN TRAN t1
INSERT INTO #t VALUES('HELLO')
ROLLBACK TRAN t1

Beginning a transaction with a name, is for most parts just a convenience. It has no effect on nesting (unless you use named save points), and SQL Server Books OnLine says this about naming of transactions:
“Naming multiple transactions in a series of nested transactions with a transaction name has little effect on the transaction. Only the first (outermost) transaction name is registered with the system. A rollback to any other name (other than a valid savepoint name) generates an error. None of the statements executed before the rollback is, in fact, rolled back at the time this error occurs. The statements are rolled back only when the outer transaction is rolled back”.
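A quick illustration of that quote; only the outermost name is registered, so rolling back to the inner name generates an error, while rolling back to the outermost name works:

```sql
BEGIN TRAN outerTx
BEGIN TRAN innerTx      --this name is not registered anywhere
--this generates an error: no transaction or savepoint of that name
ROLLBACK TRAN innerTx
--this works, outerTx is the outermost transaction name
ROLLBACK TRAN outerTx
```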

If you have questions, observations etc., please feel free to leave me a comment, or drop me an email.

First Impressions Microsoft BUILD & Win 8


A while ago I finished watching the live stream of the first keynote (yes, there will be one tomorrow as well) at Microsoft BUILD. Having attended / presented at quite a few of these kinds of events – and being somewhat jaded (well OK then, a lot jaded) – I must still say that I am impressed.

Like quite a few other developers, I have been fairly worried about what will happen when Win 8 comes; .NET/WPF/Silverlight is dead – long live HTML, etc. But at least for now it seems that the fears have been unfounded. I.e., the .NET we know and love is still there, Silverlight as well (come to think of it, nothing much was said about WPF). And it seems pretty straightforward to build the new “Metro” style apps using the tools we know.

What do I think then: well, Win 8 promises to be really, really slick and cool – but we have been here before (Longhorn anyone?), so let’s wait and see until we get to RC stages. However, the whole WinRT, i.e. the underlying “goo” of Windows (graphics, networking, etc.) being exposed to all different types of programming languages – native, .NET, HTML/JavaScript, etc. – seems very, very cool. I can’t wait to get my hands on some bits and start playing around with this. Speaking of that; bits will apparently be released later at: http://bit.ly/nX2K3a.

So at this stage I am fairly optimistic, and I would not rule out myself running Win 8 on a couple of machines here at home. I am looking forward to the keynote tomorrow, where they will talk more about the development experience and hopefully drill deeper into Visual Studio.Next.

Having finally seen the session-list, there are some really interesting sessions during the week. I really, really hope these ones will be videoed:

  • F# 3.0: data, services, Web, cloud, at your fingertips, by Don “Mr F#” Syme: bit.ly/n16Xyu
  • What’s new in .NET Framework 4.5: bit.ly/n7tUKU
  • Lessons learned designing the Windows Runtime: bit.ly/pd3XZN
  • Deep dive into the kernel of the .NET Framework: http://bit.ly/nX5czN
  • Using the Windows Runtime from C++: http://bit.ly/r8Iyq8
  • Using the Windows Runtime from C# and Visual Basic: http://bit.ly/r4Q1cT

That’s all for now “folks”. Will hopefully post more as the week and the conference goes by.

SqlClrProject on GitHub


As some of you may know, I – once upon a time – developed a project (VS add-in, templates, etc) for automatic deployment of CLR assemblies to SQL Server: SqlClrProject. That project has been dormant now for a couple of years, but I now and then get requests for where it can be downloaded from (I had it on CodePlex, but had to take it down as I didn’t publish the source code).

A while ago I decided to start to use Git and GitHub as source control (I have been using SVN since forever), and as part of the “getting to grips” with Git, I created a repo for SqlClrProject on GitHub. So the source for the project is now available on GitHub. (https://github.com/nberglund/sqlclrproject)

If you are interested in it, fork it and play with it. The state of it is that it “should” work on VS 2008 / SQL 2008. It most likely will work on VS 2010 as well. And of course the standalone deployment executable will work regardless of VS version.

Debugging in SQL Server 2008


As good as SQL 2005 was (well, still is), one disappointment was that you needed Visual Studio if you wanted to debug your stored procedures. Seriously, what was MS thinking when they did that, especially as in SQL 2000, Query Analyzer had debug capabilities?!!

Anyway, today I was playing around, errm – doing serious stuff in the RC0 release of SQL Server 2008, and just by coincidence noticed that there is a debug menu entry in the toolbar (how blind can one be – I must have looked at that toolbar quite a few times). So I wrote some T-SQL code, put in a couple of breakpoints and hit Alt + F5, and lo and behold – my bp’s were hit and I could step through the code. I then wrote a very basic stored proc, wrote some code that called the proc, put a bp at the call into the proc and executed. When the execution stopped at the bp I hit F11 and I stepped into the proc – WoHoo!!! Call me sad, but stuff like this makes me happy!!

Now, let’s hope that MS will keep this feature in and not pull it at the last minute – anyone remember the XQuery designer in one of the very early SQL 2005 beta’s?

Twitter


As the saying goes: “You can’t teach an old dog new tricks”, but… Even though I am a really old dog, I hope I’ll be able to learn a bit about “social networking”, and therefore I created an account on Twitter a couple of days ago. Hopefully I’ll be able to be more active on Twitter than I’ve been here at the blog. Well, that should not be too hard, seeing how infrequently I post here.

Anyway, my twitter account is @nielsberglund, so if you are interested you know where to go.

SQL Server 2008 R2 August CTP


Yesterday I downloaded and installed the August CTP of SQL Server 2008 R2, and today I played around with it for a while. So, what are my impressions?

Well, from the perspective of a relational dev and internals guy, my immediate response is “yawn – where is the beef?”, i.e. there is not much there, and I doubt we will see much more in coming releases. However, if I were a BI / reporting guy I’d be over the moon, and definitely look forward to future CTPs! Even if I were a (wait for it –) DBA I would be fairly interested.

I will let you decide for yourself what is interesting for you, but one thing that is not in the CTP at the moment but is promised (and keeps me interested) is StreamInsight (based on Complex Event Processing). This will be part of SQL Server 2008 R2. Coming from the financial industry and dealing with message-based applications (that’s why I love SQL Server Service Broker), this is something I am really interested in. So, even if you are a T-SQL / internals guy, do not despair – there may be something for us as well.

Stream and Complex Event Processing from a Relational Guy's Eye


This is a re-post from my previous blog. However, as that blog has now gone to the big blog repository in the sky (or wherever blogs go to when they are no more), I decided to repost this, seeing how CEP and StreamInsight are becoming more and more popular.

This is the first in (hopefully) a series of blog posts where I will be looking into Microsoft’s new technology for Complex Event Processing (CEP); StreamInsight (SI). This post is an overview of the problem domain that Microsoft tries to target SI at. As I am a relational database guy at heart, I look at it from a relational guy’s perspective.

Relational Database Systems

The relational database system (RDBMS) is the backbone of almost any enterprise application today, and the various RDBMSs are highly optimized to deliver the best performance available for their particular types of applications. The type of application an RDBMS is (mostly) optimized for is one where updates to the data don’t happen that frequently (i.e. not 100,000s of updates per second) and where queries run against what can be described as a snapshot of the database.

Over the last couple of decades we have seen the emergence of types of applications that have somewhat different requirements and characteristics than a typical RDBMS-based application. Examples of these types of applications are OLAP and data mining, as well as storage and querying of new data types such as XML, media and spatial. This has required the RDBMS to add new functionality as well as extend existing functionality.

Streaming Data

Over the last few years yet another type of data-intensive application has arrived on the scene, and these applications have somewhat different requirements than “just” being able to query “static” data. These are applications where data can potentially arrive with very high frequency and we may need to run queries against this data continuously, and/or derive new types of data from the arriving data (change the schema of the original data) – which we may also want to run queries against. I am talking about Stream Data Processing (SDP) and Complex Event Processing (CEP) applications.

The main differences between a typical RDBMS application and a SDP/CEP application are:

  • The data in a SDP/CEP application can be never ending. I.e. the data continuously arrives.
  • When we query data in a RDBMS app, we do it against a static snapshot of the data at that particular time.  The data is being evaluated once – and output once.
  • Querying against SDP/CEP data however is typically done in a continuous fashion. The data is continuously evaluated and output.

RDBMS vs. SDP/CEP

We can use RDBMS systems for SDP/CEP applications; we load the incoming data into the database and then we run queries continuously against the stored data. This will work OK, but we will run into some issues with it:

  • By storing the data before we query it, we are adding latency as per Figure 1 below.
  • We may have to write some convoluted queries in order to be able to query the data in a continuous manner.

Figure 1: RDBMSs Handling Stream Data

So, even if we can use RDBMSs for SDP/CEP-type applications, it is fairly obvious that this may not be the best approach. Hence the rise of another type of management system for SDP/CEP applications: the Data Stream Management System (DSMS).

DSMS systems work under the premise that we have some sort of server (running in memory), which serves up application(s) that handle the incoming data. The incoming data is fed to the application(s) by the use of input adapters. In the application(s) there are continuous queries running over the data from the input adapters. The results of the queries are then fed to output adapters, which serve up the data to the applications that need it. Figure 2 tries to illustrate a DSMS system.

Figure 2: General Overview of a DSMS

Depending on the DSMS, the query language may vary. Quite a few systems use languages that are fairly similar to SQL, whereas SI uses LINQ. As we can see from Figure 2, the main part of the DSMS runs in a low-latency environment, and it is only if we need any sort of look-up data loaded from an RDBMS that we may run into high-latency issues.

Complex Event Processing

So what is the difference between processing the streaming data and doing CEP? In CEP we look at the individual events, try to correlate them and look at the impact on a macro level. A typical example of this (quite a few DSMS systems use this as an example) is where we collect sensor signals from cars; let’s say each car sends out a signal every 30 seconds. This signal contains information about position, speed, road, lane in the road, etc. When analyzing these event signals we say that a car crash has happened if any given car reports the same position and zero speed for 4 consecutive signals. We have analyzed the individual events and from them derived a new event: a Complex Event.
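Purely as an illustration, the car-crash rule above can be approximated relationally with window functions. The CarSignal table and its columns here are made up for the example, and bear in mind that a real DSMS evaluates such a rule continuously over arriving events, whereas this query runs once over a stored snapshot:

```sql
--hypothetical table: CarSignal(CarId, Position, Speed, SignalTime)
WITH Lagged AS
(
  SELECT CarId, Position, Speed, SignalTime,
         --look back at the three previous signals per car
         LAG(Position, 1) OVER (PARTITION BY CarId ORDER BY SignalTime) AS Pos1,
         LAG(Position, 2) OVER (PARTITION BY CarId ORDER BY SignalTime) AS Pos2,
         LAG(Position, 3) OVER (PARTITION BY CarId ORDER BY SignalTime) AS Pos3,
         LAG(Speed, 1) OVER (PARTITION BY CarId ORDER BY SignalTime) AS Spd1,
         LAG(Speed, 2) OVER (PARTITION BY CarId ORDER BY SignalTime) AS Spd2,
         LAG(Speed, 3) OVER (PARTITION BY CarId ORDER BY SignalTime) AS Spd3
  FROM CarSignal
)
--4 consecutive signals with the same position and zero speed = crash
SELECT CarId, Position, SignalTime AS CrashDetectedAt
FROM Lagged
WHERE Speed = 0 AND Spd1 = 0 AND Spd2 = 0 AND Spd3 = 0
  AND Position = Pos1 AND Pos1 = Pos2 AND Pos2 = Pos3;
```

(LAG is itself a Denali / SQL 11 addition; on earlier versions you would need self-joins, which nicely illustrates the “convoluted queries” point above.)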

This was a very rudimentary explanation. To get a fuller (and much better and in-depth) explanation have a look at a series of blog posts by Tim Bass.

Finally

As mentioned at the very beginning; this was a repost, and in the original post I said that I would in the next instalment write about the architecture of StreamInsight. The blog disappeared before that, but look out for a post shortly here about the architecture.


What New Programmability Features Will There Be in SQL 11?


It is probably no secret that Microsoft is working hard on the next version of SQL Server. The rumour has it that it will be named SQL 11 (it apparently goes under the code name of Denali. Quiz: MS has used the Denali code name previously – what was it for? Answers in the comments).

Anyway, the SQL PASS summit is this coming week and another rumour says we might see a CTP being released during the conference. I, for one, cannot wait to see a CTP and see what new features it brings. Which brings me back to this post. If we look at some of the previous releases of SQL we can see that they have had a mixed bag of features for developers:

  • SQL 2005: HUGE; SQLCLR, Service Broker, DDL Events, PIVOT, CTE’s, XML, etc., etc.
  • SQL 2008: Not so much; the table type and TVP’s (which are cool), T-SQL enhancements (cool), some new data types, extended events, but not much more (unless you are a BI guy – which I am not).
  • SQL 2008R2: Even less; some enhancements to Service Broker, and StreamInsight, but that is basically it.

So, IMHO, it is now time for relational developers to get some love from Microsoft in this release of SQL Server. Seeing that the Christmas is soon upon us here is my wish list to Santa SQL:

  • Autonomous Transactions: nested transactions that are truly independent of the outer transaction.
  • Autonomous Transactions: see above (yes, I really, really do want this).
  • Enhancements to SQLCLR: I would love to be able to use TVP’s in SQLCLR
  • Finally blocks: we have had try...catch since 2005, it is now time to finish this and introduce finally.
  • Other T-SQL enhancements: I would love to see T-SQL get new features that would make it more like a “real” programming language.

This is my wish-list, I wonder how much of this we will see, if any. Post your own wish-lists in the comments please.

More about new features in SQL 11 / Denali


So yesterday I posted my wish-list for new programmability features in the upcoming release of SQL 11 / Denali.

Today I see that Simon S has posted about a new series of posts he will do, covering what is new in SQL 11. Knowing Simon, it will be really, really good. So if you are interested I suggest you keep your eyes open for his posts.

UPDATE: Ben C commented and said that CTP1 has been released (or something to that effect), and here is where it can be downloaded from.

New T-SQL Features in SQL 11 / Denali - Error Handling


A couple of days ago I wrote my wish-list to Santa what I wanted to see in next version of SQL Server (SQL 11 / Denali). I was pleasantly surprised that I could find out for myself shortly after; i.e. SQL Server Denali CTP1 was released during the PASS Summit. I have literally finished installing the next version of SQL Server (Denali / SQL 11) on a new VM, like 10 minutes ago, and I have done a quick check of the new features of SQL Server Denali (what I could find at least) against my wish-list.

So it seems that my autonomous transactions have not been implemented. That does not necessarily mean that they won’t be there in later releases, but for now it is a downer. In my list I also mentioned finally blocks. From what I can see that has not been implemented either, BUT something else has…

RAISERROR

No, RAISERROR is not anything new. We have used RAISERROR since the beginning of time to throw an error in SQL Server. When using RAISERROR we either indicate an error number or a message. If we were to raise based on a number, that error number had to exist in sys.messages. If we used a message instead, the error number we received back was 50000, i.e. something like so:

RAISERROR('An error happened', 16, 1)

produced this:

Msg 50000, Level 16, State 1, Line 1
An error happened
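Raising by number, on the other hand, requires that number to be registered in sys.messages first; a sketch (message number 50001 and its text are made-up examples):

```sql
--register a user-defined message; this is a one-off, it ends up in sys.messages
EXEC sp_addmessage @msgnum = 50001, @severity = 16,
                   @msgtext = N'An error happened in %s';

--now we can raise by number; the %s placeholder is filled in by the argument
RAISERROR(50001, 16, 1, 'some procedure');
```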

TRY … CATCH

In SQL 2005 proper structured error handling was introduced using TRY … CATCH blocks. So instead of having to “litter” our code with SELECT @@ERROR statements, we could enclose our code in BEGIN TRY END TRY followed by BEGIN CATCH END CATCH. Something like so:

BEGIN TRY
  SELECT 'hello';
  SELECT 1 / 0;
  SELECT 'world'
END TRY
BEGIN CATCH
  --handle the error
END CATCH

… and life was good (IMHO structured exception handling was one of the greatest new features in SQL Server 2005). However, we are never completely satisfied, we always want more. And what we wanted was to be able to handle the error, and then perhaps re-throw it (like we can do in other modern development languages). Up until SQL Server Denali / SQL 11 the only way to do that was to use RAISERROR. That would not have been so bad apart from the fact that we are not allowed to raise an error with a system-defined error number, i.e. RAISERROR(8134 …). So instead we had to resort to various “hacks” to achieve what we wanted.
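The most common of those hacks is to capture the error details in the CATCH block and feed them back into RAISERROR; a sketch of the pattern (note how the original error number, 8134, is lost – the caller only ever sees 50000 with the original text):

```sql
BEGIN TRY
  SELECT 1 / 0;
END TRY
BEGIN CATCH
  --capture the details of the caught error
  DECLARE @msg nvarchar(2048) = ERROR_MESSAGE(),
          @sev int = ERROR_SEVERITY(),
          @state int = ERROR_STATE();
  --"re-throw": the best RAISERROR can do is raise a new 50000 error
  --carrying the original message text
  RAISERROR(@msg, @sev, @state);
END CATCH
```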

THROW

This has now been fixed in SQL Server Denali / SQL 11 by the introduction of THROW. THROW does not require that the error number being thrown exists in sys.messages, so you can raise an exception like so: THROW 50001, ‘OOPS – something happened’, 1. Notice how you do not define a severity when using THROW; all exceptions raised by THROW have a severity of 16.
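For example (the error number and text here are just made up for illustration):

```sql
--no entry in sys.messages required; severity is always 16
THROW 50001, 'OOPS - something happened', 1;
```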

The really great thing with THROW, however, is how you can use it like you would use THROW in other languages. In other words, you use it to re-throw an exception:

BEGIN TRY
  SELECT 'hello';
  SELECT 1 / 0;
  SELECT 'world'
END TRY
BEGIN CATCH
  --handle the error
  PRINT 'here we are handling the error'
  THROW
END CATCH

The above code snippet produces this output:

here we are handling the error
Msg 8134, Level 16, State 1, Line 3
Divide by zero error encountered.

I do not know about you, but I think this is fairly cool. I do still want finally blocks and autonomous transactions, but right now I take what I can get.

As I mentioned in the beginning of this post; I have just installed SQL Server Denali, and have not had time to do much “spelunking”. Stay tuned for more posts in the coming days. You should also check Simon Sabin’s blog, where he has quite a lot of SQL Server Denali coverage.

More T-SQL Error Functionality in Denali / SQL 11


In my previous post I wrote about the new THROW keyword in Denali / SQL 11. Having played around a bit more with Denali, I wanted to write some additional things about THROW and its relation to RAISERROR.

RAISERROR

First some background / overview of RAISERROR:

  • RAISERROR allows you to throw an error based on either an error number or a message, and you can define the severity level and state of that error.
  • If you call RAISERROR with an error number, that error number has to exist in sys.messages.
  • You can use error numbers between 13001 and 2147483647 (it cannot be 50000) with RAISERROR.

As I mentioned in my previous post, RAISERROR has been around since forever – and it works fairly well. One of the major drawbacks with RAISERROR – as I also wrote in my previous post – is that it cannot be used to re-throw an error we might have trapped in a structured error handling block. Or rather, this may not be so much a RAISERROR issue as an issue of SQL Server not previously supporting the notion of re-throwing an error. Be that as it may, there are other drawbacks with RAISERROR which I will mention later in this post.

THROW

In Denali / SQL 11 Microsoft introduces the THROW keyword, which allows us to re-throw an exception caught in an exception handling block. Some characteristics of THROW:

  • Using THROW you can throw a specific error number as well as message:
    THROW 50000, 'Ooops', 1;
  • When using THROW you have to define both an error number and a message (and state), unless you re-throw an exception.
  • The error number does not have to exist in sys.messages but, it has to be between 50000 and 2147483647.
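The sys.messages difference between the two is easy to demonstrate; a sketch, assuming message number 50001 has not been registered:

```sql
--RAISERROR by number requires the message to exist in sys.messages,
--so this fails (assuming 50001 has not been registered)
RAISERROR(50001, 16, 1);

--THROW has no such requirement; this raises error 50001, severity 16
THROW 50001, 'No sys.messages entry needed', 1;
```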

So, THROW looks fairly cool, but what are the drawbacks with RAISERROR I mentioned above? Well, for one – beginning with Denali / SQL 11 RAISERROR is being deprecated, i.e. it will eventually be removed from SQL Server. Another reason has to do with transactions and error handling.

UPDATE: According to Aaron Bertrand, in his post, it is only some very old RAISERROR syntax that is being deprecated.

XACT_ABORT

As every T-SQL programmer worth his (or her) salt should know, an exception does not roll back a transaction by default (ok, ok, it does depend on severity level to an extent – but a “normal” exception does not roll back a tran). I.e. the following code would cause two rows to be inserted in the table t1:

--first create a test table which we will use throughout the code samples
CREATE TABLE t1 (id int primary key, col1 nvarchar(15));
--now onto the 'meat'
BEGIN TRAN
INSERT INTO t1 VALUES(1, 'row1');
--emulate some error, this will indeed cause an exception to happen,
--but the processing will continue
SELECT 1 / 0
INSERT INTO t1 VALUES(2, 'row2')
COMMIT

We can indicate to SQL Server that we want “automatic” rollback of transactions when an exception happens by setting XACT_ABORT. This will cause a rollback to happen if a system exception happens. So based on the example above, no rows will be inserted when the code below executes:

SET XACT_ABORT ON
BEGIN TRAN
INSERT INTO t1 VALUES(3, 'row3');
SELECT 1 / 0
INSERT INTO T1 VALUES(4, 'row4')
COMMIT

However, what happens if the user throws an exception using RAISERROR? In that case no rollback happens, i.e. RAISERROR does not honor the XACT_ABORT setting:

SET XACT_ABORT ON
BEGIN TRAN
INSERT INTO t1 VALUES(5, 'row5');
--the user raises an error, but the tx will not roll back
RAISERROR('Oooops', 16, 1)
INSERT INTO t1 VALUES(6, 'row6')
COMMIT

This can catch developers out and is in my opinion a fairly severe drawback. So with the introduction of Denali / SQL 11 and the THROW keyword, Microsoft has tried to fix this by making THROW honor XACT_ABORT:

SET XACT_ABORT ON
BEGIN TRAN
INSERT INTO t1 VALUES(7, 'row7');
--the user raises an error, and the tx will roll back
THROW 50000, 'Ooops', 1
INSERT INTO t1 VALUES(8, 'row8')
COMMIT

When you run the code above, you will see that the transaction is indeed rolled back and no rows are inserted.
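If you have run the four snippets above in order against the same t1 table, a quick check shows which transactions actually survived:

```sql
--rows 1 and 2 (no XACT_ABORT) and rows 5 and 6 (RAISERROR ignores
--XACT_ABORT) should be present; rows 3/4 and 7/8 were rolled back
SELECT id, col1 FROM t1 ORDER BY id;
```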

So developers, “go forth” and THROW exceptions in SQL Server Denali / SQL 11.

Beginners F# Resources


This post is more a reminder to myself of where to find online resources when learning F#. If anyone else finds it useful, so much the better. And if anyone out there has other online resources, please leave a comment and I will include them. So, in no particular order:

Finally, a list like this would be incomplete without the link to the Man himself: Don Syme
