I need to get two values from a complex SQL statement which returns a single
record and use those two values to update a single record in a table. In
order to assign those two values to variables and then use those variables
in the UPDATE statement, I created a cursor and used Fetch Next... Into.
This way, I only have to call the complex SQL once instead of twice.
This seems like the best way to go. However, I've always used cursors for
scrolling through resultsets. In this case, though, there is just a single
record being returned, and the cursor doesn't scroll.
Is that the most efficient way to go, or is there a better way to be able to
use both values from the SQL statement without having to call it twice?
Thanks.|||Hi Neil
I'd need more details regarding the query/DDL to say anything too
meaningful, but certainly a set-based solution is always preferable to
an iterative/cursor solution.|||Here is a guess without seeing your code:
declare @v1 int, @v2 int
select @v1=[col1], @v2=[colx]
from (
-- your complex query
) as derived_table
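Filling in hypothetical names (the view, target table and key column are
all invented for illustration), the whole pattern might look like:

declare @id int, @v1 int, @v2 int
set @id = 1

-- run the complex query once, capturing both values
select @v1 = d.col1, @v2 = d.colx
from (
    select col1, colx
    from some_complex_view   -- stand-in for the complex SQL
    where id = @id
) as d

-- reuse the variables; the complex SQL is not executed again
update target_table
set x = @v1, y = @v2
where id = @id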
--
-oj
"Neil" <nospam@.nospam.net> wrote in message
news:vhAne.4180$s64.2269@.newsread1.news.pas.earthl ink.net...
>I need to get two values from a complex SQL statement which returns a
>single record and use those two values to update a single record in a
>table. In order to assign those two values to variables and then use those
>variables in the UPDATE statement, I created a cursor and used Fetch
>Next... Into. This way, I only have to call the complex SQL once instead
>of twice.
> This seems like the best way to go. However, I've always used cursors for
> scrolling through resultsets. In this case, though, there is just a single
> record being returned, and the cursor doesn't scroll.
> Is that the most efficient way to go, or is there a better way to be able
> to use both values from the SQL statement without having to call it twice?
> Thanks.|||SQL Server doesn't support the standard SQL syntax for this but it does
have a proprietary syntax to do the same job:
UPDATE T1
SET x = foo,
y = bar
FROM
(SELECT foo, bar /* your query here */
FROM ... ) AS T2
WHERE T2.key_col = T1.key_col
/* join condition should yield a single row from T2 for each row in
T1 */
> I've always used cursors for
> scrolling through resultsets
Really? For what purpose? Cursors should be the rare exception rather
than the rule. Usually there are better set-based solutions.
--
David Portas
SQL Server MVP
--|||> SQL Server doesn't support the standard SQL syntax for this but it does
> have a proprietary syntax to do the same job:
> UPDATE T1
> SET x = foo,
> y = bar
> FROM
> (SELECT foo, bar /* your query here */
> FROM ... ) AS T2
> WHERE T2.key_col = T1.key_col
> /* join condition should yield a single row from T2 for each row in
> T1 */
Yes, that was what I was looking for (though I needed to use UPDATE T1
SET... From T1, (Select foo...) As T2...)
Also, since I'm only updating a single row in T1, and since T2 only returns
a single row with values, I eliminated the WHERE T2.keycol=T1.keycol. My SQL
looks like:
UPDATE T1
SET X = T2.FOO, Y=T2.BAR
FROM T1, (SELECT FOO, BAR FROM MYQUERY WHERE ID=@VALUE) AS T2
WHERE T1.ID=@VALUE
Do you see any problem with that?
>> I've always used cursors for
>> scrolling through resultsets
> Really? For what purpose? Cursors should be the rare exception rather
> than the rule. Usually there are better set-based solutions.
I guess one of the main areas where I've used them is in order-rearranging
functions -- such as where there are a set of items in a table, each with a
value in a field that specifies the order. The user clicks, say, an up arrow
in the interface, and the current item needs to move up one in order --
decrement its field value by one, and increment the preceding item's by
one.
Another time I used a cursor was in a procedure in which the length of two
fields combined needed to be compared to a value and then, based on the
length of the combined fields, different values would be placed in a certain
field. I suppose that could have just been done with a set-based solution;
but the cursor seemed more straightforward. It was also only dealing with
one record at a time.
Thanks for your help!
Neil
> --
> David Portas
> SQL Server MVP
> --|||Re-arranging order based on a column (pos):
UPDATE foo
SET pos = CASE pos
WHEN @old_pos
THEN @new_pos
ELSE pos + SIGN(@old_pos - @new_pos)
END
WHERE pos BETWEEN @old_pos AND @new_pos
OR pos BETWEEN @new_pos AND @old_pos
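For illustration, here is a tiny repro with an invented table foo(id, pos),
moving the item at position 4 up to position 2:

CREATE TABLE foo (id int PRIMARY KEY, pos int NOT NULL)
INSERT foo VALUES (101, 1)
INSERT foo VALUES (102, 2)
INSERT foo VALUES (103, 3)
INSERT foo VALUES (104, 4)

DECLARE @old_pos int, @new_pos int
SELECT @old_pos = 4, @new_pos = 2

UPDATE foo
SET pos = CASE pos
WHEN @old_pos THEN @new_pos
ELSE pos + SIGN(@old_pos - @new_pos)
END
WHERE pos BETWEEN @old_pos AND @new_pos
OR pos BETWEEN @new_pos AND @old_pos

-- result: 101 keeps pos 1, 104 moves to pos 2, 102 and 103 shift to 3 and 4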
Update different columns based on the length of a string value:
UPDATE YourTable
SET col1 =
CASE
WHEN LEN(x+y)<=10
THEN a ELSE b END,
col2 =
CASE
WHEN LEN(x+y)>10
THEN a ELSE b END
WHERE ...
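One caveat (my addition, not part of the original pattern): with the
default CONCAT_NULL_YIELDS_NULL setting, x+y is NULL as soon as either
column is NULL, so LEN(x+y) is NULL, both CASE tests fail, and both
columns get b. If x or y is nullable, guard the concatenation:

UPDATE YourTable
SET col1 =
CASE
WHEN LEN(COALESCE(x,'') + COALESCE(y,'')) <= 10
THEN a ELSE b END,
col2 =
CASE
WHEN LEN(COALESCE(x,'') + COALESCE(y,'')) > 10
THEN a ELSE b END
WHERE ...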
--
David Portas
SQL Server MVP
--
|||David Portas (Jun 2 2005) wrote in "Re: Is Cursor Best Way To Go?":
Re-arranging order based on a column (pos):
UPDATE foo
SET pos = CASE pos
WHEN @old_pos
THEN @new_pos
ELSE pos + SIGN(@old_pos - @new_pos)
END
WHERE pos BETWEEN @old_pos AND @new_pos
OR pos BETWEEN @new_pos AND @old_pos;
Very neat! I always did a monster CASE expression with extra WHEN
clauses based on the relative order of old_pos and new_pos.|||Hi, David.
Here's another one for you. I have an sp that takes various input parameters
for a customer, and processes the data using various case statements. I now
want to run this sp for all customers on a nightly basis. My immediate
reaction, as previously, would be to use a cursor to loop through all the
customers, get the input parameters for the sp from the Customer table, and
call the sp once for each customer. Is there a way to do this without a
cursor?
Thanks,
Neil
"David Portas" <REMOVE_BEFORE_REPLYING_dportas@.acm.org> wrote in message
news:1117710437.665131.255080@.g47g2000cwa.googlegr oups.com...
> Re-arranging order based on a column (pos):
> UPDATE foo
> SET pos = CASE pos
> WHEN @old_pos
> THEN @new_pos
> ELSE pos + SIGN(@old_pos - @new_pos)
> END
> WHERE pos BETWEEN @old_pos AND @new_pos
> OR pos BETWEEN @new_pos AND @old_pos
> Update different columns based on the length of a string value:
> UPDATE YourTable
> SET col1 =
> CASE
> WHEN LEN(x+y)<=10
> THEN a ELSE b END,
> col2 =
> CASE
> WHEN LEN(x+y)>10
> THEN a ELSE b END
> WHERE ...
> --
> David Portas
> SQL Server MVP
> --
|||Neil (nospam@nospam.net) writes:
> Here's another one for you. I have an sp that takes various input
> parameters for a customer, and processes the data using various case
> statements. I now want to run this sp for all customers on a nightly
> basis. My immediate reaction, as previously, would be to use a cursor to
> loop through all the customers, get the input parameters for the sp from
> the Customer table, and call the sp once for each customer. Is there a
> way to do this without a cursor?
Yes, but you will of course have to rewrite the procedure, so that it
works with many customers. To do this, you need to pass the input
parameters in a table rather than as parameters. This table can be a temp
table, or a permanent table which is keyed by @@spid or similar. I discuss
this on http://www.sommarskog.se/share_data.html#temptables.
Well, rather, you would write a new procedure that works with many
customers, and then rewrite the old procedure to be a wrapper around the
new one.
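To make this concrete, here is a minimal sketch of the permanent,
spid-keyed variant; all table, column and procedure names are invented:

-- parameter table keyed by the caller's connection
CREATE TABLE customer_queue (
    spid        int NOT NULL,
    customer_id int NOT NULL,
    PRIMARY KEY (spid, customer_id)
)
go
-- new procedure: set-based over every customer queued for this spid
CREATE PROCEDURE process_customers_multi AS
    UPDATE c
    SET    c.last_processed = GETDATE()
    FROM   customers c
    JOIN   customer_queue q ON q.customer_id = c.customer_id
    WHERE  q.spid = @@spid

    DELETE customer_queue WHERE spid = @@spid
go
-- old single-customer procedure becomes a wrapper on the new one
CREATE PROCEDURE process_customer @customer_id int AS
    INSERT customer_queue (spid, customer_id)
    VALUES (@@spid, @customer_id)
    EXEC process_customers_multi
go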
Now, whether you actually should go this route depends. Let's say that
it takes 10 minutes to run a cursor over all customers and call the
existing procedure, and that you have plenty of time to spare in the
night. In this case, it's not likely to be worth the development effort.
Also, if you opt to use a temp table to pass the input parameters, the
procedure will be recompiled each time. This will have the net effect
that calls for single customers will now be more expensive, and could
even cause performance problems if the procedure is huge.
We actually did this exercise with a core procedure in our system, and
in our case it was really necessary. But it was a major development task.
Our estimate was 200 hours for development, but I think the true outcome
was more than 300 hours. But that was a long procedure, some 700-800 lines,
which called several sub-procedures. The final multi-customer version is a
3000-line monster with no less than 43 table variables.
--
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se
Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp|||Everything Erland has said. This is where it pays to have a good design
pattern from kick-off. For an UPDATE/INSERT/DELETE proc servicing the
UI you may typically want to pass parameters for a single row. For
procs that implement other business logic however, you should generally
design with a set-based approach in mind. Unfortunately, programmers
used to other languages too often try to encapsulate all logic in procs
that act like scalar functions - a sure route to cursor hell!
--
David Portas
SQL Server MVP
--|||David Portas (REMOVE_BEFORE_REPLYING_dportas@acm.org) writes:
> Everything Erland has said. This is where it pays to have a good design
> pattern from kick-off. For an UPDATE/INSERT/DELETE proc servicing the
> UI you may typically want to pass parameters for a single row. For
> procs that implement other business logic however, you should generally
> design with a set-based approach in mind. Unfortunately, programmers
> used to other languages too often try to encapsulate all logic in procs
> that act like scalar functions - a sure route to cursor hell!
Permit me to expand a bit on what I touched on in my previous post.
In many cases it is reasonable to write a procedure that operates
on scalar values. It cannot be denied that writing such a
procedure is simpler, and thus cuts development costs at that stage.
Passing data in tables is actually quite messy. Let's look at the options:
1) Use a temp table. The caller must create the temp table, and the callee
must trust the caller. If the procedure is called from many places, many
callers must create the table. This can be addressed with an include-
file, if you have the luxury of a preprocessor. We have that, but it's
not a standard feature.
And even if you get by all this, the callee is recompiled for each
new instance of the caller. This can be expensive.
2) A permanent table, typically spid-keyed. We use this technique for the
really heavy-duty stuff. If you make a routine of this, you get lots of
these tables. Note also that the tables are typically stored separately
in the version-control system, which means that procedure and
"parameter list" are in two places.
3) Clients can't use any of 1 or 2, but they can pass comma-separated
lists or XML documents. But if A_SP calls B_SP, it would be a bad
idea if A_SP built an XML document from its data, only to be able
to call B_SP. What you can do is to have a wrapper that accepts
the XML document, and unpacks that into the temp table or spid-
keyed table. If the client is mainly interested in single-row
operations, it probably needs a scalar wrapper as well. Else, it
will be a lot of extra development overhead to build XML documents
(a sketch of such a wrapper follows below).
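As a sketch (reusing the invented customer_queue table and
process_customers_multi procedure from my earlier post), such a wrapper
could unpack the document with OPENXML:

CREATE PROCEDURE process_customers_xml @doc ntext AS
    DECLARE @hdoc int
    EXEC sp_xml_preparedocument @hdoc OUTPUT, @doc

    -- unpack <customer id="..."/> elements into the spid-keyed table
    INSERT customer_queue (spid, customer_id)
    SELECT @@spid, customer_id
    FROM   OPENXML(@hdoc, '/customers/customer', 1)
           WITH (customer_id int '@id')

    EXEC sp_xml_removedocument @hdoc
    EXEC process_customers_multi
go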
So, clearly, if you at point A in your development cycle only have a need
for a procedure that operates on scalar parameters, you write a procedure
that works with scalar parameters only, because that is what you are paid
for.
If you later at point B need to do the same operation on many rows,
you have to make a judicious choice between:
1) Write a cursor loop.
2) Just forget about the old procedure, and write a new set-based one.
3) Replace the old procedure.
If the logic of the procedure is trivial, like "IF NOT EXISTS INSERT ELSE
UPDATE" you should pick #2. But say that the logic is non-trivial, for
instance includes updates to dependent tables in some unnormalised
scenario, then at some point #2 becomes completely impermissible. At
this point #1 can very well be the best pick. Say that you know that
it will be rare that the cursor will comprise as much as 100 rows. If
the procedure takes 100 ms to run, it may be very difficult to motivate
to rewrite the old procedure, if this would take 100 hours.
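For completeness, the #1 shape is short (again with invented names,
calling the existing scalar procedure once per customer):

DECLARE @customer_id int
DECLARE cust_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT customer_id FROM customers
OPEN cust_cur
FETCH NEXT FROM cust_cur INTO @customer_id
WHILE @@fetch_status = 0
BEGIN
    EXEC process_customer @customer_id
    FETCH NEXT FROM cust_cur INTO @customer_id
END
CLOSE cust_cur
DEALLOCATE cust_cur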
There is also another issue here that is worth mentioning. Say that your
procedure performs some sort of INSERT operation (in a couple of tables),
and the data comes from some less trustworthy source, which thus may
supply non-conformant data. If you have a scalar procedure, error
handling is fairly simple. You can do explicit checks on anticipated
errors, but you can be fairly relaxed, because if some data violates a
constraint or trigger check, the operation will fail.
This becomes a lot more complex if you accept input data in a table.
If you apply the same strategy, 1000 rows could fail to insert
when there is an error in a single one. This could very likely be
entirely unacceptable. Thus, in that case, you will need to duplicate all
constraint and trigger checks in your code, so that you can mark which
rows are illegal.
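A sketch of that duplication, with an invented staging table holding the
batch's input rows: flag the offending rows first, then let only the
clean rows take part in the set-based INSERT:

-- mark rows that would violate the FK to customers
UPDATE staging
SET    error_msg = 'unknown customer'
WHERE  error_msg IS NULL
  AND  NOT EXISTS (SELECT * FROM customers c
                   WHERE c.customer_id = staging.customer_id)

-- only clean rows are inserted; flagged rows can be reported back
INSERT orders (customer_id, amount)
SELECT customer_id, amount
FROM   staging
WHERE  error_msg IS NULL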
So while it is easy to say "replace cursor loops with set-based
statments", one should realise that in complex cases, this is far from
trivial.
--
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se
Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp