Monday, February 20, 2012

Locks, Scope & Performance

I'm getting LOCKS caused by a particular report that uses two sProcs to render a report.
I'm unsure as to why the following should be locking my contact, calendar and history tables; they just do a select into a table. Shouldn't this be read only and not cause any locks?
Is there a way to be sure that I exec as read only and do not create a table lock?
Details....
The 1st sProc is called (nested) from within the second as:
create proc my_sProc_Main
declare some vars, @tmpTbl varchar(36)
set @tmpTbl = replace(convert(varchar(36), newid()), '-', '9')
create a real table.... exec('create table tblWorkingName'+@tmpTbl+' (...)')
insert myTable exec my_sProc_Sub
(my_sProc_Sub - select complex conditions, union another set of complex conditions, scan a 1M-row table, return 1200 rows)
create another real results table.... exec('create table tblResultsName'+@tmpTbl+' (...)')
Evaluate the data in the first table (containing 1200 rows) and insert about 20 resulting rows into the second table
return the results of the second table
drop both tables
Total process time is between 13 and 56 seconds.
I run similar processes, some as complex but not using two sProcs, however this one report brings my server to its knees.
The production server is a Compaq dual 700MHz, 1GB RAM.
My dev box is a single 1.4GHz, 1GB RAM, and it execs somewhat faster and with half the server-killing effects.
TIA
JeffP....|||Use NOLOCK hints on your SELECT statements,
OR change the isolation level to READ UNCOMMITTED.
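For instance (a sketch only; the column names are examples, not taken from your actual schema):

```sql
-- Per-statement: hint each table in the SELECT
SELECT c.ACCOUNTNO, ch.ONDATE
FROM dbo.CONTACT1 c WITH (NOLOCK)
JOIN dbo.CONTHIST ch WITH (NOLOCK) ON ch.ACCOUNTNO = c.ACCOUNTNO

-- Or session-wide, so every SELECT in the proc reads without taking shared locks
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
```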
Greg Jackson
Portland, OR|||SELECT still issues shared resource locks, and INSERT EXEC runs the entire sub-procedure within the INSERT transaction. This, combined with the fact that you are selecting from 1M rows, is most likely your culprit.
You could override the locking policy on certain tables and tell it to use page locks or no locks instead, or change the transaction isolation level to a more lax level.
You may want to consider a different way of storing the intermediate data, e.g. a temp table which can be filled within the sub procedure; the calling procedure would then get recompiled if necessary for the next step.
This would address the cause rather than the symptom and probably give a significant performance boost into the bargain.
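A rough sketch of that shape (proc bodies and columns invented for illustration):

```sql
CREATE PROC my_sProc_Main AS
BEGIN
    -- The caller creates the temp table; it is visible to nested procs,
    -- so the sub proc can INSERT into it directly, avoiding INSERT...EXEC.
    CREATE TABLE #working (ACCOUNTNO varchar(20), ONDATE datetime)

    EXEC my_sProc_Sub   -- fills #working

    -- the next step works off the small intermediate set
    SELECT ACCOUNTNO, COUNT(*) AS Visits
    FROM #working
    GROUP BY ACCOUNTNO

    DROP TABLE #working
END
```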
Mr Tea
"JDP@Work" <JPGMTNoSpam@sbcglobal.net> wrote in message
news:%23wEsvHuEFHA.628@TK2MSFTNGP15.phx.gbl...
> JeffP....
>|||Thanks to pdxJaxon, I'm looking into WITH (NOLOCK)
if my exec(@str) from looks like...
from
'+@db+'.dbo.CONTACT1 contact1
,'+@db+'.dbo.CONTACT2 contact2
,'+@db+'.dbo.CAL cal
,'+@db+'.dbo.conthist ch
,'+@db+'.dbo.users
where
..adding...
from
'+@db+'.dbo.CONTACT1 contact1 WITH (NOLOCK)
,'+@db+'.dbo.CONTACT2 contact2 WITH (NOLOCK)
,'+@db+'.dbo.CAL cal WITH (NOLOCK)
,'+@db+'.dbo.conthist ch WITH (NOLOCK)
,'+@db+'.dbo.users WITH (NOLOCK)
where
Lee,
I moved away from using the temp db #mytable to creating a dynamically named table as noted in my pseudo snippet.
I did this because it improved performance for some now-in-place processes that used to bring the server to its knees.
However I'm still interested and would appreciate a clarification

> e.g. temp table which can be filled within the sub procedure and then the
> calling procedure would get recompiled if necessary for the next step.
1. Is there any difference between using a dynamically named table within my outer sProc and #myTemp?
2. Would the process you are suggesting look like a proc that calls a sub proc and creates w/recompile an inner sProc?
create proc my_sProc_Main_Outer
exec mysProc_sub inserting rows to a (temp) table
then...
create proc my_sProc_Main_Inner
with recompile
as
....
return
drop table(s)
return
Exec'ing the sProc for the report would look something like....
exec my_sProc_Main_Outer @db, @w_str, @action
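Something roughly like this is what I'm picturing for the inner proc (placeholder names throughout):

```sql
CREATE PROC my_sProc_Main_Inner
WITH RECOMPILE   -- build a fresh plan each call, since the temp table's size varies
AS
BEGIN
    -- #working is created and filled by my_sProc_Main_Outer before this call
    SELECT ACCOUNTNO, SUM(AMOUNT) AS Pipeline
    FROM #working
    GROUP BY ACCOUNTNO
END
```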
TIA
JeffP...
"Lee Tudor" <mr_tea@ntlworld.com> wrote in message
news:A5aQd.547$Se3.306@newsfe5-win.ntli.net...
>|||Understand that NOLOCK can give you invalid results, since it can see dirty data. If having consistent results is critical, this is not a good idea. If you can live with possibly spurious results, based on rows being picked up that turn out to have been invalid, then it is fine. I don't know about your architecture, but chances are you will have very little of this unless you have quite high throughput; still, it is a concern.
A million rows is quite a few, but it is not an incredible amount. Consider optimizing your query, and you might only have blocks that last a second or two, if any. A query that brings the server to its knees is often a sign that you have table scans slowly walking through your table. It might also be taking a table lock rather than row locks based on resource utilization. This query could be the problem, but it could also be your update queries or others. Can you tell us which query is blocking which?
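On SQL Server 2000 a quick way to see that is (the SPID number below is just an example):

```sql
EXEC sp_who2             -- the BlkBy column shows which SPID blocks which
EXEC sp_lock             -- lock type and mode per SPID; TAB rows with X mode are table locks
DBCC INPUTBUFFER (57)    -- last statement submitted by, e.g., SPID 57
```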
----
Louis Davidson - drsql@hotmail.com
SQL Server MVP
Compass Technology Management - www.compass.net
Pro SQL Server 2000 Database Design -
http://www.apress.com/book/bookDisplay.html?bID=266
Blog - http://spaces.msn.com/members/drsql/
Note: Please reply to the newsgroups only unless you are interested in
consulting services. All other replies may be ignored :)
"JDP@Work" <JPGMTNoSpam@sbcglobal.net> wrote in message
news:ezFTA9uEFHA.3200@TK2MSFTNGP10.phx.gbl...
>|||When the Crystal (pronounced cryst-hel) report runs, we see that all the tables are blocking other requests.
We have fewer than 25 users in the system, about another 50+/- running reports, and another 500 who synchronize their data once a day. This occurs nearly all day long; someone is sync'ing with the server at most any time of the day.
My sProcs provide data for reports. If users miss an item on a report that they expect to see, their first recourse is to think that the data has not sync'd with the server, and they will re-sync; running the query after a moment should then include any data that was being updated. Most users only have their own data, however it's not uncommon for our other daily updating and sales-process-capture to lock tables as well.
It was more pronounced when one of the regional VPs would try to run the report for the entire region: we'd see that there were blocks, processes would hang, and my dynamically named webReprt3984023984023840238402 tables would still be there due to the proc not completing or the user canceling.
The report takes a little over 2 min to render for an entire market center and a little less than 20 seconds for an individual salesperson.
Another issue is that each time I optimize the sProcs they add another column; some are complex and are dependent on other table info to validate the results data. This cross-table, cross-db stuff is just way too intense for such a low-power system, and I hear that we are getting a new production server in the next month or so.
I also see that working on a low-power system has a side benefit: it encourages us to squeeze as much as we can. Another issue is bandwidth, which in a word sucks.
Anyway, I was able to get my outer sProc to run in as little as 4 seconds for a salesperson, down from 1min44sec to 19 seconds and now the 4 seconds.
I know that there must be substantially more optimizations available, however we don't have a budget, so we are working for nearly free at this point, just saving face that our stuff works at all. The client knows that they have a marginal network, a low-power server and very complex reports.
This report, for example, is a whiteboard, filling in the data in each column not respectful of rows except that a particular column in that row is filled. It loads data just as a human would fill in a sales whiteboard in the office for each salesperson, with first-time visits scheduled, along with columns for % probability of closing for each pending sale or targeted prospect.
Thanks to all.
JeffP...
"Louis Davidson" <dr_dontspamme_sql@hotmail.com> wrote in message
news:eRIEKGxEFHA.3928@TK2MSFTNGP15.phx.gbl...
>|||@db? Now you're getting serious :)
I would try to achieve the desired result without too much dynamic SQL, as it's pretty hard to tune. One way would be to put my_sProc_Main_Outer in the master db and then call it in the context of a specific DB using
SET @sp = @db + '..sp_my_sProc_Main_Outer'
EXEC @sp @w_str, @action
If you don't like that, then you will have to ditch the sp_ prefixes and put the procs on each database.
Then your outer sproc would do its stuff (you may be able to simplify this as well):
CREATE TABLE #tmp (...)
EXEC sp_my_sProc_Main_Inner
Do something with the data in #tmp.
Your inner sproc would just add data to the temporary table:
INSERT #tmp SELECT col001...coln FROM dbo. ...
When you return from the inner proc, the db will check the stats on the temporary table data and recompile for better execution if necessary (if you know that there is no need for dynamic recompilation then you can look at some other way of storing the data, e.g. a table variable and an inner UDF).
Getting rid of those dynamic clauses, if possible, allows you to view and check the query plan.
Mr Tea
"JDP@Work" <JPGMTNoSpam@sbcglobal.net> wrote in message
news:ezFTA9uEFHA.3200@TK2MSFTNGP10.phx.gbl...
>|||I'm sorry I was so long...
I moved away from #tmp tables as that alone, with some earlier report sProcs, was dogging the server, and using the dynamic real tables helped.
As far as the location of the sProcs: as often as possible we've placed all the sProcs in the Reports db. There are now 4 regional plus another 2 reporting db's, and keeping all the versions updated has been another challenge, so keeping them in the reports db has been a benefit to Q&A.
The way I optimize is...
1. Add elements like the new WITH (NOLOCK) and see if it doesn't exec faster; it does. In fact my partner was slightly amazed that I was able to run the sProc/report for an entire region and market center without locking. We'll see in the next few days if it's truly helped.
2. During dev, I run my exec's as SELECT @str's, take the results, and run them in Query Analyzer with the exec plan on.
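In Query Analyzer that step is just (a sketch; the pasted statement is an example, not my real query):

```sql
SET SHOWPLAN_TEXT ON   -- show the estimated plan; the statement is not executed
GO
-- paste the statement produced by SELECT @str here, e.g.:
SELECT c.ACCOUNTNO FROM dbo.CONTACT1 c WHERE c.KEY1 = 'PROSPECT'
GO
SET SHOWPLAN_TEXT OFF
GO
```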
There isn't often much I can do; we are working in part with an out-of-the-box app, and adding indexes without pain can be difficult.
I've no real experience with @table vars, and I thought that their scope was only within the calling sProc; could the inner sProc return a @table?
Also, with memory at a premium, would the @table var be an improvement over creating a dynamically named real table? (FYI: I dynamically name my tables so that each user who is running a report only sees their stuff.)
UDF: I have a few UDFs, an fnProperCase and an fnParse which parses any string by any separator. Anyway, I was recently reading about using a parameterized UDF to return a table. Anyone have a practical link?
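From what I've been reading, a parameterized UDF returning a table would look roughly like this (my own sketch, made-up columns, corrections welcome):

```sql
CREATE FUNCTION dbo.fnContactsByOwner (@owner varchar(8))
RETURNS @t TABLE (ACCOUNTNO varchar(20), CONTACT varchar(40))
AS
BEGIN
    INSERT @t (ACCOUNTNO, CONTACT)
    SELECT c.ACCOUNTNO, c.CONTACT
    FROM dbo.CONTACT1 c
    WHERE c.OWNER = @owner   -- the OWNER column is a guess
    RETURN
END
```

Then SELECT * FROM dbo.fnContactsByOwner('JEFFP') could be joined like any table, if I understand it right.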
TIA
JeffP....
"Lee Tudor" <mr_tea@ntlworld.com> wrote in message
news:5LiQd.128$dv2.54@newsfe5-win.ntli.net...
