Monday, March 19, 2012

Is it possible to change the initial size of the transaction log?

I have a database, and the log file would appear to have been created with
some default settings (it is 512 KB in size, which I believe is the minimum
SQL 2000 can allocate, and its auto-grow factor is 10%).
When I insert some binary data into an image column, I can see a whole
series of "log file auto grow" events in Profiler. That's what I'd expect,
but each auto-grow event has a pause of just over 1 second before the next
auto-grow. So, given that 10% of 512 KB isn't much, and I therefore can see
a lot of auto-grow events in response to one insertion, the overall query
execution time is very long.
Why does SQL Server have an apparently deliberate delay between auto-grow
events? Surely it can grow the log file and initialise it in multiple
chunks together, or even avoid waiting between chunks? If the insertion
causes 10 auto-grow events, that's 10 seconds longer than I'd expect it to
take.
The obvious solution (I can see you thinking) is to change the auto-grow
factor to a more useful fixed value, such as 10 MB (the value is arbitrary,
but I think 10 MB will be good for my situation). Doing so will clearly
reduce the auto-grow events and slash query execution time; a sketch of the
change follows below. Yes, I can do that, but what I really want to achieve
is to eliminate auto-growth as much as possible by setting a larger initial
size for my database transaction log.
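Something like this is what I have in mind (MyDatabase and MyDatabase_Log
are placeholder names for my actual database and log file):

-- Switch the log from 10% growth to a fixed 10 MB increment.
ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_Log, FILEGROWTH = 10MB)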
Is it possible to change the initial size of a database transaction log? I
have tried using ALTER DATABASE ... MODIFY FILE to set a new size. This
works, but only temporarily. The problem here is that my database uses the
simple recovery model and has AUTO_SHRINK switched on. Therefore, I find
that the database is sometimes automatically shrunk, and when that happens,
the log returns to the 512 KB size it was initially created with. Very
annoying - I want it to return to a more useful size! Is there any way to
configure auto-shrinking to return the database to a specific size (as you
can with DBCC SHRINKFILE)?
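For reference, this is roughly the statement I tried (the names are
placeholders again):

-- Enlarge the log; an AUTO_SHRINK pass later undoes this
-- and returns the file to its original 512 KB.
ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_Log, SIZE = 100MB)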
I thought about using sp_detach_db and then sp_attach_single_file_db, but
BOL says that doing so will create a new transaction log, which I presume
will have the same defaults as the last time (and therefore renders this
approach rather pointless). Is there any way to ensure the new transaction
log has a more useful initial size?
OK, so that's quite a few in-depth questions, I guess. Thanks for reading,
and thanks in advance for any help you can offer!
Rob|||You would do this in the model database. All subsequent databases
(including tempdb) would be created with this autogrow increment.
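A minimal sketch, assuming model still has its default logical log file
name (modellog) - check sysfiles in model if in doubt:

-- Databases created after this change (including tempdb at the next
-- restart) inherit the new log size and growth increment.
ALTER DATABASE model
MODIFY FILE (NAME = modellog, SIZE = 10MB, FILEGROWTH = 10MB)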
Hilary Cotter
Director of Text Mining and Database Strategy
RelevantNOISE.Com - Dedicated to mining blogs for business intelligence.
This posting is my own and doesn't necessarily represent RelevantNoise's
positions, strategies or opinions.
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
"Rob Pain" <RobPain@.discussions.microsoft.com> wrote in message
news:26AE11DB-ABBF-497E-9087-C40F5838BF5E@.microsoft.com...
>I have a database, and the log file would appear to have been created with
> some default settings (it is 512Kb size, which I believe is the minimum
> SQL
> 2000 can allocate, and its auto-grow factor is 10%).
> When I insert some binary data into an image column, I can see a whole
> series of "log file auto grow" events in Profiler. That's what I'd
> expect,
> but each auto-grow event has a pause of just over 1 second before the next
> auto-grow. So, given that 10% of 512Kb isn't much, and I therefore can
> see a
> lot of auto-grow events in response to one insertion, the overall query
> execution time is very long.
> What does SQL Server have an apparently deliberate delay between auto-grow
> events? Surely it can grow the log file and initialise it in multiple
> chunks
> together, or even not have to wait between chunks? If the insertion
> causes
> 10 auto-grow events, that's 10 seconds longer it takes to execute than I'd
> expect?
> The obvious solution (I can see you thinking) is to change the auto-grow
> factor to a more useful fixed value, such as 10Mb (well, the value is
> arbitrary, but I think 10Mb will be good for my situation). Doing so
> clearly
> will reduce the auto-grow events, and slash query execution time. Yes, I
> can
> do that, but what I really want to achieve is to eliminate auto-growth as
> much as possible by setting a larger initial size for my database
> transaction
> log.
> Is it possible to change the initial size of a database transaction log?
> I
> have tried using an ALTER DATABASE, MODIFY FILE to set a new size. This
> works, but only temporarily. The problem here is that my database is
> using
> the simple recovery model, and has AUTO_SHRINK switched on. Therefore, I
> find that the database is sometimes automatically shrunk, and when that
> happens, it returns to the 512Kb size it was initially created with. Very
> annoying - I want it to return to a more useful size! Is there any way to
> configure auto shrinking to return the database to a specific size (such
> as
> you can with DBCC SHRINKFILE)?
> I thought about using sp_detach_db and then sp_attach_single_file_db, but
> BOL says that doing so will create a new transaction log, which I presume
> will have the same defaults as the last time (and therefore renders this
> approach rather pointless). Is there any way to ensure the new
> transaction
> log has a more useful initial size?
> OK, so that's quite a few in-depth questions I guess Thanks for reading,
> and thanks in advance for any help you can offer!
> Rob|||Rob Pain wrote:
> I have a database, and the log file would appear to have been created with
> some default settings (it is 512Kb size, which I believe is the minimum SQ
L
> 2000 can allocate, and its auto-grow factor is 10%).
> When I insert some binary data into an image column, I can see a whole
> series of "log file auto grow" events in Profiler. That's what I'd expect
,
> but each auto-grow event has a pause of just over 1 second before the next
> auto-grow. So, given that 10% of 512Kb isn't much, and I therefore can se
e a
> lot of auto-grow events in response to one insertion, the overall query
> execution time is very long.
> What does SQL Server have an apparently deliberate delay between auto-grow
> events? Surely it can grow the log file and initialise it in multiple chu
nks
> together, or even not have to wait between chunks? If the insertion cause
s
> 10 auto-grow events, that's 10 seconds longer it takes to execute than I'd
> expect?
> The obvious solution (I can see you thinking) is to change the auto-grow
> factor to a more useful fixed value, such as 10Mb (well, the value is
> arbitrary, but I think 10Mb will be good for my situation). Doing so clea
rly
> will reduce the auto-grow events, and slash query execution time. Yes, I
can
> do that, but what I really want to achieve is to eliminate auto-growth as
> much as possible by setting a larger initial size for my database transact
ion
> log.
> Is it possible to change the initial size of a database transaction log?
I
> have tried using an ALTER DATABASE, MODIFY FILE to set a new size. This
> works, but only temporarily. The problem here is that my database is usin
g
> the simple recovery model, and has AUTO_SHRINK switched on. Therefore, I
> find that the database is sometimes automatically shrunk, and when that
> happens, it returns to the 512Kb size it was initially created with. Very
> annoying - I want it to return to a more useful size! Is there any way to
> configure auto shrinking to return the database to a specific size (such a
s
> you can with DBCC SHRINKFILE)?
> I thought about using sp_detach_db and then sp_attach_single_file_db, but
> BOL says that doing so will create a new transaction log, which I presume
> will have the same defaults as the last time (and therefore renders this
> approach rather pointless). Is there any way to ensure the new transactio
n
> log has a more useful initial size?
> OK, so that's quite a few in-depth questions I guess Thanks for reading,
> and thanks in advance for any help you can offer!
> Rob
Turn off auto-shrink... It's really quite pointless to repeatedly
shrink a database and/or log file that you know is going to grow again.
You're just causing excessive file fragmentation, and ultimately
hurting your performance.
If you insist on shrinking, look into the DBCC SHRINKFILE command, where
you have more control over what is done; a sketch follows.
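A minimal sketch, using placeholder database and file names (the second
argument to DBCC SHRINKFILE is the target size in MB):

-- Stop the automatic shrinking first.
ALTER DATABASE MyDatabase SET AUTO_SHRINK OFF

-- Then, if a shrink is really needed, do it manually
-- with an explicit target size in MB.
USE MyDatabase
DBCC SHRINKFILE (MyDatabase_Log, 10)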
|||You have autoshrink on, and say that there is a problem with the growth.
It is like saying "It hurts every time I shoot myself in the foot. How can
I shoot myself in the foot without it hurting?"
The solution is to turn off autoshrink and let the log file be the size it
needs to be to accommodate your transactions. You might want to check out
http://www.karaszi.com/SQLServer/in...dont_shrink.asp for elaboration.
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Rob Pain" <RobPain@.discussions.microsoft.com> wrote in message
news:26AE11DB-ABBF-497E-9087-C40F5838BF5E@.microsoft.com...
>I have a database, and the log file would appear to have been created with
> some default settings (it is 512Kb size, which I believe is the minimum SQ
L
> 2000 can allocate, and its auto-grow factor is 10%).
> When I insert some binary data into an image column, I can see a whole
> series of "log file auto grow" events in Profiler. That's what I'd expect
,
> but each auto-grow event has a pause of just over 1 second before the next
> auto-grow. So, given that 10% of 512Kb isn't much, and I therefore can se
e a
> lot of auto-grow events in response to one insertion, the overall query
> execution time is very long.
> What does SQL Server have an apparently deliberate delay between auto-grow
> events? Surely it can grow the log file and initialise it in multiple chu
nks
> together, or even not have to wait between chunks? If the insertion cause
s
> 10 auto-grow events, that's 10 seconds longer it takes to execute than I'd
> expect?
> The obvious solution (I can see you thinking) is to change the auto-grow
> factor to a more useful fixed value, such as 10Mb (well, the value is
> arbitrary, but I think 10Mb will be good for my situation). Doing so clea
rly
> will reduce the auto-grow events, and slash query execution time. Yes, I
can
> do that, but what I really want to achieve is to eliminate auto-growth as
> much as possible by setting a larger initial size for my database transact
ion
> log.
> Is it possible to change the initial size of a database transaction log?
I
> have tried using an ALTER DATABASE, MODIFY FILE to set a new size. This
> works, but only temporarily. The problem here is that my database is usin
g
> the simple recovery model, and has AUTO_SHRINK switched on. Therefore, I
> find that the database is sometimes automatically shrunk, and when that
> happens, it returns to the 512Kb size it was initially created with. Very
> annoying - I want it to return to a more useful size! Is there any way to
> configure auto shrinking to return the database to a specific size (such a
s
> you can with DBCC SHRINKFILE)?
> I thought about using sp_detach_db and then sp_attach_single_file_db, but
> BOL says that doing so will create a new transaction log, which I presume
> will have the same defaults as the last time (and therefore renders this
> approach rather pointless). Is there any way to ensure the new transactio
n
> log has a more useful initial size?
> OK, so that's quite a few in-depth questions I guess Thanks for reading,
> and thanks in advance for any help you can offer!
> Rob|||You really, really don't want to depend upon AUTOGROW and AUTOSHRINK in a
production database. You have no control over when it happens, and the
resulting performance hits may come at inconvenient times for your users.
(Of course, AUTOGROW ON, for a large 'chunk', is still a good safety net.)
Using database properties, you can set the db (and log file) size. Pick a
size large enough to handle a period's work (day, week, month). Then at the
end of each period, check the free space, and if more is needed, expand the
db (and log file) size to a size large enough to handle the expected next
period's work. This can be automated to occur at a time of least activity,
thereby having minimal impact on users; a sketch of the manual version
follows.
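A minimal sketch of that periodic check (MyDatabase and MyDatabase_Log are
placeholders); DBCC SQLPERF(LOGSPACE) reports the size and percentage used
of each database's log:

-- Report log size and percent used for every database on the server.
DBCC SQLPERF(LOGSPACE)

-- If the log is running low on free space, expand it ahead of the
-- next period's work rather than relying on autogrow.
ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_Log, SIZE = 200MB)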
Arnie Rowland, Ph.D.
Westwood Consulting, Inc
Most good judgment comes from experience.
Most experience comes from bad judgment.
- Anonymous
"Rob Pain" <RobPain@.discussions.microsoft.com> wrote in message
news:26AE11DB-ABBF-497E-9087-C40F5838BF5E@.microsoft.com...
>I have a database, and the log file would appear to have been created with
> some default settings (it is 512Kb size, which I believe is the minimum
> SQL
> 2000 can allocate, and its auto-grow factor is 10%).
> When I insert some binary data into an image column, I can see a whole
> series of "log file auto grow" events in Profiler. That's what I'd
> expect,
> but each auto-grow event has a pause of just over 1 second before the next
> auto-grow. So, given that 10% of 512Kb isn't much, and I therefore can
> see a
> lot of auto-grow events in response to one insertion, the overall query
> execution time is very long.
> What does SQL Server have an apparently deliberate delay between auto-grow
> events? Surely it can grow the log file and initialise it in multiple
> chunks
> together, or even not have to wait between chunks? If the insertion
> causes
> 10 auto-grow events, that's 10 seconds longer it takes to execute than I'd
> expect?
> The obvious solution (I can see you thinking) is to change the auto-grow
> factor to a more useful fixed value, such as 10Mb (well, the value is
> arbitrary, but I think 10Mb will be good for my situation). Doing so
> clearly
> will reduce the auto-grow events, and slash query execution time. Yes, I
> can
> do that, but what I really want to achieve is to eliminate auto-growth as
> much as possible by setting a larger initial size for my database
> transaction
> log.
> Is it possible to change the initial size of a database transaction log?
> I
> have tried using an ALTER DATABASE, MODIFY FILE to set a new size. This
> works, but only temporarily. The problem here is that my database is
> using
> the simple recovery model, and has AUTO_SHRINK switched on. Therefore, I
> find that the database is sometimes automatically shrunk, and when that
> happens, it returns to the 512Kb size it was initially created with. Very
> annoying - I want it to return to a more useful size! Is there any way to
> configure auto shrinking to return the database to a specific size (such
> as
> you can with DBCC SHRINKFILE)?
> I thought about using sp_detach_db and then sp_attach_single_file_db, but
> BOL says that doing so will create a new transaction log, which I presume
> will have the same defaults as the last time (and therefore renders this
> approach rather pointless). Is there any way to ensure the new
> transaction
> log has a more useful initial size?
> OK, so that's quite a few in-depth questions I guess Thanks for reading,
> and thanks in advance for any help you can offer!
> Rob
