
Friday, March 30, 2012

Is it possible to Migrate 2005 database to 2000 engine?

Hi,
We have been developing a new database using 2005 with the understanding
that the deployment would be on a 2005 production instance. Now we learn
that the target server is running 2000. We could create a separate 2005
instance on the same server, but it seems that the additional administration
of security and object ownership would be more hassle than downgrading the
empty data objects to 2000. So far, the database consists of about 20
tables with accompanying metadata (table/column descriptions). I was
planning to use SQL scripts generated in 2005 to recreate the tables on a
2000 instance, but I realize that some modifications will be required to
recreate the metadata.
Does anyone have any suggestions on the migration of a 2005 database to
2000?
Thanks, Simon
No easy way ... Try your own manual migration and come up with a white
paper ... Read the link below:
http://searchsqlserver.techtarget.com/expert/KnowledgebaseAnswer/0,289625,sid87_gci1149585_tax301536,00.html
Keep on posting the issues you face so that we can help...
Thanks,
Sree
[Please specify the version of SQL Server, so we can save a thread and the time
spent asking back whether it's 2000 or 2005.]
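For the table/column descriptions specifically, here is a minimal sketch of the kind of edit that is usually needed (the table, column, and description text are hypothetical). Scripts generated by the 2005 tools call sp_addextendedproperty with @level0type = N'SCHEMA'; as far as I recall, the 2000 engine does not accept 'SCHEMA' as a level-0 type, so those calls have to be changed to use the owning user instead:

-- Recreate a column description on a 2000 instance (hypothetical names).
EXEC sp_addextendedproperty
    @name = N'MS_Description',
    @value = N'Customer surname',
    @level0type = N'USER',   @level0name = N'dbo',      -- 2005 scripts emit N'SCHEMA' here
    @level1type = N'TABLE',  @level1name = N'Customer',
    @level2type = N'COLUMN', @level2name = N'Surname'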
"Simon" wrote:

> Hi,
> We have been developing a new database using 2005 with the understanding
> that the deployment would be on a 2005 production instance. Now we learn
> that the target server is running 2000. We could create a separate 2005
> instance on the same server but it seems that the additional administration
> of security and object ownership will be more hassle than downgrading the
> empty data objects to 2000. So far, the database consists of about 20
> tables with accompanying meta data (table/column descriptions). I was
> planning to use sql scripts generated in 2005 to recreate the tables on a
> 2000 instance but realize that some modifications will be required to
> recreate the metadata.
> Does anyone have any suggestions on the migration of a 2005 database to
> 2000?
> Thanks, Simon

Friday, March 9, 2012

Is it customary to reindex every night?

I've been doing this DBA thing for a while. I'm having to deal with
more production issues as I'm working in more complex businesses now.
One of the systems that we recently put into production is being
maintained by a non-DBA type whom I had a recent chat with. He told me
that they've found that the performance of the system seems to degrade
over a week as queries seem to take longer to execute as the week goes
on. So to combat the performance loss, he scheduled nightly reindexing
of the database.
Nightly reindexing of a database seems okay when you have the luxury
of not supporting a 24x7 business application. However, I wouldn't have
ordinarily expected such harsh degradation over a week's time
(40-second delays per query in some cases). This is a database
that does relatively more insert/update work than select/report work. I'm just
curious what your experiences have been. Have you found cases
where you've run reindexing every night? If so, why? If not, do you have
any opinions on what I should look at to investigate the
performance decrease?
Thanks in advance,
Osolage
Hi
"Osolage" wrote:

> I've been doing this DBA thing for a while. I'm having to deal with
> more production issues as I'm working in more complex businesses now.
> One of the systems that we recently put into production is being
> maintained by a non-DBA type whom I had a recent chat with. He told me
> that they've found that the performance of the system seems to degrade
> over a week as queries seem to take longer to execute as the week goes
> on. So to combat the performance loss, he scheduled nightly reindexing
> of the database.
> Nightly reindexing of a database seems okay when you have the luxury
> of not being a 24x7 business use application. However, I wouldn't have
> ordinarily expected such harsh degradation over a week's time period
> (40 sec delays per query issued in some cases). This is a database
> that is relatively more insert/update than select/report. I'm just
> curious as to what your experiences have been? Have you found cases
> where you've run indexing every night? If so why? If not, do you have
> any opinions on what I should take a look at to investigate the
> performance decrease?
> Thanks in advance,
> Osolage
>
The amount of degradation depends on the amount of change that the data
undergoes; if there is a high number of transactions on the
database, you may need to re-index more often. This can be made worse by
having poorly thought-out fill factors or possibly even poor programming or
datatype selection.
Look at the output DBCC SHOWCONTIG gives before you re-index. You may
want to follow something similar to the example in the DBCC
SHOWCONTIG topic in Books Online, which re-indexes only when the
fragmentation gets above a certain level.
Also watch out for database shrinking (either auto or manual) which can also
cause fragmentation.
John
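A minimal sketch of that check (the table name is hypothetical); the Books Online example wraps the same idea in a loop over all tables and only rebuilds above a threshold:

-- Inspect fragmentation before deciding whether to re-index (2000 and 2005).
-- 'Orders' is a hypothetical table name.
DBCC SHOWCONTIG ('Orders') WITH TABLERESULTS, ALL_INDEXES
-- Watch LogicalFragmentation and ScanDensity in the output; a common rule of
-- thumb is to rebuild only when logical fragmentation exceeds roughly 30 percent.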
Yes, I've seen this.
Generally, there was some sloppy code and design, such that the
database ran well as long as it was squeezed into available RAM, but
with even a little fragmentation the missing indexes and bad plans
caused an exponential degradation in performance.
There was initially a lot of inserting to tables with (nonsequential)
GUID clustered PKs; after that was changed to simple identity ints,
much of the problem went away.
So, it was split between some sloppy code, and some scalability issues
that called for specific redesigns.
Some of these problems can be a real pain to diagnose, when they only
occur on the production box, with full-scale data, and different
critical points because of a different processor count and different
RAM layout than any dev environments.
J.
On 29 Jan 2007 22:15:25 -0800, "Osolage" <osolage@.gmail.com> wrote:

>I've been doing this DBA thing for a while. I'm having to deal with
>more production issues as I'm working in more complex businesses now.
>One of the systems that we recently put into production is being
>maintained by a non-DBA type whom I had a recent chat with. He told me
>that they've found that the performance of the system seems to degrade
>over a week as queries seem to take longer to execute as the week goes
>on. So to combat the performance loss, he scheduled nightly reindexing
>of the database.
>Nightly reindexing of a database seems okay when you have the luxury
>of not being a 24x7 business use application. However, I wouldn't have
>ordinarily expected such harsh degradation over a week's time period
>(40 sec delays per query issued in some cases). This is a database
>that is relatively more insert/update than select/report. I'm just
>curious as to what your experiences have been? Have you found cases
>where you've run indexing every night? If so why? If not, do you have
>any opinions on what I should take a look at to investigate the
>performance decrease?
>Thanks in advance,
>Osolage
Osolage wrote:
> I've been doing this DBA thing for a while. I'm having to deal with
> more production issues as I'm working in more complex businesses now.
> One of the systems that we recently put into production is being
> maintained by a non-DBA type whom I had a recent chat with. He told me
> that they've found that the performance of the system seems to degrade
> over a week as queries seem to take longer to execute as the week goes
> on. So to combat the performance loss, he scheduled nightly reindexing
> of the database.
> Nightly reindexing of a database seems okay when you have the luxury
> of not being a 24x7 business use application. However, I wouldn't have
> ordinarily expected such harsh degradation over a week's time period
> (40 sec delays per query issued in some cases). This is a database
> that is relatively more insert/update than select/report. I'm just
> curious as to what your experiences have been? Have you found cases
> where you've run indexing every night? If so why? If not, do you have
> any opinions on what I should take a look at to investigate the
> performance decrease?
> Thanks in advance,
> Osolage
>
Depends on how the indexes are defined, how many inserts/updates across
index keys are occurring, etc. Also, as John mentioned, make sure
you're not shrinking the database - that will effectively "undo" any
reindexing that you've done.
Rather than rebuild EVERY index, consider rebuilding only those that are
fragmented. Here's a script to get you started; it needs a couple of
tweaks that I haven't gotten around to making (in particular, it needs to
exclude index ID = 255).
Tracy McKibben
MCDBA
http://www.realsqlguy.com
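A rough sketch along those lines (this is not Tracy's actual script; the 30 percent threshold and the object names are assumptions): capture the DBCC SHOWCONTIG output for the whole database, then rebuild only the indexes above the threshold, skipping index IDs 0 and 255 (heaps and text/image allocations):

-- Sketch only; threshold and names are assumptions.
-- Capture fragmentation data for every table and index in the current database.
CREATE TABLE #frag (
    ObjectName sysname, ObjectId int, IndexName sysname NULL, IndexId int,
    [Level] int, Pages int, [Rows] bigint,
    MinRecSize int, MaxRecSize int, AvgRecSize float, ForwardedRecords bigint,
    Extents int, ExtentSwitches int, AvgFreeBytes float, AvgPageDensity float,
    ScanDensity float, BestCount int, ActualCount int,
    LogicalFrag float, ExtentFrag float
)

INSERT INTO #frag
EXEC ('DBCC SHOWCONTIG WITH TABLERESULTS, ALL_INDEXES, NO_INFOMSGS')

-- Rebuild only what is badly fragmented; skip heaps (index ID 0) and
-- text/image allocations (index ID 255).
DECLARE @tbl sysname, @idx sysname
DECLARE frag_cur CURSOR FOR
    SELECT ObjectName, IndexName
    FROM #frag
    WHERE IndexId NOT IN (0, 255)
      AND LogicalFrag > 30.0           -- assumed threshold

OPEN frag_cur
FETCH NEXT FROM frag_cur INTO @tbl, @idx
WHILE @@FETCH_STATUS = 0
BEGIN
    -- DBCC DBREINDEX rebuilds offline; DBCC INDEXDEFRAG is the online
    -- alternative. Owner-qualified names (e.g. 'dbo.Orders') may be needed.
    EXEC ('DBCC DBREINDEX (''' + @tbl + ''', ''' + @idx + ''')')
    FETCH NEXT FROM frag_cur INTO @tbl, @idx
END
CLOSE frag_cur
DEALLOCATE frag_cur
DROP TABLE #frag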
You can look at reorganizing indexes and reindexing only the highly fragmented
indexes.
BOL has a good example of it.
On Jan 30, 3:43 pm, JXStern <JXSternChange...@.gte.net> wrote:
> Yes, I've seen this.
> Generally, there was some sloppy code and design, such that the
> database ran well as long as it was squeezed into available RAM, but
> with even a little fragmentation the missing indexes and bad plans
> caused an exponential degradation in performance.
> There was initially a lot of inserting to tables with (nonsequential)
> GUID clustered PKs, after that was changed to simple identity int's
> much of the problem went away.
> So, it was split between some sloppy code, and some scalability issues
> that called for specific redesigns.
> Some of these problems can be a real pain to diagnose, when they only
> occur on the production box, with full-scale data, and different
> critical points because of a different processor count and different
> RAM layout than any dev environments.
> J.
> On 29 Jan 2007 22:15:25 -0800, "Osolage" <osol...@.gmail.com> wrote:
Hi, everyone,
I thank you for your comments so far. They validate my suspicions.
The procedures for checking index fragmentation and whatnot can be
very tedious, so it's something I've traditionally avoided except in
serious cases. I'd be more inclined to dive in
if there are tools that help speed up the discovery of problem spots.
Have any of you used any tools to help speed up the research or to
even automate the discovery of problematic indexes? Please share or
link to your experiences if you are willing. I would love to
learn from you, as it will make my job easier.
Sincere thanks,
Osolage
On Feb 5, 4:38 pm, "Osolage" <osol...@.gmail.com> wrote:
> Hi, everyone,
> I thank you for your comments so far. They validate my suspicions.
> The procedures for checking index fragmentation and what-not can be
> very tedious. This is something I've traditionally avoided except in
> serious cases, because of the tedium. I'd be more inclined to dive in
> if there are tools that help speed up the discovery of problem spots.
> Have any of you used any tools to help speed up the research or to
> even automate the discovery of problematic indexes? Please share or
> link to your experiences if you are willing to share. I would love to
> learn from you as it will make my job easier.
> Sincere thanks,
> Osolage
http://www.realsqlguy.com/bin/view/RealSQLGuy/DefraggingIndexes


Friday, February 24, 2012

Is database mirroring the solution for this scenario?

We have an OLTP database handling our business production operations.
Our reporting team wants to extract from the OLTP db every 15 min, but
that will hammer the OLTP db I/O-wise, breaking the system. The
reporting team wants a copy of the OLTP system to do their extractions.
The copy should be kept up-to-date with the OLTP system on a near
continuous basis and be fully readable for extraction (At no time should
the copy ever become unavailable during sync activities). We do not
want to use SQL replication, and are looking at various alternatives
involving SAN snapshot technologies. Due to our requirements, several
of the SAN vendors have recommended using the built-in database
mirroring feature in 2005. I was under the impression that the mirror
was not available for use while it was being kept in sync, and that once the
mirror was made available, it would have to be reinitialized to resume
syncing. Am I correct, or is there a way to have the mirrored copy
available but still syncing?
You are right that the mirror database itself isn't available for reporting.
However, a snapshot from the mirror can provide a read-only view of the
database at a particular point in time. My guess is that this is what the
SAN vendors are suggesting.
Hope this helps.
Dan Guzman
SQL Server MVP
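A minimal sketch of that approach, run on the mirror server (the database name and file path are hypothetical). Note that database snapshots require Enterprise Edition, and each snapshot is frozen at creation time, so a reporting snapshot is typically dropped and recreated on a schedule:

-- Run against the mirror. 'Sales' and the paths are hypothetical; NAME must
-- match the logical data file name of the mirrored database.
CREATE DATABASE Sales_Reporting_Snap
ON ( NAME = Sales_Data,
     FILENAME = 'D:\Snapshots\Sales_Reporting_Snap.ss' )
AS SNAPSHOT OF Sales
-- Reporting reads from Sales_Reporting_Snap; a scheduled job drops and
-- recreates it when a fresher view is needed:
-- DROP DATABASE Sales_Reporting_Snap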
"DotComDBA" <noreply@.nospam.com> wrote in message
news:uFMrCrtKHHA.536@.TK2MSFTNGP02.phx.gbl...
> We have an OLTP database handling our business production operations. Our
> reporting team wants to extract from the OLTP db every 15 min, but that
> will hammer the OLTP db I/O-wise, breaking the system. The reporting team
> wants a copy of the OLTP system to do their extractions. The copy should
> be kept up-to-date with the OLTP system on a near continuous basis and be
> fully readable for extraction (At no time should the copy ever become
> unavailable during sync activities). We do not want to use SQL
> replication, and are looking at various alternatives involving SAN
> snapshot technologies. Due to our requirements, several of the SAN
> vendors have recommended using the built-in database mirroring feature in
> 2005. I was under the impression that the mirror was not available for
> use while it was being kept in sync and once the mirror was made
> available, it would have to be reinitialized to resume syncing. Am I
> correct, or is there a way to have the mirrored copy available but still
> syncing?
I was the DBA on an OLTP system for 10 years (yes, a dotcom), and I must
question this "poll every 15 minutes that hammers the system" statement.
There should be NOTHING you are pulling every 15 minutes that takes
more than a few seconds (if that); anything more is going to be a waste.
What are they asking for? All sales? All hits? Promo pages? Are you dumping
everything every time? Put an index on the datetime field of the parent table
and dump only the activity since the last dump.
I'm guessing they stuck you with RAID 5, or 6 too - write penalty and all.
RAID 0 is my preferred; 1, 0+1 & 10 are OK. And keep the database files away
from everything else - like the temp directory, spooler and transaction logs.
Make a list of their needs: not what they say they want, but what
they are using and actually need. Then, if necessary, build a separate
database (call it a marketing system) and dump data to it incrementally.
Separate the databases by transferring data via files, or perhaps dedicate a
crossover TCP/IP cable.
Whatever you do, keep them off the production server. They have a budget,
have them pay for what they need.
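A minimal sketch of that incremental approach (the table, column, and database names are hypothetical): keep a watermark of the last extracted datetime and pull only newer rows over an index on that column:

-- Hypothetical names throughout; assumes an index on dbo.Orders(ModifiedDate).
DECLARE @last datetime, @now datetime
SET @now = GETDATE()

-- Watermark recorded by the previous extract.
SELECT @last = LastExtracted
FROM dbo.ExtractWatermark
WHERE SourceTable = 'Orders'

-- Pull only the rows changed since the last extract into the reporting database.
INSERT INTO ReportingDB.dbo.Orders_Staging (OrderId, CustomerId, Amount, ModifiedDate)
SELECT OrderId, CustomerId, Amount, ModifiedDate
FROM dbo.Orders
WHERE ModifiedDate > @last
  AND ModifiedDate <= @now

UPDATE dbo.ExtractWatermark
SET LastExtracted = @now
WHERE SourceTable = 'Orders'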
"DotComDBA" wrote:

> We have an OLTP database handling our business production operations.
> Our reporting team wants to extract from the OLTP db every 15 min, but
> that will hammer the OLTP db I/O-wise, breaking the system. The
> reporting team wants a copy of the OLTP system to do their extractions.
> The copy should be kept up-to-date with the OLTP system on a near
> continuous basis and be fully readable for extraction (At no time should
> the copy ever become unavailable during sync activities). We do not
> want to use SQL replication, and are looking at various alternatives
> involving SAN snapshot technologies. Due to our requirements, several
> of the SAN vendors have recommended using the built-in database
> mirroring feature in 2005. I was under the impression that the mirror
> was not available for use while it was being kept in sync and once the
> mirror was made available, it would have to be reinitialized to resume
> syncing. Am I correct, or is there a way to have the mirrored copy
> available but still syncing?
We have a similar need & setup (i.e. OLTP + operational reporting / MIS
needs), and do get significant performance gains for both OLTP and
reporting by mirroring - we have found that running reports on the OLTP
server can especially hammer counters like the buffer hit ratios etc.
Mirroring (e.g. asynchronous in SQL 2005) works fine without loading the OLTP
server.
If you do "use" your read-only slave, you might need to consider
licensing issues / costs.
You might also look at using regular log shipping
e.g. we roll our OLTP Enterprise Ed SQL logs forward onto a Std Ed
reporting server at 30 minute intervals, and at the same time copy the
logs off site as a DR safety precaution.
We do have a few "real time" reports which we do allow on the live OLTP
server (but you need to set stringent criteria on such reports as per
other poster). The rest (~500 reports) are run off the slave.
The one 'drawback' of log shipping is that you will need to boot all
connections off your reporting slave every time you roll
forward.
The other alternative considered was replication - on the upside it
allowed creation of different indices (mirroring / log shipping
duplicates indices, meaning you have to create your MIS indices on the
OLTP server as well ... ). On the downside, there seemed no real
'guarantees' on the data synchronisation (i.e. for Disaster recovery we
sleep better if using Mirroring or Log Shipping to the slave).
We didn't meet with much success on the SAN snapshots (unless you e.g.
use a permanent 'mirror') - depending on the size of the disks you need
to snapshot, the SAN still has a finite bandwidth (our 5-year-old SAN
could only transfer around 500 GB per hour), and doing the
snapshot does chew up the SAN resources.
Regards
Stuart
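A minimal sketch of the roll-forward step described above, assuming the reporting copy is restored WITH STANDBY so it stays readable between restores (the names and paths are hypothetical). Each restore needs exclusive access to the database, which is why existing connections have to be booted first:

-- On the reporting server: apply the next shipped log backup, then reopen.
USE master

ALTER DATABASE Sales_Reporting SET SINGLE_USER WITH ROLLBACK IMMEDIATE  -- boot readers

RESTORE LOG Sales_Reporting
FROM DISK = 'D:\LogShip\Sales_20070205_1230.trn'
WITH STANDBY = 'D:\LogShip\Sales_Reporting_undo.dat'    -- readable, still restorable

ALTER DATABASE Sales_Reporting SET MULTI_USER           -- let readers back in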
