Showing posts with label 5gb. Show all posts

Monday, March 26, 2012

LOG FILES

I have a problem with one of the SQL logs: the LOG
file has reached 5GB in size while the DB is only around 400MB.
I have also tried running the command:
dbcc shrinkfile (database, 256)
I have tried altering the number to reflect the size, but I
always get the following:
Cannot shrink log file 2 (System21MD_Log) because all
logical log files are in use.
(1 row(s) affected)
Also, under grids I get:
DbId  FileId  CurrentSize  MinimumSize  UsedPages  EstimatedPages
10    2       639912       128          639912     128
Could anyone advise how to shrink or truncate this log file,
please? Any help is much appreciated.|||Adam,
Did you try to back up the log file?
For more details, please refer to BOL.
"Adam" <anonymous@.discussions.microsoft.com> wrote in message
news:0b2b01c3d52a$07340be0$a301280a@.phx.gbl...
> I have a problem with one of the SQL Logs where the LOG
> file has reached 5GB in size the DB is only around 400MB,
> I have also tried running the command:
> dbcc shrinkfile (database, 256)
> I have tried altering the number to reflect the size but
> alway get the following:
> Cannot shrink log file 2 (System21MD_Log) because all
> logical log files are in use.
> (1 row(s) affected)
>
> also under grids I get:
> DbId Filed Current Min Used estimated
> size Size Size pages.
> 10 2 639912 128 639912 128
>
> could anyone advise howto shrink this logfile or truncate
> it, please. Any herlp is much appriciated.|||the SQL is backed up every night using the veritas addon
for SQL and has previously kept the size down.
>--Original Message--
>Adam
>Did you try to back up the log file?
>For more details please refer to BOL
|||Make sure you have backed up the log.
Then retry the DBCC SHRINKFILE until you get some love... It will not work
until all active transactions have moved off of the logical file... Maybe
try this every 15 minutes or so...
Wayne Snyder, MCDBA, SQL Server MVP
Computer Education Services Corporation (CESC), Charlotte, NC
www.computeredservices.com
(Please respond only to the newsgroups.)
I support the Professional Association of SQL Server (PASS) and its
community of SQL Server professionals.
www.sqlpass.org
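In concrete terms, the back-up-then-retry loop Wayne describes might look like this. The database name and backup path are assumptions inferred from the log file name in the thread, and note that DBCC SHRINKFILE takes the logical file name (System21MD_Log here), not the database name as in the original attempt:

```sql
-- Back up the log so the inactive portion can be reused/released
BACKUP LOG System21MD TO DISK = 'D:\Backup\System21MD_log.bak'
GO
USE System21MD
GO
-- Target size in MB; repeat periodically until the active part of the
-- log has cycled away from the end of the file and the shrink succeeds
DBCC SHRINKFILE (System21MD_Log, 256)
GO
```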
"Adam" <anonymous@.discussions.microsoft.com> wrote in message
news:0b2b01c3d52a$07340be0$a301280a@.phx.gbl...
> I have a problem with one of the SQL Logs where the LOG
> file has reached 5GB in size the DB is only around 400MB,
> I have also tried running the command:
> dbcc shrinkfile (database, 256)
> I have tried altering the number to reflect the size but
> alway get the following:
> Cannot shrink log file 2 (System21MD_Log) because all
> logical log files are in use.
> (1 row(s) affected)
>
> also under grids I get:
> DbId Filed Current Min Used estimated
> size Size Size pages.
> 10 2 639912 128 639912 128
>
> could anyone advise howto shrink this logfile or truncate
> it, please. Any herlp is much appriciated.|||This is a multi-part message in MIME format.
--=_NextPart_000_0048_01C3D534.884AB940
Content-Type: text/plain;
charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Have you tried backing up the transaction log? This should truncate it.
Also, do you have any long running transactions? These inflate the transaction log, and if they are running at the time of backup, they will not be truncated.
-- Cheers,
James Goodman MCSE, MCDBA
http://www.angelfire.com/sports/f1pictures
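To check whether a long-running transaction is what is holding the log, DBCC OPENTRAN reports the oldest active transaction in the current database. A quick sketch, using the database name assumed from this thread:

```sql
USE System21MD
GO
-- Reports the oldest active transaction (SPID, start time, name), if any,
-- plus the oldest replicated transaction information
DBCC OPENTRAN
GO
```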

Wednesday, March 21, 2012

Log File Issue

I am still struggling with shrinking a log file that is
almost 5GB. The database is being replicated using merge
replication. I ran the following commands without any
success:
backup log Main_DB with truncate_only
The log was not truncated because records at the
beginning of the log are pending replication. Ensure the
Log Reader Agent is running or use sp_repldone to mark
transactions as distributed.
EXEC sp_repldone @xactid = NULL, @xact_segno = NULL,
@numtrans = 0, @time = 0, @reset = 1
Server: Msg 18757, Level 16, State 1, Procedure
sp_repldone, Line 1
The database is not published.
dbcc opentran
Transaction information for database 'Main_DB'.
Replicated Transaction Information:
Oldest distributed LSN : (0:0:0)
Oldest non-distributed LSN : (305:22434:1)
DBCC execution completed. If DBCC printed error messages,
contact your system administrator.
How do I go about shrinking/truncating the log file
without having to remove replication from the database or
is removing and reestablishing replication my only
solution?
Thanks
Emma
Open a support case with PSS.
There is no such thing as a pending replicated transaction with merge as it
does not use the tran log. There is also no such thing as a log reader with
merge either. Sp_repldone won't do anything since it operates against the
distribution database and will have no effect with merge since merge doesn't
use the distribution database for transactions.
You have something else going on and if you don't have transactional
replication configured, you did at some point and something has been left
hanging.
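One way to see what replication state has been left hanging is to inspect the flags SQL Server 2000 records in the system catalog; a sketch (the category column is a bitmap where 1 = published, 2 = subscribed, 4 = merge published, 8 = merge subscribed):

```sql
-- Inspect the replication flags recorded for the database
SELECT name, category
FROM master..sysdatabases
WHERE name = 'Main_DB'
```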
Mike
Principal Mentor
Solid Quality Learning
"More than just Training"
SQL Server MVP
http://www.solidqualitylearning.com
http://www.mssqlserver.com
|||What happens if you stop and start SQL Server, and then try to truncate the
log?
"Emma" <eeemore@.hotmail.com> wrote in message
news:791d01c4312f$425272a0$a501280a@.phx.gbl...
> I am still struguling with shrinking a log file that is
> almost 5GB. The database is being replicated using merge
> replication. I ran the following commands without any
> success
> backup log Main_DB with truncate_only
> The log was not truncated because records at the
> beginning of the log are pending replication. Ensure the
> Log Reader Agent is running or use sp_repldone to mark
> transactions as distributed.
>
> EXEC sp_repldone @.xactid = NULL, @.xact_segno = NULL,
> @.numtrans = 0, @.time = 0, @.reset = 1
> Server: Msg 18757, Level 16, State 1, Procedure
> sp_repldone, Line 1
> The database is not published.
>
> dbcc opentran
> Transaction information for database 'Main_DB'.
> Replicated Transaction Information:
> Oldest distributed LSN : (0:0:0)
> Oldest non-distributed LSN : (305:22434:1)
> DBCC execution completed. If DBCC printed error messages,
> contact your system administrator.
>
> How do I go about shrinking/truncating the log file
> without having to remove replication from the database or
> is removing and reestablishing replication my only
> solution?
> Thanks
> Emma
|||Hilary,
The same thing happens when I stop and restart the
service.
Emma

|||Michael,
What is PSS and how do I contact them?
Thanks
Emma

|||Emma,
this is the relevant webpage:
http://support.microsoft.com/default.aspx?scid=fh;en-us;Prodoffer41a&sd=MVP
Regards,
Paul
|||I set up a standby/test server with the database,
removed replication completely, and recreated it, and I was
able to shrink the log file. There must have been a
problem in the original setup of the database, or the
applications accessing the database were leaving transactions
open. I will go through the process again and document
everything I do, and hope it works on the production
server.
I will have to find the appropriate time to try this on
the production server as well. It will be a pain doing
this on the production server because there are about 12
publications there. I do not want to script the
publications because if there is an error in the original
setup, it may be carried over.
Thanks for all your help.
Emma
|||Hi all,
I had a similar problem but seemed to solve it by backing up the database first; the documentation suggests the log needs to be checkpointed, and that occurs on a backup. I did this through Query Analyzer; I can't vouch that this works via Enterprise Manager or SQL-DMO.
/* Backup */
use master
exec sp_addumpdevice 'disk', 'databak', 'c:\databak.dat'
exec sp_addumpdevice 'disk', 'logbak', 'c:\logbak.dat'
backup database target to databak
backup log target to logbak
/* Shrink log */
use target
declare @fid int
select @fid = File_ID('target_log')
dbcc shrinkfile (@fid)|||Hi everyone,
I got a solution to this issue. I got stuck with this problem when I restored a database from a production server to a development server. On the production server this database was also being used in replication.
I restored this database from the production server to the development server, and after that I wanted to truncate the log file of the restored copy.
When I executed this query:
Backup Log <MyDatabaseName> With Truncate_Only
I got this nice message:
The log was not truncated because records at the beginning of the log are pending replication. Ensure the Log Reader Agent is running or use sp_repldone to mark transactions as distributed.
I used sp_repldone then I got the message:
Database is not published.
DBCC Shrinkfile also did not work.
Then I tried a trick, and guys, it worked. What I did, I am writing in steps:
1. I published this database using the following query:
Execute sp_replicationdboption <MyDatabaseName>, Publish, True, 1
Here 1 is used for the parameter @ignore_distributor, which can be 0 or 1.
If it is set to 0 then this stored procedure will try to connect to the distributor database and update it with the new status of the publication database. I used 1 because, as I said earlier, I restored my database on the development server and so there was no distribution database.
2. Then I used that so-called sp_repldone to mark the log records as distributed, as given below:
Execute sp_repldone @xactid = NULL, @xact_segno = NULL, @numtrans = 0, @time = 0, @reset = 1
3. Then I used DBCC SHRINKFILE(<MyDatabaseLogFileName>, 0)
The log was shrunk successfully.
So, this was all the happy story guys.
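The three steps above, combined into one script. Names in angle brackets are placeholders; as in the post, this assumes a restored copy with no distributor, and SQL Server 2000-era syntax:

```sql
-- 1. Mark the database as published, ignoring the (absent) distributor
EXEC sp_replicationdboption '<MyDatabaseName>', 'publish', 'true', 1
-- 2. Mark all pending replicated transactions in the log as distributed
EXEC sp_repldone @xactid = NULL, @xact_segno = NULL,
                 @numtrans = 0, @time = 0, @reset = 1
-- 3. Shrink the log file (logical file name; target size 0 = smallest)
DBCC SHRINKFILE ('<MyDatabaseLogFileName>', 0)
-- Optionally clear the publish flag again afterwards
EXEC sp_replicationdboption '<MyDatabaseName>', 'publish', 'false', 1
```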
sujeet4u <s.sp007@yahoo.co.in>

Wednesday, March 7, 2012

Log Cache Hit Ratio + Buffer Cache Hit Ratio

Thanks in advance for any info.
SQL Server Standard/ Windows 2000 AS, Dual Proc (1.4's I think), 1.5GB RAM.
RAID 5.
What does one do when the Buffer Cache is at (and stays at) 99% and the Log
Cache is at 99%? Faster processors, more memory, change the RAID config?
Note: this is not daily activity on the box, but a monthly process that has
gone from 18 hours to 28 hours {and counting} with no changes.
Thanks,
Morgan|||If you're talking about the hit ratios, those are both
excellent numbers. Anything over 95% is optimal. When
those numbers start to drop, in general more memory is
the key, but be sure to check into it more closely before
purchasing hardware.
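For reference, both ratios can also be read from inside the server on SQL Server 2000 rather than from Perfmon; a sketch (each ratio is stored as a value/base counter pair in master..sysperfinfo, so the percentage is the ratio row divided by its base row):

```sql
-- Raw cache hit ratio counters; divide each ratio by its "base" row
SELECT object_name, counter_name, cntr_value
FROM master..sysperfinfo
WHERE counter_name LIKE '%Cache Hit Ratio%'
```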
|||Morgan,
99% cache hit ratios are optimal.
This means that the server is practically always getting data from the
caches, rather than going to the disk subsystem.
It sounds more like the work needs to be done in the monthly process, as far
as optimizing it goes. Your hardware seems to be performing optimally.
When did it go from 18 to 28 hours? Suddenly? Over time? I would start
suspecting one of the following:
That perhaps either:
A.) The data set has grown
B.) Someone changed one of the queries
C.) Someone dropped a key index
D.) One or more of the above. :-)
James Hokes|||In addition to the other comments, the log can be a bottleneck if on a RAID
5, and especially with the data. The log cache ratio doesn't mean much for
writes, so you may want to check your disk queues on the RAID that houses the
log file anyway and make sure it has no issues. TempDB can be another place
to look. Usually a month-end type process does a lot of activity that uses
temp tables, and you can have bottlenecks there as well. Essentially you
need to monitor the CPU and disk counters while this process is happening to
see where the bottleneck is coming from. If the data cache ratio is 99%,
chances are more memory won't help much. If your procedures are optimized
then it will boil down to disk or CPU. But how sure are you that they are
that optimized?
--
Andrew J. Kelly
SQL Server MVP
"Morgan" <mfears@.spamcop.net> wrote in message
news:OJWUJA%23yDHA.2540@.tk2msftngp13.phx.gbl...
> Thanks in advance for any info.
> SQL Server Standard/ Windows 2000 AS, Dual Proc (1.4's I think), 1.5GB
RAM.
> RAID 5.
> What does one do when the Buffer Cache is at (and stays at) 99% and the
Log
> Cache is at 99%? Faster processors, more memory, change the RAID config?
> Note: this is not daily activity on the box, but a monthly process that
has
> gone from 18 hours to 28 hours {and counting} with no changes.
>
> Thanks,
> Morgan
>|||On Fri, 26 Dec 2003 13:42:08 -0500, "Morgan" wrote:
In addition to what James, James and Andrew said, look at your
statistics. As your database grows over time, the distribution of values
across columns can change substantially. Out of date stats can
contribute to bad query plans, leading to long runs.
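If stale statistics are the culprit, refreshing them before the monthly run is cheap insurance; a sketch (the table name is hypothetical):

```sql
-- Rebuild statistics for one suspect table with a full scan...
UPDATE STATISTICS BigMonthlyTable WITH FULLSCAN
-- ...or refresh statistics for every table in the current database
EXEC sp_updatestats
```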
Are you able to pinpoint which part(s) of your monthly process chews the
most time?
cheers,
Ross
--
Ross McKay, WebAware Pty Ltd
"The lawn could stand another mowing; funny, I don't even care"
- Elvis Costello|||Thanks to everyone for their insight.
I was somewhat under the impression that if they're both max'd out, then
it's time to start looking at HW. I know for a fact the procedures need
"help", but it's an inherited process that for the first time this month,
took much longer than usual, which is why I knee-jerked my post. To the best
of my knowledge, none of the underlying objects has changed, but upon
further review (below), several tables are missing indices where they are
required. I also saw quite a bit of parallelism happening via sp_who, and
have begun to question the multi-proc machine, which for better or worse, is
most likely fine.
I ran Profiler off-and-on (didn't want to make it any worse than it was),
and found a single procedure that appeared to be somewhat of a bottleneck,
lots of extensive reads for a single query, minimal (single, actually)
writes. Fortunately, a predecessor had the forethought to not use temp
tables; however, the tables in use are not properly indexed. Upon review of
the primary table in the aforementioned procedure, it turns out there are
no indexes or unique constraints; along with a truncate table statement
against the table at the start of the procedure. I know the cost of a
properly placed index will fix the amount of reads against the suspect table
I saw in Profiler. Top all this off with cursors galore (note: not the
author, but the maintainer), and you can easily appreciate the mess I'm
dealing with. I'm reading this as the beginning of the end for the current
processes... ;)
Thanks again,
Morgan
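The fix Morgan describes, a properly placed index on the truncate-and-reload table, might look like this. Table and column names are hypothetical since the procedure isn't shown; note that an index survives TRUNCATE TABLE, so it only needs creating once:

```sql
-- Hypothetical index covering the heavy-read lookup seen in Profiler
CREATE INDEX IX_Monthly_Account_Date
    ON dbo.MonthlySummary (AccountID, PostingDate)
```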
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:uOYObWMzDHA.2408@.tk2msftngp13.phx.gbl...
> In addition to the other comments the log can be a bottle neck if on a
Raid
> 5 and especially with the data. The Log cache ration doesn't mean much for
> writes so you may want to check your disk queues on the Raid that houses
the
> log file anyway and make sure it has no issues. TempDB can be another
place
> to look. Usually month end type process does a lot of activity that uses
> temp tables and you can have bottlenecks there as well. Essentially you
> need to monitor the cpu and disk counters while this process is happening
to
> see where the bottleneck is coming from. If the data cache ratio is 99%
> chances are more memory won't help much. If your procedures are optimized
> then it will boil down to disk or cpu. But how sure are you that they are
> that optimized?
> --
> Andrew J. Kelly
> SQL Server MVP
>
> "Morgan" <mfears@.spamcop.net> wrote in message
> news:OJWUJA%23yDHA.2540@.tk2msftngp13.phx.gbl...
> > Thanks in advance for any info.
> >
> > SQL Server Standard/ Windows 2000 AS, Dual Proc (1.4's I think), 1.5GB
> RAM.
> > RAID 5.
> >
> > What does one do when the Buffer Cache is at (and stays at) 99% and the
> Log
> > Cache is at 99%? Faster processors, more memory, change the RAID config?
> > Note: this is not daily activity on the box, but a monthly process that
> has
> > gone from 18 hours to 28 hours {and counting} with no changes.
> >
> >
> >
> > Thanks,
> > Morgan
> >
> >
>