UNCOMMITTED DATA IN DATAFILES

I know that when we issue COMMIT, LGWR writes committed and uncommitted transactions from the redo log buffer into the redo log files.
And when a CHECKPOINT occurs, DBWR writes committed and uncommitted transactions from the database buffer cache into the data files.

1. Why does the LGWR process write uncommitted transactions to the redo log files?
2. Why does DBWR write uncommitted transactions to the data files?

Why do we need these uncommitted transactions in our Redo Log and Data Files?

Well, just to clarify, log writer is writing committed and uncommitted transactions from the redo log buffer to the log files more or less continuously – not just on commit. A redo write is triggered when the log buffer holds about 1MB of redo, when it is 1/3 full, every 3 seconds, or on every commit – whichever comes first.
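That "whichever comes first" behavior can be sketched as a simple trigger check. This is a toy Python model, not Oracle's actual code; the thresholds are the commonly documented ones:

```python
# Toy model of LGWR's flush triggers (illustrative only, not Oracle code).
# LGWR writes the redo log buffer to disk when ANY of these fires:
#   - a session commits
#   - the buffer is 1/3 full
#   - ~1 MB of unflushed redo has accumulated
#   - 3 seconds have passed since the last write

ONE_MB = 1024 * 1024

def should_flush(commit_requested, buffer_used, buffer_size, seconds_since_write):
    """Return True if any LGWR write trigger has fired."""
    return (
        commit_requested
        or buffer_used >= buffer_size / 3
        or buffer_used >= ONE_MB
        or seconds_since_write >= 3
    )

# The flush writes everything in the buffer -- committed and uncommitted
# redo alike -- which is why uncommitted changes end up in the online
# redo log files.
print(should_flush(False, 200_000, 3 * ONE_MB, 1))      # no trigger -> False
print(should_flush(True, 200_000, 3 * ONE_MB, 1))       # commit -> True
print(should_flush(False, 2 * ONE_MB, 12 * ONE_MB, 1))  # >= 1MB of redo -> True
```

The point of the model: the flush is triggered by buffer state and time, not by transaction boundaries, so LGWR has no reason (and no way) to separate committed redo from uncommitted redo.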

And DBWR is more or less continuously checkpointing these days – with fast_start_mttr_target and the other log-related parameters, DBWR is flushing dirty blocks to disk more or less "all of the time". There is still a major event called a checkpoint, but it is more of a marker than anything else, since the work is ongoing.

1)
Because you can do transactions that are much larger than available memory – there isn't room to buffer it all. Also, if you saved it all up, a commit could take a very, very long time – we want to get ahead of the curve.

You can have transactions that are much larger than available memory.

You can have transactions that are much larger than available online redo log (which leads to #2 in a second)

2) DBWR will write dirty blocks out to disk in order to:

a) allow an online redo log file to be reused. If DBWR did not put the blocks safely onto disk, we could not reuse the redo log file (which contains the redo needed to redo those changes). If we cannot reuse the redo log file, we would STOP DEAD. Hence, since stopping dead in our tracks is a bad thing, we put the dirty blocks onto disk so we can reuse the redo log file.

b) make more space in the buffer cache. You can have modifications that exceed the available SGA by many times, and we are multi-user – many people can be doing things at once. We cannot be constrained by memory limitations.

c) limit the time it takes to recover from a crash. Say we were doing a big update and updated a ton of information. We commit, and we crash immediately after the commit. If DBWR had left all of the dirty blocks in the cache, we'd have to wait during startup for that transaction's redo to be replayed – and that could take a long time. In order to reduce the mean time to recover the database after a crash, we have parameters like fast_start_mttr_target.
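The idea behind c) can be illustrated with some toy arithmetic. fast_start_mttr_target is a real Oracle parameter, but the formula below is a deliberately simplified sketch, not Oracle's actual algorithm:

```python
# Toy model of incremental checkpointing driven by a recovery-time target
# (illustrative; the arithmetic here is a made-up simplification).

def blocks_dbwr_must_flush(dirty_blocks, replay_rate_blocks_per_sec, mttr_target_sec):
    """Estimate how many dirty blocks DBWR should write now so that
    crash recovery (replaying redo for still-dirty blocks) stays
    within the target recovery time."""
    allowed_backlog = replay_rate_blocks_per_sec * mttr_target_sec
    return max(0, dirty_blocks - allowed_backlog)

# 100,000 dirty blocks in the cache, recovery can replay ~2,000 blocks/sec,
# and we want crash recovery to finish within 30 seconds:
print(blocks_dbwr_must_flush(100_000, 2_000, 30))  # 40000 blocks to flush now
```

The takeaway is the trade-off: the smaller the allowed backlog of dirty blocks, the faster crash recovery is, but the more continuous write work DBWR must do – which is exactly why DBWR writes (committed and uncommitted) dirty blocks all the time rather than waiting.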

The database writer can also write uncommitted data to the datafiles as a result of the following activities:

Flush buffer cache
If the buffer cache containing dirty blocks is flushed, dirty blocks containing uncommitted data are written to the datafiles.

Before a logfile gets overwritten
A redo log cannot be overwritten until the changes recorded in it are written to disk by DBWR. Hence, before a redo log containing uncommitted data is overwritten, the uncommitted changes recorded in it are written to the datafiles by DBWR.
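This reuse rule can be sketched as a toy model (illustrative Python, not Oracle's implementation; in a real instance, a session blocked on this condition shows up as the "checkpoint not complete" message in the alert log):

```python
# Toy model of online redo log reuse (illustrative, not Oracle's code).
# A log group can only be overwritten once every dirty block whose redo
# it protects has been written to the datafiles by DBWR.

def can_reuse_log(log_group, dirty_blocks_protected_by):
    """A redo log group is reusable only when no dirty buffer still
    depends on its redo for crash recovery."""
    return len(dirty_blocks_protected_by.get(log_group, [])) == 0

dirty = {"log_1": ["block 7", "block 42"], "log_2": []}

print(can_reuse_log("log_1", dirty))  # False: DBWR has not flushed these yet
print(can_reuse_log("log_2", dirty))  # True: safe to overwrite

# Simulate DBWR flushing the dirty blocks protected by log_1:
dirty["log_1"].clear()
print(can_reuse_log("log_1", dirty))  # True: now log_1 can be reused
```

Note that "dirty block" here covers committed and uncommitted changes alike – which is why this mechanism can push uncommitted data into the datafiles.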

Manual checkpoint
Whenever a checkpoint is initiated manually, all the dirty blocks in the buffer cache containing committed/uncommitted data are flushed to disk.

Tablespace operations
Uncommitted data belonging to a tablespace is flushed to disk as a result of:
Tablespace offline
Tablespace read only
Tablespace begin backup

References
asktom.oracle.com
oracleinaction.com
