Oracle uses log files for REDO and has ROLLBACK_SEGMENTS or undo segments (depending on the Oracle version) for UNDO. It never uses log files for UNDO, and UNDO is what provides read consistency/MVCC in an Oracle database.

Changes are written to the LOG_BUFFER (in memory) and periodically flushed to the REDO logs: on commit, every 3 seconds at most, or when the buffer is about a third full. These REDO logs might be archived to disc when they fill up; that depends on the database's archive log mode, though. These logs are used when a database is restored and rolled forward (using the RECOVER DATABASE command, for example).

To roll back changes, and to ensure read consistency, UNDO is used. Undo segments do live on disc, as tablespace files, but are kept in memory in the buffer cache alongside data blocks etc.

When a SELECT starts, the data returned come from the data blocks. Each row in a block has an indicator that tells when it was last updated. If a pending update is taking place (currently uncommitted), or if a commit has taken place since this SELECT started, then the data read from that data block have changed and are not consistent with the start time of this SELECT. When this is detected, Oracle "rolls back" the changes to the start time of the SELECT by looking for the UNDO block(s) associated with the transaction that made the changes. If that results in the correct (consistent) data, that's what you get. If it turns out that there were other transactions that also changed the data, they too will be detected and undone. In this way you only ever see data that were consistent at the start of your own transaction.

As long as the DBA correctly sizes the UNDO tablespace and sets the UNDO_RETENTION parameter to a decent enough value, data changes can be rolled back happily all the time. If the DBA failed miserably in his/her duties, "ORA-01555: snapshot too old" errors are the result. And they are most irritating.
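The read-consistency mechanism described above can be sketched in a few lines of plain Python. This is a toy model, not Oracle internals: all class and field names here are invented for illustration. A reader notes the change number at which its SELECT began, and any block changed after that point is rolled back from saved undo records before being returned; if the needed undo is gone, you get the "snapshot too old" failure.

```python
# Toy model of Oracle-style read consistency via undo.
# Illustrative only; names are invented, not Oracle internals.

class Block:
    def __init__(self, value, scn):
        self.value = value      # current row data
        self.scn = scn          # change number of the last update to this block

class Database:
    def __init__(self):
        self.scn = 0            # system change number (a logical clock)
        self.blocks = {}        # block_id -> Block
        self.undo = {}          # block_id -> list of (old_scn, old_value)

    def write(self, block_id, value):
        """Update a block, saving the prior version to undo first."""
        self.scn += 1
        old = self.blocks.get(block_id)
        if old is not None:
            self.undo.setdefault(block_id, []).append((old.scn, old.value))
        self.blocks[block_id] = Block(value, self.scn)

    def read_consistent(self, block_id, start_scn):
        """Return the block's value as of start_scn, applying undo if needed."""
        blk = self.blocks[block_id]
        value, scn = blk.value, blk.scn
        # Roll the block back until its change number predates the reader.
        for old_scn, old_value in reversed(self.undo.get(block_id, [])):
            if scn <= start_scn:
                break
            value, scn = old_value, old_scn
        if scn > start_scn:
            raise RuntimeError("ORA-01555: snapshot too old")  # undo exhausted
        return value

db = Database()
db.write("b1", "v1")                       # change 1
snapshot = db.scn                          # a long-running SELECT starts here
db.write("b1", "v2")                       # change 2, after the SELECT began
print(db.read_consistent("b1", snapshot))  # -> v1, not v2
```

If the undo entries for "b1" were purged before the reader finished (the real-world effect of an undersized UNDO tablespace or too small an UNDO_RETENTION), the rollback loop would run out of versions and the sketch raises its stand-in for ORA-01555.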
Long-running SELECTs, batch reports for example, tend to hit this error most often.

Of course, you would never see such problems with Firebird, because the old record versions are stored in the database, not in log files. You don't have to care if the system crashes: after a reboot it simply works. You might think the engineers who built Firebird are smarter than Oracle's, but sometimes I think Oracle is deliberately made so complicated in order to require a DBA, and also to offer them job security. It also makes sure nobody can complain it's too easy to use.
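The contrast with Firebird's approach can be sketched the same way. This is again a toy model with invented names, not Firebird internals: each record keeps a chain of back versions inside the database itself, so a reader simply walks the chain to the newest version its snapshot is allowed to see, and no separate undo store or log replay is involved.

```python
# Toy model of Firebird-style in-database record versioning.
# Illustrative only; names are invented, not Firebird internals.

class RecordVersion:
    def __init__(self, value, txn_id, back=None):
        self.value = value
        self.txn_id = txn_id    # transaction that created this version
        self.back = back        # link to the previous version of the record

class Table:
    def __init__(self):
        self.records = {}       # record_id -> newest RecordVersion

    def update(self, record_id, value, txn_id):
        """Store a new version, chaining the old one behind it."""
        prev = self.records.get(record_id)
        self.records[record_id] = RecordVersion(value, txn_id, back=prev)

    def read(self, record_id, snapshot_txn):
        """Walk back to the newest version visible to this snapshot."""
        v = self.records.get(record_id)
        while v is not None and v.txn_id > snapshot_txn:
            v = v.back
        return v.value if v is not None else None

t = Table()
t.update("r1", "old", txn_id=10)
t.update("r1", "new", txn_id=20)
print(t.read("r1", snapshot_txn=15))  # -> old
print(t.read("r1", snapshot_txn=25))  # -> new
```

Because the version chain lives in the database pages themselves, a crash loses nothing a reader needs: whatever versions were committed are still there when the system comes back up, which is the "after reboot it simply works" behaviour described above.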
Dec 16, 2011
Why is Firebird better DBMS than Oracle?
I can't resist reposting this from Milan Babuskov's blog:
Besides being free (both as in beer and as in open source), you don't need a 24x7 DBA and there are generally fewer headaches. Here's a nice example, explained by Norman Dumbar in a mailing-list post. Norman administers over 600 Oracle databases and about 40 Firebird ones: