Byte.com: Journaling File Systems For Linux - Why And How To Implement Journaling
May 30, 2000, 20:55
(Other stories by Moshe Bar)
"Journaling file systems are safer than traditional file systems because they keep track of changes applied to the disks' content on a separate log file. They either commit a change or roll back in a transactional manner, much like an RDBMS...."
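The commit-or-roll-back behavior described above can be illustrated with a minimal sketch. This is not a real file-system implementation; the journal file name and the record format are invented for illustration. The key idea is the same: record the intended change durably in a log before applying it, so that after a crash a complete record can be replayed and a torn, half-written record can be discarded.

```python
import json
import os

# Illustrative write-ahead journal (hypothetical, not a real FS):
# every change is appended to a journal file and flushed to disk
# BEFORE it is applied to the live data.
JOURNAL = "journal.log"

def apply_change(state, key, value):
    # 1. Durably record the intent in the journal.
    with open(JOURNAL, "a") as j:
        j.write(json.dumps({"key": key, "value": value}) + "\n")
        j.flush()
        os.fsync(j.fileno())
    # 2. Only then apply the change to the live state.
    state[key] = value
    return state

def recover(state):
    # On reboot, replay every complete journal record. A torn final
    # line (crash mid-write) fails to parse and is simply ignored --
    # the transactional "roll back".
    if not os.path.exists(JOURNAL):
        return state
    with open(JOURNAL) as j:
        for line in j:
            try:
                rec = json.loads(line)
            except json.JSONDecodeError:
                break  # incomplete record: discard and stop
            state[rec["key"]] = rec["value"]
    return state
```

Because replaying the log touches only the records written since the last checkpoint, recovery time depends on journal size rather than disk size, which is what makes the reboot fast.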
"Imagine now that you are updating a directory entry. You've just modified 23 file entries in the fifth block of some giant directory entry. Just as the disk is in the middle of writing this block there is a power-outage; the block is now incomplete, and therefore corrupted."
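The torn-write hazard described above is the same one userspace programs face when updating a file in place. A common mitigation (separate from kernel-level journaling, and shown here only as an analogy) is to write the new contents to a temporary file and atomically rename it over the original, so a crash leaves either the old version or the new one, never a half-written block. The function name and file paths here are illustrative.

```python
import os
import tempfile

def atomic_write(path, data):
    # Write to a temp file in the same directory, fsync it, then
    # rename over the target. On POSIX, rename is atomic, so readers
    # observe either the complete old file or the complete new file.
    dirname = os.path.dirname(path) or "."
    fd, tmp = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, "w") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp, path)  # atomic on POSIX
    except BaseException:
        os.unlink(tmp)
        raise
```

A journaling file system solves the same problem one level down, for its own metadata blocks, without requiring applications to adopt this pattern.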
"During reboot, Linux (like all Unix machines) runs a program called "fsck" (file system check) that steps through the entire file system validating all entries and making sure that blocks are allocated and referenced correctly. It will find this corrupted directory entry and attempt to repair it. There is no certainty that fsck will actually manage to repair the damage. Quite often, actually, it does not. Sometimes, in a situation as described above, all the directory entries can be lost."
"For large file systems, fsck can take a very long time. On a machine with many gigabytes of files, fsck can run for 10 or more hours. During this time, the system is obviously unusable, and for some shops this represents an unacceptable amount of downtime. This is where journaling file systems help."