I fixed the main problems in my code, so from a debugging perspective, it was a good test. And I was happy to see that, with the dual cores, I could run the user interface without shutting down the IDS. However, Xorg started hanging occasionally, and there was no choice but to do a hard reset.
This was just an annoyance until I started getting PostgreSQL errors, such as "Could not open file pg_clog/000N", which caused me to lose several days' worth of reports. For a pilot project, that is not critical, but it certainly raised a flag. So, I am going to document what I have done as well as what I have found from others.
First, back up your data. For my database, it is sufficient to run pg_dump to create scripts that insert the data back into the tables. But there are also options for creating compressed archives and later restoring them with pg_restore.
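For example, something like this (the database name "snort" is just a placeholder for whatever your reporting database is called):

    # plain SQL script with INSERT statements
    pg_dump --inserts snort > snort_backup.sql

    # or a compressed custom-format archive, restorable with pg_restore
    pg_dump -Fc snort > snort_backup.dump
    pg_restore -d snort snort_backup.dump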
Unfortunately, the most recent backup I had was from a month before, so I wanted to do something about the pg_clog file. Here is what I did:
- I tried to run pg_dump, but that caused a really bad error that resulted in the partition being remounted read-only. At that point, I had no choice but to run fsck and reboot.
- With the file system errors fixed, I was able to run pg_dump and save all but a couple dozen reports. I then tried the REINDEX TABLE command, but without the pg_clog file, it failed.
- I was forced to use the DROP TABLE command on the table with the bad index, and then used the original CREATE TABLE script and the backup script to restore the data (a sketch of this follows the list).
- Unfortunately, performance when accessing that table, and another table related to it, was horrible. So I ended up taking another backup, deleting all of the Incidents tables, recreating them, and then restoring the data.
- If you still have a live database, running "SELECT ctid FROM tab WHERE ..." for the records with unreasonable values may tell you which blocks are corrupted. The value before the comma in the ctid is the block number; multiplying it by 8192 (assuming you're using 8K blocks) gives the file offset of the page. To find the file that holds the block, run "SELECT relfilenode FROM pg_class WHERE relname = 'tablename';". The answer is a number that is also the filename, such as 16384. Note that if the file offset is over 1GB, you would be looking for a file named 16384.N, where N is which gigabyte chunk (see the worked example after this list).
- Create an empty file with the command "touch /POSTGRESDIR/pg_clog/000n". Next, fill the file with zeros in 8K blocks, repeating the command "dd bs=8k count=1 < /dev/zero >> /usr/local/pgsql/data/pg_clog/000n" until the offset is covered. If there are other files in pg_clog, make the new file of zeros the same size as those (a scripted version follows the list).
- If you want to try to narrow down where the corruption is, you can experiment with commands like "SELECT ctid,* FROM big_table OFFSET N LIMIT 1;", varying N as a binary search (sketched below the list).
- Use pg_resetxlog to reset the write-ahead log (located in /usr/lib/postgresql/8.3/bin/pg_resetxlog under Debian/Ubuntu); see the example after this list.
- Dump and reload the data on another machine. A problem that can appear is data that violates constraints (like NOT NULL). Remove all the constraints, then add them back one by one, cleaning out the data that violates each (an example follows the list).
- You can set client_min_messages in postgresql.conf to a DEBUG level to get some more information (example below).
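For the drop-and-recreate step, this is roughly what it looks like in psql; the table name "incidents" and the script filenames are placeholders, not my actual schema:

    -- drop the table with the bad index
    DROP TABLE incidents;
    -- recreate it from the original schema script
    \i create_incidents.sql
    -- reload the rows saved by pg_dump
    \i incidents_backup.sql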
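Here is a worked example of the ctid arithmetic; the table name, WHERE clause, and numbers are all made up:

    SELECT ctid FROM tab WHERE report_time > now();
    -- suppose it returns (70000,3): block 70000, row 3
    -- 70000 * 8192 = 573,440,000, the byte offset of the page

    SELECT relfilenode FROM pg_class WHERE relname = 'tab';
    -- suppose it returns 16384: the page starts 573,440,000 bytes
    -- into the file named 16384 (under 1GB, so no .N suffix);
    -- at, say, 1.5GB it would instead be in 16384.1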
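The touch/dd step can be scripted. A minimal sketch, assuming the missing segment is 0003 and that the existing pg_clog files are 256KB, i.e. 32 blocks of 8K, which is the usual size of a full clog segment:

    CLOG=/usr/local/pgsql/data/pg_clog   # adjust to your data directory
    touch $CLOG/0003
    # append 8K blocks of zeros until the file matches the others
    for i in $(seq 1 32); do
        dd bs=8k count=1 < /dev/zero >> $CLOG/0003
    done

The same thing in one shot: "dd if=/dev/zero of=$CLOG/0003 bs=8k count=32".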
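The OFFSET trick amounts to a manual binary search. With big_table again a placeholder:

    -- if the table has roughly 1,000,000 rows, start in the middle
    SELECT ctid,* FROM big_table OFFSET 500000 LIMIT 1;
    -- error: the bad page is in the first half, so try OFFSET 250000
    -- success: try OFFSET 750000, and keep halving until it is pinned down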
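pg_resetxlog is run against the data directory, as the postgres user, with the server stopped. The paths below are the Debian/Ubuntu defaults for 8.3; adjust for your cluster:

    /etc/init.d/postgresql-8.3 stop
    su - postgres -c '/usr/lib/postgresql/8.3/bin/pg_resetxlog /var/lib/postgresql/8.3/main'
    /etc/init.d/postgresql-8.3 start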
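For the constraint cleanup, something like this; the table, column, and constraint names are placeholders:

    -- drop the constraints so the reload can complete
    ALTER TABLE incidents ALTER COLUMN src_ip DROP NOT NULL;
    ALTER TABLE incidents DROP CONSTRAINT incidents_pkey;
    -- reload the dump, then clean out the violating rows
    DELETE FROM incidents WHERE src_ip IS NULL;
    -- add the constraints back one by one
    ALTER TABLE incidents ALTER COLUMN src_ip SET NOT NULL;
    ALTER TABLE incidents ADD PRIMARY KEY (id);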
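In postgresql.conf that is one line (the default is notice; debug1 through debug5 get progressively noisier):

    client_min_messages = debug1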
Later . . . Jim