Monday, November 8, 2010

FOSS GUI IDEs

A question was recently posed on my LUG's mailing list asking what advantages there are to using a GUI IDE over a terminal based editor, such as vim or emacs. Here is my response:

Probably the main difference between terminal editors and GUIs is that the terminal editors require you to use a key sequence to perform tasks while the GUIs automagically display information in popups. If you have spent enough time to learn the key sequences, then (IMO) not having to move your hand between the keyboard and mouse gives terminal editors the advantage. Also, different GUIs provide different capabilities, and none of the ones I have used (see below) are a clear winner over the others.

These features include:

  • Line collapsing
  • Auto-completion of function and variable names
  • Popups for functions showing the arguments, including data type
  • Popups showing the values of macro definitions
  • Popups showing the documentation for functions/classes (a man page in a popup)
Theoretically, the GUIs also offer better debugging capabilities. However, I am more comfortable with command line gdb. To me the GUIs are way too busy, and I find the command line more flexible (p *mystruct or x/4x 0x80000c00, as opposed to a minimum of four clicks, and at least ten to find the menu, for doing anything other than displaying local variables).

However, there are some things that (AFAIK) GUIs can do that terminal editors can't. For example, when a compile has errors, being able to click on a compile error message and go straight to the flagged line in the source. (If anyone tells me that this is a simple matter of using ctags, then I demand you provide a simple explanation of how to use ctags to do this and all the other wonderful things that ctags claim to provide! Otherwise, I will continue to believe that ctags are a waste of time and resources.)

I have used Anjuta (which is written in C/C++ and uses GTK), Eclipse, and lately the Nokia SDK for Qt development. Here are my impressions:

Eclipse: Don't add C plugins to Eclipse, get the Ganymede Eclipse IDE for C/C++. I tried going the first route and it was ugly. Beyond that, yes, Eclipse is best at Java, but this version is pretty good at C/C++. When the cursor is on a variable or function/class, all other instances of it within the current scope are highlighted. If you know your way around Eclipse, configuration is easier than with the others.

Anjuta: What I love about Anjuta is that it built my autoconf and automake files for me. I was then able to tweak those to handle my custom requirements. If you have ever battled ac/am you know that every bit of help is welcome. Unfortunately, when I upgraded to a newer version, a lot of things changed. My older project files didn't quite work, so I spent a lot of time fixing those. Editor functions were different. And clicking on compile errors no longer jumped to the source code.

Nokia Qt SDK: This includes a designer for GUI (Qt specific) interfaces. Having used (and been spoiled by) the Google Android plugin for Eclipse and its GUI designer, I am not all that impressed, but it's better than not seeing the layout until you get your source compiled and debugged. Using the IDE for editing C/C++ is actually quite pleasant, and I am considering using it for future projects.

I have looked at KDevelop, but don't have any projects using it. Maybe the version I have is old, but so far it feels clunkier than the others.

Later . . . Jim


Wednesday, November 4, 2009

PostgreSQL File Corruption

The college where I am running my pilot project was able to let me have a faster, dual-core CPU. To my chagrin, I started getting more errors than before. But it really isn't that surprising, considering that the IDS is pretty intense, and the system is also running PostgreSQL and one or two Java apps.

I fixed the main problems in my code, so from a debugging perspective, it was a good test. And I was happy to see that, with the dual cores, I could run the user interface without shutting down the IDS. However, Xorg started hanging occasionally, and there was no choice but to do a hard reset.

This was just an annoyance until I started getting PostgreSQL errors, such as "Could not open file pg_clog/000N", which caused me to lose several days' worth of reports. For a pilot project, that is not critical, but it certainly raised a flag. So, I am going to document what I have done as well as what I have found from others.

First, back up your data. For my database, it is sufficient to run pg_dump to create scripts that reinsert the data into the tables. But there are also options for creating compressed archives and restoring them later with pg_restore.

Unfortunately, the most recent backup I had was from a month before, so I wanted to do something about the pg_clog file. Here is what I did:
  1. I tried to run pg_dump, but that caused a really bad error which resulted in the partition being remounted in Read Only mode. At that point, I had no choice but to run fsck and reboot.

  2. With the file system errors fixed, I was able to run pg_dump and save all but a couple dozen reports. I then tried the REINDEX TABLE command, but without the pg_clog file, it failed.

  3. I was forced to use the DROP TABLE command on the table with the bad index, and then used the original CREATE TABLE script and the backup script to restore the data.

  4. Unfortunately, the performance accessing that table and another one with a relationship to it was horrible. So I ended up taking another backup, deleting all of the Incidents tables, recreating them, and then restoring the data.
So that's my story. But I was hopeful that there was a better method, so I have done some searching and here is what I have found from others' experience:
  • If you still have a live database, running "SELECT ctid FROM tab WHERE ..." for the records with unreasonable values might tell you which blocks are corrupted. The value before the comma is the block number; multiplied by 8192 (assuming you're using 8K blocks), it gives the file offset of the page. To find which file contains the block, run "SELECT relfilenode FROM pg_class WHERE relname = 'tablename';". The answer is a number that is also the filename, such as 16384. Note that if the file offset is over 1G, you would be looking for a file named 16384.N, where N is which gigabyte chunk.

  • Create an empty file with the command "touch /POSTGRESDIR/pg_clog/000n". Next, fill the file with zeros in 8K blocks until the offset is covered, by repeating the command "dd bs=8k count=1 < /dev/zero >> /usr/local/pgsql/data/pg_clog/000n". If there are other files in pg_clog, create files of all zeroes the same size as those.

  • If you want to try to narrow down where the corruption is, you can experiment with commands like "SELECT ctid,* from big_table offset N limit 1;"

  • Use pg_resetxlog (located in /usr/lib/postgresql/8.3/bin/pg_resetxlog under Debian/Ubuntu)

  • Dump and reload the data on another machine. A problem which can appear is data that violates constraints (like NOT NULL). One should remove all the constraints and add them back one by one, cleaning out the data which violates them.

  • You can set client_min_messages in postgresql.conf to DEBUG to get some more information.
As you can see, there is no magic wand command to recover your data. But hopefully, this will give you a fighting chance.

Later . . . Jim

Thursday, July 16, 2009

Managing Realeyes Memory

I have been at work on a new project that I hope to announce soon. But at the moment I need a break to let the algorithm for a particularly tricky function germinate. So I am going to describe how I do memory management in the Realeyes IDS.

First, I have to say that this is not a generic memory manager. It is specific to my application. It may be possible to adapt it to other applications, but the key word here is 'adapt'. However, it will hopefully give anyone who is considering doing their own memory management some food for thought.

The reason I do my own memory management is to avoid fragmentation. The Realeyes IDS manages a lot of sessions simultaneously, so memory has to be used as efficiently as possible. If a buffer were allocated for exactly the size of a packet's data, the overall buffer space would develop lots of pockets of unusable space. But setting the size of data buffers to the largest allowed by the Internet Protocol would also be inefficient, because there are a huge number of tinygrams in network traffic.

The solution is a compromise. I allocate fixed size buffers in sizes that are designed to waste as little space as possible. The smallest is 64 bytes. The next is 105 bytes, and the next, 128 bytes. So if 56 bytes are needed, the first size buffer is allocated. If 65 bytes are needed, the second. And if 108 bytes are needed, the third.

If you did a double take on the 105 byte buffer size, there is a method to my madness. These buffers are kept in pools of 8 Kilobytes. A 64 byte buffer pool will hold 128 buffers, and a 128 byte buffer pool will hold 64 buffers. Both of these fit exactly into the 8K pool with no wasted space. To fine tune this a bit, I found the buffer size between them that wastes the least space: allocating 78 buffers of 105 bytes each uses 8190 bytes, which wastes only 2 bytes of an 8K pool.

Here is the complete list of buffer sizes: 64, 105, 128, 195, 256, 390, 512, 744, 1024, 2048, 4096, 8192. If larger sizes are needed, multiple adjacent pools are allocated, up to 64K. Again, this is specific to the Realeyes IDS application, which can guarantee that no buffer larger than 64K will ever be requested.
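Just to make the selection rule concrete, here is a minimal sketch (not the actual Realeyes code; the table and function name are mine) of mapping a request to the smallest size that holds it:

/* The pool buffer sizes listed above, smallest to largest */
static const unsigned int rae_buf_sizes[] = {
    64, 105, 128, 195, 256, 390, 512, 744, 1024, 2048, 4096, 8192
};
#define RAE_NUM_SIZES (sizeof(rae_buf_sizes) / sizeof(rae_buf_sizes[0]))

/* Return the smallest pool buffer size that can hold 'needed' bytes,
   or 0 to signal that multiple adjacent pools are required */
static unsigned int rae_pick_size(unsigned int needed)
{
    unsigned int i;

    for (i = 0; i < RAE_NUM_SIZES; i++)
        if (needed <= rae_buf_sizes[i])
            return rae_buf_sizes[i];
    return 0;
}

So a 56 byte request maps to a 64 byte buffer, 65 bytes to 105, and 108 bytes to 128, just as in the examples above; an 8K pool then holds 128, 78, or 64 of those buffers respectively.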

When the application is initialized, a huge buffer (many megabytes) is allocated and it is divided into 8K pools. Then, when a buffer is requested, if there is no pool for the appropriate buffer size already selected, the next available pool is assigned to provide buffers of that size only, and the first buffer in the pool is returned to the requester. If a pool already exists for the buffer size and has free buffers, a buffer from that pool is returned.

To handle the requests, each allocated pool is kept on one of three queues for that buffer size. The reason there are three is that the entire queue must be locked while buffers are being allocated or freed. The Realeyes IDS functions to handle semaphores allow for a caller to request an immediate return if a lock is already held. This means that if one of the pool queues is in use, the caller can try the next one. The rae_mem_mgmt.c module keeps track of the last queue accessed and uses a simple round robin method to check the next queue.
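To sketch the idea (names and structure invented for this example; the real code lives in rae_mem_mgmt.c and uses the semaphore functions described in a later post rather than pthread mutexes, which are used here only to keep the sketch self-contained):

#include <pthread.h>
#include <stddef.h>

#define RAE_NUM_QUEUES 3

/* Simplified stand-in for one buffer size's pool queues.  Each free list
   head points to the first free buffer, and each free buffer stores the
   pointer to the next one in its own first bytes. */
struct size_queues {
    pthread_mutex_t locks[RAE_NUM_QUEUES];
    void           *free_list[RAE_NUM_QUEUES];
    int             last_q;
};

/* Round robin over the queues, skipping any whose lock is already held */
void *get_buffer(struct size_queues *sq)
{
    int i, q;
    void *buf = NULL;

    for (i = 0; i < RAE_NUM_QUEUES && buf == NULL; i++)
    {
        q = (sq->last_q + 1 + i) % RAE_NUM_QUEUES;
        if (pthread_mutex_trylock(&sq->locks[q]) != 0)
            continue;                           /* queue busy, try the next one */
        buf = sq->free_list[q];
        if (buf != NULL)
        {
            sq->free_list[q] = *(void **)buf;   /* unlink the first free buffer */
            sq->last_q = q;
        }
        pthread_mutex_unlock(&sq->locks[q]);
    }
    return buf;                                 /* NULL: all queues busy or empty */
}

The point is the try-and-move-on behavior: a busy queue never blocks an allocation as long as one of the other queues has a free buffer.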

So far, so good. But there are still some loose ends. What if all of the buffers in a pool are in use, and that pool is at the head of the pool queue? For that matter, what if the first 1,000 pools in a queue have no available buffers? This is where the manager comes in.

For each pool size there are Full and Free queues. The manager periodically (about 500 times a second) checks each of the available queues, removes all pools that have no available buffers, and puts them on the Full queue for that buffer size. Pools on the Full queue that have available buffers are put on the Free queue. And pools that are on the Free queue are divvied up so that each available queue has approximately the same number of pools.
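In outline, one rebalancing pass for a single buffer size might look something like this (names invented, locking omitted, and the pools here are just singly linked lists with a free count):

#define RAE_NUM_QUEUES 3                        /* the three available queues */

struct pool {
    struct pool *next;
    int          free_count;                    /* buffers available in this pool */
};

static struct pool *avail[RAE_NUM_QUEUES];      /* queues the allocators draw from */
static struct pool *full_q;                     /* pools with no free buffers */
static struct pool *free_q;                     /* pools waiting to be redistributed */

static void manager_pass(void)
{
    struct pool *p, **pp;
    int q;

    /* 1. Pull exhausted pools off the available queues onto the Full queue */
    for (q = 0; q < RAE_NUM_QUEUES; q++)
        for (pp = &avail[q]; (p = *pp) != NULL; )
            if (p->free_count == 0) { *pp = p->next; p->next = full_q; full_q = p; }
            else                      pp = &p->next;

    /* 2. Move Full pools that have had buffers freed onto the Free queue */
    for (pp = &full_q; (p = *pp) != NULL; )
        if (p->free_count > 0) { *pp = p->next; p->next = free_q; free_q = p; }
        else                     pp = &p->next;

    /* 3. Deal the Free queue back out round robin so each available queue
          ends up with roughly the same number of pools */
    for (q = 0; (p = free_q) != NULL; q = (q + 1) % RAE_NUM_QUEUES)
    {
        free_q = p->next;
        p->next = avail[q];
        avail[q] = p;
    }
}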

There are a few other steps in managing the queues, which is done in rae_mem_ctl.c. If an allocated pool has all of its buffers freed, it is put on a general available queue to be reused for possibly a different buffer size. Also, there is a queue at each buffer size for full pools that have not had available buffers for a period of time. This is only checked once a second to see if buffers have been freed.

So does it work? In the pilot project I have been running for over a year, the statistics I have collected show that the average number of sessions being managed simultaneously is around 20,000. Assuming an average size of 16K per session, that is 325M of data, plus the application data about each session. And that is just a snapshot. There are many Gigabytes of data being examined over the course of an hour. When the IDS does run out of buffers (I'm working on it, OK?!?), it is because the application hasn't released them when it should, not because the memory management is bogging down.

So that's the essence of how memory management is handled in the Realeyes IDS. However, because the application uses multiple processes instead of threads, the memory must be shared. I will cover that in a future post.

Later . . . Jim

Thursday, June 4, 2009

Good Passwords

I have recently had reason to think about "good" passwords. To begin with, passwords are like keys. And weak passwords are like leaving your keys in the ignition of your car when you are out of it -- before long, it's going to be stolen.

But while there is a lot of talk about strong passwords, I have not heard a really usable way of creating them. And by usable, I mean one that typical computer users will actually use consistently. Of course, this is leading to an algorithm I thought of recently.

First, choose three words. How they are chosen doesn't really matter, as long as they are not ridiculously obvious. I think it would be OK to use a standard theme. As an example, while I have no interest in golf, my uncle loves it. So that will be the theme, and for my first password, I will pick:
    plaid

    birdie

    sand
Next, pick three numbers. Once these are chosen they will almost never change. The numbers will be substituted for a letter in the words. This could be the third letter of each word or the second from the last. For this example, I will pick the numbers 4-7-4 and the third letter.

Next, choose another letter position that is not the same as the previous one. This letter gets capitalized (while all the others stay lower case). For the example, it will be the last letter.

So now create the password:

  • plaidbirdiesand

  • pl4idbi7diesa4d

  • pl4iDbi7diEsa4D
If the site requires punctuation, simply choose a punctuation mark and insert it between two of the words:
  • pl4iDbi7diE:sa4D
Now all the user has to do is remember the three words, which are meaningful only to him or her and should be reasonably easy to recall, even with four or five different passwords. The transformation is the same for every password (a short C sketch of it follows the second example below). So another example is:
  • sliceironcart

  • sl4ceir7nca4t

  • sl4cEir7Nca4T
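For the programmers in the audience, here is a tiny C program (purely illustrative, using the golf example above) that performs the transformation:

#include <ctype.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *words[3]  = { "plaid", "birdie", "sand" };   /* the three words   */
    const char  digits[3] = { '4', '7', '4' };                /* the three numbers */
    const int   sub_pos   = 2;          /* substitute the third letter (0-based)   */
    char password[64] = "";
    char word[32];
    size_t len;
    int i;

    for (i = 0; i < 3; i++)
    {
        strcpy(word, words[i]);
        len = strlen(word);
        word[sub_pos] = digits[i];                               /* digit swap  */
        word[len - 1] = toupper((unsigned char)word[len - 1]);   /* capitalize  */
        strcat(password, word);
    }
    printf("%s\n", password);   /* prints pl4iDbi7diEsa4D */
    return 0;
}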
While this might not be acceptable to super top secret government facilities or financial institutions, it should be sufficient for the majority of people. And it would be a whole lot better than many passwords being used now. If you agree, teach it to everyone you know who uses passwords. Then we can start working on making sure passwords are always encrypted.

Later . . . Jim

Monday, June 1, 2009

Handling Semaphores in C Programs

A while back, Carla Schroder, over at LinuxToday.com, put out a request for articles on programming. Now that I have put the downloads for Realeyes IDS version 0.9.5 up on SourceForge, I get to have some fun answering her call.

What I have found in programming books, including those by the late W. Richard Stevens (which I turn to most often) is usually a good start, but never the whole story. But since this is not a general programming text, I will focus on a single issue in detail. This post will cover using semaphores.

First, a little background on locks, in case you have never used them. In *nix, process locks are implemented with the semaphore system calls. Since I use child processes that share memory, I have to implement semaphores. Threads use pthread_mutex calls, which do essentially what these functions do, and then some.

The most common reason for implementing locks is if you have multiple concurrently running processes or threads that have access to the same variable in memory. Obtaining or releasing a lock is guaranteed by the operating system to be completed without interruption. A single line of C code, such as
 if (flag & 4)
requires a minimum of two machine instructions:
  • Get the value of flag from memory into a register

  • Mask it with 4 and test whether the result is zero
It is possible the thread running that code could be swapped out after getting the value, but before comparing it, and the value of flag could be changed by another thread, making the comparison invalid. By requiring every thread to get the flag lock before reading or writing the value of flag, only one thread accesses flag at a time.

The rule of thumb for the code while a lock is held is to do only what requires holding the lock, and no more. Often there are fewer than ten instructions between getting and releasing a lock. However, sometimes there are a couple dozen, because all of them require holding the lock. My memory management is an example of this, and I will try to cover it down the road.
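As a trivial illustration of that rule, here is what a threaded version might look like with pthread_mutex (the Realeyes code does the equivalent with the semaphore functions below):

#include <pthread.h>

static unsigned int    flag = 0;
static pthread_mutex_t flag_lock = PTHREAD_MUTEX_INITIALIZER;

/* Test and update the shared flag; nothing else happens under the lock */
void set_flag_bit(unsigned int bit)
{
    pthread_mutex_lock(&flag_lock);
    if (!(flag & bit))
        flag |= bit;
    pthread_mutex_unlock(&flag_lock);
}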

OK, now for how I implement semaphores. For some reason, the caller is required to define the semun union. This definition is taken from the semctl man page and is in the rae_lock_mgmt.h file.

union semun {
    int val;                     /* Value for SETVAL */
    struct semid_ds *buf;        /* Buffer for IPC_STAT, IPC_SET */
    unsigned short int *array;   /* Array for GETALL, SETALL */
    struct seminfo *__buf;       /* Buffer for IPC_INFO */
};
All of the following code is from the rae_lock_mgmt.c file. First, I define a pair of globals to keep track of a held lock. This is done so that if there is an interrupt, such as a segmentation fault, while the lock is held, it can be released in the signal handler by calling rae_release_lock. The caller must set these pointers to the address of the variables used to track the semaphore ID and index.
/* Pointer to currently held lock */
int *rae_held_lock = NULL;
int *rae_hl_index = NULL;
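For example, a signal handler might use them like this (a sketch only; it assumes rae_lock_mgmt.h declares the functions and these globals, and the real Realeyes handlers do more than this):

#include <unistd.h>
#include "rae_lock_mgmt.h"   /* assumed to declare rae_release_lock and the globals */

void fault_handler(int sig)
{
    /* If a lock was held when the fault hit, give it back before exiting */
    if (rae_held_lock != NULL && *rae_held_lock != 0)
        rae_release_lock(*rae_held_lock, *rae_hl_index);
    _exit(sig);
}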
Before a lock can be used, it must be initialized. I use a lot of locks and have found that different Linux distributions have different defaults for the maximum number of semaphores an application may allocate. To keep the number of allocations down, I have grouped the locks by functionality, and each group gets a semaphore set (or array) which only uses a single semaphore ID. Therefore, the number of locks in the group is passed to the init function.
int rae_init_lock(int il_size)
{
    int i, il_semid = -1;
    union semun sem_union;
The semget call returns a new semaphore set. Then each lock in the array is initialized to a value of 1 to indicate that it is available. If that fails, the semaphore ID is set to a negative value, which is -1 if the semaphore set is released, and -2 if it is not.
    if ((il_semid = semget(IPC_PRIVATE, il_size, (0660|IPC_CREAT))) > 0)
    {
        /* Mark every lock in the new set as available */
        sem_union.val = 1;
        for (i = 0; i < il_size; i++)
        {
            if (semctl(il_semid, i, SETVAL, sem_union) == -1)
            {
                if (semctl(il_semid, 0, IPC_RMID, sem_union) == -1)
                    il_semid = -2;
                else
                    il_semid = -1;
                break;   /* stop initializing once the set has been released */
            }
        }
    }
    return(il_semid);
} /* end rae_init_lock */
When the application shuts down, the locks must be freed. If not, they remain allocated by the system. You can run 'ipcs -s' to see the semaphore sets that are still allocated. If an application fails to release a lock, you can run ipcrm (read the man page) as root to remove it.

Notice that the location of the semaphore ID in the application is passed to this function, and it is set to zero by the function. This is because the semctl command, IPC_RMID, ignores the index and simply removes the entire semaphore set. Also, I prefer to do as much as possible in a function so the caller does not have to worry about the details. This way, when I call the same function from different places, I reduce the risk of forgetting to set something.

int rae_free_lock(int *fl_semid, int fl_idx)
{
    int fl_stat = 0;
    union semun sem_union;

    if (*fl_semid == 0)
        goto out;
    sem_union.val = 1;
    fl_stat = semctl(*fl_semid, fl_idx, IPC_RMID, sem_union);
    *fl_semid = 0;
out:
    return(fl_stat);
} /* end rae_free_lock */
When a lock is needed for a memory location, the get lock function is called with the lock identifier, which consists of the semaphore ID and the index in its array. I have added a wait flag to allow some locks to be conditional. In my memory management code, I have three available buffer queues, and if the lock on one is held, the caller can simply try the next one without waiting.

The semop call gets and releases a lock by subtracting the supplied value from the semaphore's value. What this means is that you, the programmer, are responsible for defining what value indicates a held or released lock. By keeping all of this logic in a pair of functions, you have control over how it is implemented. All of the examples I have seen use 1 to indicate the lock is available, and 0 to indicate that it is held. I can imagine how other values might be used, but it seems ridiculously complicated and prone to error.
int rae_get_lock(int gl_wait, int gl_semid, int gl_idx)
{
    int gl_stat = 0;
    struct sembuf sem_b;

    if (gl_semid == 0)
    {
        gl_stat = -2;
        goto out;
    }
The semaphore buffer structure is defined in the system headers. This is where the semaphore array index and operation are set. And as I said before, this function allows the caller to wait or not, which is accomplished by using the semaphore flag bit, IPC_NOWAIT.

The SEM_UNDO flag bit is supposed to reverse the operation when the process terminates, which implies that if the process fails while holding a lock, the system will release it (but not free it). However, in my experience, that doesn't always work, so I have included the capability to do this in my interrupt handlers, as I mentioned above.

    sem_b.sem_num = (short int) gl_idx;
    sem_b.sem_op = -1;
    if (gl_wait & raeLOCK_WAIT)
        sem_b.sem_flg = SEM_UNDO;
    else if (gl_wait & raeLOCK_NWAIT)
        sem_b.sem_flg = SEM_UNDO | IPC_NOWAIT;
    else
    {
        gl_stat = -1;
        goto out;
    }
This is the heart of the function. Read the semop man page for a lot more detail, but the general idea is as follows. The semop system call will attempt to subtract 1 from the current value of the lock. If that value is 1, the operation occurs immediately. Otherwise, the call will wait or return with the errno value of EAGAIN if the lock is held. Of course, there is the possibility the call will fail entirely, which must be handled.

If the lock value is set to 0, this means the lock is obtained, and this function sets the semaphore ID and index in the global lock tracking variables.

    if ((gl_stat = semop(gl_semid, &sem_b, 1)) == -1)
    {
        if ((gl_wait & raeLOCK_NWAIT) && errno == EAGAIN)
            gl_stat = 1;
        else if (errno == EIDRM)
            gl_stat = -2;
    }
    if (!gl_stat && rae_held_lock != NULL)
    {
        *rae_held_lock = gl_semid;
        *rae_hl_index = gl_idx;
    }
out:
    return(gl_stat);
} /* end rae_get_lock */
This is the reverse of the get lock function, in that it adds one to the lock value. There is no wait flag for releasing a lock, so only the semaphore ID and index are supplied. If the lock value is set to 1, the lock is released and this function clears the semaphore ID and index in the global lock tracking variables.
int rae_release_lock(int rl_semid, int rl_idx)
{
    int rl_stat = 0;
    struct sembuf sem_b;

    sem_b.sem_num = (short int) rl_idx;
    sem_b.sem_op = 1;
    sem_b.sem_flg = SEM_UNDO;
    rl_stat = semop(rl_semid, &sem_b, 1);
    if (!rl_stat && rae_held_lock != NULL)
    {
        *rae_held_lock = 0;
        *rae_hl_index = 0;
    }
    return(rl_stat);
} /* end rae_release_lock */
This set of functions makes using semaphores as easy as:
  • Init lock

  • Get lock

  • Release lock

  • Free lock
Of course, the caller code must be well thought out to prevent a deadly embrace. That is accomplished by keeping the code using the Get and Release calls as simple as possible, and making sure the instructions between them absolutely require the lock.
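A bare-bones usage sketch (the constants and variable names here are mine, and I am assuming rae_lock_mgmt.h declares the functions and the raeLOCK_WAIT flag used above):

#include "rae_lock_mgmt.h"

#define MY_LOCKS   4    /* number of locks in this group */
#define STATS_LOCK 0    /* index of the lock protecting the shared counter */

int  my_semid = 0;
long shared_count = 0;  /* in the real application this lives in shared memory */

void example(void)
{
    /* Init lock: one semaphore set containing MY_LOCKS locks */
    my_semid = rae_init_lock(MY_LOCKS);
    if (my_semid <= 0)
        return;

    /* Get lock, touch the shared data, Release lock */
    if (rae_get_lock(raeLOCK_WAIT, my_semid, STATS_LOCK) == 0)
    {
        shared_count++;
        rae_release_lock(my_semid, STATS_LOCK);
    }

    /* Free lock at shutdown; IPC_RMID removes the whole set, so the index
       is effectively ignored */
    rae_free_lock(&my_semid, STATS_LOCK);
}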

Later . . . Jim