[Techtalk] multithreaded C program

Malcolm Tredinnick malcolm at commsecure.com.au
Sat Dec 1 13:32:57 EST 2001


On Fri, Nov 30, 2001 at 01:17:05PM -0800, Jeannette wrote:
> Hey Chix!
> 
> I am trying to port a stress test from C for Windows (that probably has an
> official name - I have no idea what it is) to both Solaris and Linux.
> 
> Since I knew I had to do both (I did Solaris first), I tried to keep it
> ANSI C/POSIX threads, but I am running into a problem running the program
> on Linux that I didn't have in Solaris. The basic structure of the program
> is this: the user provides parameters for X disks to be exercised and
> Y threads per file, and then:
> 
> main program
> |
> |-> launches thread to maintain statistics and time test
> |
> |-> launches X threads (one per disk)
> |            |
> |            |-> each of these threads launches Y threads which
> |            |   continuously do reads and/or writes to their
> |            |   assigned addresses on their specified drive
> |            |
> |            |   waits to join Y threads when time is up
> |
> |   waits to join X threads when time is up
> 
> 
> So, I have two problems with this code in Linux that I don't have in
> Solaris:
> 
> 1) I can't open more than one fstream at a time. In the Solaris man page
> for fwrite, it says that you are limited by OPEN_MAX (max number of files
> a process can have open), which is defined in limits.h as 256 (and
> possibly further limited by a POSIX_OPEN_MAX of 16).  In Linux, I can't
> find a similar definition. Does anyone know if there is a limit of 1 open
> file per process or if I need to do something to get around what seems to
> be this limitation?

This sounds very strange. A default Linux setup will usually allow 1024
open file descriptors per process (have a look at the output of 'ulimit
-a'), and that limit is easily raised with ulimit; Linux copes fine. I
have a system in production at the moment that uses about 15000 open
file descriptors without falling over (the postmaster daemon from
PostgreSQL has a _lot_ of connections to it). So the limit itself isn't
the problem.
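
If it helps to check or raise the limit from inside the program rather
than from the shell, getrlimit()/setrlimit() with RLIMIT_NOFILE does
the same job as ulimit. A minimal sketch, nothing here specific to your
test:

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        /* Query the per-process limit on open file descriptors. */
        if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
            perror("getrlimit");
            return 1;
        }
        printf("soft limit: %lu, hard limit: %lu\n",
               (unsigned long) rl.rlim_cur, (unsigned long) rl.rlim_max);

        /* Raise the soft limit up to the hard limit if more are needed. */
        rl.rlim_cur = rl.rlim_max;
        if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
            perror("setrlimit");

        return 0;
    }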

If I guess a little and assume you are using fopen() to open the
streams, you should be getting NULL back when an open fails, with errno
set to the error number. What error is being returned that way?
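
Something along these lines (the wrapper name and its use are invented;
substitute whatever your test actually opens) would at least tell us
what is going on:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical helper: open a stream, and say why if it fails. */
    static FILE *open_or_complain(const char *path, const char *mode)
    {
        FILE *fp = fopen(path, mode);
        if (fp == NULL)
            fprintf(stderr, "fopen(%s): %s\n", path, strerror(errno));
        return fp;
    }

If the error turns out to be EMFILE, you really are hitting the
per-process descriptor limit; anything else points somewhere more
interesting.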

> 2) My second problem probably derives from the first, but I will mention
> it anyway in case anyone can help me solve it; it is with the Y threads.
> They are each calculated to have non-overlapping sections of the drive
> that they do their reads and writes on; but, when the program tries to
> open a
> second file pointer to the same file, I get a seg fault (as above with
> file pointers to different files).

You're right -- this sounds suspiciously related.

Sorry I can't give any better suggestions, but if we knew why the open
was failing (if you are not using fopen(), what are you using, and does
it have error handling?) we might be able to think of something else.
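
One stab in the dark, though: if the Y threads are currently sharing a
single FILE* (or a single descriptor), it may be worth giving each
thread its own descriptor and using pread()/pwrite(), which take an
explicit offset and so never fight over a shared file position. A rough
sketch, with the region structure invented for illustration (on some
systems you may need -D_XOPEN_SOURCE=500 to get pread()):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Invented for illustration: each worker thread gets one slice. */
    struct region {
        const char *path;    /* the disk/file being exercised */
        off_t       offset;  /* start of this thread's slice  */
    };

    /* pthread start routine: private descriptor per thread. */
    static void *worker(void *arg)
    {
        struct region *r = arg;
        char buf[512];

        int fd = open(r->path, O_RDWR);
        if (fd < 0) {
            perror("open");
            return NULL;
        }

        /* pread() takes the offset explicitly, so threads working
           on the same file never disturb each other's position. */
        if (pread(fd, buf, sizeof buf, r->offset) < 0)
            perror("pread");

        close(fd);
        return NULL;
    }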

Cheers,
Malcolm

-- 
Everything is _not_ based on faith... take my word for it.


