[prog] dual-core processors and threads

mhf at berlios.de mhf at berlios.de
Sun Mar 13 01:15:33 EST 2005


On Wednesday 09 March 2005 02:38, Cynthia Kiser wrote:
> Quoting mhf at berlios.de <mhf at berlios.de>:
> > As to programming, I love threads and have used
> > multi-threaded programming for more than 20 years in
> > embedded applications.
>
> Do you have a good reference that discusses the issues
> involved with multi-threaded programming? Web page or
> book? Some of the tools I use are in the process of

No, I don't - I haven't read much on the subject since
seeing the first multitasking kernel written in PL/M (for
the 8085) around 1979. Thereafter I invented my own in Z80
and later x86 assembler, which I still use - little
upgraded since about '85.

> moving to being multi-threaded, so I vaguely hear
> questions about 'thread safety' but don't really have
> enough background to follow the discussion and understand
> whether I will be affected by the answers.

I suppose google will have more info - I am not sure about
the term 'thread safety' - is it about proper locking?
Locking is very important in order to prevent multiple
tasks from accessing a resource at the same time. Bad
or missing locks have nasty effects. Imagine telling
the IDE drive to write a block with 10 us of data
and 3 us later "overwriting" that request with a read -
what a mess...

I am not confident about the latest terminology, but
one thing I am confident about is the concept of
multi-threading (I tend to call it multi-tasking,
which seems to be out of fashion).

The two main concepts of multi-tasking are:

In embedded applications it is mainly event-processing,
where a single CPU/memory system often handles multiple
tasks. An example is a digital calculator watch where the
(4-bit) CPU manages the (LCD) display and keys and does the
computation. 1K instructions and 64 nibbles (32 bytes)
can make a nice calculator watch with extra features,
and programming/testing it takes a week or so ;)

In general computing, concurrency of execution is the main
interest, such as running multiple applications or
splitting workloads among multiple resources
(CPUs/memory). A simple example of the latter is
distcc, which farms out compiles to multiple computers
on the network.

A practical computer setup is a combination of both, with
the former taking care of IO and the latter of number-
crunching.

To explain it in simple terms, a common requirement
is to share the computing resources (CPU/memory/IO)
among 'tasks'.

Each task executes until it either blocks (eg waiting for a
key, disk drive or network chip) or its allocated time
slice is up; then the scheduler schedules another task which
is ready to run. The scheduler usually uses priorities to
decide which task runs next when multiple tasks
are ready to run.
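
Just to illustrate the idea (a toy, not any real kernel):
the heart of a cooperative scheduler can be as little as a
table of tasks and a loop that runs whichever are ready:

#include <stdio.h>

/* Toy cooperative "scheduler": a task table plus a loop that runs whatever
   is ready.  A real kernel would also handle priorities, time slices and
   blocking/waking on events. */

#define NTASKS 3

struct task {
    void (*run)(void);   /* one slice of work                    */
    int   ready;         /* 1 = has something to do, 0 = blocked */
};

static void refresh_display(void) { puts("refresh display"); }
static void scan_keys(void)       { puts("scan keys"); }
static void crunch_numbers(void)  { puts("crunch numbers"); }

static struct task tasks[NTASKS] = {
    { refresh_display, 1 },
    { scan_keys,       1 },
    { crunch_numbers,  1 },
};

int main(void)
{
    for (int round = 0; round < 3; round++)   /* a few scheduling rounds */
        for (int i = 0; i < NTASKS; i++)
            if (tasks[i].ready)               /* skip blocked tasks */
                tasks[i].run();
    return 0;
}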

IMO, a fun way to get a feel for it would be to connect
a several-digit 7-segment LED display in a matrix and
some keys in a matrix to the parallel printer port and
to write a simple program in C consisting of:

- A task to refresh the display
- A task to read the key matrix
- A task for the user interface
- ... some tasks to do something.

... of course the HW can be simulated with X, saving all
the soldering :-)
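
Simulated in software with POSIX threads (all names invented
here, and the port access left as comments), the skeleton
could look roughly like this:

#include <pthread.h>
#include <unistd.h>

/* Skeleton only: the real exercise would poke the parallel port (or an X
   window standing in for the hardware).  Each task is its own endless loop,
   so this runs until interrupted. */

static void *display_task(void *arg)    /* refresh the 7-segment matrix */
{
    (void)arg;
    for (;;) { /* write the next digit to the port */ usleep(2000); }
    return NULL;
}

static void *keypad_task(void *arg)     /* scan the key matrix */
{
    (void)arg;
    for (;;) { /* read one row, debounce, queue key events */ usleep(5000); }
    return NULL;
}

static void *ui_task(void *arg)         /* the user interface */
{
    (void)arg;
    for (;;) { /* take a key event, decide what to display */ usleep(10000); }
    return NULL;
}

int main(void)
{
    pthread_t t[3];
    pthread_create(&t[0], NULL, display_task, NULL);
    pthread_create(&t[1], NULL, keypad_task, NULL);
    pthread_create(&t[2], NULL, ui_task, NULL);
    for (int i = 0; i < 3; i++)
        pthread_join(t[i], NULL);       /* tasks loop forever */
    return 0;
}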

Back to general computing: in a GUI app, one has to process
events and then send the response out to the display.
The nicest way to do this is to decouple the
input-event (mouse, keyboard) side from the response
side and put the processing in between.
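
One way to do that decoupling (again just a sketch, POSIX
threads and invented names) is a small event queue: the
input side pushes events into it, and a processing task in
between pops them and works out the response:

#include <pthread.h>
#include <stdio.h>

/* A tiny event queue that decouples the input side from the processing
   side.  Everything here is made up for illustration. */

#define QSIZE 16

static int queue[QSIZE];
static int head, tail, count;
static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  qcond = PTHREAD_COND_INITIALIZER;

static void push_event(int ev)           /* called from the input side */
{
    pthread_mutex_lock(&qlock);
    if (count < QSIZE) {                 /* drop events if the queue is full */
        queue[tail] = ev;
        tail = (tail + 1) % QSIZE;
        count++;
        pthread_cond_signal(&qcond);     /* wake the processing task */
    }
    pthread_mutex_unlock(&qlock);
}

static int pop_event(void)               /* blocks until an event arrives */
{
    pthread_mutex_lock(&qlock);
    while (count == 0)
        pthread_cond_wait(&qcond, &qlock);
    int ev = queue[head];
    head = (head + 1) % QSIZE;
    count--;
    pthread_mutex_unlock(&qlock);
    return ev;
}

static void *processing_task(void *arg)
{
    (void)arg;
    for (int i = 0; i < 3; i++)
        printf("handling event %d\n", pop_event());
    return NULL;
}

int main(void)
{
    pthread_t worker;
    pthread_create(&worker, NULL, processing_task, NULL);
    push_event(1);                       /* pretend these came from mouse/keyboard */
    push_event(2);
    push_event(3);
    pthread_join(worker, NULL);
    return 0;
}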

An extra task for debugging can make life so much easier,
although few people use one in practice ;(

Then, one can add a task for testing which simulates
events...

Please bear in mind that all tasks run semi-independently
and communicate only via semaphores, which makes them much
easier to test, debug and maintain.
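
To show that kind of hand-off (POSIX semaphores here, purely
to illustrate the pattern): one task posts, the other blocks
until something has been posted, and neither needs to know
anything else about the other:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

/* Two semi-independent tasks that only meet at a semaphore.  The names and
   the "work" are made up; the point is the post/wait hand-off. */

static sem_t work_ready;

static void *producer_task(void *arg)
{
    (void)arg;
    for (int i = 0; i < 3; i++)
        sem_post(&work_ready);          /* "something happened" */
    return NULL;
}

static void *consumer_task(void *arg)
{
    (void)arg;
    for (int i = 0; i < 3; i++) {
        sem_wait(&work_ready);          /* block until the producer posts */
        printf("got work item %d\n", i);
    }
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    sem_init(&work_ready, 0, 0);        /* nothing pending to start with */
    pthread_create(&p, NULL, producer_task, NULL);
    pthread_create(&c, NULL, consumer_task, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    sem_destroy(&work_ready);
    return 0;
}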

See why I love multi-tasking?...

	Michael

