[Courses] One last diversion on binary, and then I'll shut up for a while.

Christopher Howard christopher.howard at frigidcode.com
Wed Mar 7 19:54:01 UTC 2012


On 03/07/2012 10:18 AM, Kevin Cole wrote:
> 
>> It would be great if you explain the significance of 64 bit processors in
>> *your style*. Your rambling is welcome. :-)
>>
> 
> Interesting choice of words there: "significance" as in you almost answered
> your own question. ;-) Really not a lot to say here. More bits = more
> "significant" digits in mathematical calculations, a larger potential
> vocabulary for your machine language, and the ability to address more
> memory, because now the processor "goes to 11" ;-) Since it can "count
> higher" it can handle addresses that are further away from location 0, and
> if each possible value is a potential code for an instruction, well, more
> instructions are possible.  So, operations that used to require multiple
> machine language instructions to get a job done may now require only one,
> because the designers added the equivalent of a contraction to the
> vocabulary list.
> 
> The architectures are designed to "chunk" groups of bits together as
> "words" to handle work more efficiently.  The size of the chunks is
> limited by the laws of physics and economics -- how small can the
> components be made, how much heat will they generate, how much does it
> cost to manufacture them? That said, if you have something that exceeds
> the limits of the "word size", computers generally find a kludge to
> handle such problems... up to a point.  Shrinking back down to our more
> manageable model of the 4-bit word on our eensy-weensy computer that can
> handle numbers from 0 to 15: if the language is designed to accommodate
> you, when you want to work with numbers up to, oh say, 31, it has the
> ability to do somersaults while juggling to use two words together.
> That is generally slower than a system designed to handle larger word
> sizes.
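
To put quick numbers on the "counting higher" point: each extra bit
doubles the range a word can hold or address, so the step from 32 to 64
bits is enormous. A tiny, self-contained C sketch (purely illustrative):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* Largest unsigned value each word size can represent. */
        printf("16-bit max: %u\n", (unsigned)UINT16_MAX);
        printf("32-bit max: %u\n", (unsigned)UINT32_MAX);
        printf("64-bit max: %llu\n", (unsigned long long)UINT64_MAX);
        /* Read as byte addresses: 32 bits reach 4 GiB of memory,
           64 bits reach 16 EiB -- far beyond any machine built today. */
        return 0;
    }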

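And here is a rough C sketch of the two-word juggling act Kevin
describes: a machine with 4-bit words can still add numbers bigger than
15 by splitting them across a high word and a low word and carrying
between the two. (The helper name is my own invention; a real 4-bit CPU
would need several machine instructions per addition, which is exactly
why it is slower.)

    #include <stdio.h>

    /* Add two values stored as (high, low) pairs of 4-bit "words". */
    static void add_two_words(unsigned a_hi, unsigned a_lo,
                              unsigned b_hi, unsigned b_lo,
                              unsigned *r_hi, unsigned *r_lo)
    {
        unsigned lo = a_lo + b_lo;     /* may overflow 4 bits */
        unsigned carry = lo >> 4;      /* 1 if the low word overflowed */
        *r_lo = lo & 0xF;              /* keep only the low 4 bits */
        *r_hi = (a_hi + b_hi + carry) & 0xF;
    }

    int main(void)
    {
        unsigned hi, lo;
        /* 23 is (1, 7): 1*16 + 7.  9 is (0, 9). */
        add_two_words(1, 7, 0, 9, &hi, &lo);
        printf("23 + 9 = %u\n", hi * 16 + lo);   /* prints 32 */
        return 0;
    }
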
Kevin's explanation is, of course, true on the fundamental level. Note
that in an actual 64-bit architecture (speaking specifically of amd64)
the improvements are more profound. On amd64 with the AVX extension you
get sixteen 256-bit YMM registers, meaning it is possible to operate on
256 bits of data at once if you know the right instructions to issue.
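
For example, with compiler intrinsics you can touch a whole YMM
register's worth of data in one instruction. A minimal C sketch,
assuming a compiler and CPU with AVX support (build with something like
"gcc -mavx"):

    #include <stdio.h>
    #include <immintrin.h>   /* AVX intrinsics */

    int main(void)
    {
        float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
        float b[8] = {10, 20, 30, 40, 50, 60, 70, 80};
        float c[8];

        /* A __m256 is 256 bits wide: eight packed floats. */
        __m256 va = _mm256_loadu_ps(a);
        __m256 vb = _mm256_loadu_ps(b);
        __m256 vc = _mm256_add_ps(va, vb);  /* eight adds, one instruction */
        _mm256_storeu_ps(c, vc);

        for (int i = 0; i < 8; i++)
            printf("%g ", c[i]);
        printf("\n");
        return 0;
    }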

Furthermore (again, in the case of the amd64 architecture) there are a
number of additional improvements that are quite significant for code
efficiency. In particular, amd64 adds eight general-purpose registers
(GPRs r8 through r15) and eight SSE registers (xmm8 through xmm15). This
is really helpful because it means more working data (e.g., function
variables and parameters) can be kept in these very fast registers
rather than being constantly pulled in and out of memory, which is,
relatively speaking, quite slow. Seven of the new GPRs are not touched
implicitly by any other processor operations, which makes them really
convenient to use if you happen to be programming in assembly.
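
To give a feel for the payoff in plain C: under the System V calling
convention used on amd64 Linux, the first six integer arguments to a
function travel in registers (rdi, rsi, rdx, rcx, r8, r9, the last two
being among the new ones), so a small function like the made-up one
below can often run without touching memory at all.

    #include <stdio.h>

    /* With six integer parameters, an amd64 compiler can keep every
       one of them in a register; on 32-bit x86 they would all be
       passed on the stack instead. */
    static long weighted_sum(long a, long b, long c, long d, long e, long f)
    {
        /* All of this arithmetic can stay register-to-register. */
        return a + 2 * b + 3 * c + 4 * d + 5 * e + 6 * f;
    }

    int main(void)
    {
        printf("%ld\n", weighted_sum(1, 2, 3, 4, 5, 6));  /* prints 91 */
        return 0;
    }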

Some of these advantages are automatically leveraged by your compiler.
However, some are not and require you to know how to use them.
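
As a small example of the "automatic" side: given a plain loop like the
one below, a vectorizing compiler run with optimization and AVX enabled
(e.g. "gcc -O3 -mavx") may turn it into 256-bit YMM operations on its
own, with no intrinsics in the source; the intrinsics earlier are for
when you want that behavior explicitly.

    /* A plain C loop; with e.g. "gcc -O3 -mavx" the compiler is free
       to vectorize this into YMM operations without any hints. */
    void scale(float *dst, const float *src, float k, int n)
    {
        for (int i = 0; i < n; i++)
            dst[i] = k * src[i];
    }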

-- 
frigidcode.com
indicium.us


