[Courses] One last diversion on binary, and then I'll shut up for a while.

Sachin Divekar ssd532 at gmail.com
Wed Mar 7 18:48:44 UTC 2012


On Wed, Mar 7, 2012 at 11:06 PM, Kevin Cole <dc.loco at gmail.com> wrote:

> [On Tue, Mar 6, 2012 at 20:52, Sachin Divekar <ssd532 at gmail.com> wrote...
> and I replied.]
>
> While on the subject of bits, so far I've only talked about integers.  For
> non-integers, you can represent values a number of different ways. The
> simplest to understand is to use an integer and "pretend" there's a decimal
> point  in the middle somewhere.  But that's not typically how it's done.
> Instead, there's the complicated scheme where a portion of the number is
> used to represent the exponent and a portion is used to represent the
> base.  I haven't had to understand the specifics in over 30 years, so in
> Arthur C. Clarke's words, the technology has become "magic".  It involved
> incantations of "excess-64 notation" as I recall, but that was based on
> mainframe architecture, and I would doubt that the spells and enchantments
> remain the same today.
>
> A clue as to how to BEGIN thinking about "floating point" math for those
> of you who really have nothing better to do, is to think about working the
> exponents backwards in binary.  If, instead of 0001, you have 0.0001, and
> you didn't have to mess around with all the magic of "characteristics" and
> "mantissas" then it would work out like this:
>
>    0*2^0 + 0*2^-1 + 0*2^-2 + 0*2^-3 + 1*2^-4
> =  0*1   + 0*1/2  + 0*1/4  + 0*1/8  + 1*1/16
> =  1/16, a.k.a. 0.0625
>
> But since you do have to worry about characteristics, mantissas,
> exponents, excess-64 notation (or whatever it is these days), the above is
> VERY rough.  Don't try it at home, kids.  Do not use while drowsy, or
> driving. Contents may settle while shipping. "Void" where prohibited by law
> ("Float" elsewhere).;-)
> ---------------------------------------------------------------------------
> A much simpler use of binary is boolean logic: 1's are true, 0's are
> false.  Since in two's-complement arithmetic the integer -1 always works
> out to all binary 1's (16 of them in 16-bit arithmetic, 64 in 64-bit math,
> etc.), it's often convenient to use -1 as "true".  (Handy when you have
> multiple true/false conditions and you want to represent them in a single
> variable.)
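>
> For instance, a quick Python sketch (the flag names are made up for
> illustration):
>
>     # In two's complement, -1 is all 1-bits:
>     print(format(-1 & 0xFFFF, "016b"))   # sixteen 1's: -1 in 16-bit form
>
>     # Packing several true/false conditions into a single variable:
>     READABLE, WRITABLE, HIDDEN = 0b001, 0b010, 0b100
>     flags = READABLE | HIDDEN            # set two of the three bits
>     print(bool(flags & READABLE))        # True  -- that bit is set
>     print(bool(flags & WRITABLE))        # False -- that bit is 0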
> ---------------------------------------------------------------------------
> Printable representations of numbers, letters, etc.: 1 is an integer,
> stored in memory as 1 (00000001 binary). "1" is a string, internally encoded
> as the integer value 49 (00110001 binary). The American Standard Code for
> Information Interchange (ASCII) originally allowed for 128 unique symbols
> -- letters (upper and lower case), numbers, punctuation, and non-printable
> "control characters" like tab, carriage return, line feed, form feed,
> vertical tab, and lots of signals that meant more to old telegraph systems
> than to anyone else.  Only 7 bits were needed to accommodate all of that.
> When 8-bit architectures became common, that extra bit got used for
> different things. Extended ASCII added some European characters and some
> other symbols, allowing for 256 unique symbols. But "it's a small world
> after all," and 128 and 256 just don't cut it any more.
>
> So, doubling the number of bits to 16 gives us a staggering 65,536
> possibilities: Unicode (in its original 16-bit form), which allows for
> Hebrew, Arabic, Mandarin, Cherokee, Klingon, Tengwar, Angerthas,
> Tenctonese, etc. ;-)  There are encodings of Unicode like UTF-8 that allow
> for "variable length" characters, meaning that they don't all use the same
> number of bits. I defer to Wikipedia for such explanations.
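>
> A quick Python sketch of that variable-length business (the characters
> here are arbitrary examples):
>
>     for ch in ["A", "é", "中"]:
>         print(ch, len(ch.encode("utf-8")), "byte(s)")
>     # A takes 1 byte, é takes 2, and 中 takes 3 -- same string type,
>     # different numbers of bits per character.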
> ---------------------------------------------------------------------------
> And finally, it's all contextual: The same value in memory represents
> different things depending upon how you look at it.  In addition to all the
> possibilities mentioned prior to this, a set of bits can represent an
> instruction to the computer itself: perform a mathematical calculation,
> compare two values, move a value from one memory location to another, wake
> the screen, ask the keyboard if there's been a keypress since the last time
> we asked.  This is "machine code" or "machine language" and varies from
> architecture to architecture. The machine code for a Motorola phone varies
> from that of an Apple PowerPC, which is different from the codes used by
> machines with Intel chipsets.  Aside from the problem of asking folks to
> remember lots of numbers instead of remembering "if" and "printf" and
> "scanf", the fact that the numbers differ from machine to machine makes it
> difficult to "port" an application from your computer to your phone.
> Hence, higher level languages that can be "compiled" down into machine code
> executables.
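>
> Python is interpreted rather than compiled straight to machine code, but
> its bytecode gives the flavor of "numbers as instructions" -- a rough
> analogy, not the real thing:
>
>     import dis
>
>     def add(a, b):
>         return a + b
>
>     dis.dis(add)   # lists numbered opcodes such as LOAD_FAST
>                    # (exact opcode names vary by Python version)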
> ---------------------------------------------------------------------------
> And I'll stop rambling on now.  For real.  I promise.
>
>
Dear Kevin,

I have found all of your explanations very easy to understand. I am really
enjoying reading your descriptions. And the reference to Arthur C. Clarke's
quote, "Any sufficiently advanced technology is indistinguishable from
magic," is perfect. We take many technical concepts for granted, assuming
they are simply there and that they are right, without ever thinking about
the theory behind them.

Your explanation of processor architectures and compilers in the last
paragraph is also very easy to understand.

It would be great if you could explain the significance of 64-bit processors
in *your style*. Your rambling is welcome. :-)

--
Regards,
Sachin Divekar

