64 bits of heaven
The new "big news" in the desktop market this year
is of course that we are finally moving from 32-bit to 64-bit
processors. It has been on the cards for some time - Intel
announced the stupid-sounding Itanium project with HP years
ago - but only in the last year has it really arrived in the
desktop market.
Apple are currently
crowing about how they have the first 64 bit desktop machine,
the imaginatively named 'G5', although history students may
remember a product line powered by the DEC Alpha before it was
willfully destroyed. Intel are busy touting their upcoming
64-bit offerings, but are in a bit of a panic since the upstarts
at AMD have stolen their limelight with the Athlon64.
In the Amiga market, we have
waited almost ten years to move to the 32-bit PPC processors,
the G3 and G4, yet in typical fashion, before they are even
out, we are hearing people clamor for the move to the 64-bit
G5. This is ironic, since Amigans spent many years defending
elegance over brute force, back when the custom chipset and
integrated motherboards of the classic machines could still
beat the steamroller approach of the x86 boards.
To control something you first have to understand
it, and so before we get all excited about the G5 for the
AmigaOS4 family (and potentially any 64-bit processor for
AmigaOS5), let's first make sure we all understand the "itzy
bitzy" issue.
Computers are effectively number crunchers. They
move numbers around, manipulate them, check them and use them
to map out digital space. Humans use decimal because we
started off counting on our fingers (many of us still do) but
computers don't have fingers. They have transistors, which can
be in two states, on or off. This is a perfect fit for the
binary number system, which uses just two values, 0 and 1.
To count up to big numbers in decimal we have
units, tens, hundreds and thousands, columns in a big number
that go up by a factor of ten each time you move one column to
the left. Binary is the same. Big binary numbers have columns
as well, but being binary, or base 2, instead of decimal, or
base 10, each column goes up by a factor of 2, so we have
units, twos, fours, eights, sixteens and so on.
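The column values are just powers of two, and a couple of lines of Python (purely illustrative) make this concrete:

```python
# Each binary column, counting from the right starting at 0,
# is worth 2**n: units, twos, fours, eights, sixteens...
columns = [2 ** n for n in range(8)]
print(columns)  # [1, 2, 4, 8, 16, 32, 64, 128]

# So binary 1101 is 1*8 + 1*4 + 0*2 + 1*1 = 13
value = int("1101", 2)
print(value)  # 13
```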
The first mainstream processors were 8-bit
processors, which meant that they had 8 columns in their
binary numbers and so a maximum unsigned number of 255
(11111111). Obviously there isn't a lot of mathematics you
can do in a number range of 0-255. To get around this, sums
had to be broken up into chunks, which made for slow
performance. The move to 16-bit and then 32-bit processing
made a huge difference in this respect because it meant that
bigger sums could be done in fewer steps, in most cases
just a single step. The maximum 32-bit number in decimal is
4,294,967,295 (2^32-1), which is big enough for almost all
standard mathematical operations. The maximum number for a
64-bit processor is so huge that there is an argument that it
may not be worth the move from 32 to 64 bits just for
calculations.
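All the maximums quoted above come from the same formula, 2^n - 1 for an n-bit register, which a quick Python sketch can verify:

```python
# Maximum unsigned value representable in n bits is 2**n - 1
for bits in (8, 16, 32, 64):
    print(f"{bits:2d} bits: {2 ** bits - 1:,}")

# The figures from the text:
assert 2 ** 8 - 1 == 255                # 11111111 in binary
assert 2 ** 32 - 1 == 4_294_967_295
```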
However, this isn't the main reason for wanting to
move to 64 bit processors. As I mentioned before, processors
also use numbers to map out their digital space, giving an
address to each byte (which is still only 8 bits). The biggest
32-bit number can only map out 4,294,967,296 bytes, which
comes out to just over 4 gigabytes. In other words, if you
think of the processor as a robot doing lots of work inside a
closed room, that room is limited to a certain size (its
address space), and the robot can only access what is in that
room. For a 32-bit processor with a 32-bit address space, that
means the room can be only just over 4 gigabytes in size.
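The size of that room falls straight out of the arithmetic; a small Python check:

```python
# 2**32 distinct byte addresses make up the whole 32-bit "room"
room = 2 ** 32
print(f"{room:,} bytes")   # 4,294,967,296 bytes
print(room // 2 ** 30)     # exactly 4 GiB (just over 4 decimal gigabytes)
```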
In the past, when we were lucky to have 64MB of
RAM in our computers, no one cared, but as is always the case,
what seems a distant dream one year rapidly becomes a
restrictive barrier. Computers are now shipping with 512MB and
even 1GB as standard. It won't be too long before we smack up
against the 4 GB limit.
So what though? We store our content on big hard
drives and don't need to bring it into memory. That may have
been true in the past but digital content is becoming bigger.
Why play a game at 640*480 when you can play it at 6400*4800?
Why have 8 bit color images when you can have 32 bit images?
Why stream audio from a physical drive when you can
just keep it in memory and play it from there? Why can't I have
instant switching between my 20 running tasks, rather than
having the hard drive turn into a Geiger counter as it swaps
data in and out? Just like an increasing salary, requirements
will grow to fill up the new capabilities - one reason why the
increase in computer performance seems to be out of step with
our sense of increase in computer experience.
The biggest advantage of the move to 64-bit
processors is that this 4 gigabyte address space is suddenly
blown wide open. That means more memory in the computer, and
more content held in main memory, where it can be accessed
many times faster than when it has to be pulled from a hard
drive.
Of course just because this is the maximum size of
the address space doesn't mean that it has to be used and
indeed the current Athlon64 only offers a 40-bit hardware bus,
restricting the address space size to 1,024 GB. There are many
reasons for this, but the flexibility is there to open up
those other 24 bits in a future revision.
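The same arithmetic shows what those 40 physical address bits buy, and what the unused 24 would add; a quick check in Python (the 40-bit figure is the one quoted above):

```python
GiB = 2 ** 30

# A 40-bit physical address bus, as on the Athlon64
print(2 ** 40 // GiB)   # 1024 GiB, i.e. 1 TiB

# The full 64-bit address space, should the other 24 bits open up
print(f"{2 ** 64 // GiB:,} GiB")   # 17,179,869,184 GiB
```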
The operating system and the application software
still have to be written to take advantage of the 64-bit
processor. Apple may claim a 64-bit desktop computer, but
Panther 10.3 is still only a 32-bit OS. Sure, developers can
write their own custom code to take advantage of the new
processor capabilities but, like any extension, this will
remain the exception until the OS itself supports,
encapsulates and presents these new capabilities; then they
will become the norm. Most of the 64-bit processors (G5,
Athlon64, Opteron) can run 32-bit code at full speed, which is
how Apple can make the claims they do.
One of the exciting advantages of the 64-bit
address space however lies in a possible move to what is
called orthogonal persistence. Consider the current case,
where we have a 4GB limit for main memory whilst we have hard
drives now pushing 120GB. Traditional operating systems
separate the static (persisted) state of content from the
dynamic state. Content has to be loaded into memory, each time
to different places in memory, with the application code and
operating system having to manage the difference between these
two states. With that 4GB address limit gone, we can use a
64-bit memory management unit to remove this separation. In
essence, content doesn't have to be loaded or saved. It is
always there, just like in the real world, because it is always
at the same place in the address space.
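Today's closest approximation of this idea is the memory-mapped file, where the OS keeps a file and a region of the address space in sync. A rough Python sketch of the principle (the file name is hypothetical, and this is only an illustration of the concept, not orthogonal persistence itself):

```python
import mmap
import os

path = "persistent.bin"              # hypothetical file name
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)          # reserve one page on disk

with open(path, "r+b") as f:
    mem = mmap.mmap(f.fileno(), 4096)
    mem[0:5] = b"hello"              # an ordinary in-memory write...
    mem.flush()
    mem.close()

with open(path, "rb") as f:          # ...is already on disk: no save step
    data = f.read(5)
os.remove(path)
print(data)                          # b'hello'
```

The write never goes through an explicit "save": the memory management unit keeps the mapped page and the file consistent, which is the separation-removing trick the paragraph describes, scaled down to one page.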
In the
end, the true advantages of the 64-bit processor lie in the
future. At the moment, we have to remember that a processor is
just one part of a computer and there are many places where
performance can be enhanced. PCI Express is one exciting
development whilst integrating the Northbridge into the
processor itself (as in the AMD64 range) is another. Of course
the increase in the number of instructions that can be executed
per second, as well as the deeper issue of mapping real-life
applications to machine code, is also very important to the
overall equation.
Amigans
shouldn't be too eager to get onto a G5 processor. For a year
or so it will just be an expensive and underused poser item.
It is more important that we do the real work of completing
the move from the 68k to the PPC and setting up a foundation
for the new Amiga Generation 2 technology. Remember, the other
platforms HAVE to have these expensive processors just to carry
the weight of their operating systems. We don't.