by Dan East » Apr 11, 2002 @ 4:07pm
The fact is that the MIPS processor has to be milked and cajoled into performing. No one has mentioned its rigid memory alignment requirements, which greatly hurt performance unless an application is carefully written with them in mind, and which also require more RAM, since data has to be padded out for alignment (see the struct sketch below). Ever notice how all MIPS executables are 20-30% larger than their ARM and SH3 counterparts? That is because they require more instructions to accomplish the same tasks. More instructions take more clock cycles, which results in slower performance.

A Pentium at 133 MHz will outperform a RISC processor at 200 MHz. Why? Because the RISC processor is based on a Reduced Instruction Set. That means every task has to be broken down into smaller, simpler steps that the processor can handle. So even though the RISC processor is "faster", it has to do more work than the Pentium, which can do a complex operation with a single opcode.
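To put a number on the padding point, here's a minimal C sketch (the sizes assume the usual 4-byte int alignment; the struct is just for illustration):

    #include <stdio.h>

    /* On an alignment-strict target like MIPS (assuming 4-byte ints),
       the compiler has to pad so that 'value' starts on a 4-byte
       boundary. Five bytes of actual data end up occupying eight. */
    struct Sample {
        char tag;    /* 1 byte, then 3 bytes of padding */
        int  value;  /* must begin on a 4-byte boundary */
    };

    int main(void)
    {
        printf("%u\n", (unsigned)sizeof(struct Sample)); /* prints 8, not 5 */
        return 0;
    }

Multiply that kind of waste across every structure in a game and the extra RAM adds up fast.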
No one has discussed the cache either. Unless I'm mistaken, all ARM processors have at least twice the cache of their MIPS counterparts, which is a huge factor for performance.
I posted about this in the Quake forum a year or so ago, but the whole 64-bit thing is greatly overrated. A 32-bit unsigned integer can hold a value from 0 to 4,294,967,295. That is plenty of range for the values programmers typically deal with, even in games. 64 bits are usually only needed to hold intermediate values while performing math on 32-bit values, such as when dividing two fixed-point integers (see the sketch below).

Other "normal" apps, like file explorers, word processors, spreadsheets and browsers, would be extremely hard-pressed to ever make use of 64-bit integers at all. In fact, they typically only use 16 bits at a time, representing Unicode characters (string processing is one of the biggest chores those apps perform). 32 bits is the magic number that corresponds with most real-world information, such as storing a color value with the maximum precision the human eye can detect. So yes, there was a huge jump in performance when processors leaped from 16-bit to 32-bit. But that does not mean there is a similar jump from 32-bit to 64-bit for real-world applications.
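Here's what I mean about the fixed-point intermediate, as a minimal C sketch (assuming 16.16 fixed point and a compiler that gives you a 64-bit long long; the names are just for illustration):

    #include <stdio.h>

    typedef long fixed;                 /* 16.16 fixed point in 32 bits */
    #define FIX_SHIFT 16
    #define INT_TO_FIX(x) ((fixed)((x) << FIX_SHIFT))

    /* To divide two 16.16 values, the numerator has to be shifted up
       by 16 bits first, which overflows 32 bits. The 64-bit value only
       exists for that one intermediate step; the quotient fits right
       back into 32 bits. */
    fixed fix_div(fixed a, fixed b)
    {
        return (fixed)(((long long)a << FIX_SHIFT) / b);
    }

    int main(void)
    {
        fixed q = fix_div(INT_TO_FIX(3), INT_TO_FIX(2));
        printf("%f\n", q / 65536.0);    /* prints 1.500000 */
        return 0;
    }

The 64 bits come and go inside a single expression; nothing 64-bit ever needs to be stored.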
Dan East