I am unsure about the historical reasons for moving from 32-bit to 64-bit, but wouldn’t the address space be the significantly larger factor? Like you said, CPUs have had vector instructions for a long time, and we wouldn’t move to 128-bit architectures just to be able to compute with numbers of that size. Memory bandwidth, as you also say, is limited by bus width and not by the processor architecture.
IMO, the most important reason we transitioned to 64-bit is the larger address space, which we get without having to use stupidly complex memory-mapping schemes. There are also some kinds of numbers, like timestamps and counters, that benefit from 64 bits, though even here I am not sure whether the more complex architecture yields a net slowdown or speedup.
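To put rough numbers on the timestamp point, here is a minimal C sketch (just back-of-the-envelope arithmetic, no platform specifics assumed):

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* A signed 32-bit Unix timestamp runs out after 2^31 - 1 seconds,
       roughly 68 years past 1970: the well-known 2038 problem. */
    printf("32-bit time_t horizon: %lld seconds (~%lld years)\n",
           (long long)INT32_MAX,
           (long long)INT32_MAX / (60LL * 60 * 24 * 365));

    /* A 64-bit counter ticking once per nanosecond lasts ~292 years,
       so wraparound effectively stops being a concern. */
    printf("64-bit horizon at 1 ns resolution: ~%lld years\n",
           INT64_MAX / (1000000000LL * 60 * 60 * 24 * 365));
    return 0;
}
```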
To answer the original question: 128 bits would bring no useful benefit for the address space (which is already massive) and would probably just slow everyday calculations down.
8-bit machines didn’t stop dead at 256 bytes of memory. Address length and bus width are completely independent. 1970s machines were often built with bit-slice memory: however many bits of addressing, and one-bit output. If you wanted 8-bit memory, you’d wire eight chips in parallel, all sharing the same address lines, and each chip would deliver a different bit of the same logical byte.
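To make the wiring concrete, here is a toy model in C (my own sketch of the idea, not any real chip): sixteen address lines shared by eight one-bit-wide chips, with chip c driving data line c.

```c
#include <stdint.h>

#define ADDR_LINES 16
#define ADDR_SPACE (1u << ADDR_LINES)   /* 65536 addressable cells */

/* Eight 1-bit-wide "chips"; each stores one bit per address
   (bit-packed into bytes here purely for convenience). */
static uint8_t chip[8][ADDR_SPACE / 8];

/* Every chip sees the same address; chip c supplies bit c of the byte. */
static uint8_t read_byte(uint16_t addr) {
    uint8_t byte = 0;
    for (int c = 0; c < 8; c++)
        byte |= (uint8_t)(((chip[c][addr / 8] >> (addr % 8)) & 1u) << c);
    return byte;
}

static void write_byte(uint16_t addr, uint8_t byte) {
    for (int c = 0; c < 8; c++) {
        uint8_t bit = (byte >> c) & 1u;
        chip[c][addr / 8] = (uint8_t)((chip[c][addr / 8]
                                       & ~(1u << (addr % 8)))
                                      | (bit << (addr % 8)));
    }
}
```

The address width and the per-chip data width are chosen independently, which is exactly the point.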
64-bit math doesn’t need 64-bit hardware, either. Turing completeness says any computer can run the same code, memory and time allowing. As an object lesson, JavaScript has used 64-bit double floats for all of its numbers since it was defined in the late 1990s, when it ran almost exclusively on 32-bit machines.
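For instance, this is roughly how a compiler lowers a 64-bit add onto a 32-bit target (a minimal sketch in C; real codegen would use an add / add-with-carry instruction pair):

```c
#include <stdint.h>
#include <stdio.h>

/* Add two 64-bit values held as (hi, lo) 32-bit halves: add the low
   halves, detect the carry by wraparound, then add it into the highs. */
static void add64(uint32_t a_hi, uint32_t a_lo,
                  uint32_t b_hi, uint32_t b_lo,
                  uint32_t *r_hi, uint32_t *r_lo) {
    uint32_t lo = a_lo + b_lo;
    uint32_t carry = (lo < a_lo);      /* wrapped => carry out of bit 31 */
    *r_lo = lo;
    *r_hi = a_hi + b_hi + carry;
}

int main(void) {
    uint32_t hi, lo;
    add64(0x00000001u, 0xFFFFFFFFu,    /* 0x1_FFFFFFFF */
          0x00000000u, 0x00000001u,    /* + 1          */
          &hi, &lo);
    printf("%08x_%08x\n", hi, lo);     /* prints 00000002_00000000 */
    return 0;
}
```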
Clearly you can address more bytes than your data-bus width would suggest. But then why all the “hacks” on 32-bit architectures, like the 36-bit physical address space via memory mapping on SPARCv8 (or on ARMv7 with LPAE), instead of just using paired index registers? From a performance perspective, using an address width that is not the native register width / internal data-bus width is an issue: for a significant subset of operations, multiple instructions are required instead of one.
Also, is your comment about Turing completeness to be taken seriously? We are talking about performance and practicality. Go ahead and crunch some 64-bit floats using purely 8-bit arithmetic operations (or even using vector registers). Of course you can, but the point is that a suitable word size is more effective for certain computational tasks. Operations that are done frequently should ideally be done at the native data-bus width. Vector operations also cost performance.
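For a feel of the multiplier, here is 64-bit integer addition built from nothing but 8-bit operations (a sketch; a real 8-bit CPU would spend eight dependent add-with-carry instructions where a 64-bit ALU spends one, and multiplication or software floats fare far worse):

```c
#include <stdint.h>
#include <stdio.h>

/* One 64-bit add becomes eight dependent 8-bit add-with-carry steps,
   an ~8x instruction count before even touching multiplication
   (64 partial products) or softfloat normalization. */
static void add64_u8(const uint8_t a[8], const uint8_t b[8], uint8_t r[8]) {
    unsigned carry = 0;
    for (int i = 0; i < 8; i++) {          /* least significant limb first */
        unsigned sum = (unsigned)a[i] + b[i] + carry;
        r[i] = (uint8_t)sum;
        carry = sum >> 8;
    }
}

int main(void) {
    uint8_t a[8] = {0xFF, 0xFF, 0xFF, 0xFF, 0, 0, 0, 0};  /* 0xFFFFFFFF */
    uint8_t b[8] = {0x01, 0, 0, 0, 0, 0, 0, 0};           /* + 1        */
    uint8_t r[8];
    add64_u8(a, b, r);
    for (int i = 7; i >= 0; i--)
        printf("%02x", r[i]);              /* prints 0000000100000000 */
    printf("\n");
    return 0;
}
```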
If timestamps and counters represent a bottleneck, you have problems larger than bit depth.
Indeed, because those two things were only examples, meaning they would be indicative of your system having a bottleneck in almost all types of workloads. This is supported by the generally higher performance in 64-bit mode.