My floating point adder/subtractor (FPA) is about 90% complete and functional, so I thought I'd give everyone a preview. First, the obvious question: what is this? Floating point is a way of representing an extremely broad range of numbers using the formula significand x base^exponent. Single precision means we do this using 32 bits - 1 sign bit, 8 exponent bits, and 23 significand (mantissa) bits. I'll be updating this thread with information about each component of the FPA and how it works. Enjoy!
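For anyone who wants to poke at the format itself, here's a quick Python sketch (not part of the redstone build, just an illustration) that splits a single-precision number into the three fields described above:

```python
import struct

def decode_float32(x):
    """Split a Python float (stored as IEEE 754 single precision) into
    its sign, exponent, and mantissa fields."""
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF   # 8 bits, biased by 127
    mantissa = bits & 0x7FFFFF       # 23 stored bits (hidden bit not included)
    return sign, exponent, mantissa

print(decode_float32(1.0))   # (0, 127, 0)
```

Note that the stored exponent is biased (more on that below) and the leading 1 of the mantissa is implicit, so it doesn't appear in the 23 stored bits.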
Single precision:
The "far more usable in Minecraft" half precision:
The steps in floating point addition/subtraction are: 1. Mantissa Alignment: You need to align the mantissa of each number.
2. Addition/Subtraction: Pretty straightforward - normal addition and two's complement subtraction.
3. Normalization: Putting the result back into a normalized form. This means left shifting so that the 2^0 (hidden) bit is on and adjusting the exponent to match.
4. Rounding: During the alignment shift rounding bits (known as the guard, round, and sticky bits) are generated. These allow you to get some extra precision through rounding.
5. Re-normalization: There's one situation where rounding generates a sum that needs to be renormalized and this can be accomplished by a single right shift and adding 1 to the exponent.
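The five steps above can be sketched in a few lines of Python. This is a simplified model with hypothetical helper names, not the redstone circuit: it handles two positive normalized numbers, keeps the hidden bit explicit in bit 23, and leaves out the guard/round/sticky bits and exceptions:

```python
def fp_add(exp_a, man_a, exp_b, man_b):
    """Sketch of floating point addition for two positive normalized
    numbers. Mantissas are 24-bit integers with the hidden bit at
    position 23; rounding and exceptions are omitted for brevity."""
    # Step 1. Mantissa alignment: right-shift the smaller number's mantissa
    if exp_a < exp_b:
        exp_a, man_a, exp_b, man_b = exp_b, man_b, exp_a, man_a
    man_b >>= exp_a - exp_b
    exp = exp_a
    # Step 2. Addition
    total = man_a + man_b
    # Step 3. Normalization: put the top set bit back at position 23
    while total >= 1 << 24:      # carry-out: shift right, bump exponent
        total >>= 1
        exp += 1
    while total and total < 1 << 23:   # leading zeros: shift left
        total <<= 1
        exp -= 1
    return exp, total
```

For example, adding 1.0 + 1.0 (both mantissas `1 << 23` at exponent 127) produces a carry-out, so the result is renormalized to mantissa `1 << 23` at exponent 128 - i.e. 2.0.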
Components
Organization:
Alignment Control: This is the logic that controls the alignment step, telling the shifter how many positions to shift (in this case, right shift). To do this I simply subtract one exponent from the other; sometimes, however, this generates a negative result. In that case a two's complement converter is turned on so that the absolute value is the output. Alignment control also selects the greater exponent and sends it to the exponent normalization adder. Finally, if an exponent is nonzero, a 24th mantissa bit (called the hidden bit) is turned on.
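In software terms the alignment control boils down to this (a sketch with made-up function names; in hardware the `abs()` is the subtractor plus the conditional two's complement converter):

```python
def alignment_control(exp_a, exp_b):
    """Compute the alignment shift amount and select the larger exponent.
    The hardware uses a subtractor plus a conditional two's complement
    converter; abs() plays that role here."""
    diff = exp_a - exp_b
    shift = -diff if diff < 0 else diff     # absolute value of the difference
    larger = exp_a if diff >= 0 else exp_b  # sent to the exponent normalizer
    return shift, larger
```

So with exponents 130 and 127 the smaller number's mantissa gets shifted right 3 positions, and 130 is carried forward as the tentative result exponent.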
Mantissa Multiplexing: You always want the mantissa of the smaller number to go through the alignment shifter. This can be accomplished by "double multiplexing" the A and B inputs so that depending on the state of the exponents either mantissa A or B can go through the alignment shifter.
Barrel Shifter: Both the mantissa alignment and normalization steps are performed by a barrel shifter. This shifter shifts its inputs by a number of positions defined by its inputs - in this case the inputs come from the alignment control. The barrel shifter I used was built by Redstonewarrior and Anomalouscobra.
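The idea behind a barrel shifter, as opposed to shifting one position per tick, is a logarithmic cascade of fixed-size stages: a shift-by-1 stage, a shift-by-2 stage, a shift-by-4 stage, and so on, each enabled by one bit of the shift amount. A rough software model (mine, not Redstonewarrior's design):

```python
def barrel_shift_right(value, amount, width=24):
    """Model of a right barrel shifter: log2(width) stages, each shifting
    by a power of two when the matching bit of `amount` is set."""
    mask = (1 << width) - 1
    stage, bit = 1, 0
    while stage < width:
        if (amount >> bit) & 1:
            value = (value >> stage) & mask
        stage <<= 1
        bit += 1
    return value
```

Shifting by, say, 5 activates the shift-by-1 and shift-by-4 stages (binary 101), so any shift from 0 to 23 completes in a fixed number of stages.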
Sign Bits: The sign bits control whether addition or subtraction occurs and whether the output is positive or negative.
Adder: I threw together a simple insta-carry adder for this. The only thing notable is that I prevent piston block-dropping with a two tick repeater. It's also hooked up so that it can subtract the mantissa that comes through the alignment shifter if the sign bit specifies.
Leading Zero Counter (Normalization Shifter Control): This component determines how many 0's there are before the first most significant 1 in the adder output. It then tells the normalization shifter how many positions to shift. I'm going to expand upon this section with more details about the logic for this because it's pretty cool.
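Functionally, the leading zero counter is just a priority scan from the most significant bit down (a naive software model; the redstone logic does this in parallel rather than with a loop):

```python
def count_leading_zeros(value, width=25):
    """Count zeros before the most significant 1 in a fixed-width word.
    Width 25 leaves room for the adder's carry-out bit."""
    for i in range(width - 1, -1, -1):
        if (value >> i) & 1:
            return width - 1 - i
    return width   # all-zero input (the sum is exactly zero)
```

That count goes straight to the normalization shifter as the left-shift amount, and (minus one) to the exponent normalizer.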
Normalization Shifter: This takes the output of the leading zero counter and shifts the adder output that many positions to the left. Note that there's an intrinsic right shift to normalize for the carry-out of the adder.
Exponent Normalizer: This is just an adder that subtracts (the output of the leading zero counter minus 1) from the greater exponent selected by the alignment control to generate the exponent output.
Rounding Logic: This determines how the FPA will round based on the guard, round, and sticky bits. IEEE 754 specifies that there be four rounding modes - so that's what I did. They are:
Round to the nearest (ties go to even) - This is the default mode. If the guard bit and the round or sticky bits are on it rounds up. When there's a tie (only the guard bit is on) it rounds to even - so if the unit in the last place (ULP) is on it rounds up but if the ULP is 0 it remains 0.
Round towards zero (truncation) - This mode disregards the guard, round, and sticky bits and never rounds up.
Round towards positive infinity - Rounding towards positive infinity means you always round up. When a number is positive and any of the three rounding bits are on it rounds up by adding 1. When a number is negative and any of these bits are on it rounds up by adding 0 (adding one would make this more negative).
Round towards negative infinity - This is the opposite of rounding towards positive infinity. If the number is positive and any of the three rounding bits are on 0 is added. If the number is negative and any of these bits are on it adds one.
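The four modes can be summarized as one decision function over the guard, round, and sticky bits (my own Python restatement of the table above, not the redstone diagram; `ulp` is the unit in the last place of the mantissa, `sign` is 1 for negative numbers):

```python
def round_up(mode, sign, ulp, guard, round_bit, sticky):
    """Return truthy if 1 ULP should be added, per IEEE 754 mode."""
    any_bits = guard | round_bit | sticky
    if mode == 'nearest_even':
        # past halfway -> up; exactly halfway -> up only if ULP is odd
        return guard and (round_bit or sticky or ulp)
    if mode == 'to_zero':
        return 0                       # always truncate
    if mode == 'to_pos_inf':
        return any_bits and not sign   # positives round up, negatives truncate
    if mode == 'to_neg_inf':
        return any_bits and sign       # negatives round up (in magnitude)
```

For example, in nearest-even mode a value with only the guard bit set (an exact tie) rounds up only when the ULP is 1, which is exactly the tie-to-even behavior described above.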
Here's a diagram of the logic I came up with:
There is one situation where rounding will overflow - in this case 1 is added to the exponent and the mantissa sum is right shifted to renormalize.
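That overflow case (a mantissa of all 1s rounding up to a power of two) and its fix look like this in miniature:

```python
def renormalize_after_round(mantissa, exponent, width=24):
    """If rounding carried out of the mantissa (e.g. 1.11...1 rounded up
    to 10.00...0), shift right once and add 1 to the exponent."""
    if mantissa >> width:        # carry past the hidden-bit position
        mantissa >>= 1
        exponent += 1
    return mantissa, exponent
```

This is the only place rounding can denormalize the result, which is why a single-position right shifter is enough.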
At this point all I have to do is add some exceptions and my FPA will be compliant with the IEEE 754 standard. I'll post some more updates when I do this.
A big thanks goes to Redstonewarrior and Anomalouscobra for use of their barrel shifter!
While a single-precision FPU isn't really feasible in Minecraft, a half-precision FPU should be doable, so that's my plan. At this point it shouldn't be all that difficult to assemble (though I won't be using the standard method of division because it's not really ideal in Minecraft) because I have most of the components already made or at the very least, designed. I'll be updating you on my progress here.
Haha yeah, I realized that a lot of people probably wouldn't even look at this because they have no idea what floating point is (I had never heard of it before I started learning about all of this).
Yup, it's all binary - the inputs and outputs are 1 sign bit (floating point numbers use sign/magnitude notation), 8 exponent bits, and 23 significand bits (though there's a 24th "hidden" significand bit). You picked an interesting example because your particular operation results in an exception. The first thing to say is that the exponent uses a notation that's offset by 127, so "0" is written as 01111111. This type of notation makes the normalization process very easy. In the normalization process for the numbers you listed you'd end up doing 00000000-00000001 for the exponent, which wouldn't make any sense in this notation - so you have an exception that sets the exponent to 00000001 and get the result: Sign 0 Exp 00000001 Significand (usually called the mantissa in floating point) 00000000000000000000000. This probably doesn't make complete sense right now, but I think it'll be easier to understand once I write something up about how the whole process works.
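The offset-127 (excess-127) notation is easy to see numerically - the stored field is just the actual exponent plus 127:

```python
BIAS = 127  # excess-127: stored exponent = actual exponent + 127

def encode_exponent(e):
    """Return the 8-bit stored form of an actual exponent."""
    return format(e + BIAS, '08b')

print(encode_exponent(0))     # '01111111'
print(encode_exponent(1))     # '10000000'
print(encode_exponent(-126))  # '00000001' (smallest normal exponent)
```

The nice property for hardware is that comparing two biased exponents with an ordinary unsigned comparator gives the right answer - no signed arithmetic needed in the alignment control.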
Wow, impressive! Is this compliant with the IEEE floating point standard? I mean, does it have plus and minus zero and positive and negative infinity? Does it have NaNs (or, alternatively asked, what happens at over- and underflow)?
Thanks! I'm working toward making it compliant with the IEEE standard - It's pretty close right now, I just need to add a bit more logic to deal with some of the exceptions (like the NaNs you mentioned, but it's all pretty straightforward).
This is quite incredible, OP!
I've been waiting quite a while to finally see a basic FPU!
This machine, if you allow others to use this design, could really improve redstone computation!
Thanks guys! I expanded my rounding logic tonight - now it supports all four rounding modes required by IEEE 754: round to the nearest (ties go to even), round to zero (truncation), round towards negative infinity, and round towards positive infinity. The rounding also adjusts based on whether the sum is positive or negative - now all I need to do is add in the last few exceptions and it's ready to go. I'm going to do some optimization tomorrow and then I'll start updating my post with explanations for each step of the process. Building the adder/subtractor was a lot of fun because implementing both functions into one circuit means there's a decent amount of extra logic to deal with. Once this is complete I'll begin working on the divider using this approach. It's not the standard approach to floating point division, but in Minecraft it makes a lot more sense than using lookup tables.
Rounding logic along with two test bits (most of the fun logic is underneath):
I'll make a logisim diagram of this at some point. The only part of the rounding logic that isn't attached is a single position right shifter to handle the rare occasion where rounding up requires a renormalization step.
Edit: The thing in the background is the half-precision version that'll be used in the FPU.
I've updated the first post with much more information. At this point I've done quite a bit of optimization and have built a half-precision version. All I have left to do is add a few exceptions.
HOLY S*** IT'S HUGE! Does this make the game lag at all?
I have a challenge for you now: Make a double precision one.
Haha, the single precision one definitely causes lag when I change the inputs - I haven't tried changing all of the inputs at once, but I think it'd be pretty ridiculous. The half-precision one isn't bad at all.
And yeaaaaah, I think I'll pass on that challenge. Double precision would be absurd.
Hmm, this is actually pretty interesting. I'm not that much into CPUs etc but i like me some algorithms like this and other mathematical things. Got any block diagrams for me to stare at?
Back to my question earlier, say:
111,1111,1111,1111,1111,1111x2^01111111 + 000,0000,0000,0000,0000,0001x2^01111111.
When you enter the mantissa you only enter 23 bits and the 24th implicit bit is always a 1? Then the "Normalization step" has the default single shift right to account for this? This gives the answer 100,0000,0000,0000,0000,0000x2^10000000 which is interpreted with the implicit 24th mantissa bit as 1?
So that's 1.99999...5 x 2^0 + 1.0000..5 x 2^0 = 1.5 x 2^1 ?!
Yup, that's essentially what's going on. I'll see if I can throw a block diagram together later today.
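The example above can be checked directly by constructing the two bit patterns and letting IEEE 754 arithmetic do the work (Python sketch, sign 0 / exponent 01111111 / the two mantissas from the question):

```python
import struct

def f32(bits):
    """Interpret a 32-bit pattern as an IEEE 754 single-precision value."""
    return struct.unpack('>f', struct.pack('>I', bits))[0]

# sign 0, exponent 01111111 (= 2^0), mantissas from the question above
a = f32(0b0_01111111_11111111111111111111111)  # 1.111...1 x 2^0
b = f32(0b0_01111111_00000000000000000000001)  # 1.000...1 x 2^0
print(a + b)   # 3.0, i.e. 1.1 x 2^1 with the hidden bit implicit
```

The sum is exactly 3, so the carry-out forces the single right shift and the exponent field becomes 10000000 - matching the hand calculation.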