Is there a bug with converting to/from the “decimal” datatype?
I’m dividing a “long” integer by 100.0 to convert it from cents to dollars, but in some cases it loses one cent??
Fortunately we have a workaround right now: use the “number” datatype instead of “decimal”. But what did I do wrong?
Please see the attached graph.
Thanks!
Jus
The calculation is then done in the Decimal domain and is therefore precise. In the sample you sent, it is done in the Number (floating-point) domain and is therefore not precise.
Sample:
9695684 / 100 = 96956.83999… (because of http://en.wikipedia.org/wiki/Floating_p … y_problems), and that is truncated to 96956.83 when converted to Decimal. You can see the more precise result by increasing the scale of your decimal field.
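Here is a small Java sketch of the same mechanism (plain Java with java.math.BigDecimal standing in for the decimal datatype, not the actual CTL code); it assumes the conversion to Decimal cuts extra digits off rather than rounding them, which matches the lost cent described above:

import java.math.BigDecimal;
import java.math.RoundingMode;

public class CentsToDollars {
    public static void main(String[] args) {
        long cents = 9695684L;

        // Floating-point division: the nearest double to 96956.84 is slightly below it.
        double asDouble = cents / 100.0;
        System.out.println(new BigDecimal(asDouble));          // 96956.8399999999965...

        // Cutting that value down to 2 decimal places drops one cent.
        System.out.println(new BigDecimal(asDouble)
                .setScale(2, RoundingMode.DOWN));              // 96956.83

        // Dividing in the decimal domain keeps the exact value.
        BigDecimal asDecimal = BigDecimal.valueOf(cents)
                .divide(new BigDecimal("100.0"));
        System.out.println(asDecimal);                         // 96956.84
    }
}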
You can cache the str2decimal(“100.0”) result if performance is an issue for you.
This indicates that the 100.0 constant should be treated as a decimal number (hence the ‘D’) instead of a float (the default). The reason is that float arithmetic is faster, but it can lose accuracy.
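For comparison, a hedged Java analogue of the two suggestions above (caching the str2decimal(“100.0”) constant, and dividing by a decimal constant instead of a float one); the class and method names are made up for the illustration:

import java.math.BigDecimal;

public class DollarsConverter {
    // Cached decimal constant, analogous to reusing the str2decimal("100.0") result.
    private static final BigDecimal HUNDRED = new BigDecimal("100.0");

    // Dividing by a decimal constant keeps the whole calculation in the decimal domain,
    // much like writing the literal as 100.0D instead of the default float 100.0.
    static BigDecimal centsToDollars(long cents) {
        return BigDecimal.valueOf(cents).divide(HUNDRED);
    }

    public static void main(String[] args) {
        System.out.println(centsToDollars(9695684L));   // 96956.84
    }
}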