This is an archived snapshot of W3C's public bugzilla bug tracker, decommissioned in April 2019. Please see the home page for more details.
I fail this test because our implementation gives the result 2.718281828459045e0. Looking at the bit representation of the values using System.BitConverter.DoubleToInt64Bits (.NET) or java.lang.Double.doubleToRawLongBits (Java), I get:

Expected result = 4613303445314885482
Actual result   = 4613303445314885481

The bit pattern for System.Math.E (.NET) or java.lang.Math.E (Java) is 4613303445314885481.

<test-case name="math-exp-003">
   <description>Evaluate the function exp() with the argument set to 1</description>
   <created by="O'Neil Delpratt, Saxonica" on="2010-12-10"/>
   <test>math:exp(1)</test>
   <result>
      <assert-eq>2.7182818284590455e0</assert-eq>
   </result>
</test-case>
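The one-bit discrepancy described above can be inspected directly. This is a minimal Java sketch (class name ExpBits is illustrative); note that for positive doubles, incrementing the raw bit pattern by one yields the next representable value, so the expected and actual patterns differ by exactly one ulp:

```java
public class ExpBits {
    public static void main(String[] args) {
        // Bit pattern of the platform constant for e (the closest double to e).
        long eBits = Double.doubleToRawLongBits(Math.E);
        // Bit pattern of the library's exp(1) result; on some platforms this
        // is one ulp away from Math.E.
        long expBits = Double.doubleToRawLongBits(Math.exp(1.0));
        // The next representable double above Math.E has a bit pattern one
        // greater, which matches the test's expected value.
        long nextBits = Double.doubleToRawLongBits(Math.nextUp(Math.E));

        System.out.println("Math.E bits         = " + eBits);
        System.out.println("Math.exp(1) bits    = " + expBits);
        System.out.println("nextUp(Math.E) bits = " + nextBits);
    }
}
```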
It appears that in Java the expression Math.exp(1.0e0) == Math.E evaluates to false ;-( I'm not sure exactly what the IEEE spec has to say about the required precision of the result; it needs a lot of study to decipher the meaning. Pragmatically, it would make sense for our expected result to tolerate the value that the Java library delivers, as well as what would appear to be the "correct" result. So I propose to change the test to allow an epsilon variation in the result.
Changed test to allow a variation of 1e-15 in the result.
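The tolerance check could be sketched like this in Java; assuming the 1e-15 variation mentioned above, and a hypothetical helper withinEpsilon that is not part of the actual test harness:

```java
public class EpsilonCheck {
    // Hypothetical helper: true if actual is within eps of expected.
    static boolean withinEpsilon(double expected, double actual, double eps) {
        return Math.abs(expected - actual) <= eps;
    }

    public static void main(String[] args) {
        double expected = 2.7182818284590455e0; // value asserted by the test-case
        double actual = Math.exp(1.0);          // implementation result
        // Exact equality may fail, since the two values can differ by one ulp
        // (about 4.4e-16 at this magnitude)...
        System.out.println("exact:   " + (expected == actual));
        // ...but a 1e-15 tolerance accepts both bit patterns.
        System.out.println("epsilon: " + withinEpsilon(expected, actual, 1e-15));
    }
}
```

Since one ulp near e is roughly 4.4e-16, a 1e-15 tolerance comfortably covers a result that is one ulp off in either direction, while still rejecting genuinely wrong answers.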
Confirmed fixed. Thanks.