This is an archived snapshot of W3C's public bugzilla bug tracker, decommissioned in April 2019. Please see the home page for more details.
This concerns the validity of the Core tests infoset07 and wellformed03. In these tests, a document is created with an attribute whose name contains the character ࢎ, which is an invalid XML 1.0 and XML 1.1 name character: <setAttribute obj="docElem" name='"LegalNameࢎ"' value='"foo"'/> [1] http://www.w3.org/TR/2004/REC-xml-20040204/#sec-references [2] http://www.w3.org/TR/2004/REC-xml11-20040204/#sec-references
The intent was to pick a value that was a legal XML 1.1 name character but not a legal XML 1.0 name character. For some unknown reason, I chose a character (x2190) that was not legal in either. I have changed the character to x218F, which does appear in the XML 1.1 name production. The value is arbitrary, so if there is a problem with that value, please suggest another.
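Assuming the NameStartChar ranges from the XML 1.1 Names production, a quick check, sketched here in Python, shows why x218F qualifies while x2190 does not (the helper name is mine, not part of the test suite):

```python
# Ranges of the XML 1.1 NameStartChar production; the single characters
# ":" and "_" and the A-Z / a-z runs are folded in as degenerate ranges.
NAME_START_RANGES = [
    (0x3A, 0x3A), (0x41, 0x5A), (0x5F, 0x5F), (0x61, 0x7A),
    (0xC0, 0xD6), (0xD8, 0xF6), (0xF8, 0x2FF), (0x370, 0x37D),
    (0x37F, 0x1FFF), (0x200C, 0x200D), (0x2070, 0x218F),
    (0x2C00, 0x2FEF), (0x3001, 0xD7FF), (0xF900, 0xFDCF),
    (0xFDF0, 0xFFFD), (0x10000, 0xEFFFF),
]

def is_name_start_char(cp):
    """True if code point cp matches the XML 1.1 NameStartChar production."""
    return any(lo <= cp <= hi for lo, hi in NAME_START_RANGES)

print(is_name_start_char(0x218F))  # True: last code point of 0x2070-0x218F
print(is_name_start_char(0x2190))  # False: one past that range
```

x218F sits at the very top of the #x2070-#x218F range, so it is the last legal name-start code point before the arrows block begins at x2190.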
Created attachment 358: Changed 0x2190 to 0x218F
I didn't get enough sleep last night: I misread ࢎ as ←. U+088E (decimal 2190) should be an XML 1.1 name character but not an XML 1.0 name character. The test definition expresses this as <createAttribute var="attr" obj="doc" name='"LegalNameࢎ"'/> and that should generate code equivalent to: attr = doc.createAttribute("LegalName\u088E"); I'm reverting the changes (which didn't compile). If you still believe there is an error, please be painfully clear about what you believe the problem is, since I don't see it.
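The misreading above comes from treating the numeric character reference as hexadecimal when it is decimal. A minimal Python sketch of the difference:

```python
# &#2190; is a *decimal* character reference: code point 2190 = U+088E.
decimal_ref = chr(2190)
assert decimal_ref == "\u088e"

# A *hex* reference over the same digits, &#x2190;, would instead name
# U+2190 LEFTWARDS ARROW, which is the character that was misread.
hex_ref = chr(0x2190)
assert hex_ref == "\u2190"

# The generated test code therefore builds the attribute name with U+088E:
name = "LegalName\u088E"
print(len(name))  # 10: "LegalName" plus the one extra name character
```

So the character under test really is U+088E, and the revert leaves the test exercising a code point that XML 1.1 accepts in names but XML 1.0 does not.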