<?xml version="1.0" encoding="UTF-8" standalone="yes" ?>
<!DOCTYPE bugzilla SYSTEM "https://www.w3.org/Bugs/Public/page.cgi?id=bugzilla.dtd">

<bugzilla version="5.0.4"
          urlbase="https://www.w3.org/Bugs/Public/"
          
          maintainer="sysbot+bugzilla@w3.org"
>

    <bug>
          <bug_id>18802</bug_id>
          
          <creation_ts>2012-09-07 21:59:00 +0000</creation_ts>
          <short_desc>Web pages in Spanish are ignored</short_desc>
          <delta_ts>2012-09-10 06:14:21 +0000</delta_ts>
          <reporter_accessible>1</reporter_accessible>
          <cclist_accessible>1</cclist_accessible>
          <classification_id>1</classification_id>
          <classification>Unclassified</classification>
          <product>LinkChecker</product>
          <component>checklink</component>
          <version>unspecified</version>
          <rep_platform>PC</rep_platform>
          <op_sys>Windows NT</op_sys>
          <bug_status>RESOLVED</bug_status>
          <resolution>INVALID</resolution>
          
          
          <bug_file_loc>http://www.amelox.com</bug_file_loc>
          <status_whiteboard></status_whiteboard>
          <keywords></keywords>
          <priority>P2</priority>
          <bug_severity>normal</bug_severity>
          <target_milestone>---</target_milestone>
          
          
          <everconfirmed>1</everconfirmed>
          <reporter name="Ralph Seabrook">mlx75</reporter>
          <assigned_to name="Ville Skyttä">ville.skytta</assigned_to>
          
          
          <qa_contact name="qa-dev tracking">www-validator-cvs</qa_contact>

      

      

      

          <comment_sort_order>oldest_to_newest</comment_sort_order>  
          <long_desc isprivate="0" >
    <commentid>73450</commentid>
    <comment_count>0</comment_count>
    <who name="Ralph Seabrook">mlx75</who>
    <bug_when>2012-09-07 21:59:00 +0000</bug_when>
    <thetext>All our web pages are properly link-checked.
However, a few days ago I added two more with the content in the Spanish language. There will be more in the future.
They are ignored.  Is there any reason for this?</thetext>
  </long_desc><long_desc isprivate="0" >
    <commentid>73471</commentid>
    <comment_count>1</comment_count>
    <who name="Ville Skyttä">ville.skytta</who>
    <bug_when>2012-09-08 08:36:36 +0000</bug_when>
    <thetext>More information is needed to investigate: exactly which documents or links are being ignored?</thetext>
  </long_desc><long_desc isprivate="0" >
    <commentid>73475</commentid>
    <comment_count>2</comment_count>
    <who name="Ralph Seabrook">mlx75</who>
    <bug_when>2012-09-08 15:31:20 +0000</bug_when>
    <thetext>These URLs are listed and evaluated:
www.amelox.com/orderpage.html
www.Amelox.com/Tutor-Start.html

These URLs are not listed nor evaluated:
www.amelox.com/orderpage-ES.html
www.Amelox.com/Tutor-Start-ES.html

The difference between the two is that the content of the latter is in Spanish.
But the HTML5 markup in both is in English.</thetext>
  </long_desc><long_desc isprivate="0" >
    <commentid>73476</commentid>
    <comment_count>3</comment_count>
    <who name="Ralph Seabrook">mlx75</who>
    <bug_when>2012-09-08 15:34:17 +0000</bug_when>
    <thetext>Sorry, the computer auto-capitalized &quot;amelox&quot; because it follows a period.</thetext>
  </long_desc><long_desc isprivate="0" >
    <commentid>73478</commentid>
    <comment_count>4</comment_count>
    <who name="Ville Skyttä">ville.skytta</who>
    <bug_when>2012-09-08 18:51:54 +0000</bug_when>
    <thetext>What is the URL of the document that contains those links? http://www.amelox.com/ (the &quot;root&quot; page) does not contain any of those four.</thetext>
  </long_desc><long_desc isprivate="0" >
    <commentid>73481</commentid>
    <comment_count>5</comment_count>
    <who name="Ralph Seabrook">mlx75</who>
    <bug_when>2012-09-08 20:29:17 +0000</bug_when>
    <thetext>(In reply to comment #4)
&gt; What is the URL of the document that contains those links?
&gt; http://www.amelox.com/ (the &quot;root&quot; page) does not contain any of those four.

Yes, the first two are in the directory, and the Link Checker finds them too. Please look again. The server is case-sensitive.

These URLs are listed and evaluated:
www.amelox.com/orderpage.html
www.amelox.com/Tutor-Start.html

These URLs are not listed nor evaluated:
www.amelox.com/Orderpage-ES.html
www.amelox.com/Tutor-Start-ES.html

Thank you,
Rolf</thetext>
  </long_desc><long_desc isprivate="0" >
    <commentid>73488</commentid>
    <comment_count>6</comment_count>
    <who name="Ville Skyttä">ville.skytta</who>
    <bug_when>2012-09-09 07:19:49 +0000</bug_when>
    <thetext>What&apos;s needed is the URL of the document that links to the documents you mentioned. The main page does not:

$ curl -s http://www.amelox.com/ | grep -Pi &apos;(orderpage|tutor-start)&apos;
(produces no output)</thetext>
  </long_desc><long_desc isprivate="0" >
    <commentid>73494</commentid>
    <comment_count>7</comment_count>
    <who name="Ralph Seabrook">mlx75</who>
    <bug_when>2012-09-09 17:44:48 +0000</bug_when>
    <thetext>Thank you.
I see now what is &apos;wrong&apos;: each page needs a referring page. I fixed that, even though I had not intended to at this time.

I also noted that stand-alone pages in sub-directories can be checked separately.
That was not the case previously, and it solves the problem.

Thank you,
Rolf</thetext>
  </long_desc><long_desc isprivate="0" >
    <commentid>73512</commentid>
    <comment_count>8</comment_count>
    <who name="Ville Skyttä">ville.skytta</who>
    <bug_when>2012-09-10 06:14:21 +0000</bug_when>
    <thetext>Yes, if there are no links to a page, the link checker has no way of knowing that it exists.</thetext>
  </long_desc>
      
      

    </bug>

</bugzilla>