<?xml version="1.0" encoding="UTF-8" standalone="yes" ?>
<!DOCTYPE bugzilla SYSTEM "https://www.w3.org/Bugs/Public/page.cgi?id=bugzilla.dtd">

<bugzilla version="5.0.4"
          urlbase="https://www.w3.org/Bugs/Public/"
          maintainer="sysbot+bugzilla@w3.org"
>

    <bug>
          <bug_id>4442</bug_id>
          <creation_ts>2007-03-31 17:54:56 +0000</creation_ts>
          <short_desc>output format should be easily grep&apos;able</short_desc>
          <delta_ts>2013-11-03 07:35:10 +0000</delta_ts>
          <reporter_accessible>1</reporter_accessible>
          <cclist_accessible>1</cclist_accessible>
          <classification_id>1</classification_id>
          <classification>Unclassified</classification>
          <product>LinkChecker</product>
          <component>checklink</component>
          <version>4.3</version>
          <rep_platform>All</rep_platform>
          <op_sys>All</op_sys>
          <bug_status>NEW</bug_status>
          <resolution></resolution>
          <bug_file_loc></bug_file_loc>
          <status_whiteboard></status_whiteboard>
          <keywords></keywords>
          <priority>P4</priority>
          <bug_severity>enhancement</bug_severity>
          <target_milestone>---</target_milestone>
          <everconfirmed>1</everconfirmed>
          <reporter name="Oliver Bandel">oliver</reporter>
          <assigned_to name="This bug has no owner yet - up for the taking">dave.null</assigned_to>
          <cc>gonzo1lee</cc>
          <cc>oliver</cc>
          <cc>sporosbe</cc>
          <qa_contact name="qa-dev tracking">www-validator-cvs</qa_contact>
          <comment_sort_order>oldest_to_newest</comment_sort_order>
          <long_desc isprivate="0">
    <commentid>14614</commentid>
    <comment_count>0</comment_count>
    <who name="Oliver Bandel">oliver</who>
    <bug_when>2007-03-31 17:54:56 +0000</bug_when>
    <thetext>Hello,

today I installed and used your link checker for the
first time.

It&apos;s a fine tool, but it could be much better
if the output were easy to grep.

One possible format could be
&quot;LINK_OK: http://....&quot;
or
&quot;LINK_NOT_OK: &lt;error-description&gt; http://.....&quot;

Then, with
  grep LINK_NOT_OK &lt;linkcheck.log&gt;
all links that need to be fixed
could easily be extracted.

Especially for large sites this would be
a very convenient way to work.

Otherwise each entry must be viewed in detail,
which takes a lot of time (or someone
must implement a much more complicated parser
for your output; that wouldn&apos;t make sense
when a simple grep would be enough).

TIA,
  Oliver Bandel</thetext>
  </long_desc>
  <long_desc isprivate="0">
    <commentid>14619</commentid>
    <comment_count>1</comment_count>
    <who name="Ville Skyttä">ville.skytta</who>
    <bug_when>2007-04-01 17:25:04 +0000</bug_when>
    <thetext>Don&apos;t the --summary and/or --quiet options meet your needs?

BTW, the suggested format would not work as-is; at least in recursive mode it would have to include the URL of the document where the broken link/fragment was found.

See also bug 382 which requests a somewhat similar feature.</thetext>
  </long_desc>
    </bug>

</bugzilla>