We all hate broken links. Some folks have said "URLs are broken. We need URNs. URNs will solve the problem of broken links." I doubt it.
URNs will play an important role in publishing on the web (along with copyright enforcement mechanisms, payment mechanisms, etc.; see URLs as catalog numbers), but I doubt they will increase reliability (or quality of service) for the vast majority of web links: URNs impose administrative overhead (e.g. registration, digital signatures), or at least workflow restrictions (e.g. once you've made a document available under a URN, you can never change it). See [STANF] for an excellent discussion.
I suspect there are propagation techniques that will increase reliability without the cost of human intervention: ways to distribute replicas of resources such that availability is sufficiently high and latency sufficiently low.
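For instance, here is a minimal sketch of client-side replica fallback, assuming (hypothetically) that a resource has been published at several mirror URLs; the names and URLs below are illustrative, not part of any existing mechanism. Availability improves because traversal succeeds as long as any one replica is reachable:

    import urllib.request
    from urllib.error import URLError

    # Hypothetical replica locations for a single resource.
    MIRRORS = [
        "http://mirror1.example.org/doc.html",
        "http://mirror2.example.org/doc.html",
        "http://mirror3.example.org/doc.html",
    ]

    def fetch_with_fallback(urls, timeout=5):
        """Try each replica in turn; return the first successful response body."""
        for url in urls:
            try:
                with urllib.request.urlopen(url, timeout=timeout) as resp:
                    return resp.read()
            except URLError:
                continue  # this replica is unreachable; try the next one
        raise IOError("all replicas failed")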
Another mechanism to increase reliability is automated link maintenance for referential integrity.
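As a rough illustration, the sketch below scans a document's absolute links and reports those that no longer resolve, so stale references can be repaired without a human clicking through every page. The regular expression is a crude stand-in for real HTML parsing, and the function name is invented for this example:

    import re
    import urllib.request
    from urllib.error import URLError

    HREF = re.compile(r'href="(http[^"]+)"')

    def find_broken_links(html_text):
        """Return the absolute links in html_text that fail to resolve."""
        broken = []
        for url in HREF.findall(html_text):
            try:
                # HEAD avoids transferring the whole body just to test the link.
                req = urllib.request.Request(url, method="HEAD")
                with urllib.request.urlopen(req, timeout=5):
                    pass  # link resolved; nothing to repair
            except URLError:
                broken.append(url)
        return broken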
If we look at link traversal as a case of the information retrieval problem, we can start to measure reliability only after we've defined "success."
Successful link traversal generally means finding a resource with perfect precision and recall, and retrieving an authentic representation of the resource in a timely fashion, i.e. with sufficiently low latency.
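Under that definition, a traversal can be tested mechanically. The sketch below assumes, purely hypothetically, that the publisher distributes a content digest out of band; a traversal succeeds only if the retrieved bytes match the digest and arrive within the latency budget:

    import hashlib
    import time
    import urllib.request

    def successful_traversal(url, expected_sha256, max_seconds=2.0):
        """True iff the resource is retrieved, authentic, and timely."""
        start = time.monotonic()
        with urllib.request.urlopen(url, timeout=max_seconds) as resp:
            body = resp.read()
        elapsed = time.monotonic() - start
        authentic = hashlib.sha256(body).hexdigest() == expected_sha256
        return authentic and elapsed <= max_seconds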
If a resource is replicated to increase availability or to reduce bandwidth consumption, it is important that the various replicas stay in sync (or close to it: some applications may be willing to accept out-of-date information a small percentage of the time).
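One way to express that tolerance is sketched below, assuming each cached replica records when it was last synchronized with the authoritative copy (the timestamps and freshness window are invented for illustration). An application states how stale it is willing to go; only if no replica qualifies does it pay the cost of contacting the origin:

    import time

    def read_resource(replicas, fetch_from_origin, max_staleness=60.0):
        """replicas: list of (last_sync_timestamp, content) pairs."""
        now = time.time()
        for last_sync, content in replicas:
            if now - last_sync <= max_staleness:
                return content  # fresh enough for this application
        return fetch_from_origin()  # no replica qualifies; use the origin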
Replication mechanisms are complicated by the need to support access control and tracking policies.