News


Geographic Coordinate Phone Book

Hi All.  This is my first leap into the W3 forums.

I’m looking to start a new working group to help shape and implement my idea.  I thought it best to float it here first to measure reaction.

The idea is this: create an equivalent of DNS name resolution, but for geographic coordinates.

A specification could be formulated to allow people to register coordinates which tie to a simple address.

E.g. L.thisismymybusiness.t resolves to a particular coordinate pair.
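Purely to illustrate the shape of the mapping with the example above (the registry object and lookup function below are invented for illustration, not part of any proposal):

// Illustrative only: a tiny registry mapping simple addresses to coordinate pairs.
var registry = {
  "l.thisismymybusiness.t": { lat: 51.5074, lon: -0.1278 }
};

// Hypothetical resolver: given a registered address, return its coordinate pair.
function resolveCoordinates(address) {
  var entry = registry[address.toLowerCase()];
  if (!entry) {
    throw new Error("Address not registered: " + address);
  }
  return entry;
}

resolveCoordinates("L.thisismymybusiness.t"); // returns { lat: 51.5074, lon: -0.1278 }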

I’ve thought a fair amount about the details, but I’m looking for input from you first, so I won’t cloud your thoughts by expanding here.

Thanks!

HTML Resource and Archive APIs proposal

Hi everyone,
this is Mr. “no one”, here to propose two new HTML APIs: one to manage network resources inside JavaScript programs, and another that allows I/O operations on compressed files. For example:

<resource src="file:///C:/images.zip" dst="http://www.domain.com" id="images" onLoadEnd="download_Or_Upload_Complete_Stopped();" load></resource>
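Purely as an illustration of how a script might drive such an element (the handler below is just a guess at what the onLoadEnd callback could do; nothing here is implemented in any browser):

// Illustrative only: the proposed <resource> element does not exist in any browser yet.
var images = document.getElementById("images");

// Guessed shape of the callback named in the onLoadEnd attribute above.
function download_Or_Upload_Complete_Stopped() {
  console.log("Transfer finished (or stopped) for:", images.getAttribute("src"));
}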

For more details visit bit.ly/1phZv0U; it is an HTML document with examples of the Resource and Archive APIs.

However, to propose it in a formal way, a new group with at least four people has to be opened. Do you want to help? Do you want to improve it? Do you…; what do you want to do?

Thanks

Mr. “no one”

Sir Tim’s dream … in the building industry?

DREAM – 2001

“I have a dream for the Web [in which computers] become capable of analyzing all the data on the Web – the content, links, and transactions between people and computers.”

I saw in this dream an opportunity to really improve the building industry and eventually came up with the basics for the scenario described below. It seems to me that W3C works well with Big Data collected by large institutions and businesses, whereas most small and medium-sized enterprises are concerned with handling large amounts of small data in different formats and from different sources. The dream does not really work unless all the data on the web can be analysed, and it cannot be realised unless those who know what data is needed are intimately involved with its selection and analysis.

So my idea in posting the scenario is to see whether there is any support for developing what I suggest, or whether there are other suggestions for how to enable building-industry computers (among others) to analyse data on the web. Personally, I feel we owe it to all the Internet pioneers to use their gifts to automate much more for much more of society.

SCENARIO

John is part of the design team for a large condominium.

He uses his phone to upload the project file to the registry so that he can use its assembler to create a machine in his phone’s display.


He is in the studio so he can more conveniently display the machine’s output on the big screen on the wall. He wants to double check on his selections for sanitary-ware in a typical bathroom.

He had previously marked up those values returned by the registry’s analyzer that were clearly not appropriate, and had run quite a number of reruns on the rest, adjusting criteria for price, material, delivery times and sustainability parameters, before settling on a combination that seemed to work well enough within the space, the budget, the quality expected by the developer, and his company’s reputation for good design.

REGISTRATION

This scenario relies on data and options being collected from multiple sources. Collection is via market-oriented registration rather than algorithms. Processing is by freely available universal online assemblers and analyzers rather than software applications.

The assembler is a simple application that accepts a specially punctuated plain-text (‘JSON’) file, converts it into hyperlinks that automatically arrange themselves after each selection in accordance with associations set in the file, and returns the amended file at session end. Although made with the same code as a web page, the display is more machine than document, just as an eBanking web page is as much part of a banking machine as the screen of an ATM.
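Purely to suggest the flavour of such a file (every field name below is invented; the actual format is not defined here):

// Invented example of the kind of data the assembler might accept.
var projectFile = {
  context: "typical bathroom",
  prompts: ["sanitary-ware", "price", "material", "delivery time", "sustainability"],
  options: {
    "sanitary-ware": ["basin A", "basin B", "WC suite C"]
  },
  associations: {
    "basin A": ["price", "delivery time"]
  }
};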

Rather than return thousands of web site addresses, the analyzer (being developed) extracts and returns unique options from data links provided by its registrants. The analyzer inherently builds and grades lists of prompts and options used in particular contexts within the industry. The lists are used for registration as well as searching, filtering and flagging returns.

class="" error

Hi,

In the source of my HTML page is the following code:
<a class="" href="http://www…

Such code leads to a W3C error:
“Sp 121 Syntax of attribute value does not conform to declared value”

Could you give me some advice on how to solve this error?

Thank you in advance for any reply.

Patrick

How to add a CSS class to an HTML element?

I have a very poor hosting provider.
Due to some restrictions I can’t use an ID.
For example, I want to change a heading. Also, note this is not correct code, but possible code which is not working.
<!DOCTYPE html>
<html>
<body>

<h1>My Heading 1</h1>
<button type="button"
onclick="document.getElementByName('h1'…
Click Me!</button>

</body>
</html>
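Roughly, what I am trying to achieve is something like this (untested; the class name “highlight” is just an example and would need to be defined in the stylesheet):

// Select the first <h1> by tag name instead of by ID, then add a CSS class to it.
document.getElementsByTagName("h1")[0].classList.add("highlight");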

How to update the HTML5 appcache files only when the user wants to update

Hi,

I am working on an offline HTML5 web application, in which I use the HTML5 AppCache. I want to handle the cache functionality at the user’s will: whenever a user visits the web application he should get a prompt asking whether to change to the new version. If he chooses no, the application should keep working on the old version of the appcache (even after refreshing or reloading the page). If he chooses yes, his application must use the new version.

I have searched a lot for this, but I can only listen to the events of the appcache; see this:

http://www.html5rocks.com/en/tutorials/appcache/beginner/
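Roughly, what I am trying is something like the following (an untested sketch using the standard applicationCache “updateready” event, swapCache() and a simple confirm() dialog as the prompt):

// Untested sketch: ask the user before switching to a newly downloaded cache.
window.applicationCache.addEventListener("updateready", function () {
  if (window.applicationCache.status === window.applicationCache.UPDATEREADY) {
    if (confirm("A new version is available. Update now?")) {
      // User accepted: swap in the new cache and reload so the page runs on it.
      window.applicationCache.swapCache();
      window.location.reload();
    }
    // User declined: keep running on the old cache for this session.
  }
}, false);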

Please, if anyone can help me, do; I seriously need it.

Thanks.

Proposed HTTP/2 fix to improve web statefulness

Abstract: A simple HTTP/2.0 fix would remove the GET size limitation.

Problem example: Say you have a search form, and instead of having to pass 100 characters to the report server, you need to pass 1,000 to GET a report.

Problem: If the state information passed from page to page is small (e.g. a simple web page request), then GET works fine. But when the state information is larger than will fit in a URI (i.e. more complex web requests), then either a session variable or POST must be used to pass it. Session variables time out, and POST is non-idempotent and thus not suited to merely passing state. Either way, the most basic idea of web statefulness for complex page requests is currently broken.

This is what commonly causes the widely discussed IE “Webpage has expired” error when the browser back button is used on a complex page request.  It should be noted that  IE is faithfully following the standards, but that the standard itself is limiting.

Other browsers (Firefox, Chrome, Opera and Safari) return the previously cached page rather than report the possible problem, which makes complex pages work but is not really the best thing either, because it puts the user at risk of double purchases. Furthermore, for pages to be browser-independent they must be dumbed down to suit the IE method.

Proposed solution: Add a new idempotent method (similar to GET); for the moment let’s call it “STATE”, to signify that you are simply passing some state data and not making a database update. Have it functionally work like POST in how it passes the variables (in the request body, with no size constraint), but have it designated like GET in being treated as idempotent by browsers, so as not to cause expired warnings.

Usage example: <form action="MakeFlyer.php" method="state">
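To show what this could look like from script (hypothetical: no browser supports a STATE method today; XMLHttpRequest is only used here to illustrate the intent):

// Hypothetical: "STATE" is the proposed method, not something browsers implement.
var longCriteriaString = new Array(1001).join("x"); // stand-in for ~1,000 characters of report criteria
var xhr = new XMLHttpRequest();
xhr.open("STATE", "MakeFlyer.php");
xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
xhr.send("criteria=" + encodeURIComponent(longCriteriaString));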

A good problem reference:  See the very bottom of this page relating to passing data between web pages, where it talks about: “If the form data set is large – say, hundreds of characters – then METHOD=”GET” may cause practical problems with implementations which cannot handle that long URLs. …

Issues: This will not allow bookmarking the page request as you can with a GET request; for applications that need that capability, GET will still exist. It is not reasonable to simply increase the allowed size of a URI string, as that has too many other implications.
_______
Hope this makes sense to a few of you.  It’s my first suggestion to W3C after 17 years of web development.