News

class="" error

Hi,

The source of my HTML page contains the following code:
<a class="" href="http://www…

This code leads to a W3C validator error:
"Sp 121 Syntax of attribute value does not conform to declared value"

Could you give me some advice on how to solve this error?

Thank you in advance for any reply.

Patrick

How to add a CSS class to an HTML element?

My hosting provider is quite bad, and due to some restrictions I can't use an ID. For example, I want to change the style of a heading. Note that the following is not correct code, just a possible attempt that is not working:
<!DOCTYPE html>
<html>
<body>

<h1>My Heading 1</h1>
<button type="button"
onclick="document.getElementByName('h1'…
Click Me!</button>

</body>
</html>
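A working version of the idea above, as a sketch: the DOM has no `getElementByName`, but the standard `getElementsByTagName` method can select the heading without an ID. The class name "highlight" and the helper's name are placeholders of my own choosing:

```javascript
// Sketch: add a CSS class to the first <h1> without using an id.
// getElementsByTagName is a standard DOM method; the class name
// "highlight" and the function name are placeholder choices.
function highlightFirstHeading(doc) {
  var h1 = doc.getElementsByTagName('h1')[0];
  if (h1) {
    h1.className = 'highlight'; // or: h1.classList.add('highlight')
  }
  return h1;
}
```

In a page this would be wired to the button, e.g. `<button type="button" onclick="highlightFirstHeading(document)">Click Me!</button>`, with a matching `.highlight { … }` rule in the stylesheet.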

How to update the HTML5 AppCache files only when the user wants to update

Hi,

I am working on an offline HTML5 web application that uses the HTML5 AppCache. I want to put this caching behaviour under the user's control: whenever a user visits the application and a new version is available, he should get a prompt asking whether to change version. If he chooses no, the application should keep working on the old version of the AppCache (even after refreshing or reloading, the page should keep using the old version). If he chooses yes, the application must use the new version.

I have searched a lot for this, but all I could find is how to listen for the AppCache events; see this:

http://www.html5rocks.com/en/tutorials/appcache/beginner/

If anyone can help me with this, please do; I really need it.
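For what it's worth, here is a sketch of the pattern I believe is needed, built on the real AppCache API (window.applicationCache, its updateready event, its status property, and swapCache()). The helper name and its return values are my own; `cache` and `askUser` are parameters so the logic can be shown without a browser:

```javascript
// Sketch: switch to a newly downloaded AppCache version only if the
// user agrees; otherwise keep serving the current version. In a page,
// pass window.applicationCache as `cache` and window.confirm as `askUser`.
function handleCacheUpdate(cache, askUser) {
  if (cache.status !== cache.UPDATEREADY) {
    return 'no-update';   // nothing new has been downloaded
  }
  if (askUser('A new version is available. Switch to it now?')) {
    cache.swapCache();    // activate the new cache ...
    return 'swapped';     // ... the caller should then reload the page
  }
  return 'kept-old';      // the old version stays active for this session
}

// Wiring it up in a page (untested sketch):
// window.applicationCache.addEventListener('updateready', function () {
//   if (handleCacheUpdate(window.applicationCache, window.confirm) === 'swapped') {
//     window.location.reload();
//   }
// });
```

One caveat: declining the prompt only defers the switch for the current session. Once the browser has downloaded the new manifest, a later page load will use the new cache anyway, so "stay on the old version forever" is not something AppCache really supports.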

Thanks.

Proposed HTTP/2 fix to improve web statefulness

Abstract: A simple HTTP/2.0 fix would remove the GET size limitation.

Problem example: Say you have a search form, and instead of having to pass 100 characters to the report server, you need to pass 1,000 to GET a report.

Problem: If the state information passed from page to page is small (e.g. a simple web page request), then GET works fine. But when the state information is larger than will fit in a URI (i.e. more complex web requests), then either a session variable or POST must be used to pass the state information. Session variables time out, and POST is non-idempotent and thus not suited to merely passing state. Either way, the most basic idea of web statefulness for complex but idempotent page requests is currently broken.

This is what commonly causes the widely discussed IE “Webpage has expired” error when the browser back button is used after a complex page request. It should be noted that IE is faithfully following the standards, but the standard itself is limiting.

Other browsers (Firefox, Chrome, Opera and Safari) return the previously cached page rather than report the possible problem. That makes complex pages work, but it is not really the best behaviour either, because it puts the user at risk of double purchases. Furthermore, for pages to be browser independent they must be dumbed down to suit the IE method.

Proposed solution: Add a new idempotent method (similar to GET in semantics); for the moment let's call it "STATE", to signify that you are simply passing some state data and not making a database update. Have it functionally work like POST in how it passes the variables, i.e. in the request body with no size constraint, but have it be treated like GET, i.e. as idempotent by browsers, so that it does not cause expired warnings.

Usage example: <form action="MakeFlyer.php" method="state">

A good problem reference: see the very bottom of this page relating to passing data between web pages, where it says: "If the form data set is large – say, hundreds of characters – then METHOD="GET" may cause practical problems with implementations which cannot handle that long URLs. …

Issues: This will not allow bookmarking the page request, as a GET request does; for applications that need that capability, GET will still exist. It is not reasonable to simply raise the allowed size of a URI, as that has too many other implications.
_______
Hope this makes sense to a few of you.  It’s my first suggestion to W3C after 17 years of web development.

Making typed array processing really fast

Hello all,

We all know that the computational performance of the code generated by modern ECMAScript JIT/AOT engines is very high, and for most purposes “good enough”. In fact, I have encountered several situations where my JavaScript code runs as fast as (or even faster than) the corresponding optimized compiled C++ code (g++ -O3).

Still, there are situations when this is not enough. For instance, in real time audio processing you want to minimize latencies and CPU load as far as possible.

One way to increase the performance and reduce latencies is to utilize the instruction level parallelism that is available in modern CPU architectures through SIMD instructions. On the other hand, it is very difficult to make use of these instructions in a platform agnostic language such as ECMAScript (see [1] and [2], for instance).

The River Trail proposal from Intel solves the parallelism issue by introducing the new data type ParallelArray, upon which fairly generic ECMAScript operations can operate. While this is a nice and quite generic solution, it adds some fairly heavy requirements on the ECMAScript compiler.

I decided to make an attempt at creating a partial solution that is easy to integrate into current Web clients, yet powerful enough to solve many problems (especially related to signal processing).

At this point, there is an unofficial draft specification, and a JavaScript polyfill and demos. You can also find an open source C++ implementation of most of the required functionality.
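To make the idea concrete, here is a polyfill-style sketch of one operation of the kind the draft describes (the shape `ArrayMath.add(dst, x, y)` follows the unofficial draft; treat the exact API as tentative). A native implementation could map such a loop directly onto SIMD instructions:

```javascript
// Polyfill-style sketch of a destination-style typed array operation.
// A native engine could vectorize this loop using SIMD.
var ArrayMath = {
  add: function (dst, x, y) {
    var n = Math.min(dst.length, x.length, y.length);
    for (var i = 0; i < n; i++) {
      dst[i] = x[i] + y[i];
    }
  }
};

// Usage: mix two audio buffers sample by sample.
var a = new Float32Array([0.1, 0.2, 0.3]);
var b = new Float32Array([0.4, 0.5, 0.6]);
var mixed = new Float32Array(3);
ArrayMath.add(mixed, a, b);
```

Because the whole operation is a single call on typed arrays, the engine knows the element type and the loop bounds up front, which is exactly what a SIMD code path needs.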

For continuing this work, I’ve proposed the community group Web Array Math. If you are interested in participating, feel free to support it.

Regards,

Marcus Geelnard, Opera Software ASA

[1] http://blog.aventine.se/post/16318162396/simd

[2] https://bugzilla.mozilla.org/show_bug.cgi?id=644389

Offline Web Model

Situations of interrupted work caused by accidental loss of connectivity or by intentional offline work are very frequent. Concerned by the negative effects of interruptions on users’ activities, we are investigating a new approach for the design and development of Web applications resilient to interruptions. In order to help users recover from interruptions whilst navigating Web sites, we propose a model-based approach that combines explicit representation of end-user navigation, local information storage (i.e. the Web browser caching mechanism) and policies for client-side adaptation of Web sites.

One of the key elements of our approach is the automatic generation of HTML. This HTML code includes specific attributes for adapting Web pages to cope with offline navigation.

Our aim is to discuss the use of such attributes and/or the inclusion of new HTML elements within the standard to support offline interaction with Web sites resilient to interruptions.

What do you think about this topic?

Thanks in advance, best regards,

Félix

Schema.org vs. W3C?

Hello,

In recent months I have been following Schema.org (http://schema.org/), developed by Bing, Google and Yahoo!. Schema.org is a metadata vocabulary for marking up the content of web pages.

In my opinion, the project has a good chance of success because it has the support of the three most important web search engines. But it also has some controversial aspects I would like to share with you.

One is that Schema.org is being developed outside the framework of the W3C, even though the W3C has been leading the Semantic Web for many years, and Microsoft, Google and Yahoo! are all members of the consortium. This apparent contradiction is reflected in the choice of Microdata (http://www.w3.org/TR/microdata/) as the mark-up syntax instead of RDFa (http://www.w3.org/TR/rdfa-primer/), the syntax promoted by the W3C.

Maybe the reason is that Schema.org and Linked Data (W3C) have different goals for marking up web content. On the other hand, Schema.org is collaborating with the W3C on RDFa Lite, as we can see at http://blog.schema.org/2011/11/using-rdfa-11-lite-with-schemaorg.html. So maybe their goals aren’t so different after all?
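For readers comparing the two syntaxes, here is the same trivial statement written in each (a minimal sketch; the Person type and name property come from the schema.org vocabulary):

```html
<!-- Microdata: the syntax chosen by Schema.org -->
<div itemscope itemtype="http://schema.org/Person">
  <span itemprop="name">Andreu Sulé</span>
</div>

<!-- RDFa Lite: the simplified profile of the W3C's RDFa -->
<div vocab="http://schema.org/" typeof="Person">
  <span property="name">Andreu Sulé</span>
</div>
```

The mark-up effort is almost identical in both, which is part of why the RDFa Lite bridge between the two camps was feasible.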

What are your thoughts?

Andreu Sulé
University of Barcelona
Spain