Linux Weekly News recently published a story called “Encrypted Media Extensions and exit conditions”, and Cory Doctorow followed by publishing “W3C DRM working group chairman vetoes work on protecting security researchers and competition”. While the former is a more accurate account of the status, we feel obligated to offer corrections and clarifications to the latter, and to share a different perspective on security research protection, consensus at W3C, W3C’s mission and the W3C Process, as well as the proposed Technology and Policy Interest Group.
There have been a number of articles and blog posts about the W3C EME work, but we’ve not been able to offer counterpoints to every public post, as we’re focusing on shepherding and promoting the work of 40 Working Groups and 14 Interest Groups – all working on technologies important to the Web, such as HTML5, Web Security, Web Accessibility, Web Payments, Web of Things, Automotive, etc.
TAG statement on the Web’s security model
In his recent article, Cory wrote:
For a year or so, I’ve been working with the EFF to get the World Wide Web Consortium to take steps to protect security researchers and new market-entrants who run up against the DRM standard they’re incorporating into HTML5, the next version of the key web standard.
First, the W3C is concerned about risks for security researchers. In November 2015 the W3C Technical Architecture Group (TAG), a special group within the W3C, chartered under the W3C Process with stewardship of the Web architecture, made a statement (after discussions with Cory on this topic) about the importance of security research. The TAG statement was:
The Web has been built through iteration and collaboration, and enjoys strong security because so many people are able to continually test and review its designs and implementations. As the Web gains interfaces to new device capabilities, we rely even more on broad participation, testing, and audit to keep users safe and the web’s security model intact. Therefore, W3C policy should assure that such broad testing and audit continues to be possible, as it is necessary to keep both design and implementation quality high.
W3C TAG statements have policy weight. The TAG is co-Chaired by the inventor of the Web and Director of W3C, Tim Berners-Lee. It has elected representatives from W3C members such as Google, Mozilla, Microsoft and others.
This TAG statement was reiterated in an EME Factsheet, published before the W3C Advisory Committee meeting in March 2016, as well as in the W3C blog post of April 2016, published when the EME work was allowed to continue.
Second, EME is not a DRM standard. W3C does not make DRM. The specification does not define a content protection or Digital Rights Management system. Rather, EME defines a common API that may be used to discover, select and interact with such systems as well as with simpler content encryption systems. We appreciate that to those who are opposed to DRM, any system which “touches” upon DRM is to be avoided, but the distinction is important. DRM is on the Web and has been for many years. We ask pragmatically what we can do for the good of the Web to both make sure a system which uses protected content insulates users as much as possible, and ensure that the work is done in an open, transparent and accessible way.
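To make that distinction concrete, here is a hedged sketch of the discovery API that EME defines. The configuration shown is illustrative; “org.w3.clearkey” is the simple Clear Key encryption system that the specification itself defines and requires of all implementations:

const configurations = [{
  initDataTypes: ['cenc'],
  videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.42E01E"' }]
}];

// Discover and select a key system, then attach it to a media element.
navigator.requestMediaKeySystemAccess('org.w3.clearkey', configurations)
  .then((access) => access.createMediaKeys())
  .then((mediaKeys) => document.querySelector('video').setMediaKeys(mediaKeys));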
A several-month task force to assess EFF’s proposed covenant
Cory further wrote, about the covenant:
As a compromise that lets the W3C continue the work without risking future web users and companies, we’ve proposed that the W3C members involved should agree on a mutually acceptable binding promise not to use the DMCA and laws like it to shut down these legitimate activities — they could still use it in cases of copyright infringement, just not to shut down activity that’s otherwise legal.
The W3C took the EFF covenant proposal extremely seriously. The proposal was made as part of EFF’s formal objection to the Working Group’s charter extension, and the W3C leadership made an extraordinary effort to resolve the objection and evaluate the proposed covenant, convening a task force that ran for several months. Hundreds of emails were exchanged between W3C Members, and presentations were made to the W3C Advisory Committee at its March 2016 meeting.
While there was some support for the idea of the proposal, the large majority of W3C Members did not wish to accept the covenant as written (the version they voted on was different from the version the EFF made public), nor a slightly different version proposed by another member.
Member confidentiality vs. transparent W3C Process
The LWN writeup is an excellent summary of the events so far, but parts of the story can’t be told because they took place in “member-confidential” discussions at the W3C. I’ve tried to make EFF’s contributions to this discussion as public as possible in order to bring some transparency to the process, but alas the rest of the discussion is not visible to the public.
W3C works in a uniquely transparent way. Specifications are largely developed in public, and most groups have public minutes and mailing lists. However, Member confidentiality is a very valuable part of the W3C process. That business and technical discussions can happen in confidence between members is invaluable in fostering broader discussion, trust and the opportunity to be frank. The proceedings of the HTML Media Extensions work are public; however, discussions amongst Advisory Committee members are confidential.
In his post, Nathan Willis quoted a June 6 blog post by EFF’s Cory Doctorow, and continued:
Enough W3C members endorsed the proposed change that the charter could not be renewed. After 90 days’ worth of discussion, the working group had made significant progress, but had not reached consensus. The W3C executive ended this process and renewed the working group’s charter until September.
Similar wording is found in an April EFF blog post, attributing the renewal to “the executive of the W3C.” In both instances, the phrasing may suggest that there was considerable internal debate in the lead-up to the meeting and that the final call was made by W3C leadership. But, it seems, the ultimate decision-making mechanism (such as who at W3C made the final decision and on what date) is confidential; when reached for comment, Doctorow said he could not disclose the process.
Though the Member discussions are confidential, the process itself is not.
In the W3C process, charters for Working Groups go to the Advisory Committee for review at different stages of completion. That happened in this case. The EFF made an objection. By process, when there are formal objections the W3C then tries to resolve the issue.
As part of the process, when there is no consensus, the W3C generally allows existing groups to continue their work as described in the charter. When there is a “tie-break” needed, it is the role of the Director, Tim Berners-Lee, to assess consensus and decide on the outcome of formal objections. It was only after the overwhelming majority of participants rejected the EFF proposal for a covenant attached to the EME work that Tim Berners-Lee and the W3C management felt that the EFF proposal could not proceed and the work would be allowed to continue.
Next steps within the HTML Media Extensions Working Group
Cory also wrote:
The group’s charter is up for renewal in September, and many W3C members have agreed to file formal objections to its renewal unless some protection is in place. I’ll be making an announcement shortly about those members and suggesting some paths for resolving the deadlock.
The group is not up for charter renewal in September; rather, its specifications are progressing on the timeline to “Recommendation”. A Candidate Recommendation transition will soon have to be approved, and then the spec will require interoperability testing and Advisory Committee approval before it reaches REC. One criterion for Recommendation is that the ideas in the technical report are appropriate for widespread deployment, and EME is already deployed in almost all browsers.
To a lesser extent, we wish to clarify that a veto is not part of the role of Working Group chairs; indeed, Cory wrote:
Linux Weekly News reports on the latest turn of events: I proposed that the group take up the discussion before moving to recommendation, and the chairman of the working group, Microsoft’s Paul Cotton, refused to consider it, writing, “Discussing such a proposed covenant is NOT in the scope of the current HTML Media Extensions WG charter.”
As Chair of the HTML Media Extensions Working Group, Paul Cotton’s primary role is to facilitate consensus-building among Group members for issues related to the specification. A W3C Chair leads the work of the group but does not decide for the group; work proceeds with consensus. The covenant proposal had been under wide review with many lengthy discussions for several months on the W3C Advisory Committee mailing lists. Paul did not dismiss W3C-wide discussion of the topic, but correctly noted it was not a topic in line with the chartered work of the group.
In the April 2016 announcement that the EME work would continue, the W3C reiterated the importance of security research and acknowledged the need for high level technical policy discussions at W3C – not just for the covenant. A few weeks prior, during the March 2016 Advisory Committee meeting the W3C announced a proposal to form a Technology and Policy Interest Group.
The W3C has, for more than 20 years, focused on technology standards for the Web. However, recognizing that the Web is getting more complex and its technology increasingly woven into our lives, we must consider technical aspects of policy as well. The proposed Technology and Policy Interest Group, if started, will explore, discuss and clarify aspects of policy that may affect the mission of W3C to lead the Web to its full potential. This group had been in preparation before the EME covenant was presented, and will address broader issues than anti-circumvention. It is designed as a forum for W3C Members to try to reach consensus on descriptions of varying views on policy issues, such as deep linking or pervasive monitoring.
While we have tried to find common ground among our membership on the covenant issue, we have not yet succeeded. We hope that EFF and others will continue to try. We recognize and support the importance of security research, and the impact of policy on innovation, competition and the future of the Web. Again, for more information on EME and frequently asked questions, please see the EME Factsheet, published in March 2016.
At An Event Apart in Boston, I had the pleasure of meeting Hannah Birch from Pro Publica. It turns out that she was a copy editor in a previous life. I began gushing about the pleasure of working with a great editor.
When I think back on happy memories of working with world-class editors, I always remember a Skype call about an article I was writing for The Manual. I talked with my editor for hours about the finer points of wordsmithery, completely losing track of time. It was a real joy. That editor was Carolyn Wood.
Carolyn is going through a bad time right now. A really bad time. A combination of awful medical problems and a Kafkaesque labyrinth of health insurance has created a perfect shitstorm. I feel angry, sad, and helpless. At least I can do something about that last part. And you can too.
Planet Mozilla — Mozilla Awards $385,000 to Open Source Projects as part of MOSS “Mission Partners” Program
For many years people with visual impairments and the legally blind have paid a steep price to access the Web on Windows-based computers. The market-leading software for screen readers costs well over $1,000. The high price is a considerable obstacle to keeping the Web open and accessible to all. The NVDA Project has developed an open source screen reader that is free to download and to use, and which works well with Firefox. NVDA aligns with one of the Mozilla Manifesto’s principles: “The Internet is a global public resource that must remain open and accessible.”
That’s why, at Mozilla, we have elected to give the project $15,000 in the inaugural round of our Mozilla Open Source Support (MOSS) “Mission Partners” awards. The award will help NVDA stay compatible with the Firefox browser and support a long-term relationship between our two organizations. NVDA is just one of eight grantees in a wide range of key disciplines and technology areas that we have chosen to support as part of the MOSS Mission Partners track. This track financially supports open source software projects doing work that meaningfully advances Mozilla’s mission and priorities.
Giving Money for Open Source Accessibility, Privacy, Security and More
Aside from accessibility, security and privacy are common themes in this set of awards. We are supporting several secure communications tools, a web server which only works in secure mode, and a distributed, client-side, privacy-respecting search engine. The set is rounded out with awards to support the growing Rust ecosystem and promote open source options for the building of compelling games on the Web. (Yes, games. We consider games to be a key art-form in this modern era, which is why we are investing in the future of Web games with WebAssembly and Open Web Games.)
MOSS is a continuing program. The Mission Partners track has a budget of around US$1.25 million for 2016. The first set of awards, listed below, totals US$385,000, and we look forward to supporting more projects in the coming months. Applications remain open on an ongoing basis, both for Mission Partners and for the Foundational Technology track (for projects creating software that Mozilla already uses or deploys).
We are greatly helped in evaluating applications and making awards by the MOSS Committee. Many thanks again to them.
And The Winners Are….
The first eight awardees are:
Tor: $152,500. Tor is a system for using a distributed network to communicate anonymously and without being tracked. This award will be used to significantly enhance the Tor network’s metrics infrastructure so that the performance and stability of the network can be monitored and improvements made as appropriate.
Tails: $77,000. Tails is a secure-by-default live operating system that aims at preserving the user’s privacy and anonymity. This award will be used to implement reproducible builds, making it possible for third parties to independently verify that a Tails ISO image was built from the corresponding Tails source code.
Caddy: $50,000. Caddy is an HTTP/2 web server that uses HTTPS automatically and by default via Let’s Encrypt. This award will be used to add a REST API, web UI, and new documentation, all of which make it easier to deploy more services with TLS.
Mio: $30,000. Mio is an asynchronous I/O library written in Rust. This award will be used to make ergonomic improvements to the API and thereby make it easier to build high performance applications with Mio in Rust.
DNSSEC/DANE Chain Stapling: $25,000. This project is standardizing and implementing a new TLS extension for transport of a serialized DNSSEC record set, to reduce the latency associated with DANE and DNSSEC validation. This award will be used to complete the standard in the IETF and build both a client-side and a server-side implementation.
Godot Engine: $20,000. Godot is a high-performance multi-platform game engine which can deploy to HTML5. This award will be used to add support for Web Sockets, WebAssembly and WebGL 2.0.
PeARS: $15,500. PeARS (Peer-to-peer Agent for Reciprocated Search) is a lightweight, distributed web search engine which runs in an individual’s browser and indexes the pages they visit in a privacy-respecting way. This award will permit face-to-face collaboration among the remote team and bring the software to beta status.
NVDA: $15,000. NonVisual Desktop Access (NVDA) is a free, open source screen reader for Microsoft Windows. This award will be used to make sure NVDA and Firefox continue to work well together as Firefox moves to a multi-process architecture.
This is only the beginning. Stay tuned for more award announcements as we allocate funds. Open Source is a movement that is only growing, both in numbers and in importance. Operating in the open makes for better security, better accessibility, better policy, better code and, ultimately, a better world. So if you know any projects whose work furthers the Mozilla Mission, send them our way and encourage them to apply.
The web platform is capable of amazing things. Thanks to the ongoing hard work of standards bodies, browser vendors, and web developers, web standards are feature-rich and continuously improving. The WebKit project in particular emphasizes security, performance, and battery life when evaluating and implementing web standards. These standards now include most of the functionality needed to support rich media and interactive experiences that used to require legacy plug-ins like Adobe Flash. When Safari 10 ships this fall, by default, Safari will behave as though common legacy plug-ins on users’ Macs are not installed.
On websites that offer both Flash and HTML5 implementations of content, Safari users will now always experience the modern HTML5 implementation, delivering improved performance and battery life. This policy and its benefits apply equally to all websites; Safari has no built-in list of exceptions. If a website really does require a legacy plug-in, users can explicitly activate it on that website.
If you’re a web developer, you should be aware of how this change will affect your users’ experiences if parts of your websites rely on legacy plug-ins. The rest of this post explains the implementation of this policy and touches on ways to reduce a website’s dependence on legacy plug-ins.
How This Works
By default, Safari no longer tells websites that common plug-ins are installed. It does this by not including information about Flash, Java, Silverlight, and QuickTime in navigator.mimeTypes. This convinces websites with both plug-in and HTML5-based media implementations to use their HTML5 implementation.
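For context, here is the sort of detection many sites perform; this is an illustrative sketch, not Safari’s code. Under the new behavior, the lookup comes back undefined by default:

// Classic Flash detection via the plug-in MIME type.
const flashAvailable =
  navigator.mimeTypes['application/x-shockwave-flash'] !== undefined;

if (!flashAvailable) {
  // Sites with an HTML5 fallback take this path in Safari 10.
}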
Of these plug-ins, the most widely-used is Flash. Most websites that detect that Flash isn’t available, but don’t have an HTML5 fallback, display a “Flash isn’t installed” message with a link to download Flash from Adobe. If a user clicks on one of those links, Safari will inform them that the plug-in is already installed and offer to activate it just one time or every time the website is visited. The default option is to activate it only once. We have similar handling for the other common plug-ins.
When a website directly embeds a visible plug-in object, Safari instead presents a placeholder element with a “Click to use” button. When that’s clicked, Safari offers the user the options of activating the plug-in just one time or every time the user visits that website. Here too, the default option is to activate the plug-in only once.
Safari 10 also includes a menu command to reload a page with installed plug-ins activated; it’s in Safari’s View menu and the contextual menu for the Smart Search Field’s reload button. All of the settings controlling what plug-ins are visible to web pages and which ones are automatically activated can be found in Safari’s Security preferences.
Whenever a user enables a plug-in on a website, it’ll remain enabled as long as the user regularly visits the website and the website still uses the plug-in. More specifically, Safari expires a user’s request to activate a plug-in on a particular website after it hasn’t seen that plug-in used on that site for a little over a month.
Recommendations for Web Developers
Before Safari 10 is released this fall, we encourage you to test how these changes impact your websites. You can do that by installing a beta of macOS Sierra. There will be betas of Safari 10 for OS X Yosemite and OS X El Capitan later this summer.
To avoid making your users have to explicitly activate a plug-in on your website, you should try to implement features using technologies built into the web platform. You can use HTML5 <video>, the Audio Context API, and Media Source Extensions to implement robust, secure, customized media players. New in Safari 10, text can be cut or copied to the clipboard using execCommand, which was previously only possible using a plug-in. A host of CSS features, including animations, backdrop filters, and font feature settings can add some visual polish to a site. And WebGL is great for creating interactive 2D or 3D content, like games.
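As a quick illustration of that new clipboard capability (the element IDs are assumptions for this sketch, not from the post):

// Copy the contents of a text field when a button is clicked.
document.getElementById('copy-button').addEventListener('click', () => {
  const field = document.getElementById('share-url');
  field.select();               // execCommand('copy') acts on the current selection
  document.execCommand('copy'); // newly supported without a plug-in in Safari 10
});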
If you serve a different version of your website to mobile browsers, it may already implement its media playback features using web standards. As browsers continue to transition away from legacy plug-ins, you can preserve the rest of your users’ experiences by serving those same implementations to all visitors of your site.
If you can’t replace a plug-in-based system in the short term, you may want to teach your users how to enable that plug-in for your website in Safari. In an enterprise setting, system administrators can deploy managed policies to enable a plug-in on specific websites, if necessary.
Help Us Help You
If you find that you can’t implement parts of your websites without using legacy plug-ins, you can help yourself and other developers by telling us about it. In general, any time the web platform falls short of your needs, we want to know about it. Your feedback has shaped, and will continue to shape, the priorities of the WebKit project and the Safari team. To send that type of feedback, please write an email to, or tweet at, Jonathan Davis.
And if you have questions about Safari’s policies for using Flash or other plug-ins, feel free to reach me on Twitter at @rmondello.
What do an American actor, a British sitcom character and an HTML attribute have in common? If you’ve ever watched Mary Poppins and winced at Dick Van Dyke’s attempt at an English accent, or found yourself laughing at Delboy Trotter trying to speak French in Only Fools and Horses, you may well guess the answer.
The lang attribute is used to identify the language of text content on the web. This information helps search engines return language specific results, and it is also used by screen readers that switch language profiles to provide the correct accent and pronunciation.
To set the primary language for a document, you use the lang attribute on the <html> element:

<html lang="en"> ... </html>
The lang attribute takes an ISO language code as its value. Typically this is a two-letter code such as “en” for English, but it can also be an extended code such as “en-gb” for British English.
The lang attribute must also be used to identify chunks of text in a language that is different from the document’s primary language. For example:
<html lang="en">
  ...
  <body>
    <p>This page is written in English.</p>
    <p lang="fr">Sauf pour ce qui est écrit en mauvais français.</p>
  </body>
</html>
The lang attribute is forgotten surprisingly often, perhaps because it makes no apparent difference unless you use a screen reader or you are a search engine. If you’re in any doubt at all about what a difference it makes, listen to this screen reader demo!
In the last week, we landed 85 PRs in the Servo organization’s repositories.
That number is a bit low this week, due to some issues with our CI machines (especially the OSX boxes) that have hurt our landing speed. Most of the staff are in London this week for the Mozilla All Hands meeting, but we’ll try to look at it.
Planning and Status
This week’s status updates are here.
- glennw upgraded our GL API usage to rely on more GLES3 features
- ms2ger removed some usage of
- nox removed some of the dependencies on crates that are very fragile to rust nightly changes
- nox reduced the number of fonts that we load unconditionally
- larsberg added the ability to open web pages in Servo on Android
- anderco fixed some box shadow issues
- ajeffrey implemented the beginnings of the top level browsing context
- izgzhen improved the implementation and tests for the file manager thread
- edunham expanded the ./mach package command to handle desktop platforms
- daoshengmu implemented
- pcwalton fixed an issue with receiving mouse events while scrolling in certain situations
- danlrobertson continued the quest to build Servo on FreeBSD
- manishearth reimplemented XMLHttpRequest in terms of the Fetch specification
- kevgs corrected a spec-incompatibility in
- fduraffourg added a mechanism to update the list of public suffixes
- farodin91 enabled using WindowProxy types in WebIDL
- bobthekingofegypt prevented some unnecessary echoes of websocket quit messages
There were no new contributors this week.
Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!
No screenshots this week.
Hello, SUMO Nation!
I wonder how many football fans we have among you… The Euros are coming! Some of us will definitely be watching (and getting emotional about) the games played over the next few weeks. If you’re a football fan, let’s talk about it in our forums!
Welcome, new contributors!
Most recent SUMO Community meeting
The next SUMO Community meeting…
- …is most likely happening after the London Work Week (which is happening next week)
- Reminder: if you want to add a discussion topic to the upcoming meeting agenda:
- Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Wednesday (this will make it easier to have an efficient meeting).
- Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
- If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.
- PLATFORM REMINDER! We are currently testing Lithium’s capabilities. You can learn more about Lithium here. Please note that this does not mean we have reached a final decision regarding the future of SUMO’s technical backend. We will be sharing more information with you shortly. If you’re interested in playing with a sandbox environment, let Michał know.
- LONDON WORK WEEK! If you are there, look out for SUMO folks – we’re happy to meet in person! We do have a Telegram channel for comms, by the way.
- We are not 100% sure at this stage if we’ll be able to stream or record any of the meetings there, but we’ll do our best to keep you updated – and there will be a report-out once we’re back to our headquarters again.
- Speaking of England… Shout-outs to Yousef, who’s been around for quite a while, and has also supported SUMO.
- Ongoing reminder #1: if you think you can benefit from getting a second-hand device to help you with contributing to SUMO, you know where to find us.
- Ongoing reminder #2: we are looking for more contributors to our blog. Do you write on the web about open source, communities, SUMO, Mozilla… and more? Do let us know!
- Ongoing reminder #3: want to know what’s going on with the Admins? Check this thread in the forum.
- Ongoing reminder: We have a training out there for all those interested in Social Support – talk to Madalina or Costenslayer on #AoA (IRC) for more information.
- Huge thanks to everyone who contributed to the troubleshooting of an erroneously repeated email notification sent out last week to one of our new contributors.
- for Android
  - Version 47 launched – woohoo!
- for Desktop
  - Version 47 support discussion thread.
  - The new version includes:
    - Add-ons using FUEL are no longer supported.
    - The whitelist for activating plugins through clicking has been removed, except for Flash.
    - More details about all of the above (and more – web encryption, security fixes) can be found in the release notes.
- for iOS
  - Firefox for iOS 5.0 should be with us in approximately 3 weeks!
That’s it for this week – next week the blog post may not be here… but if you keep an eye open for our Twitter updates, you may see a lot of smiling faces.
What a brilliant idea!
For the longest time HTML5 specified, and advised developers, that it no longer mattered what the number (1 to 6) was in a heading element (when used in conjunction with sectioning elements). What mattered was the nesting level of the h1-h6 in sectioning elements, just like the X<H>TML promised land, but better, as it recycled current heading elements. This concept was embraced by many a web standards aficionado and has been spread far and wide by web standards evangelists, in speeches, articles and books.
How the outline should work: using nested sectioning elements:

<body>
  <h1>top level heading (parent sectioning element is body)</h1>
  <section>
    <h1>2nd level heading (nested within one sectioning element)</h1>
    <section>
      <h1>3rd level heading (nested within 2 sectioning elements)</h1>
    </section>
  </section>
</body>
→ top level heading
→ → 2nd level heading
→ → → 3rd level heading
Trouble in outline nerdvana
Document outline semantics exposed by browsers and assistive technology:
→ top level heading
→ top level heading
→ top level heading
Brilliant as it is, this idea as specified has not been taken up by user agents. So after 7 years or more we have a concept without interoperable implementations (super sad face).
For the last few years, the HTML5 specification has included a warning about the lack of implementations and has suggested that the document outline algorithm not be relied upon to convey heading semantics to users. Recently this has been taken a step further. Now the HTML 5.1 specification requires developers to use h1-h6 to convey document structure. The simple reason for this change is that the HTML5 document outline is not implemented and, despite efforts to get it implemented, the general response from user agent developers has not been enthusiastic. You can read the updated advice and requirements in the HTML 5.1 specification.
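In practice that means going back to explicit heading ranks. A minimal sketch of the now-required approach, mirroring the earlier example:

<body>
  <h1>top level heading</h1>
  <section>
    <h2>2nd level heading</h2>
    <section>
      <h3>3rd level heading</h3>
    </section>
  </section>
</body>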
Comments or questions? Bring ’em on!
Update 21/06/16 – Heading-level outline view
You can now check the heading-level outline of a page using the W3C HTML checker or the W3C markup validation service (same output, different UI), with thanks to Mike[TM]Smith. It is provided alongside the structural outline, so you can compare semantic reality and theory.
As @Pochy had announced, a Firefox update is now available for download. After six weeks in beta, a new version of Firefox was released yesterday, so it’s time to update. Highlights of this release include the use of Google’s Widevine CDM to play DRM-protected content that previously relied on the Silverlight plugin, the ability to easily view and search tabs synced from other devices, important changes to add-ons, security improvements and much more.
What’s new?
Google Widevine CDM
Last year Mozilla gave Firefox the ability to play DRM-protected content through Adobe’s Primetime CDM, and now Google’s Widevine CDM has been added on Windows and Mac. Widevine is an alternative for streaming services that currently rely on Silverlight and are protected by DRM. Note that Widevine is only activated the first time users interact with a website that requires it.
The Widevine CDM runs in an open, secure environment in Firefox, offering better security than NPAPI plugins, and it also marks an important step in Mozilla’s plan to remove NPAPI plugins. Some websites use a kind of DRM that neither the Adobe Primetime nor the Google Widevine content decryption modules support. To view that content, you may need a third-party NPAPI plugin, such as Microsoft Silverlight.
Synced tabs at a glance
Since Sync arrived in the browser, you have been able to share your information and preferences (such as bookmarks, passwords, open tabs, reading list and installed add-ons) across all your devices, so you stay up to date and never miss a thing.
With this release, viewing the tabs you have open on other devices is much easier and more intuitive: as soon as you sync, a button appears in the toolbar which opens a panel giving you one-click access to those pages on your computer. Synced tabs are also shown in the drop-down list as you type in the address bar.
If you have never set up Sync and want to, you can read this article on Mozilla Support. It’s quick and easy!
On videos and other topics
VP9 codec decoding has been enabled, but only for users with fast machines, and embedded YouTube videos now play via HTML5 if Flash is not installed. This means videos play more smoothly, use less bandwidth and extend your battery life.
The Latgalian language has been added, joining the long list of languages supported by the red panda.
Changes to add-ons
FUEL (Firefox User Extension Library) has been removed; as a result, add-ons that rely on it will no longer work in the browser unless their developers update their code.
The whitelist for running plugins has been removed, so all plugins are now disabled from running by default and must be allowed individually, according to how much you trust them.
Additionally, the preference browser.sessionstore.restore_on_demand has been restored to its original value (true) to avoid performance problems with e10s.
What’s new on Android
- Added a “Show/Hide web fonts” option in the advanced settings to reduce data usage and save bandwidth.
- This will be the last version to support Android 2.3.x (Gingerbread).
- The “Open multiple links” option under General has been renamed “Tab queue”.
- Removed support for the Android web runtime (WebRT).
- Removed page favicons to prevent HTTPS spoofing.
What’s new for developers
If you want more details on what’s new for developers, you can read this article on the Mozilla Hispano blog, which covers the topic in more depth.
- Changes to add-on compatibility.
- You can now start, stop and debug registered Service Workers.
- Push messages can be simulated in the Service Workers tool.
- In Responsive Design Mode you can customize the User Agent used.
- Added support for the ChaCha20/Poly1305 cipher suites.
- Smart multi-line input in the Web Console.
- WebCrypto: PBKDF2 now supports SHA-2 hash algorithms.
- WebCrypto: added support for RSA-PSS signatures.
If you would rather see the full list of changes, head over to the release notes (in English).
You can get this version from our Downloads section, in Spanish and English, for Android, Linux, Mac and Windows. If you liked this news, please share it with your friends on social media. Feel free to leave us a comment.
Application Cache is—as Jake so infamously described—not a good API. It was specced and shipped before developers had a chance to figure out what they really needed, and so AppCache turned out to be frustrating at best and downright dangerous in some situations. Its over-zealous caching combined with its byzantine cache invalidation ensured it was never going to become a mainstream technology.
There are very few use-cases for AppCache, but I think I hit upon one of them. Six years ago, A Book Apart published HTML5 For Web Designers. A year and a half later, I put the book online. The contents are never going to change. There’s a second edition of the book out now, but if you want to read all the extra bits that Rachel added, you’re going to have to buy the book. The website for the original book is static and unchanging. That’s what made it such a good candidate for using AppCache. I could just set it and forget it.
Except that’s no longer true. AppCache is being deprecated and browsers are starting to withdraw support. Chrome is already making sure that AppCache—like geolocation—no longer works on sites that aren’t served over HTTPS. That’s for the best. In retrospect, those APIs should never have been allowed over unsecured HTTP.
I mentioned that I spent the weekend switching all my book websites over to HTTPS, so AppCache should continue to work …for now. It’s only a matter of time before AppCache is removed completely from many of the browsers that currently support it.
Seeing as I’ve got the HTML5 For Web Designers site running on HTTPS now, I might as well go all out and make it a progressive web app. By far the biggest barrier to making a progressive web app is that first step of setting up HTTPS. It’s gotten cheaper—thanks to Let’s Encrypt—but it still involves mucking around in the command line with root access; I never wanted to become a sysadmin. But once that’s finally all set up, the other technological building blocks—a Service Worker and a manifest file—are relatively easy.
In this case, the Service Worker is using a straightforward bit of logic, sketched in code after the list:
- On installation, cache absolutely everything: HTML, CSS, images.
- When anything is requested, grab it from the cache.
- If it isn’t in the cache, try the network.
- If the network doesn’t work, show an offline page (or image).
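Here’s a minimal sketch of that logic as a Service Worker script. It is not the site’s actual code: the cache name, the file list, and the offline page URL are all assumptions for illustration.

const CACHE_NAME = 'html5fwd-v1';
const FILES = ['/', '/index.html', '/styles.css', '/offline.html'];

self.addEventListener('install', (event) => {
  // On installation, cache absolutely everything: HTML, CSS, images.
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) => cache.addAll(FILES))
  );
});

self.addEventListener('fetch', (event) => {
  event.respondWith(
    // When anything is requested, grab it from the cache.
    caches.match(event.request)
      // If it isn't in the cache, try the network.
      .then((cached) => cached || fetch(event.request))
      // If the network doesn't work, show an offline page.
      .catch(() => caches.match('/offline.html'))
  );
});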
Basically I’m reproducing AppCache’s overzealous approach. It works for this site because the content is never going to change. I hope that this time, I really can just set it and forget it. I want the site to be an historical artefact, available at the same URL for at least my lifetime. I don’t want to have to maintain it or revisit it every few years to swap out one API for another.
Which brings me back to the way AppCache is being deprecated…
The Firefox team are very eager to ditch AppCache as soon as possible. On the one hand, that’s commendable. They’re rightly proud of shipping Service Workers and they want to encourage people to use the better technology instead. But it sure stings for the suckers (like me) who actually went and built stuff using AppCache.
In a weird way, I think this rush to deprecate AppCache might actually hurt the adoption of Service Workers. Let me explain…
At last year’s Edge Conference, Nolan Lawson gave a great presentation on storing data in the browser. He enumerated the many ways—past and present—that we could store data locally: WebSQL, Local Storage, IndexedDB …the list goes on. He also posed the question: why aren’t more people using insert-name-of-latest-API-here? To me it seemed obvious why more people weren’t diving into using the latest and greatest option for local data storage. It was because they had been burned before. The developers who rushed into trying previous solutions end up being mocked for their choice. “Still using that ol’ thing? Pffftt!”
You can see that same attitude on display from Mozilla as they push towards removing AppCache. Like in a comment that refers to developers using AppCache in production as “the angry hordes”. Reminds me of something Tom said:
Developer relations is like 20% telling people about what’s new and 80% building trust and making people feel heard.— Tom Dale (@tomdale) April 3, 2016
In that same Mozilla thread, Soledad echoes Tom’s point:
As a member of the devrel team: I think that this should be better addressed in a blog post that someone from the team responsible for switching AppCache off should write, so everyone can understand the reasons and ask questions to those people.
I’d rather warn people beforehand, pointing them to that post and help them with migration paths than apply emergency mitigation strategies when a lot of people find their stuff stopped working in the newer Firefox…
Bravo! That same approach should have also been taken by the Chrome team when it came to their thread about punishing display: browser in manifest files. There was absolutely no communication with developers about this major decision. I only found out about it because Paul happened to mention it to me.
Withholding the “add to home screen” prompt like that has a whiff of blackmail about it.
I can confirm that smell. When I was making the manifest file for HTML5 For Web Designers, I really wanted to put display: browser because I want people to be able to copy and paste URLs (for the book, for individual chapters, and for sections within chapters). But knowing that if I did that, Android users would never see the “add to home screen” prompt made me question that decision. I felt strong-armed into declaring display: standalone. And no, I’m not mollified by hand-waving reassurances that the Chrome team will figure out some solution for this. Figure out the solution first, then punish the saps like me who want to use display: browser to allow people to share URLs.
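For reference, the display value under discussion is set in the web app manifest. A minimal hedged example (the short_name, icon path and sizes are placeholders, not taken from the actual site):

{
  "name": "HTML5 For Web Designers",
  "short_name": "HTML5FWD",
  "start_url": "/",
  "display": "standalone",
  "icons": [{ "src": "/icon-192.png", "sizes": "192x192", "type": "image/png" }]
}

Swapping "standalone" for "browser" is the one-word change at stake here.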
Anyway, the website for HTML5 For Web Designers is now using AppCache and Service Workers. The AppCache part will probably be needed for quite a while yet to provide offline support on iOS. Apple are really dragging their heels on Service Worker support, with at least one WebKit engineer actively looking for reasons not to implement it.
There’s a lot of talk about making apps work offline, but I think it’s just as important that we consider making information work offline. Books are a great example of this. To use the tired transport tropes, the website for a book is something you might genuinely want to access when you’re on a plane, or in the underground, or out at sea.
I really, really like progressive web apps. But I also think it’s important that we don’t fall into the trap of just trying to imitate native apps on the web. I love the idea of taking the best of the web—like information being permanently available at a URL—and marrying that up with the best of native—like offline access. I also like the idea of taking the best of books—a tome of thought—and marrying it up with the best of the web—hypertext.
I’d love to see more experimentation around online/offline hypertext/books. For now, you can visit HTML5 For Web Designers, add it to your home screen, and revisit it whenever and wherever you like.
Since we published the Working on HTML5.1 post, we’ve made progress. We’ve closed more issues than we have open, we now have a working rhythm for the specification that is getting up to the speed we want, and we have a spec we think is a big improvement on HTML5.
Now it’s time to publish something serious.
We’ve just posted a Call For Consensus (CFC) to publish the current HTML5.1 Working Draft as a Candidate Recommendation (CR). This means we’re going into feature freeze on HTML5.1, allowing the W3C Patent Policy to come into play and ensure HTML5.1 can be freely implemented and used.
While HTML5.1 is in CR we may make some editorial tweaks to the spec – for instance we will be checking for names that have been left out of the Acknowledgements section. There will also be some features marked “at risk”, which means they will be removed from HTML5.1 if we find during CR that they do not work in at least two shipping browsers.
Beyond this, the path of getting from CR to W3C Recommendation is an administrative one. We hope the Web Platform WG agrees that HTML5.1 is better than HTML5, and that it would benefit the web community if we updated the “gold standard” – the W3C Recommendation. Then we need W3C’s membership, and finally W3C Director Tim Berners-Lee, to agree too.
The goal is for HTML5.1 to be a W3C Recommendation in September, and to achieve that we have to put the specification into feature freeze now. But what happens between now and September? Are we really going to sit around for a few months crossing legal t’s and dotting administrative i’s? No way!
We have pending changes that reflect features we believe will be shipped over the next few months. And of course there are always bugs to fix, and editorial improvements to make HTML at W3C more reliable and usable by the web community.
In the next couple of weeks we will propose a First Public Working Draft of HTML5.2. This will probably include some new features, some features that were not interoperable and so not included in HTML5.1, and some more bug fixes. This will kick off a programme of regular Working Draft releases until HTML5.2 is ready to be moved to W3C Recommendation sometime in the next year or so.
… on behalf of the chairs and editors, thanks!
In the weeks following Google IO there was a lot of discussion about progressive web apps, Android instant Apps and the value and role of URLs and links in the app world. We had commentary, ponderings, Pathos, explanation of said Pathos after it annoyed people and an excellent round-up on where we stand with web technology for apps.
My favourite is Remy Sharp’s post which he concludes as:
I strongly believe in the concepts behind progressive web apps and even though native hacks (Flash, PhoneGap, etc) will always be ahead, the web always gets there. Now, today, is an incredibly exciting time to be building on the web.
PWAs beat anything we tried so far
As a card-carrying lover of the web, I am convinced that PWAs are a necessary step in the right direction. They are a very important change. So far, all our efforts to crack the advertised supremacy of native apps over the web have failed. We copied what native apps did. We tried to erode the system from within. We packaged our apps and let them compete in closed environments. The problem is that they couldn’t compete in quality. In some cases this might have been by design of the platform we tried to run them on. A large part of it is that “the app revolution” is powered by the age-old idea of planned obsolescence, something that goes against everything the web stands for.
I made a lot of points about this in my TEDx talk “The Web is dead” two years ago:
We kept trying to beat native with the promises of the web: its open nature, its easy distribution, and its reach. These are interesting, but also work against the siren song of apps on mobile: that you are in control and that you can sell them to an audience of well-off, always up-to-date users. Whether this is true or not was irrelevant – it sounded great. And that’s what we need to work against. The good news is that we now have a much better chance than before. But more on that later.
Where to publish if you need to show quick results?
Consider yourself someone who is not as excited about the web as we are. Imagine being someone with short-term goals, like impressing a VC. As a publisher you have to make a decision what to support:
- iOS, a platform with incredible tooling, a predictable upgrade strategy and a lot of affluent users happy to spend money on products.
- Android, a platform with good tooling, massive fragmentation, a plethora of different devices on all kinds of versions of the OS (including custom ones by hardware companies), and users much less happy to spend money but expecting “free” versions
- The web, a platform with tooling that’s going places, an utterly unpredictable and hard to measure audience that expects everything for free and will block your ads and work around your paywalls.
If all you care about is a predictable audience you can do some budgeting for, this doesn’t look too rosy for Android and abysmal for the web. The carrot of “but you can reach millions of people” doesn’t hold much weight when these are not easy to convert to paying users.
To show growth you need numbers. You don’t do that by being part of a big world of links and resources. You do that by locking people in your app. You do it by adding a webview so links open inside it. This is short-sighted and borderline evil, but it works.
And yes, we are in this space. This is not about what technology to use, this is not about how easy it is to maintain your app. This is not about how affordable developers would be. The people who call the shots in the app market and make the money are not the developers. They are those who run the platforms and invest in the companies creating the apps.
The app honeymoon period is over
The great news is that this house of cards is tumbling. App download numbers are abysmally low, and mobile usage is concentrated in chat clients, OS services and social networks. The closed nature of marketplaces works heavily against random discovery. There is a thriving market of fake reviews, upvotes, offline advertising and keyword padding that makes the web SEO world of the last decade look like much less of a cesspool than we remember. End users are getting tired of having to install and uninstall apps and of continuously being prompted to upgrade them.
This is a great time to break into this model. That Google finally came up with Instant Apps (after promising atomic updates for years) shows that we reached the breaking point. Something has to change.
Growth is on mobile and connectivity issues are the hurdle
Here’s the issue though: patience is not a virtue of our market. To make PWAs work, bring apps back to the web, and have the link as the source of distribution instead of closed marketplaces, we need to act now. We need to show that PWAs solve the main issue of the app market: that the next users are not in places with great connectivity, and yet are on mobile devices.
And this is where progressive web apps hit the sweet spot. You can have a damn small footprint app shell that gets sweet and easy to upgrade content from the web. You control the offline experience and what happens on flaky connections. PWAs are a “try before you buy”, showing you immediately what you get before you go through the process of adding it to your home screen or download the whole app. Exactly what Instant Apps are promising. Instant Apps have the problem that Android isn’t architected that way and that developers need to change their approach. The web was already built on this idea and the approach is second nature to us.
PWAs need to succeed on mobile first
The idea that a PWA is progressively enhanced, and could be a web site that only converts into an app in the right environment, is wonderful. We can totally do that. But we shouldn’t pretend that this is the world we need to worry about right now. We can do that once we’ve solved the issue of web users not wanting to pay for anything, and shown growth numbers on the desktop. For now, PWAs need to be the solution for the next mobile users. And this is where we have an advantage over native apps. Let’s use that one.
Of course, there are many issues to consider:
- How do PWAs work with permissions? Can we ask permissions on demand and what happens when users revoke them? Instant apps have that same issue.
- How do I uninstall a PWA? Does removing the icon from my homescreen free all the data? Should PWAs have a memory management control?
- What about URLs? Should they display or not? Should there be a long-tap to share the URL? Personally, I’d find a URL bar above an app confusing. I never “hacked” the URL of an app – but I did use “share this app” buttons. With a PWA, this is sending a URL to friends, and that’s a killer feature.
- How do we deal with the issue of iOS not supporting Service Workers? What about legacy and third party Android devices? Sure, PWAs fall back to normal HTML5 apps, but we’ve seen them not taking off in comparison to native apps.
- What are the “must have” features of native apps that PWAs need to match? What do people want that isn’t a hurdle or impossible to implement?
These are exciting times and I am looking forward to PWAs being the wedge in the cracks that are showing in closed environments. The web can win this, but we won’t get far if we demand features that only make sense on desktop and are in use by us – the experts. End users deserve to have an amazing, form-factor specific experience. Let’s build those. And for the love of our users, let’s build apps that let them do things and not only consume them. This is what apps are for.
I’ve been updating my book sites to use HTTPS:
First off, I’m using Let’s Encrypt. Except I’m not. It’s called Certbot now (I’m not entirely sure why).
I installed the Let’s Encertbot client with this incantation (which, like everything else here, will need root-level access, so if none of these work, retry using sudo in front of the commands):
wget https://dl.eff.org/certbot-auto
chmod a+x certbot-auto
Seems like a good idea to put that certbot-auto thingy into a directory like /etc:

mv certbot-auto /etc
Rather than have Certbot generate conf files for me, I’m just going to have it generate the certificates. Here’s how I’d generate a certificate for yourdomain.com:

/etc/certbot-auto --apache certonly -d yourdomain.com
The first time you do this, it’ll need to fetch a bunch of dependencies and it’ll ask you for an email address for future reference (should anything ever go screwy). For subsequent domains, the process will be much quicker.
The result of this will be a bunch of generated certificates that live here:
Now you’ll need to configure your Apache gubbins. Head on over to…
If you only have one domain on your server, you can just edit default.ssl.conf. I prefer to have separate conf files for each domain.
Time to fire up an incomprehensible text editor.
There’s a great SSL Configuration Generator from Mozilla to help you figure out what to put in this file. Following the suggested configuration for my server (assuming I want maximum backward-compatibility), here’s what I put in; the full config is in this gist: https://gist.github.com/adactio/f0e13a2f8b9f9f084676bb2a901c5c95
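For orientation, here’s a rough sketch of the shape such a conf file takes. This is not my actual config: the domain and paths are placeholders, and the certificate paths follow Let’s Encrypt’s standard live directory layout. Use the Mozilla generator for the real protocol and cipher settings for your Apache version:

<VirtualHost *:443>
    ServerName yourdomain.com
    DocumentRoot /path/to/yourdomain.com

    SSLEngine on
    SSLCertificateFile      /etc/letsencrypt/live/yourdomain.com/cert.pem
    SSLCertificateKeyFile   /etc/letsencrypt/live/yourdomain.com/privkey.pem
    SSLCertificateChainFile /etc/letsencrypt/live/yourdomain.com/chain.pem

    # Protocol/cipher settings: paste in the values the Mozilla generator
    # suggests for your Apache and OpenSSL versions.
    SSLProtocol all -SSLv3
    SSLHonorCipherOrder on
</VirtualHost>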
Make sure you update the /path/to/yourdomain.com part—you probably want a directory somewhere in /var/www or wherever your website’s files are sitting.
To exit the infernal text editor, hit ctrl and o, press enter in response to the prompt, and then hit ctrl and x.
If yourdomain.com.conf didn’t previously exist, you’ll need to enable the configuration by running:

a2ensite yourdomain.com
Time to restart Apache. Fingers crossed…
service apache2 restart
If that worked, you should be able to go to https://yourdomain.com and see a lovely shiny padlock in the address bar.
Assuming that worked, everything is awesome! …for 90 days. After that, your certificates will expire and you’ll be left with a broken website.
Not to worry. You can update your certificates at any time. Test for yourself by doing a dry run:
/etc/certbot-auto renew --dry-run
You should see some output and then, after a while, a message saying:
** DRY RUN: simulating 'certbot renew' close to cert expiry **
(The test certificates below have not been saved.)
Congratulations, all renewals succeeded.
You could set yourself a calendar reminder to do the renewal (without the --dry-run bit) every few months. Or you could tell your server’s computer to do it by using a cron job. It’s not nearly as rude as it sounds.
You can fire up and edit your list of cron tasks with this command:

crontab -e
This tells the machine to run the renewal task at quarter past six every evening and log any results:
15 18 * * * /etc/certbot-auto renew --quiet >> /var/log/certbot-renew.log
(Don’t worry: it won’t actually generate new certificates unless the current ones are getting close to expiration.) Leave the crontab editor by doing the ctrl o, enter, ctrl x dance.
Hopefully, there’s nothing more for you to do. I say “hopefully” because I won’t know for sure myself for another 90 days, at which point I’ll find out whether anything’s on fire.
If you have other domains you want to secure, repeat the process by running:
/etc/certbot-auto --apache certonly -d yourotherdomain.com
And then creating/editing the corresponding conf file, as before.
I found these useful when I was going through this process:
- Certbot documentation
- How to secure your Apache using Certbot SSL
- Mozilla SSL Configuration Generator
- SSL Server Test
That last one is good if you like the warm glow of accomplishment that comes with getting a good grade.
You know, I probably should have said this at the start of this post, but I should clarify that any advice I’ve given here should be taken with a huge pinch of salt—I have little to no idea what I’m doing. I’m not responsible for any flame-bursting-into that may occur. It’s probably a good idea to back everything up before even starting to do this.
Yeah, I definitely should’ve mentioned that at the start.
I’m writing this as a short commentary on Stuart Langridge’s post The Importance of URLs which you should read (he’s surprisingly clever, although he looks like the antichrist in that lewd hat).
I approve of the Lighthouse team’s idea that you don’t qualify as an add-to-home-screen-able app if you want a URL bar.
Opera’s implementation of Progressive Web Apps differs from Chrome’s here (we only take the content layer of Chromium; we implement all the UI ourselves, precisely so we can do our own thing). Regardless of whether the developer has chosen display: standalone or display: fullscreen in order to hide the URL bar, Opera will display it if the app is served over HTTP, because we think that the user should know exactly where she is if the app is served over an insecure connection. Similarly, if the user follows a link from your app that goes outside its domain, Opera spawns a new tab and forces display: browser so the URL bar is shown.
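(For reference, those display values are set in the web app manifest. A minimal, made-up example, with placeholder name and start_url:)

{
  "name": "Example App",
  "start_url": "/",
  "display": "standalone"
}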
But I take Jeremy Keith’s point:
I want people to be able to copy URLs. I want people to be able to hack URLs. I’m not ashamed of my URLs …I’m downright proud.
One of the superpowers of the Web is URLs, and fullscreen progressive web apps hide them (deliberately). After our last PWA meeting with the Chrome team in early February, I was talking about just this with Andreas Bovens, the PM for Opera for Android. We mused about some mechanism (a new gesture?) that would allow the user to see and copy (if they want) the URL of the current page. I’ve already heard of examples where developers are making their own “share this” buttons — and devs re-implementing browser functionality is often a klaxon signalling something is missing from the platform.
When I mentioned our musings on Twitter this morning, Alex Russell said “we’ve been discussing the same.” It is, as Chrome chappie Owen Campbell-Moore said, “a difficult UX problem indeed”, which is one reason that Andreas and I parked our discussion. One of Andreas’ ideas is a long press on the current page that brings up an option to copy/share the URL of the page you’re currently viewing (this means that a long press is no longer available as an action for site owners to use on their sites. Probably not a big deal?)
What do you think? How can we best allow the user to see the current URL in a discoverable way?
Today was my first day as an Outreachy intern with Mozilla! What does that even mean? Why is it super exciting? How did I swing such a sweet gig? How will I be spending my summer non-vacation? Read on to find out!
What is Outreachy?
Outreachy is a fantastic initiative to get more women and members of other underrepresented groups involved in Free & Open Source Software. Through Outreachy, organizations that create open-source software (e.g. Mozilla, GNOME, Wikimedia, to name a few) take on interns to work full-time on a specific project for 3 months. There are two internship rounds each year, May-August and December-March. Interns are paid for their time, and receive guidance/supervision from an assigned mentor, usually a full-time employee of the organization who leads the given project.
Oh yeah, and the whole thing is done remotely! For a lot of people (myself included) who don’t/can’t/won’t live in a major tech hub, the opportunity to work remotely removes one of the biggest barriers to jumping into the professional tech community. But as FOSS developers tend to be pretty distributed anyway (I think my project’s team, for example, is on about 3 continents), it’s relatively easy for the intern to integrate with the team. It seems that most communication takes place over IRC and, to a lesser extent, videoconferencing.
What does an Outreachy intern do?
Anything and everything! Each project and organization is different. But in general, interns spend their time…
Coding (or not)
A lot of projects involve writing code, though what that actually entails (language, framework, writing vs. refactoring, etc.) varies from organization to organization and project to project. However, there are also projects that don’t involve code at all, and instead have the intern working on equally important things like design, documentation, or community management.
As for me specifically, I’ll be working on the project Test-driven Refactoring of Marionette’s Python Test Runner. You can click through to the project description for more details, but basically I’ll be spending most of the summer writing Python code (yay!) to test and refactor a component of Marionette, a tool that lets developers run automated Firefox tests. This means I’ll be learning a lot about testing in general, Python testing libraries, the huge ecosystem of internal Mozilla tools, and maybe a bit about browser automation. That’s a lot! Luckily, I have my mentor Maja (who happens to also be an alum of both Outreachy and RC!) to help me out along the way, as well as the other members of the Engineering Productivity team, all of whom have been really friendly & helpful so far.
Interns receive a $500 stipend for travel related to Outreachy, which is fantastic. I intend, as I’m guessing most do, to use this to attend conference(s) related to open source. If I were doing a winter round I would totally use it to attend FOSDEM, but there are also a ton of conferences in the summer! Actually, you don’t even need to do the traveling during the actual 3 months of the internship; they give you a year-long window so that if there’s an annual conference you really want to attend but it’s not during your internship, you’re still golden.
At Mozilla in particular, interns are also invited to a week-long all-hands meet up! This is beyond awesome, because it gives us a chance to meet our mentors and other team members in person. (Actually, I doubly lucked out because I got to meet my mentor at RC during “Never Graduate Week” a couple of weeks ago!)
One of the requirements of the internship is to blog regularly about how the internship and project are coming along. This is my first post! Though we’re required to write a post every 2 weeks, I’m aiming to write one per week, on both technical and non-technical aspects of the internship. Stay tuned!
How do you get in?
I’m sure every Outreachy participant has a different journey, but here’s a rough outline of mine.
Step 1: Realize it is a thing
Let’s not forget that the first step to applying for any program/job/whatever is realizing that it exists! Like most people, I think, I had never heard of Outreachy, and was totally unaware that a remote, paid internship working on FOSS was a thing that existed in the universe. But then, in the fall of 2015, I made one of my all-time best moves ever by attending the Recurse Center (RC), where I soon learned about Outreachy from various Recursers who had been involved with the program. I discovered it about 2 weeks before applications closed for the December-March 2015-16 round, which was pretty last-minute; but a couple of other Recursers were applying and encouraged me to do the same, so I decided to go for it!
Step 2: Frantically apply at last minute
Applying to Outreachy is a relatively involved process. A couple months before each round begins, the list of participating organizations/projects is released. Prospective applicants are supposed to find a project that interests them, get in touch with the project mentor, and make an initial contribution to that project (e.g. fix a small bug).
But each of those tasks is pretty intimidating!
First of all, the list of participating organizations is long and varied, and some organizations (like Mozilla) have tons of different projects available. So even reading through the project descriptions and choosing one that sounds interesting (most of them do, at least to me!) is no small task.
Then, there’s the matter of mustering up the courage to join the organization/project’s IRC channel, find the project mentor, and talk to them about the application. I didn’t even really know what IRC was, and had never used it before, so I found this pretty scary. Luckily, I was at RC, and one of my batchmates sat me down and walked me through IRC basics.
However, the hardest and most important part is actually making a contribution to the project at hand. Depending on the project, this can be long & complicated, quick & easy, or anything in between. The level of guidance/instruction also varies widely from project to project: some are laid out clearly in small, hand-holdy steps, others are more along the lines of “find something to do and then do it”. Furthermore, prerequisites for making the contribution can be anything from “if you know how to edit text and send an email, you’re fine” to “make a GitHub account” to “learn a new programming language and install 8 million new tools on your system just to set up the development environment”. All in all, this means that making that initial contribution can often be a deceptively large amount of work.
Because of all these factors, for my application to the December-March round I decided to target the Mozilla project “Contribute to the HTML standard”. In addition to the fact that I thought it would be awesome to contribute to such a fundamental part of the web, I chose it because the contribution itself was really simple: just choose a GitHub issue with a beginner-friendly label, ask some questions via GitHub comments, edit the source markup file as needed, and make a pull request. I was already familiar with GitHub so it was pretty smooth sailing.
Once you’ve made your contribution, it’s time to write the actual Outreachy application. This is just a plain text file you fill out with lots of information about your experience with FOSS, your contribution to the project, etc. In case it’s useful to anyone, here’s my application for the December-March 2015-16 round. But before you use that as an example, make sure you read what happened next…
Step 3: Don’t get in
Unfortunately, I didn’t get in to the December-March round (although I was stoked to see some of my fellow Recursers get accepted!). Honestly, I wasn’t too surprised, since my contributions and application had been so hectic and last-minute. But even though it wasn’t successful, the application process was educational in and of itself: I learned how to use IRC, got 3 of my first 5 GitHub pull requests merged, and became a contributor to the HTML standard! Not bad for a failure!
Step 4: Decide to go for it again (at last minute, again)
Fast forward six months: after finishing my batch at RC, I had been looking & interview-prepping, but still hadn’t gotten a job. When the applications for the May-August round opened up, I took a glance at the projects and found some cool ones, but decided that I wouldn’t apply this round because a) I needed a Real Job, not an internship, and b) the last round’s application process was a pretty big time investment which hadn’t paid off (although it actually had, as I just mentioned!).
But as the weeks went by and the application deadline drew closer, I kept thinking about it. I was no closer to finding a Real Job, and upheaval in my personal life made my whereabouts over the summer an uncertainty (I seem never to know what continent I live on), so a paid, remote internship was becoming more and more attractive. When I mentioned to other Recursers my hesitation over whether or not to apply, they unanimously encouraged me (again) to go for it (again). Then, I found out that one of the project mentors, Maja, was a Recurser, and since her project was one of the ones I had shortlisted, I decided to apply for it.
Of course, by this point it was once again two weeks until the deadline, so panic once again set in!
Step 5: Learn from past mistakes
This time, the process as a whole was easier, because I had already done it once. IRC was less scary, I already felt comfortable asking the project mentor questions, and having already been rejected in the previous round made it somehow lower-stakes emotionally (“What the hell, at least I’ll get a PR or two out of it!”). During my first application I had spent a considerable amount of time reading about all the different projects and fretting about which one to do, flipping back and forth mentally until the last minute. This time, I avoided that mistake and was laser-focused on a single project: Test-driven Refactoring of Marionette’s Python Test Runner.
From a technical standpoint, however, contributing to the Marionette project was more complicated than the HTML standard had been. Luckily, Maja had written detailed instructions for prospective applicants explaining how to set up the development environment etc., but there were still a lot of steps to work through. Then, because there were so many folks applying to the project, there was actually a shortage of “good-first-bugs” for Marionette! So I ended up making my first contributions to a different but related project, Perfherder, which meant setting up a different dev environment and working with a different mentor (who was equally friendly). By the time I was done with the Perfherder stuff (which turned out to be a fun little rabbit hole!), Maja had found me something Marionette-specific to do, so I ended up working on both projects as part of my application process.
When it came time to write the actual application, I also had the luxury of being able to use my failed December-March application as both a starting point and an example of what not to do. Some of the more generic parts (my background, etc.) were reusable, which saved time. But when it came to the parts about my contribution to the project and my proposed internship timeline, I knew I had to do a much better job than before. So I opted for over-communication, and basically wrote down everything I could think of about what I had already done and what I would need to do to complete the goals stated in the project description (which Maja had thankfully written quite clearly).
In the end, my May-August application was twice as long as my previous one had been. Much of that difference was the proposed timeline, which went from being one short paragraph to about 3 pages. Perhaps I was a bit more verbose than necessary, but I decided to err on the side of too many details, since I had done the opposite in my previous application.
Step 6: Get a bit lucky
Spoiler alert: this time I was accepted!
Although I knew I had made a much stronger application than in the previous round, I was still shocked to find out that I was chosen from what seemed to be a large, competitive applicant pool. I can’t be sure, but I think what made the difference the second time around must have been a) more substantial contributions to two different projects, b) better, more frequent communication with the project mentor and other team members, and c) a much more thorough and better thought-out application text.
But let’s not forget d) luck. I was lucky to have encouragement and support from the RC community throughout both my applications, lucky to have the time to work diligently on my application because I had no other full-time obligations, lucky to find a mentor who I had something in common with and therefore felt comfortable talking to and asking questions of, and lucky to ultimately be chosen from among what I’m sure were many strong applications. So while I certainly did work hard to get this internship, I have to acknowledge that I wouldn’t have gotten in without all of that luck.
Why am I doing this?
Last week I had the chance to attend OSCON 2016, where Mozilla’s E. Dunham gave a talk on How to learn Rust. A lot of the information applied to learning any language/new thing, though, including this great recommendation: When embarking on a new skill quest, record your motivation somewhere (I’m going to use this blog, but I suppose Twitter or a vision board or whatever would work too) before you begin.
The idea is that once you’re in the process of learning the new thing, you will probably have at least one moment where you’re stuck, frustrated, and asking yourself what the hell you were thinking when you began this crazy project. Writing it down beforehand is just doing your future self a favor, by saving up some motivation for a rainy day.
So, future self, let it be known that I’m doing Outreachy to…
- Write code for an actual real-world project (as opposed to academic/toy projects that no one will ever use)
- Get to know a great organization that I’ve respected and admired for years
- Try out working remotely, to see if it suits me
- Learn more about Python, testing, and automation
- Gain confidence and feel more like a “real developer”
- Launch my career in the software industry
I’m sure these goals will evolve as the internship goes along, but for now they’re the main things driving me. Now it’s just a matter of sitting back, relaxing, and working super hard all summer to achieve them! :D
Got any more questions?
Are you curious about Outreachy? Thinking of applying? Confused about the application process? Feel free to reach out to me! Go on, don’t be shy, just use one of those cute little contact buttons and drop me a line. :)
Not a song this week, but just a documentary to remind me that some sites are overly complicated and that there are strong benefits and resilience in choosing a solid, simple framework for working. Not that it makes the work easier. I think it's even the opposite: it's basically harder to make a solid, simple Web site. But the cost is beneficial in the long term. Tune of the week: The Depth of Simplicity in Ozu's movie.
Progress this week:
Today: 2016-05-16T10:12:01.879159
354 open issues
----------------------
needsinfo        3
needsdiagnosis 109
needscontact    30
contactready    55
sitewait       142
----------------------
In my journey to get the contactready and needscontact numbers lower, we are making progress. You are welcome to participate.
Reorganizing the wiki a bit so it better aligns with our current work. In progress.
Good news on the front of appearance in CSS. The CSSWG just resolved that "appearance: none" should turn checkbox & radio <input> elements into normal non-replaced elements.
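In stylesheet terms, the resolution means authors would be able to write something like this (a sketch of the intent; engines still vary, often behind prefixes):

input[type="checkbox"],
input[type="radio"] {
  appearance: none; /* render as a normal, fully styleable element */
}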
Learning how to use mozregression.
We are looking at creating a mechanism similar to Opera's browser.js in Firefox. Read and participate in the discussion.
(a selection of some of the bugs worked on this week).
- Pizza being unresponsive, or more to the point, being slow. An issue with React not being reactive? When I see the markup, I start crying. Why do we hurt ourselves like this?
- Finally closed the Firefox Gmail on Android issue. Happiness.
- irresponsible responsive: 353 requests, 25 MB… The number of insane Web sites, be it on mobile or desktop… It's like driving a Hummer SUV around your neighborhood to buy milk.
- OffShore Panama CSS
- Intent to Use Counter: Everything
- My URL isn’t your URL
- Cache-Control: immutable. Good stuff.
- Compression, Anti-virus, MITM and Facebook: recipes for disaster.
- Being where the people are:
Vendor Prefixes are dead but with them mass author involvement in early stage specifications. The history of Grid shows that it is incredibly difficult to get people to do enough work to give helpful feedback with something they can’t use – even a little bit – in production.
- Follow Your Nose
- Document how to write tests on webcompat.com using test fixtures.
- ToWrite: rounding numbers in CSS for
- ToWrite: Amazon prefetching resources with <object> for Firefox only.
Rust is a language that gives you:
- uncompromising performance and control;
- prevention of entire categories of bugs, including classic concurrency pitfalls;
- ergonomics that often rival languages like Python and Ruby.
It’s a language for writing highly reliable, screamingly fast software—and having fun doing it.
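As a quick illustrative sketch of that second bullet (an example of the kind of code in question, not something from the announcement): the program below shares a mutable counter across ten threads, and removing either the Arc or the Mutex makes it fail to compile rather than race at runtime.

use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Shared mutable state must be explicitly thread-safe:
    // Arc gives shared ownership, Mutex gives exclusive access.
    let counter = Arc::new(Mutex::new(0));
    let mut handles = Vec::new();

    for _ in 0..10 {
        let counter = counter.clone();
        handles.push(thread::spawn(move || {
            *counter.lock().unwrap() += 1;
        }));
    }

    for handle in handles {
        handle.join().unwrap();
    }

    println!("total = {}", *counter.lock().unwrap());
}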
And yesterday, Rust turned one year old.
Rust in numbers
A lot has happened in the last 365 days:
- 11,894 commits by 702 contributors added to the core repository;
- 88 RFCs merged;
- 18 compiler targets introduced;
- 9 releases shipped;
- 1 year of stability delivered.
On an average week this year, the Rust community merged two RFCs and published 53 brand new crates. Not a single day went by without at least one new Rust library hitting the central package manager. And Rust topped the “most loved language” category in this year’s StackOverflow survey.
Speaking of numbers: we recently launched a survey of our own, and want to hear from you whether you are an old hat at Rust, or have never used it.
One place where our numbers are not where we want them to be: community diversity. We’ve had ongoing local outreach efforts, but the Rust community team will soon be launching a coordinated, global effort following the Bridge model (e.g. RailsBridge). If you want to get involved, or have other ideas for outreach, please let the community team know.
Rust in production
This year saw more companies betting on Rust. Each one has a story, but two particularly resonated.
First, there’s Dropbox. For the last several years, the company has been secretly working on a move away from AWS and onto its own infrastructure. The move, which is now complete, included developing custom-built hardware and the software to drive it. While much of Dropbox’s back-end infrastructure is historically written in Go, for some key components the memory footprint and lack of control stood in the way of achieving the server utilization they were striving for. They rewrote those components in Rust. In the words of Jamie Turner, a lead engineer for the project, “the advantages of Rust are many: really powerful abstractions, no null, no segfaults, no leaks, yet C-like performance and control over memory.”
Second, there’s Mozilla. They’ve long been developing Servo as a research browser engine in Rust, but their first production Rust code shipped through a different vehicle: Firefox. In Firefox 45, without any fanfare, Rust code for mp4 metadata parsing went out to OSX and 64-bit Linux users; it will hit Windows in version 48. The code is currently running in test mode, with its results compared against the legacy C++ library: 100% correctness on 1 billion reported executions. But this code is just the tip of the iceberg: after laying a lot of groundwork for Rust integration, Firefox is poised to bring in significant amounts of new Rust code, including components from Servo—and not just in test mode.
We’re hearing similar stories from a range of other shops that are putting Rust into production: Rust helps a team punch above its weight. It gives many of the same benefits as traditional systems languages while being more approachable, safer and often more productive.
These are just a few stories of Rust in production, but we’d love to hear yours!
Of course, Rust itself hasn’t been standing still. The focus in its first year has been growing and polishing its ecosystem and tooling:
Ecosystem. The standard library has steadily expanded, with growth focused on filesystem access, networking, time, and collections APIs—and dramatically better documentation coverage. There’s good support for working with C libraries via the libc, winapi, and gcc crates. And new libraries for low-level async io, easy parallelism, lock-free data structures, Rails-like object-relational mapping, regular expressions, and several parsing libraries, including html5ever, a unique HTML5 parser that leverages Rust’s macro system to make the code resemble the spec as closely as possible. These are just scratching the surface, of course, and ecosystem growth, curation and coherence—particularly around async IO and the web stack—will continue to be a major focus in the coming year.
Platforms and targets. Rust’s memory footprint is not much bigger than C’s, which makes it ideal for using in all kinds of places. Over the last year, Rust gained the ability to work directly with the native MSVC toolchain on Windows, to target musl (thereby creating a binary that can be used with zero dependencies on any variety of Linux), to target Android and ARM devices, and many more platforms. The new rustup tool makes it a breeze to manage and compile to these various targets. As of Rust 1.6, you can use Rust without its full standard library, limiting to a core library that does not require any OS services (and hence is suitable for writing OSes in Rust). Finally, there are an increasing number of libraries for embedding Rust code into other contexts, like node.js, Ruby and Go.
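For example, producing a zero-dependency musl binary for a project is (assuming the current rustup beta) a two-liner:

rustup target add x86_64-unknown-linux-musl
cargo build --release --target=x86_64-unknown-linux-musl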
Tools. Because Rust looks just like C on the outside, it’s instantly usable with a wide range of existing tools; it works out of the box with lldb, gdb, perf, valgrind, callgrind, and many, many more. Our focus has been to enrich the experience for these tools by adding Rust-specific hooks and workflows. Another major priority is providing full IDE support, in part by providing daemonized services from the compiler; we made good progress on that front this year, and thanks to the Racer project, numerous IDE plugins are already providing some semantic support for Rust. At the same time, the rustfmt code formatting tool has matured to the point that the Rust community is ready to produce an official style. And the beating heart of Rust’s workflow, Cargo, gained numerous abilities this year, most notably the install subcommand.
Compiler. We’ve seen some across-the-board improvements to compile times, and now offer parallelized code generation for further speedups. But the biggest wins will come from the ongoing work on incremental compilation, which will minimize the amount of work needed when recompiling code after editing it. A vital step here was the move to a custom intermediate representation, which has many other benefits as well. Another focus has been errors, including detailed explanations of most errors, and ongoing work to improve the “at a glance” readability of errors. Expect to hear more on both fronts soon.
Core language. We’ve kept one list purposefully short this year: growth in the core language. While we have some important features in the pipeline (like improved error handling, more flexible borrowing rules and specialization), Rust users by and large are happy with the core language and prefer the community to focus on the ecosystem and tooling.
There’s a lot more to say about what’s happened and what’s coming up in the Rust world—over the coming months, we’ll be using this blog to say it.
Rust in community
It turns out that people like to get together and talk Rust. We had a sold-out RustCamp last August, and there are several upcoming events in 2016:
- September 9-10, 2016: the first RustConf in Portland, OR, USA;
- September 17, 2016: RustFest, the European community conference, in Berlin, Germany;
- October 27-28, 2016: Rust Belt Rust, a Rust conference in Pittsburgh, PA, USA;
- 71 Rust-related meetup groups worldwide.
And that’s no surprise. From a personal perspective, the best part about working with Rust is its community. It’s hard to explain quite what it’s like to be part of this group, but two things stand out. First, its sheer energy: so much happens in any given week that This Week in Rust is a vital resource for anyone hoping to keep up. Second, its welcoming spirit. Rust’s core message is one of empowerment—you can fearlessly write safe, low-level systems code—and that’s reflected in the community. We’re all here to learn how to be better programmers, and support each other in doing so.
There’s never been a better time to get started with Rust, whether through attending a local meetup, saying hello in the users forum, watching a talk, or reading the book. No matter how you find your way in, we’ll be glad to have you.
Happy birthday, Rust!
Hello! I am someone who is associated with Vortex Studio, and I would like to release some information about their new product. The reason I want to do this is that I found some benchmark numbers and they seem pretty impressive, so I want your input on them. All of the benchmark images will be uploaded here. The browser in its current state isn't that impressive; it doesn't have a lot of the features from the original idea. The original idea for the browser was the most secure web browser: one that automatically connects to a virtual private network (if you want it to) and hides all of your information while letting you access websites that are blocked (like Facebook or Yahoo), basically like the Tor browser. This browser's code name was the Shadow Browser (most likely to change). The reason it was called the Shadow Browser is to keep the idea that the browser is supposed to be highly optimized for security and not for performance, but it is still really good with performance and can match up with a browser like Opera.
So now, getting into what the browser is, here are some of the benchmarks. The name of the browser in these benchmarks is Shadow Browser Alpha; again, this name will most likely change because the creator of this web browser named it quickly.
Average Position in all of the benchmarks : http://s32.postimg.org/lunvml3d1/Average_Position.jpg
Base battle Benchmark : http://s32.postimg.org/t9d7eyp8l/Basebattle_Benchmark.jpg
Browserscope Security Benchmark : http://s32.postimg.org/3ueoi7b5x/Browserscope_Security_Benchmark.jpg
HTML5 Test Benchmark : http://s32.postimg.org/xsu1e7whx/HTML5_Test_Benchmark.jpg
Jet Stream Benchmark : http://s32.postimg.org/xespemced/Jet_Stream_Benchmark.jpg
Octane 2.0 Benchmark : http://s32.postimg.org/txqnbndc5/Octane_2_0_Benchmark.jpg
Peace Keeper Benchmark : http://s32.postimg.org/sc6s63dqd/Peace_Keeper_Benchmark.jpg
Speed Battle Benchmark : http://s32.postimg.org/u6o81kq4l/Speed_Battle_Benchmark.jpg
SpeedoMeter Benchmark : http://s32.postimg.org/ro2j0w4ed/Speedo_Meter_Benchmark.jpg
V8 Benchmark : http://s32.postimg.org/wt3qiihc5/V8_Benchmark.jpg
For a new browser that has been in development for about 3 weeks, the benchmarks are pretty good considering they are not focusing on performance. What do you guys think? If you have any questions, please list them below. I know a lot about the browser because I know who was developing it, but I am pretty sure he has changed a lot of it, so I will answer your questions to the best of my ability.