On this page:
- Roundup: Smashing Conference 2014 [mobiForge blog]
- This Old House [Brad Frost Web]
- UX Discovery Session [Brad Frost Web]
- Owncloud Benchmarking - Raspberry Pi, Banana Pi, NUC [Martin's Mobile Technology Page]
- Why Every Media Website Redesign Looks the Same [Brad Frost Web]
- Mobile web traffic: a dive into the data [mobiForge blog]
- Mobile Miscellany, 23 Sept 2014. Free Stand at Apps World, Bursaries for The Mobile Academy, Round up from Demo Night [MobileMonday London]
- Beat(s) It: What’s Up, Apple? [Volker on Mobile]
- Don’t use <picture> (most of the time) [Cloud Four Blog]
- What happened at Demo Night, 16th September 2014 [MobileMonday London]
- The Web vs Apps Outcome [Mobile Phone Development]
- RWD Podcast with Justin Avery [Brad Frost Web]
- The Third Incarnation Of My Network Technology Book Now Published! [Martin's Mobile Technology Page]
- iOS 8 and iPhone 6 for web developers and designers: next evolution for Safari and native webapps [Mobile Web Programming]
- Small Data: A Deterministic and predictive approach [Open Gardens]
- ApplePay vs Osaifu-Keitai – CNBC interview [Eurotechnology.japan]
- Benchmarking That 1 Gbit/s FTTH Connection [Martin's Mobile Technology Page]
- Updating responsive image guidelines in preparation for AEA Austin [Cloud Four Blog]
- Apple Pay vs Japan’s Osaifu-keitai [Eurotechnology.japan]
- Raspberry Pi Power Consumption - Measured [Martin's Mobile Technology Page]
September 29, 2014
Every year the team behind Smashing Magazine
invites web professionals from all over the globe to join them in
their home town of Freiburg, Germany for 2 days of talks and
discussion about the web hosted by Jeremy Keith.
September 28, 2014
I wrote some thoughts about how I’m increasingly wary to jump on board with the latest trends and technologies. I want to make things that last.
In a previous post I wrote down my first impressions on the Banana Pi and how it would fare as a platform for Owncloud. While the general message was 'quite well' there was no room in the post for more detailed benchmark results. So here we go:
To see how the Banana Pi, costing €75 with casing and SD card, compares to a slower Raspberry Pi costing €50 (with casing and SD card) and a faster Intel NUC based Owncloud installation costing around €200, I ran a number of everyday use cases on all three of them. The picture on the left shows my setup for a direct comparison between the Banana Pi and the Raspberry Pi. The NUC was not on the table for the test but connected over Wi-Fi. Obviously that reduces the data transfer speeds to and from the NUC somewhat, but probably not by much.
On all three systems I've used https. It doesn't seem to have a big impact, however, as the multiple file upload test described below showed no performance difference between http and https. On the Banana Pi, all data was put on the SD card, i.e. no external SATA drive was connected. This would perhaps have made the Banana Pi faster, but my usage scenario is a small Owncloud box without external hardware to keep cost and space requirements down.
My first test focused on how long it takes to log into an Owncloud user account after the Owncloud server has been rebooted. The time it takes is similar to the time it takes to log in after having logged out. I ran the same test three times on the Raspberry Pi and the Banana Pi to show that there is a certain variance.
- NUC: 5 seconds
- Raspberry Pi: 43 seconds, 41 seconds, 48 seconds
- Banana Pi: 7 seconds, 7 seconds, 14 seconds
Speed-up compared to the Raspi: approx. 6x
For this test I uploaded 38 images of around 3 MB each to each Owncloud instance. After each upload, an icon was generated that is shown next to the filename. The second picture on the left shows a side-by-side comparison of the upload progress on the Raspberry Pi and the Banana Pi respectively.
- NUC: 65 seconds (1 minute 5 seconds)
- Raspberry Pi: 420 seconds (7 minutes)
- Banana Pi: 142 seconds (2 minutes 22 seconds)
Speed-up compared to the Raspi: approx. 3x
Link download screen shown after reboot
Another important scenario is how long it takes for a download page to be shown to someone to whom I've sent a link for a shared file. To make sure there is no buffering, I rebooted the two Pis before each run.
- NUC: 5 seconds, 4 seconds
- Raspberry Pi: 32 seconds, 26 seconds
- Banana Pi: 8 seconds, 6 seconds
Speed-up compared to the Raspi: approx. 4x
The results show quite nicely that the Banana Pi runs an Owncloud instance much faster than a Raspberry Pi. The time it takes to log in, to upload files and to open a shared link is not as short as on a NUC but from my point of view it is still fast enough to be usable.
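The speed-up figures quoted above can be sanity-checked from the raw numbers; a quick calculation using the fastest measured run for each board (which is how the ratios above appear to have been derived) looks like this:

```javascript
// Sanity check of the quoted speed-up factors, using the fastest
// measured run (in seconds) for each board from the results above.
const fastestRun = {
  login:      { raspberryPi: 41,  bananaPi: 7   },
  fileUpload: { raspberryPi: 420, bananaPi: 142 },
  sharedLink: { raspberryPi: 26,  bananaPi: 6   },
};

for (const [test, t] of Object.entries(fastestRun)) {
  const speedup = t.raspberryPi / t.bananaPi;
  console.log(`${test}: Banana Pi is ~${Math.round(speedup)}x faster`);
}
```

Running this prints roughly 6x, 3x and 4x, matching the approximations given for each test.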
September 24, 2014
I chatted with the folks at Mashable about why responsive media sites tend to look similar.
“It’s sort of the same way that all cars look more or less the same. There’s only so many ways you can design a doorknob to where it’s going to be effective,” said Brad Frost, a web designer that has worked on the websites for TechCrunch and Entertainment Weekly.
Ultimately, I think there are a few reasons for this. The first is that (as mentioned above) there are only so many ways to arrange an interface so that it will be an effective, enjoyable experience. The second is that sites in the same category (i.e. media sites, or higher ed, or e-commerce) tend to look closely at what their competitors are doing. The larger the organization, the harder it is to push boundaries and do something totally new.
September 23, 2014
We all know that website traffic from mobile devices is increasing rapidly. But what does it actually consist of? Which devices are most popular, and how large is the fraction of "non-human" traffic? There have been many reports and analyses of web traffic in general, but there are also a couple of good reasons to look inside mobile web traffic data, that is, traffic to websites optimized for mobile device use.
Mobile Miscellany, 23 Sept 2014. Free Stand at Apps World, Bursaries for The Mobile Academy, Round up from Demo Night
WHAT HAPPENED AT DEMO NIGHT?
Did you miss Demo Night last week? If so, shame, but catch up on what you missed on the blog. Well done to all the demoers, thanks to Informa for hosting us at the Service Delivery Innovation Summit, to volunteers and a very engaged audience.
WIN A FREE STAND IN THE MOMOLO STARTUP ZONE AT APPS WORLD
We had a lot of fun at Apps World last year, so we're really happy to be returning this year, 12th and 13th November at ExCeL with a fantastic offer for startups. If you are a company or have a demo-able App that is less than 1 year old, you might like to apply for a free stand in the MoMoLo Startup Zone. (It's worth £1k!). Please do that here by 17th October.
ATTEND APPS WORLD
Entry to the Apps World exhibition is free but if you want to attend the various conference tracks you can enter MOMO15 and get a 15% discount on the ticket price. Check out the programme and book your tickets here.
BOOK THE DATE, APPS WORLD DRINKS RECEPTION, WEDNESDAY 12th NOV
We are also hosting a drinks reception at our MoMoLo Startup Zone on the 12th - you can register here - plus we get an invite to ride on the Emirates Air Line and carry on at the after party, so you might look to book the next morning off work! By registering with us, you will also get access to the exhibition.
LAST MINUTE BURSARIES FOR THE MOBILE ACADEMY STARTING TUESDAY 30TH SEPTEMBER
We have some last minute bursaries (free places!) from UCL Advances for "noble causes". Time is short so please spread the word - and get any potential candidates to email firstname.lastname@example.org asap with a few sentences about themselves.
The Mobile Academy runs for 10 weeks, 2 nights a week and is for people that want to deepen their knowledge in all aspects of taking a new product to success with a mobile audience. Along with getting the real need to know from industry experts, past participants have also told us about the invaluable networking. It's for a diverse crowd that want to know more about new product development, with a mobile twist.
Book here with code MoMoLo for a 10% discount or email Julia to ask for one of the 'good causes' bursaries.
Looking forward to catching up with everyone at Apps World - this is going to be a lot of fun!
September 22, 2014
Except you shouldn’t. You shouldn’t <picture> all the things.
But you should start using picture now for responsive images. No reason to wait.
Confused? You’re not alone.
<picture> vs. picture
Standards are developed in non-linear fashion. Ideas evolve and merge. And often at the end of a lengthy process, you look back and wonder how you got here.
And in this case, where we ended up is with a specification called ‘picture’ that contains much more than the <picture> element. The picture specification also defines the srcset and sizes attributes, and you can use those attributes without using the <picture> element at all.
Knowing which use case you’re solving tells you what solution you need
You don’t need to know all of the use cases, but you do need to know the difference between the two most common use cases in order to know which part of the picture specification will solve your problems. The two common use cases are:
- Resolution switching — In the resolution switching use case, we need to select a different source of the same image because we need a different resolution for any number of reasons. These reasons include the image being used at a different size based on the size of the screen, the pixel density of the screen, or to avoid downloading unnecessarily large images.1
- Art direction — In the art direction use case, there is some reason why we need to modify the content of an image under certain conditions. Maybe we need to crop the image differently on small screens. Or perhaps we’re working with a hero image that contains text and simply providing a smaller version of the image won’t work because the text will be unreadable.
Basically, if you could resize an image without making any other changes, and have a new source that solves your responsive images needs, then you’re talking about the resolution switching use case. If you need to change anything other than resolution, you’re talking about art direction.2
For most responsive images, you won’t need the <picture> element
Unless you’re solving for art direction, you don’t need to use the <picture> element. In fact, you’re likely doing your users a disservice by using it.
The picture specification supports syntax that can be used without the <picture> element. An example syntax, borrowed from Yoav Weiss’ excellent article Native Responsive Images, might look like this:
<img src="cat_500px.jpg" srcset="cat_750px.jpg 1.5x, cat_1000px.jpg 2x" width="500px" alt="lolcat">
This provides the browser with different options for display density. Or, in a more complex example:
<img sizes="(max-width: 30em) 100vw, (max-width: 50em) 50vw, calc(33vw - 100px)" srcset="swing-200.jpg 200w, swing-400.jpg 400w, swing-800.jpg 800w, swing-1600.jpg 1600w" src="swing-400.jpg" alt="Kettlebell Swing">
There is a lot to digest in those examples, and unfortunately, I don’t have the space to cover the syntax here. Instead, I recommend reading Yoav’s article or one of the additional resources listed below if you want to understand the details.3
When you use the srcset and sizes attributes on an <img> element, you are providing information that the browser can use to make an informed decision about which image is appropriate for the user, based on a bunch of factors that you are unaware of.
As a web designer or developer, you have no way of knowing how much bandwidth the user currently has or whether they’ve declared some sort of preference for the density of images they want. If we provide browsers with information via srcset and sizes, then browsers can make smarter decisions about the appropriate image source.
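To make that concrete, here is a rough, purely illustrative sketch of the kind of selection a browser might perform over w-descriptor candidates. The function name and the simplified logic are my own, not from any specification; real implementations are free to also weigh bandwidth, cache contents and user preferences:

```javascript
// Illustrative only: pick the smallest srcset candidate that still covers
// the layout slot (from sizes) at the current device pixel ratio.
function pickSource(candidates, slotCssPx, devicePixelRatio) {
  const neededDevicePx = slotCssPx * devicePixelRatio;
  const byWidth = [...candidates].sort((a, b) => a.w - b.w);
  const bigEnough = byWidth.find(c => c.w >= neededDevicePx);
  // fall back to the largest candidate if none is big enough
  return (bigEnough || byWidth[byWidth.length - 1]).url;
}

const candidates = [
  { url: "swing-200.jpg",  w: 200  },
  { url: "swing-400.jpg",  w: 400  },
  { url: "swing-800.jpg",  w: 800  },
  { url: "swing-1600.jpg", w: 1600 },
];

// A 50vw slot in a 640px-wide viewport on a 2x screen: 320 * 2 = 640 device px
console.log(pickSource(candidates, 320, 2)); // "swing-800.jpg"
```

The point is that the inputs (slot size, pixel ratio, and potentially much more) live on the browser's side, which is exactly why declaring candidates rather than rules leaves room for smarter decisions.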
In fact, I hope that browser makers compete on how they handle srcset and sizes by developing new user settings or smarter algorithms to help them pick the right images to download.
But none of that is possible when you use the <picture> element and its media attributes:
<picture>
  <source media="(min-width: 45em)" srcset="large.jpg">
  <source media="(min-width: 32em)" srcset="med.jpg">
  <img src="small.jpg" alt="The president giving an award.">
</picture>
When you specify the media queries for sources, you are providing rules to the browser that it must follow. The browser has no discretion to make smart decisions about what to download based on user preference, network, etc.
You should use the power to dictate what image gets downloaded sparingly. In fact, you should only use it when you’re solving for art direction, not for resolution switching.
For the majority of the images on the web, <picture> is the wrong solution
Last year, Yoav tried to figure out what percentage of responsive images fell under the art direction use case. The answer: 25%.
Responsive design is still early so we may find that the percentage changes, but it is unlikely that we’ll ever reach a point where the number of art directed responsive images out-numbers the number of resolution switching ones.
Therefore, for most responsive images, the <picture> element is the wrong solution. You should be using srcset and sizes instead.
The future of the web depends on us getting this right
Getting this right matters a great deal to the future of the web. We’ve seen in the past what happens when web developers create a large installed base of suboptimal web pages: we end up with browsers adopting other browsers’ CSS prefixes, or media types such as mobile and TV being ignored by all mobile phones and TVs.
If we create thousands of web pages that use the <picture> element for resolution switching, we doom ourselves to having to specify every single image needed instead of letting the computers—the browsers—do what they do best and automatically find the right image based on a multitude of variables.
Worse, we doom our users to a future where they are unable to take advantage of whatever browser advances come because we’ve taken the browser’s discretion away and dictated what image should be downloaded.
Perhaps we need to stop referring to the picture specification
A lot of this confusion comes from the fact that we have a specification that ended up with picture in the title even though the specification covers more than just picture.
For those who have been heavily involved in the development of a solution for responsive images, picture is a nice shorthand. That’s why you see it referred to as <picture> in Chrome Status and Modern IE Status.
But using picture as a shorthand creates confusion when talking to people who are just now looking to implement responsive images. I asked the Responsive Images Community Group how they’ve been handling this confusion.
Bruce Lawson says:
I tend to refer to “picture and friends” or, generically, “responsive images”
And Odin Hørthe Omdal adds:
I always talk about responsive images, and I think I probably say the respimg specification.
So I’m going to attempt to do the same. I’m going to break myself of the habit of referring to the picture specification and instead refer to the responsive images specification, even if that isn’t technically the name of the specification. I think people will understand what I mean, and it will help ensure more people understand that it isn’t all about the <picture> element.
Simple guidelines for using the responsive images specification
- The picture specification contains more than just the <picture> element. Think of it as the responsive images specification.
- For most responsive images, you shouldn’t use the <picture> element. You should use srcset and sizes.
- The way to know when to use the <picture> element is based on the use case you’re trying to solve.
- If you’re solving for the art direction use case, use the <picture> element. Anything else, you should stick to srcset and sizes.
- Getting this right early—before we have thousands of pages using <picture> incorrectly—is critical for the future of the web.
And with that, I will amend what Marcos wrote to say, “Go forth and make all your images responsive!”
- In the responsive images use cases document, resolution switching is described with three different examples: resolution-based selection, viewport-based selection and device-pixel-ratio-based selection. And yes, they are all different reasons why you want to select a different resolution source, but they all involve resolution switching, which is why I’m lumping them together.
You may find this article I wrote on A Framework for Discussing Responsive Images to be a good high-level overview of the use cases.
Some additional articles to help you understand the full features of the picture specification: HTML5 Rocks: Built-in Browser Support for Responsive Images, Srcset and sizes and Responsive Images Done Right: A Guide To <picture> And srcset by Eric Portis.
We also want to thank Michał Kubacki, inventor of the 5-TILES keyboard for touch screens and wearable devices, who returned as a former demoer to give us an update.
|Michał Kubacki, pictured here with James Rosewell, another former demoer who is doing great things with 51Degrees|
10 Startups Demo Innovative Approaches to Mobile Life: Mobile Monday London Demo Night, September 2014
Ten hot ticket startups took to the stage to show us what they are made of at Mobile Monday London’s Demo Night (16 September) during Informa’s 10th Annual Service Delivery Innovation Summit at Thistle Hotel, Marble Arch. Greeted by an enthusiastic audience, they showed clever innovations for our mobile lives.
“There were some great case studies on service innovations, operators and carriers.” Richard Behrendt
Karel Bourgois said: “I’ve come all the way from Paris, I go to Mobile Monday in France, and have also attended San Francisco. I like the structure of this London Mobile Monday event – it's very well organised and I have discovered lots of innovative startups. I enjoyed the audience participation.”
Daniel Balfour of Mobivate (previously worked for Virgin Mobile) said: “This is my first time in London - I have been to Mobile Monday in Sydney and really liked how there was a good amount of time for networking tonight”
Dippen Lad of Sky said: “I used to work for a WiFi start-up called The Cloud, and I’m a regular at Mobile Monday – I attend these events so that I can get ideas and understand what’s happening in the startup world. My favourite tonight has to be Pronto, I can see it taking off.”
A shout out to the demoers:
Good Food Talks
|Matt gets feedback over a nice glass of red!|
Once you realise what Swytch does, you’ll know why the audience loved this startup, represented by CEO Chris Michael. Swytch addresses a big pain point for consumers by enabling you to manage multiple numbers from one device. Step in Swytch to change your life, as you won’t need to stuff your pockets with mobile phones anymore!
Ian Masters & Albert Marshall showed us their app for quiz lovers. QuizTix is a group of original, fun and accessible quiz games that offer a familiar and friendly entertainment experience.
Martin Sandstrom & Mark Lee showed us how the App helps you to manage all your bills and expenses in your houseshare, splitting everything and showing what you owe each other.
Benjamin Bourdin of Grabyo demonstrated their real time video platform that offers broadcasters the opportunity to share video clips from live broadcast TV and video feeds across Twitter, Facebook, the web and mobile. They already have some very large clients under their belt.
Douglas Robb showed us his cloud based augmented reality platform where, with no programming skills, you can publish your own custom apps, converting your location-based data into AR-ready content, adding your own styling and extra widgets as you go.
James Roy Poulter whetted our appetite with his foodie App. Pronto allows users to order & track their meals 24/7. All it takes is 3 little taps from a smart device, and BOOM, your food is ordered!
Frederick Tubiermont demoed how easy it is to use Adsy (a mobile web app that lets you create mobile web apps) to create a mobile web app, using tonight’s Mobile Monday event as the subject matter. He showed how to add maps, links, visuals and more using his simple user interface.
Matthew Bridge demonstrated how wearable technology, such as the Samsung Gear 2 smartwatch, can improve workforce efficiencies through the delivery of business-critical information. Using Samsung’s APIs for notification alerts, they had connected components of their Enterprise System, allowing alerts to be sent out when events changed or statuses updated.
Matthew told us that the feedback he received was very positive, reinforcing their belief that in today’s environment emails are no longer really used for their intended purpose, such as important information delivery, and that an alternative approach is required. He also told us that the audience Q&A provided an excellent forum for discussion and constructive critiquing.
Mark Hill and Damon Hart-Davis represented OpenTRV – a smart radiator device that takes full control of the household boiler and its sensors. They even brought along bits of radiator to help tell the story!
There was a time when some people thought the future of mobile development was the web. That thinking was based on the fact that the web was a common platform across all types of device, and that this would be the only way to solve fragmentation. If you look at the ‘Web Technologies’ section at the bottom of this site you will see I was sceptical.
In practice, we all know apps have dominated. While Apple and Google have improved their web browsers, they haven’t put in as much effort to give the browser access to device APIs or to improve the user experience for web-based apps. However, I believe the situation has become even worse than this.
A second problem is that there’s now no one ‘Android Browser’ upon which the WebViews are based. Niels Leenheer has a great set of slides that explains how browsers vary across Android versions, devices and phone manufacturers. The consequence of this is that getting any non-trivial WebView-based app to work across many device types is very difficult. The many 3rd party companies creating app creation tools based on web technologies face an uphill battle - as do people using their tools.
It’s ironic that the (web) platform that some people thought might solve the fragmentation problem has, arguably due to under-investment and lack of innovation by Google and Apple, become one that has security and fragmentation headaches.
- Cross Platform Tools Report
- MoMo London HTML5 vs Native
- Web vs Native and the Enterprise
- Cross Platform Tools Report
- Mobile Frameworks and Custom UIs
- HTML5 vs Native
- Native vs Web (again)
- Web-based Technologies
- Web App UI Fragmentation
- OS and Browser Fragmentation
- The Problems with Frameworks
- Runtimes, Frameworks and Fragmentation
- Is the Future of Mobile the Web?
I had a wonderful time talking with Justin Avery about all sorts of topics around responsive web design. I could talk to that guy all day!
I’m also super impressed by all the progress he’s made on the Responsive Design Is website. It’s a fantastic resource, so be sure to check it out.
September 19, 2014
It's been an exciting year so far, with many projects and many great things happening, and I'm very excited to announce the completion of one of my bigger projects this year: as of today, the third incarnation of my book, (now) titled "From GSM to LTE-Advanced", has been published and is available in book stores online and offline!
If you've bought a previous edition you might have noticed that the title has changed slightly, to reflect that the book now covers not only LTE but also LTE-Advanced. That's not the only update, however, as a lot has happened with other network technologies as well. The previous edition is from 2011 and, from my point of view, the following things have changed quite a bit since then and hence are now included in the book:
- In Chapter 1, I've included additional information on the 3GPP Release 4 Mobile Switching Center architecture that is now used in most networks.
- In Chapter 2, only a few updates were necessary because the deployed feature set of GPRS and EDGE networks has remained stable in recent years.
- Chapter 3 was significantly enhanced as High Speed Packet Access (HSPA) features such as higher order modulation, dual carrier operation and enhanced mobility management states are now in widespread use.
- While only a few LTE networks were in operation at the publication of the previous edition, the technology has since spread and significantly matured. Chapter 4 was therefore extended to describe Circuit Switched Fallback (CSFB) for voice telephony in more detail.
- Additionally, a section on Voice over LTE (VoLTE) was added to give a solid introduction to standardized voice over IP telephony in LTE networks. Furthermore, a description of LTE-Advanced features was added at the end of the chapter.
- As the global success of LTE has significantly reduced the importance of WiMAX, the chapter on this technology was removed from this revised edition.
- In Chapter 5 on Wi-Fi a new section was added on the new 802.11ac air interface. Also, a new section was added to describe the Wi-Fi Protected Setup (WPS) mechanism that is part of commercial products today.
- And finally, the chapter on Bluetooth has also seen some changes as some applications such as dial-up networking have been replaced by other technologies such as Wi-Fi tethering. Bluetooth has become popular for other uses, however, such as for connecting keyboards to smartphones and tablets. The chapter has therefore been extended to cover these developments.
There we go; as you can see, I put a lot of effort into the update. So whether you are a first-time reader or are considering 'upgrading' to the latest edition, I hope you like the result!
September 18, 2014
iOS 8 is finally here, while the new iPhone 6 and iPhone 6 Plus will appear in a few days. New APIs appear on the scene, as well as new challenges to support the new screen sizes. I’ve been playing with the final version, and here are my findings.
An overview of this article:
- Safari on iOS 8 in a nutshell
- iPhone 6 and iPhone 6 Plus
- New API support
- New Safari features
- Going Native with iOS 8
- Safari extensions
- New web design features
- Video enhancements
- Bugs and problems
As we have come to expect from the company, Apple didn’t update any of the docs regarding Safari and iOS, so all of the information here is based on my own testing and some information delivered at the WWDC.
Safari on iOS 8 in a nutshell
- HTML5 new APIs: WebGL (3D canvas), IndexedDB, Navigation Timing API and Crypto API
- Native hybrids: Improved and faster WebView
- Scroll events useful for Parallax effects
- Video playing: CSS layering, Fullscreen API and Metadata API
- HTML Template element
- Safari Extensions allowing native apps to be used as plugins inside Safari that can read and modify the DOM
- Images: Support for Image Source Sets (srcset) and Animated PNG format
- CSS: Shapes, subpixel layout (hairline borders)
- Autofill for forms, including credit card scanner with OCR
- Web and native App integration, including the ability share login data
- EcmaScript 6 partial support
- SPDY (I couldn’t find a way to test this; any suggestions?)
- File Uploads are not working anymore (bug)
- minimal-ui support removed
- The Remote Web Inspector works only from Yosemite; if you are still on previous versions of Mac you can use a Nightly WebKit build.
Compared to other mobile browsers, these are the features that didn’t come up on iOS 8:
- Resolution media queries with dppx unit
- @viewport declaration
- WebP image format
iPhone 6 and iPhone 6 Plus
The iPhone 6 and iPhone 6 Plus are coming, and with new screen sizes come new challenges in terms of viewport and image sizes. The iPhone 6 has a 4.7″ screen with 750×1334 physical pixels (the same 326 dpi as the iPhone 5s), while the iPhone 6 Plus has a bigger 5.5″ screen with a Full HD 1080×1920 resolution (401 dpi). These new resolutions fall into what Apple calls “Retina HD” displays, meaning… well, different than the previous Retina devices.
In terms of web development, the changes show up in different places: the default mobile viewport (when using width=device-width), the pixel ratio used for responsive images, icon sizes and launch images. Let’s see how this looks:
| | iPhone 6 | iPhone 6 Plus |
| --- | --- | --- |
| Viewport’s device-width (in CSS pixels) | 375 | 414 |
| Viewport’s device-width on Android devices with similar display size | 360 | 400 |
| Device Pixel Ratio | 2 | 3 (fake value) |
| Rendered Pixels (default viewport size * dpr) | 750×1334 | 1242×2208 |
There is an excellent review of iPhone 6 screen sizes at iPhone 6 Screens Demystified
If you are reading this post you probably know the viewport meta tag and the value width=device-width, which we use to match the viewport size (the canvas where we render the page) with the real screen size, avoiding the zoomed-out experience we get when we open a desktop website on a mobile device.
You probably also know that until today, all iPhones and iPods exposed a 320 CSS pixel width when using that viewport declaration. In a good decision, Apple has decided to give us more space for content on the iPhone 6 and iPhone 6 Plus because they are wider than the previous versions. However, the values Apple is using don't match 1:1 the current values on the market, such as on Android. While 4.7–5″ Android devices will give you a viewport width of 360, iOS delivers 375; and while on 5.5″ Android devices (such as the Galaxy Note) you get a 400px viewport, on the iPhone 6 Plus you get a weird 414px width. This means, roughly, that we will see things 4% smaller on iOS compared to similar Android devices. Maybe it’s not a big deal, but you should check that your websites are flexible enough to take advantage of the additional 14/15 pixels and that you don't have any visual glitches because of that.
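That rough 4% figure follows directly from the viewport widths (a quick check using the numbers above):

```javascript
// Content laid out across more CSS pixels on the same physical size
// appears slightly smaller. Compare iOS and Android viewport widths:
const iphone6 = (1 - 360 / 375) * 100;      // vs similar 4.7-5" Android devices
const iphone6Plus = (1 - 400 / 414) * 100;  // vs 5.5" Android devices

console.log(iphone6.toFixed(1) + "%");      // "4.0%"
console.log(iphone6Plus.toFixed(1) + "%");  // "3.4%"
```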
You can emulate these viewports right now for testing purposes just by changing the viewport:
<meta name="viewport" content="width=375">
<meta name="viewport" content="width=414">
Device Pixel Ratio
In terms of Device Pixel Ratio, the iPhone 6 follows the same value as the previous Retina devices, using a value of 2. The iPhone 6 Plus, on the other hand, with 401 dpi, needs a higher value. The real value should be around 2.6; however, Apple has decided to take a shot at a new concept: rendered pixels, emulating a 3x device pixel ratio. If the DPR were really 3x, then the physical screen (at 414 CSS pixels) should have a width of 1242 pixels, but we know that is not true, as the real width is 1080 pixels (13% smaller).
Therefore if you are providing 3x images for some Android devices, for example for the Galaxy S5 the image will be taken also for the iPhone 6 Plus but it will be resized by the browser before rendering it on the screen.
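The numbers behind that downscaling are easy to verify (a quick check using the values above):

```javascript
// iPhone 6 Plus: the browser renders at 414 CSS px * 3x DPR, then the
// result is downsampled to the physical 1080 px panel width.
const cssWidth = 414;
const reportedDpr = 3;
const physicalWidth = 1080;

const renderedWidth = cssWidth * reportedDpr;              // 1242
const shrink = (1 - physicalWidth / renderedWidth) * 100;  // ~13

console.log(renderedWidth);             // 1242
console.log(Math.round(shrink) + "%");  // "13%"
```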
A new icon size is available for the iPhone 6 Plus: 180×180. The iPhone 6 will use the same 120×120 icon that you should be using today for iOS 7 (iPhone 4/5).
<link rel="apple-touch-icon-precomposed" sizes="180x180" href="retinahd_icon.png">
If you have a home screen webapp with launch images, you now have two new sizes to use with media queries so the browser will take the right one: 750×1294 for the iPhone 6 and 1242×2148 for the iPhone 6 Plus.
<link rel="apple-touch-startup-image" href="launch6.png" media="(device-width: 375px)">
<link rel="apple-touch-startup-image" href="launch6plus.png" media="(device-width: 414px)">
Detecting the iPhone 6 server-side
New API support
Two of the most anticipated APIs (awaited since iOS 5) have finally arrived in Safari on iOS 8: WebGL for 3D canvas and IndexedDB for NoSQL database storage. The Web Cryptography API and the Navigation Timing API have also landed in Safari for iOS.
With WebGL we can load a full 3D environment, talking directly to the video card with a not-so-webby API (very similar to OpenGL). WebGL has been available on iOS since version 4.2 but was always disabled. Now it is enabled by default in the browser, in home screen webapps and also in the WebView, opening a whole new world to web and game developers.
Here you have FishGL example from Microsoft running on iOS 8
IndexedDB is the W3C answer to the deprecated WebSQL API that never made it into some browsers, such as Internet Explorer and Firefox. Now that iOS supports IndexedDB, we can use the same database API on all the mobile platforms. Safari uses the unprefixed version of this API.
The Navigation Timing API addition is good news for web performance. With this API we can measure timing with better precision, and receive lots of timestamps about the page's loading process that will be useful for tracking and for making decisions in real time to improve the user experience.
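A minimal sketch of deriving a few load metrics from the Navigation Timing data. In the browser the real values come from `window.performance.timing`; here a plain object with the same standard field names is passed in, so the derivation itself can be shown in isolation:

```javascript
// Derive simple durations from a Navigation Timing-style object.
// The field names (navigationStart, responseStart, ...) are the real
// PerformanceTiming attribute names; the sample values are made up.
function loadMetrics(t) {
  return {
    dns: t.domainLookupEnd - t.domainLookupStart, // DNS resolution time
    ttfb: t.responseStart - t.navigationStart,    // time to first byte
    total: t.loadEventEnd - t.navigationStart,    // full page load
  };
}

const m = loadMetrics({
  navigationStart: 1000,
  domainLookupStart: 1010,
  domainLookupEnd: 1030,
  responseStart: 1200,
  loadEventEnd: 2500,
});
console.log(m); // { dns: 20, ttfb: 200, total: 1500 }
```

In a real page you would call `loadMetrics(window.performance.timing)` after the load event and, for example, beacon the result to your analytics endpoint.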
New Safari features
From a user’s point of view, Safari on iOS 8 has new features that might affect how our websites are rendered.
Pinch to Zoom
On iPhone (landscape only) and iPad (all orientations), if you pinch the screen (as if you wanted to zoom out of the page) you will get into a tab overview mode, which is nice, but it can potentially conflict with pages that already use the gesturechange event to provide their own pinch-to-zoom behavior. If you are doing that, call event.preventDefault() in your event handler to avoid the default browser behavior.
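A hedged sketch of that opt-out. The handler is written as a factory so the preventDefault logic can be exercised with a stub event object; `zoomTo` and the element wiring are assumptions for illustration:

```javascript
// Build a gesturechange handler that suppresses Safari's default pinch
// behavior (the tab-overview gesture) and forwards the scale factor to
// the page's own zoom logic.
function makePinchHandler(onScale) {
  return function (event) {
    event.preventDefault(); // keep Safari from hijacking the pinch
    onScale(event.scale);   // gesturechange events expose a scale property
  };
}

// In the browser you would wire it up like this (zoomTo is hypothetical):
// element.addEventListener("gesturechange", makePinchHandler(s => zoomTo(s)));

// Exercising it with a stub event:
let seenScale = null;
let prevented = false;
const handler = makePinchHandler(s => { seenScale = s; });
handler({ scale: 1.5, preventDefault: () => { prevented = true; } });
console.log(prevented, seenScale); // true 1.5
```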
Just six months ago, Apple introduced the Minimal UI mode through the minimal-ui value on the viewport. Well, it has now been announced (no reason given) that the Minimal UI mode is removed from iOS 8, and in fact it's true: it no longer works.
Autocomplete and credit card scan
Safari now has an autocomplete feature for forms that works automatically. When Safari detects a credit card form, it lets us scan a credit card with the camera and, using OCR, types the card number automatically. I ran several tests but had no luck figuring out how Safari decides that a form is a credit card form. It works with PayPal, Netflix, Amazon and many other sites, but I'm not sure whether the decision is made client-side or whether sites are whitelisted somewhere on Apple's servers.
(Image from 9to5Mac.com)
UPDATE 18/9: Safari also now supports the autocomplete attribute from the latest spec. That means that if you have a login form, Safari now supports the username, current-password and new-password values, which help Safari autofill the login form with data from the Keychain. Luis Abreu has a good article on this feature and other privacy-related enhancements in iOS 8.
RSS is alive!
If your website has an RSS feed descriptor, your users can now subscribe to your site just by browsing to it and opening the Bookmarks panel. There, the @ symbol section (Shared Links) manages subscriptions, including an "Add Current Site" button. A bit hidden, but at least it's there.
New updated toolbars
Both the URL bar and the toolbar are now semi-transparent on iPhone and iPad. On iOS 7 this behavior applied to the URL bar only, and only on iPhone. That means the visible viewport on first load now includes the bottom toolbar, and on iPad we get the same tint color over the URL bar.
Finally, on iPhone landscape we have a new unified toolbar (similar to the iPad) where we have the URL and the toolbar just in one place at the top.
Since iOS 7, Safari on iPhone has hidden its toolbars when you scroll the page, but iPad never got that behavior. From iOS 8 we have it by default in both orientations. The difference is that on iPad landscape we still see a minimal title bar at the top, while on iPhone landscape we get a 100% full-screen mode.
(oh, yes, that is subliminal advertising, that’s my book :P)
Safari on iPad now also includes a sidebar in landscape that keeps the same viewport size in a smaller area. In terms of responsive web design, this only affects aspect-ratio.
Icon for bookmarks and frequent websites
For the first time, Safari for iOS will also use the website's icon for favorites and bookmarks, not just when the user adds the website to the home screen.
It will also use the icon for the suggestion list while you are typing in the URL bar.
With Handoff, users browsing in Safari on iOS can now continue reading the same page on a Mac running Yosemite that is signed in to the same account. I couldn't try this yet, but if you are using an m.* website with redirection, you should verify that you redirect the user back to the desktop version from the mobile one. I'm not sure yet how it will work in terms of keeping the scroll position, as advertised.
Going native with iOS8
First, let's say that if you redirect users from a website to the App Store to download your app without user interaction (a click, for example), Safari will not allow it (fortunately).
Then, from a web-native integration point of view, we have two important pieces of news:
- Safari extensions (covered later)
- New way to share credentials between Safari and your native app so the user doesn’t need to login again
The new WebView
Finally, the greatest news for native webapps (hybrids) is the addition of a new Web View that matches the API available on the Mac, which means more features, cross-compatibility between Mac and iOS, and most importantly:
- New classes configure the Web View similar to the power we have on Android’s Web View.
The new Web View (WKWebView) is part of a new framework (WebKit.framework) and its API is not 100% compatible with the old one (UIWebView). The old one is still there, which means that all the apps out there using a Web View today will keep using the old version until they update.
At least in the GM version, there is a bug loading local files that might defer the good news for hybrids, including Cordova (PhoneGap) apps, until an iOS update.
This means that in iOS 8 we have four web runtimes available, and compatibility, and bugs, may differ between them:
- Safari
- Web.app (the runtime used for full-screen home screen webapps)
- UIWebView (old)
- WKWebView (new)
Here you can see a native app running HTML5Test.com on both UIWebView and WKWebView at the same time
This change will be particularly useful for pseudo-browsers and in-app browsers, such as Chrome on iOS or the browser inside the Facebook app. According to the MOVR report from ScientiaMobile, 11.5% of iOS web browsing traffic comes from WebView-based apps.
Safari on iOS 8 has become the first pre-installed browser on the three main platforms to support extensions or plugins (Firefox OS might be the one on the long tail). From iOS 8, native apps from the App Store can extend Safari in two main ways: as a Share extension or as an Action. Actions can work with the DOM, which is great because it means we will be able to read and change DOM elements from native code.
All the extensions work only after user interaction, meaning the user needs to tap the Share button and then the extension's icon to enable it. There are no automatic extensions for Safari so far.
Besides the examples in Apple's keynote (Bing translation and social network sharing), there are already a couple of good examples of how this will improve web browsing. For example, 1Password and LastPass have announced apps that will let a Safari user log into any website without entering any details, even with Touch ID validation on a 5s, 6 or 6 Plus.
Extensions can have their own UI on top of Safari, or they can be silent extensions that just do something with the DOM of the current page.
Here is the 1Password action inside Safari:
Pocket.com also announced support for custom actions
I'm looking forward to seeing custom actions in Safari and what great things we can add on top of the mobile web with them.
New web design features
Let's first quickly list the new additions, and then dig deeper into some of them:
- CSS Shapes
- CSS object-fit
- CSS Background Blend modes
- CSS Compositing and Blending
- Subpixel layout
- Animated PNG supported
- Parallax effects and Pull-to-refresh supported
- SVG Fragments Identifiers (for SVG Sprites)
- Image Source Set support
- HTML Template support
It may seem like just an anecdotal mention, but I think this is a huge deal. APNG is a non-standard format similar to animated GIF, but based on the modern PNG format, meaning that we can create animations that use fewer bytes than current animated GIFs, even with alpha-transparent 32-bit bitmaps.
Extracted from my book Programming the Mobile Web, 2nd. edition:
APNG (Animated PNG) is an unofficial standard for using PNGs for animations. The draft specification can be found at the Mozilla wiki. At the time of this writing, only Opera Mobile and Firefox since version 14 support APNG on mobile websites. The advantage of APNG over animated GIFs is the ability to use alpha channels and 32-bit images. A quick way to create APNG files is using APNG Edit, a free Firefox for desktop plug-in available from Mozilla’s Add-Ons.
Parallax effects and pull-to-refresh thanks to scroll events
If you were wondering why those fancy scrolling effects were not compatible with iOS, it was because scroll events did not fire continuously on iOS 7. On iOS 8 they are back, so all the parallax JS and CSS libraries will be compatible. Use them with caution; usability not guaranteed :)
This change also makes it possible to create pull-to-refresh and infinite-scroll lists that start loading the next items while the user is scrolling.
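The trigger logic for such an infinite-scroll list can be sketched as a pure function (the 400px threshold and `fetchNextPage` are illustrative assumptions):

```javascript
// Decide whether an infinite-scroll list should start fetching the next
// page, based on how many pixels remain below the current viewport.
function shouldLoadMore(scrollTop, viewportHeight, contentHeight, thresholdPx) {
  const remaining = contentHeight - (scrollTop + viewportHeight);
  return remaining <= thresholdPx;
}

// In the browser this would run inside a scroll handler, which iOS 8 now
// fires continuously while the user scrolls:
// window.addEventListener("scroll", () => {
//   if (shouldLoadMore(window.pageYOffset, window.innerHeight,
//                      document.body.scrollHeight, 400)) fetchNextPage();
// });

console.log(shouldLoadMore(0, 600, 3000, 400));    // false: 2400px still below
console.log(shouldLoadMore(2100, 600, 3000, 400)); // true: only 300px left
```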
It is worth mentioning that in the betas, new onwebkitwillreveal[top|bottom|left|right] events were tested that would have been useful for making infinite scrolls faster, but they were removed from the final version.
Safari (like the other rendering engines) has now changed CSS pixels from integer to float values, allowing drawing and reading data in decimal "px" values. That means that CSS object model getters, such as offsetTop or clientWidth, now return fractional double values. Before iOS 8, rounded integer values were returned.
It also means you can now draw hairlines (one physical pixel) on Retina devices. @Dielout has a good article on this.
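The hairline trick follows directly from subpixel layout: one physical pixel is simply 1 / devicePixelRatio CSS pixels. A tiny illustrative helper:

```javascript
// Width in CSS pixels of a one-physical-pixel "hairline" border.
function hairlineWidth(dpr) {
  return 1 / dpr;
}

console.log(hairlineWidth(1)); // 1     (non-Retina)
console.log(hairlineWidth(2)); // 0.5   (Retina iPhones up to the 6)
console.log(hairlineWidth(3)); // ~0.333 (iPhone 6 Plus, reported DPR)

// In the browser, for example:
// element.style.borderWidth = hairlineWidth(window.devicePixelRatio) + "px";
```

Before iOS 8, a 0.5px border would have been rounded and rendered as a full CSS pixel (two physical pixels on Retina).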
SVG Fragment identifiers
SVG fragment identifiers are a way to link to one specific fragment or portion of an SVG instead of the root element. This feature allows us to sprite SVG images into one file, taking advantage of a single HTTP request and caching. It's similar to CSS sprites, but with SVG images and with ids instead of positions.
CSS Compositing and Blending
This new W3C spec describes how shapes from different elements are combined into a single image through new CSS attributes, useful for HTML and SVG: mix-blend-mode, isolation and background-blend-mode. The proposal was created by Adobe to bring some of the features of Adobe's products (such as Flash) to HTML5. On Adobe's website you will find examples and documentation.
From a quick test on Adobe's examples, it seems blending is working but compositing is not, though this requires further testing (examples and test suites are welcome).
CSS Shapes
CSS Shapes is another new spec for web designers from Adobe. It allows us to apply a shape to a float (the shape-outside property), which causes content next to the float to flow around the shape. There are other attributes as well to create complex shapes and fit content in or out of them.
Image Source Set
Apple originally defined the -webkit-srcset CSS function a while back, and now it is implemented on the HTML side for semantic images, so we won't need tricks to deliver images at different resolutions. iOS 8 supports the Image Source Set spec, which means the new srcset attribute on <img> elements.
Therefore on iOS 8 we can use:
<img src="lores.png" srcset="hires.png 2x, superhires.png 3x">
In this case, the iPhone 6 Plus will load the superhires file, the other Retina iPhones (including the 5s and the 6) the hires.png file, and all other devices (including those not supporting srcset at all) the lores.png file.
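The browser's density-based choice can be sketched roughly like this (a simplification for illustration; the real resolution-selection rules in the spec are more involved):

```javascript
// Pick the srcset candidate whose density descriptor best matches the
// device pixel ratio; the plain src acts as the implicit 1x fallback.
function pickSource(src, candidates, dpr) {
  let best = { url: src, density: 1 };
  for (const c of candidates) {
    if (Math.abs(c.density - dpr) < Math.abs(best.density - dpr)) best = c;
  }
  return best.url;
}

// Candidates mirroring the markup above: "hires.png 2x, superhires.png 3x"
const candidates = [
  { url: "hires.png", density: 2 },
  { url: "superhires.png", density: 3 },
];
console.log(pickSource("lores.png", candidates, 1)); // "lores.png"
console.log(pickSource("lores.png", candidates, 2)); // "hires.png"
console.log(pickSource("lores.png", candidates, 3)); // "superhires.png"
```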
Unfortunately, the <picture> element, the final piece of the Responsive Images spec that works together with srcset, is not supported on iOS yet.
HTML Template Support
Video
Video is important for Apple, and iOS 8 introduces new video features from a web perspective, including full-screen support, a metadata API and CSS layering.
I've seen reports saying that Media Source Extensions are now available on iOS 8, but I couldn't find them, so I won't say they're there yet.
Full Screen API for video elements
Safari doesn't support the Full Screen API, and iOS 8 is no exception. However, it's now available in one particular situation: on <video> elements. Therefore, you can request full-screen video playback after a user interaction, such as a click, for example using:
<input type="button" value="Go Full screen" onclick='document.querySelector("video").webkitEnterFullScreen()'>
Video Metadata API
Safari on iOS 8 supports the preload="metadata" attribute on <video> elements, which makes the browser emit loadedmetadata events that we can handle.
CSS layering for <video> elements means that we can now position other HTML elements on top of a playing video while working with an embedded player, mostly useful on iPad.
- ECMAScript 6 partial support, including Promises, Iterators, Maps, For-of, Weak Maps and more
- Scroll events (including onwheel, though it doesn't seem to do anything)
- Unprefixed Page Visibility API
- UPDATE 9/18: Thanks to a report from @alex_gibson, it seems that on some iPhone 5 devices the 300ms delay on click events was removed (only in Safari), but not in WebViews or in Safari on other devices with the same iOS 8 build number. The delay is still there at least on the iPod touch and iPads, and for other people on iPhone 5s as reported in this Twitter thread. I'll keep this post updated after further testing.
- window.doNotTrack was removed
- window.currentScript is available
False alarm on Push Notifications for iOS
Some of us got a big surprise reading "iOS Website Push Notifications" on the Apple website, but unfortunately it seems to be a bug on the website, because so far Safari accepts push notifications only on the Mac.
Bugs and problems
As we are used to with Safari and iOS releases, the good news is usually clouded by some bad news in the form of important bugs affecting users. These are the bugs I've seen so far. I'll update the list based on your input, so if you've found another bug, please use the comments section below.
- In WKWebView, local files can't be loaded, which is a big problem for hybrid apps, including Cordova-based apps, which will need to keep using the old UIWebView until this is solved.
- window.prompt sometimes does weird things, including crashing Safari
- Accessibility problem: VoiceOver doesn't work with text inputs and labels; basically, the label is no longer spoken when typing in a form (thanks @pauljadam for the report)
- Touch events inside iframes on home-screen apps are not being reported
- Timers and requestAnimationFrame callbacks are not executed after the phone sleeps in home screen apps.
- Some websites load at weird scroll positions, reported by @kpeatt; example: http://www.theory.com/luxe-loungewear/luxe-loungewear,default,sc.html (reason not clear yet)
Did you find anything else? New features, support or problems are accepted in the comments section below.
Image source: Daniel Villatoro
In this blog/article, I expand on the idea of ‘Small data’.
I present a generic model for Small data combining deterministic and predictive components.
Although I have presented the ideas in the context of IoT (which I understand best), the same algorithms and approach could apply to domains such as retail, telecoms, banking, etc.
We could have a number of data sets which may individually be small, but it is possible to find value at their intersection. This approach is similar to the mobile industry / foursquare scenario of knowing the context to provide the best service or offer to a customer segment of one. That's a powerful idea in itself and a reason to consider Small data. However, I wanted to extend the deterministic aspects of Small data (the intersection of many small data sets) by also considering the predictive aspects. The article describes a general approach for adding a predictive component to Small data, comprising three steps: a) a limited set of features is extracted, b) their dimensionality is reduced (e.g. using clustering), and c) finally we use a classification and recognition method such as Hidden Markov Models to recognize a higher-order metric (e.g. walking or footfall).
Last week, I gave an invited talk on IoT and Machine Learning at the Bigdap conference organized by the Ontic project. Ontic is an EU FP7 project doing some interesting work on Big Data and analytics, mainly from a telco perspective.
The audience was technical, and that was reflected in the themes of the event, for example: techniques, models and algorithms for Big Data; scalable data mining and machine learning techniques and mechanisms; Big Data security and privacy challenges; cleaning Big Data (noise reduction), acquisition and integration; multidimensional Big Data; and algorithms for enhancing data quality.
This blog post is inspired by some conversations following my talk with Daniel Villatoro (BBVA) and Dr Alberto Mozo (UPM/Ontic). It extends many of the ideas and papers I referenced in my talk.
In his talk, Daniel referred to "small data" (image from slides used with permission). In this context, as per the slide, Small data refers to the intersection of various elements (customers, offers, social context, etc.) in a small-retailer context. Small data is an interesting concept and I wanted to explore it more, so I spent the weekend thinking about it.
When you have the data elements, the concept of Small data is deterministic. It is similar to the mobile industry / foursquare scenario of knowing the context to provide the best service or offer. Thus, given the right data sets, you can find value at their intersection. This works even if the individual data sets are small, as long as you find enough intersecting data sets to create a customer segment of one at their intersection.
That’s a powerful idea in itself and a reason to consider Small Data.
However, I wanted to extend the deterministic aspects of Small data (the intersection of many small data sets) by also considering the predictive aspects. With the predictive aspects, we want to infer insights from relatively limited data sets.
In addition, I was also looking for a good use case to teach my students @citysciences. Hence, this blog post will explore the predictive aspects of Small data in an IoT context.
I believe the ideas I discuss could apply to any scenario (e.g. retail or banking) and indeed also to Big Data sets.
The examples I consider below strictly apply to Wireless Sensor Networks (WSNs). WSNs differ from IoT in that there is potentially communication between the nodes: the topology of a WSN can vary from a simple star network to an advanced multi-hop wireless mesh network, and the propagation technique between the hops can be routing or flooding. In contrast, IoT nodes do not necessarily communicate with each other in this way. But for our purposes the examples are valid, because we are interested in the insights inferred from the data.
Predictive characteristics of Small data
From a predictive standpoint, I propose that Small data will have the following characteristics:
1) The Data is missing or incomplete
2) The data is limited
3) Alternatively, we have large data sets which need to be converted into a smaller data set to make them more relevant (e.g. to a small retailer) to the problem at hand
4) The need for inferred metrics i.e. higher order metrics derived from raw data
This complements the deterministic aspect of Small data, i.e. finding a number of data sets and identifying the value at their intersection, even if each data set itself may be small.
So, based on the papers referenced below, I propose three methodologies that can be used for understanding Small data from a predictive standpoint:
1) Feature extraction
2) Dimensionality reduction
3) Feature Classification and recognition
To discuss these in detail, I use the problem of monitoring physical activity for assisted-living patients. These patients live in an apartment monitored in a privacy-aware manner: we use sensors and infer behaviour from the sensor readings, while still protecting the patient's privacy.
The papers I have referred to are (also in my talk):
- Activity Recognition Using Inertial Sensing for Healthcare, Wellbeing and Sports Applications: A Survey – Akin Avci, Stephan Bosch, Mihai Marin-Perianu, Raluca Marin-Perianu, Paul Havinga University of Twente, The Netherlands
- Robust location-aware activity recognition: Lu and Fu
This is a "small data" problem because we have limited data, some of it is missing (not all sensors can be monitoring at all times), and we have to infer behaviour from raw sensor readings. We will complement this with the deterministic interpretation of Small data (where we accurately know a reading).
Small data: Assisted Living Scenario
Source: Robust Location-Aware Activity Recognition Using Wireless Sensor Network in an Attentive Home, Ching-Hu Lu, Student Member, IEEE, and Li-Chen Fu, Fellow, IEEE
In an assisted-living scenario, the goal is to recognize activity based on the observations of specific sensors. Traditionally, researchers used vision sensors for activity recognition, but that is very privacy-invasive. The challenge is thus to recognize human behaviour based on raw readings and activity from multiple sensors. In addition, in an assisted-living system the subject being monitored may have a disorder (for example a cognitive disorder or a chronic condition).
The techniques presented below could also apply to other scenarios, e.g. detecting quality of experience in telecoms, or in general any situation where we have to infer insights from relatively limited data sets (e.g. footfall).
The steps/methods for retrieving activity information from raw sensor data are: preprocessing, segmentation, feature extraction, dimensionality reduction and classification
In this post, we will consider the last three i.e. feature extraction, dimensionality reduction and classification. We could use these three techniques for situations where we want to create a predictive component for ‘small data’
Small data: Extracting predictive insights
In the above scenario, we could extract new insights using the following predictive techniques (even when we have little data):
1) Feature extraction
Feature extraction takes raw data readings as input and finds the main characteristics of a data segment that accurately represent the original data. The resulting smaller set of features can be described as abstractions of the raw data. The purpose of feature extraction is to transform large quantities of input data into a reduced set of features, represented as an n-dimensional feature vector, which is then used as input to a classification algorithm.
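As a hedged illustration (the choice of features and the sample values are mine, not from the cited papers), feature extraction over a window of raw sensor readings might look like this:

```javascript
// Turn a window of raw readings (e.g. accelerometer magnitudes) into a
// small vector of summary-statistic features. The features chosen here
// (mean, std, min, max) are a common but illustrative selection.
function extractFeatures(readings) {
  const n = readings.length;
  const mean = readings.reduce((a, b) => a + b, 0) / n;
  const variance = readings.reduce((a, b) => a + (b - mean) ** 2, 0) / n;
  return {
    mean,
    std: Math.sqrt(variance),
    min: Math.min(...readings),
    max: Math.max(...readings),
  };
}

const f = extractFeatures([1, 2, 3, 4]); // made-up sensor window
console.log(f.mean, f.min, f.max); // 2.5 1 4
```

The resulting feature vector (here 4-dimensional) is what gets passed on to the dimensionality-reduction and classification steps that follow.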
2) Dimensionality Reduction
Dimensionality reduction methods aim to increase accuracy and reduce computational effort. By reducing the number of features involved in the classification process, less computational effort and memory are needed to perform the classification. In other words, if the dimensionality of a feature set is too high, some features may be irrelevant and provide no useful information for classification. The two general forms of dimensionality reduction are feature selection and feature transformation.
Feature selection methods select the features which are most discriminative and contribute most to the performance of the classifier, in order to create a subset of the existing features. For example, SVM-based feature selection picks the most important features; one study concluded that five attributes were enough to classify daily activities accurately. K-means clustering is a method to uncover structure in a set of samples by grouping them according to a distance metric; k-means-based approaches rank individual features according to their discriminative properties and their co-relationships.
Feature transformation methods try to map the high-dimensional feature space into a much lower dimension, yielding fewer features that are combinations of the original features. They are useful in situations where multiple features collectively provide good discrimination but individually would provide poor discrimination. Principal Component Analysis (PCA) is a well-known and widely used statistical analysis method that can be used to transform the original features into a lower-dimensional space.
3) Classification and Recognition: The selected or reduced features from the dimensionality reduction process are used as inputs for the classification and recognition methods.
For example, Nearest Neighbor (NN) algorithms classify activities based on the closest training examples in the feature space (e.g. the k-NN algorithm).
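A minimal 1-NN classifier over extracted feature vectors, as an illustrative sketch of this step (the training labels and feature values are made up, not from the cited papers):

```javascript
// Euclidean distance between two feature vectors of equal length.
function euclidean(a, b) {
  return Math.sqrt(a.reduce((sum, v, i) => sum + (v - b[i]) ** 2, 0));
}

// 1-nearest-neighbor: return the label of the closest training example.
function nearestNeighbor(training, sample) {
  let best = null;
  let bestDist = Infinity;
  for (const t of training) {
    const d = euclidean(t.features, sample);
    if (d < bestDist) { bestDist = d; best = t.label; }
  }
  return best;
}

// Hypothetical training set: [mean, std] of accelerometer windows.
const training = [
  { features: [0.1, 0.2], label: "sitting" }, // low mean, low variance
  { features: [2.0, 1.5], label: "walking" }, // higher mean and variance
];
console.log(nearestNeighbor(training, [0.2, 0.3])); // "sitting"
console.log(nearestNeighbor(training, [1.8, 1.4])); // "walking"
```

A full k-NN would vote over the k closest examples instead of just the single nearest one.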
Naïve Bayes is a simple probabilistic classifier based on Bayes’ theorem which can be used for Classification.
Support Vector Machines (SVMs) are supervised learning methods used for classification. In the assisted-living scenario, an SVM-based activity recognition system using objects tagged with sensors can recognize drinking, phoning, and writing activities.
Hidden Markov Models (HMMs) are statistical models that can also be used for activity recognition. In my talk I used a simple analogy to explain Hidden Markov analysis, from a paper that used an HMM to infer temperatures in the distant past from tree ring sizes.
Gaussian Mixture Models (GMMs) can be used to recognize transitions between activities
Artificial Neural Networks can also be used to detect occurrences, e.g. falls.
Thus, we get a scenario as below
(Adapted from Activity Recognition Using Inertial Sensing for Healthcare, Wellbeing and Sports Applications: A Survey)
Small Data: Complementing the Deterministic with the Predictive
Small data can be a deterministic problem when we know a number of data sets and the value lies at their intersection. This strategy is possible with mobile context-based services and location-based services. The results so achieved can be complemented by a predictive component of Small data.
In this case, a limited set of features is extracted, their dimensionality is reduced (e.g. using clustering), and finally we use a classification and recognition method such as Hidden Markov Models to recognize a higher-order metric (e.g. walking, retail footfall, etc.).
I believe these ideas could be adapted to many domains. Data science is an engineering problem. It's like building a bridge where there is no fixed solution in advance: every bridge is different and presents a unique set of challenges. I like the blog post "Machine Learning is not a Kaggle competition", in which the author (Julia Evans) correctly emphasizes that we need to understand the business problem first. So I think the above approach could apply to many business scenarios, e.g. retail (footfall), healthcare, airport lounges, etc., by inferring predictive insights from data streams.
ApplePay is expected to start in October 2014 – Docomo’s Osaifu-keitai wallet phones started on July 10, 2004. Click blue arrow below to watch video: Read more details here: ApplePay vs Japan’s Osaifu-Keitai In business the first-comer does not always win the game Japan’s NTT-Docomo tested two types of wallet phones, manufactured by Panasonic and […]
September 17, 2014
When I moved to Cologne 5 years ago I upgraded from a 6 Mbit/s down / 384 kbit/s up ADSL line to a 25 Mbit/s down / 5 Mbit/s up VDSL line and it felt really fast. It still does, well, sort of. That's because I could recently benchmark a 1 Gbit/s Fiber to the Home (FTTH) line in France and the results are nothing short of breathtaking.
When benchmarking such a connection it's necessary to have a server on the other end that can actually deliver such high speeds, a transit/peering connection of the fiber operator that is broad enough and a device at home that can handle data at such a high speed as well. As I couldn't benchmark the fiber link in person, I prepared a Banana Pi to be my remote test laboratory. A Raspberry Pi would not have done the job, as it 'only' has a 100 Mbit/s Ethernet port and its processor can handle data transfer speeds of about 30 Mbit/s. The Banana Pi, on the other hand, has a Gbit Ethernet port, and when I tested data transfers to and from a local server before shipping it to France I could reach speeds of 80 MB/s, i.e. 640 Mbit/s. That's not the full gigabit/s the Ethernet port is capable of, but to get a feeling for the fiber line it's a good start.
To access the Banana Pi remotely, I prepared it to automatically establish an SSH TCP port forwarding connection to my virtual server on the net with a public IP address. Via this little detour I could connect back to the Banana Pi despite it being behind a NAT. To test up- and download speeds I used curl with HTTP up- and downloads. The results are breathtaking. In the downlink direction I could reach average speeds of 33 MByte/s, that's around 264 Mbit/s. A "small" 160 MB Linux distribution downloads in 6 seconds; that's more than 10 times the speed of my VDSL line at home. In the uplink direction I could reach speeds of around 6 MByte/s, i.e. 48 Mbit/s, which is also 10 times more than what my VDSL line can do. I ran the tests at 10 in the morning, in the evening during the busiest hours and also at 4 o'clock in the morning, and always got the same results.
So which part is the bottleneck: the fiber line, the peering/transit link or the server on the other end? To find out, I ran two downloads simultaneously from two different servers, one connected to the French network via Level 3 and another connected via the German Internet Exchange (DECIX). With this setup I still got an aggregated 33 MByte/s. This means the fiber link into the home was the limiting factor, as otherwise I would have seen a higher aggregated speed.
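For reference, the unit arithmetic behind these figures is just bytes-to-bits conversion:

```javascript
// 1 byte = 8 bits, so MByte/s * 8 = Mbit/s.
function mbytesToMbits(mbytesPerSec) {
  return mbytesPerSec * 8;
}

console.log(mbytesToMbits(33)); // 264 Mbit/s measured downlink
console.log(mbytesToMbits(6));  // 48 Mbit/s measured uplink
console.log(mbytesToMbits(80)); // 640 Mbit/s local Banana Pi test
```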
It's pretty amazing what a fiber line directly to the home can do today and it also shows quite clearly that the copper cable to homes won't be able to compete for much longer in areas where fiber gets deployed.
September 16, 2014
Last year, I wrote 8 Guidelines and 1 Rule for Responsive Images based on some consulting work we had done for a client with over 800,000 images on their site.
In preparation for An Event Apart Austin, I decided to revisit the guidelines and see if they still applied in light of browsers implementing the picture specification.
In particular, I was curious how much caution we should take when implementing solutions for responsive images. Last year, I wrote:
The one and only rule for responsive images: Plan for the fact that whatever you implement will be deprecated
Is that rule dated with the browsers standardizing on the picture specification?
I asked for some feedback from the responsive images community group on the risk of the specification changing and how much we should be hedging our bets.
It is also normal that the first shipping implementations are not perfectly compliant with the spec. For instance they might have implemented a slightly out of date algorithm and missed that something was changed, or simply have bugs. Then it is fixed in a future version and that might break your code if you only tested in one implementation.
This is no different from any other feature that is shipped on the Web. To avoid issues, test in multiple implementations and validate.
Should we still be hedging our bets a little?
No, that’s not necessary.
Now, a couple of people on the list responded that they have large sets of images on the sites they manage and that centralizing image handling and markup still makes sense. So perhaps it isn't a rule, but it is still an idea you should consider based on the scope of the site and the number of images involved.
I'll leave the final word on the matter to Marcos Cáceres, who played a critical role in the picture specification and works on Firefox; he reassured me with these words:
Once it gets into the wild and people start using it, it can’t change. Thems is the golden rule of the Web.
Spec is stable and the browsers are coming this month – go forth and <picture> all the things! Make the web beautiful again :)
As Marcos says, go forth and <picture> all the things!
September 15, 2014
What can we learn from 10+ years of mobile payments in Japan? Apple Pay mobile payments are expected to start in October 2014 Japan’s Osaifu keitai mobile payments started on July 10, 2004, after public testing during December 2003 – June 2004 Two different types of Docomo‘s “Osaifu-Keitai“, manufactured by Panasonic and by SONY, were […]
September 14, 2014
Ever since I got my first Raspberry Pi I have wondered how much power it really requires in my standard configuration, i.e. with only an Ethernet cable and an SD card inserted. Recently I got myself a USB power measurement tool to find out. As you can see in the picture on the left, the Raspberry Pi draws a current of 400 mA with the OS up and running and idle. With a measured USB voltage of 5.4 V, the resulting power consumption is 2.16 Watts. At an efficiency of 90% for the power adapter itself, the total power consumption is therefore around 2.4 Watts.
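The arithmetic, spelled out as a quick sketch:

```javascript
// P = I * V at the USB connector.
function powerWatts(currentAmps, volts) {
  return currentAmps * volts;
}

// Account for the adapter's efficiency to get the draw from the wall socket.
function wallPowerWatts(deviceWatts, adapterEfficiency) {
  return deviceWatts / adapterEfficiency;
}

const device = powerWatts(0.4, 5.4);      // 400 mA at 5.4 V -> 2.16 W
const wall = wallPowerWatts(device, 0.9); // ~2.4 W including adapter losses
console.log(device.toFixed(2), wall.toFixed(1)); // "2.16" "2.4"
```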