
September 01, 2015

Cloud Four Blog

The feature phone era of TVs ends next week

Three years ago after writing Head First Mobile Web with Lyza, I was burnt out on mobile and wanted to do something different. So I decided to start researching the web on TVs.

TVs seemed like the furthest thing from mobile devices. Before his death, Steve Jobs had told his biographer, Walter Isaacson, that he had “finally cracked” TVs. Rumors were flying that Apple was getting into TVs in a big way.

Photo of TV

So I set off to find out what it might be like to design and build for the web on TVs. I’ve given a few talks on TVs and it has shaped the way I look at the web.

But I’ve never found the time to write about it. With Apple rumored to release its new Apple TV next week, it seemed like the right time to share what I’ve learned.

When I’m looking at TVs and what is possible on them, it doesn’t matter much to me whether the features are being provided by a set-top box, game console or the TV itself. I realize that distinction matters when someone is making a purchasing decision.

I’ve focused on Smart TVs because Anna Debenham has done extensive testing on game consoles already and less is known about Smart TVs. But most of what I’ve found still rings true when I’ve tested game consoles and set-top boxes.

No one knows what they’re getting when they buy a Smart TV

The first thing I learned about TVs is how incredibly difficult it is to find TVs that you can test on. Best Buy has walls of TVs, but only three of them had accessible remote controls and were on the Internet.

I fared a little better at Fry’s where the TVs had remote controls, but no Internet. So I tethered the TVs to my phone and watched them start downloading long-overdue software updates:

One of the many TVs that updated firmware using my phone's connection

Eventually, I found a local store, Video Only, that had TVs on WiFi. They’ve been great. I’ve returned every year bearing a box of donuts and a series of tests to conduct.

I drove to several stores before I found one that made it easy to test the smart features of these TVs. Average consumers won’t do this. They have no idea what their Smart TV can do, nor how well it does it, before they buy it.

Smart TVs are computers, but no one sells them that way

The discovery happened by accident.

I was at Best Buy using the only TV that was on the Internet and had a remote. I was digging around in some of the Smart TV settings when I happened to notice a tiny progress bar in the lower right corner:

Surprising storage usage indicator

Wait… what? This TV has a storage limit?

Of course it makes sense that the TV would have a storage capacity. This surprised me because I had not seen a single sales tag or spec sheet list the TV’s storage capacity.

Even today, TV manufacturers will brag about their app stores and tout their fancy octa-core processors, but they don’t list basic storage capacity.

Good luck even finding the name of the operating system they are using, let alone what the current version is.

These TVs are computers. They have downloadable apps. They have various CPU speeds. They have storage limits.

And yet, not a single store or manufacturer sells them as if they were computing devices. I’m not saying TVs should be sold like a Windows machine, but there is no difference between how TVs are sold now and how they were sold in the ’80s.

There is no Smart TV market

Because I spent hours testing TVs in the stores, I was able to observe dozens of people shopping for TVs. In all that time, I never heard a single person ask specifically for a Smart TV.

They asked if the TV supported Netflix.

Remote control with Netflix button

Sometimes they would ask about Hulu or Amazon Video. But they’d never dive deep into the Smart TV features. Even at Video Only where the TVs were on WiFi, only a small percentage of people would ever check out the Smart TV menus.

So while it is difficult to find TVs that don’t have some Smart TV capabilities, I don’t believe you can have a Smart TV market when no one knows what they’re buying and no one is asking for Smart TV features.

Web rendering on TVs is surprisingly capable

I’m going to go into more detail about what I found when testing browsers on TVs another day, but the short version is that the rendering engines on Smart TVs are generally equivalent to those of same-era iPhones.

Both the 2015 LG and Samsung TVs have higher scores on HTML5Test than my iPhone 6 running iOS 8.4.1 does.

2015 LG TV scores 495 on HTML5Test.com

That’s not to say that the web browsing experience is necessarily good. It can be clunky, especially if the remote only features a d-pad. But in general, the problem isn’t the rendering engine.

Input remains the biggest challenge

The moment you move away from changing channels and start interacting with the Smart TV functionality, input becomes the biggest challenge. Remote controls are cumbersome and crude input devices.

Over the years I’ve seen TV manufacturers try all sorts of ways to make remote controls better including:

Sony Google TV remote featuring keyboard on back, touch pad on front, and every button you could ever want.

  • Full keyboards
  • Miniature keyboards
  • Motion detection
  • Gesture
  • Touchpads
  • Voice control
  • Smartphone apps

To their credit, TV manufacturers keep looking for ways to make input work better. But no one has cracked it yet.

The Feature Phone Era of TVs

The more I studied TVs, the more I was struck by the similarity between the TV market of today and the phone market before the iPhone was released.

Timeline of phones showing how they changed after iPhone release

Before the iPhone came out, nearly every phone manufacturer had their own operating system. It was often hard to know what the operating system was called and what version you were using.

Input was difficult, slow and frustrating. People advised those who built for phones to keep the mobile context in mind.

Companies touted the ability to install apps and browse the web, but the experience was terrible so few used those features.

The phone manufacturers believed they had a mature market and that they understood what people wanted from their phones. They laughed at the optimism for the iPhone, saying that browsers had been on phones for years and no one used them.

There are echoes of each of these in the current TV landscape. And once again, Apple stands poised to enter the market in a big way, led by what sounds like an innovative form of input.

Will Apple pull it off? I don’t know.

But I can tell you one thing: I’m ready for the feature phone era of TVs to end.


by Jason Grigsby at September 01, 2015 11:20 PM

August 28, 2015

Cloud Four Blog

Mobile Web Accessibility and Design Beyond Our Devices

Responsive Field Day is only a few short weeks away, yet we’re still announcing awesome new topics. This week is no exception, so let’s get to it!

Marcy Sutton: Mobile Web Accessibility for Developers

We are web developers creating responsive websites and hybrid mobile apps with our HTML, CSS and JavaScript skills. How can we ensure our mobile experiences are accessible to all people, including those with disabilities? In this talk, we’ll discuss mobile web accessibility fundamentals, pain points and development tips you can use in your next project.

Ethan Marcotte: Design Beyond Our Devices

In this age of device diversity, we’ve been focusing less on pages, and more on patterns: reusable bits of design and content we stitch together into responsive design systems. But those patterns bring puzzles: how should they adapt, and why? And how do we, well, design with them? Let’s look at a few answers to those questions, and start moving our design practices beyond the screens in front of us.

Responsive design, performance, accessibility, modern layouts, animation, pattern libraries, progressive enhancement… you will be seriously bummed if you aren’t at Revolution Hall on September 25. Tickets are still available, you should be there!

by Tyler Sticka at August 28, 2015 04:50 PM

August 27, 2015

Cloud Four Blog

Will someone do this on mobile?

A frequent question that product owners face is whether or not someone will do a particular activity on a mobile device.

In my experience, people often arrive at two answers based on context.

  1. When looking at your own product:
    Of course, no one would ever try to do this on a mobile device.
  2. When using someone else’s product:
    Of course, this should work on mobile.


Design accordingly.

by Jason Grigsby at August 27, 2015 05:17 PM

August 26, 2015

Brad Frost » Brad Frost | Web Design, Speaking, Consulting, Music, and Art

Death to Bullshit

I’m pleased to introduce Death to Bullshit, a site and blog that explore the themes of information, bullshit, and craft.

We’re bombarded by more information than ever before. With the rise of all this information comes a rise of the amount of bullshit we’re exposed to. Death to Bullshit is a rallying cry to rid the world of bullshit and demand experiences that respect people and their time.

I’ve wanted to make this site ever since I gave a talk on the topic a while ago. In fact, I’ve been publishing links to the blog for years now. But I had a vision I wanted to execute (namely, a bunch of ads floating parallax-style next to the content) that I never had the time to complete.

My brother Ian (@frostyweather) started working for me just three weeks ago, and I thought this site would be a good sandbox for him to learn the ropes of HTML, CSS, and JavaScript. And I think that experiment was successful. Ian rose to the challenge and made some pretty sophisticated stuff. In fact, I’m pretty sure he wrote more JavaScript in 2 weeks than I did in the first 5 years of my career.

The idea was to create a design that used the absolute bare minimum of styling, which is a bit of commentary on the over-designed nature of a lot of sites these days.

Death to Bullshit without bullshit mode on

But the real fun begins when you click the “Turn bullshit on?” link.

Death to Bullshit with bullshit mode on

It’s got everything modern Web experiences have and then some.

  • Carousel full of banner ads? Check.
  • Fixed-positioned elements that you can’t dismiss? Check.
  • Annoying popup that forces you to either like a Facebook page or admit you are a racist? Check.
  • QR code? Of course.
  • Hideous clickbait floating in a parallax stream alongside the content? Check.
  • Infinite scroll of even more clickbait? Check.
  • Social widgets begging you to share? Check.

And on and on it goes.

My hope is that this site will lead to conversations about how we can better create experiences that respect users and their time. From the site:

As the landslide of bullshit surges down the mountain, people will increasingly gravitate toward genuinely useful, well-crafted products, services, and experiences that respect them and their time. So we as creators have a decision to make: do we want to be part of the 90% of noise out there, or do we want to be part of the 10% of signal? It’s quite simple really:

  • Respect people and their time.
  • Respect your craft.
  • Be sincere.
  • Create genuinely useful things.

 

by Brad Frost at August 26, 2015 04:00 PM

Cloud Four Blog

Responsive Images 101, Part 10: Conclusion

Phew. We made it! We’ve come to the end of the Responsive Images 101 series.

Before we part ways with this series, I want to pass along some tips, resources and final thoughts on where responsive images are heading next.

Responsive image audits

Were this a book, I would have devoted a chapter to responsive image audits. It is the first thing we do when we start looking at how to convert a site to responsive images.

And it is most likely your next step towards taking what you’ve learned and applying it to your sites.

Fortunately, I wrote about these audits in great detail recently. Instead of repeating it in the 101 series, I encourage you to read what I wrote about responsive image audits now. Consider that article Part 9a.

Compatibility

Browser support for responsive images standards is growing rapidly. As of August 2015, Chrome, Opera and Firefox all support picture, srcset, sizes, and type.

Microsoft Edge and Safari support srcset with display density (x) descriptors, but not the width descriptors. Microsoft has started development to support the full responsive images standard.

Apple hasn’t committed to supporting the standard yet, but Apple knows responsive images support is important and Yoav Weiss has been contributing to the WebKit implementation.

When it comes to image-set(), there is still a lot more work to be done.

Picturefill

But even if all the browsers supported the responsive images standards today, we’d still need a way to help older browsers understand the new syntax. That’s where the Picturefill polyfill comes in.

Picturefill will allow you to use the new responsive images syntax now.
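
If you go that route, one common pattern is to feature-detect native picture support and pull the polyfill in only when it is missing. Here is a minimal TypeScript sketch of that idea; the script path is an assumption, so point it at wherever you host picturefill.js:

// Load Picturefill only when the browser lacks native <picture> support.
if (!("HTMLPictureElement" in window)) {
  // Help older browsers treat <picture> as a real element before the polyfill runs.
  document.createElement("picture");

  const script = document.createElement("script");
  script.src = "/js/picturefill.js"; // hypothetical path
  script.async = true;
  document.head.appendChild(script);
}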

Automating your image processing

In Part 9, I said that humans shouldn’t be picking image breakpoints. Instead, we should have software doing this automatically for us.

I want to expand on this point and say that most of what we deal with when it comes to responsive images is not something that designers and developers should be thinking about on a regular basis.

The goal for most organizations should be to centralize image resizing and processing and automate as much of their responsive images as possible.

In my ideal world, a responsive images workflow for resolution switching would look something like this:

  • Where possible, use resolution independent SVG images.
  • When creating or modifying the design of templates, the template author provides the sizes attribute for the various images in the template.
  • The srcset attribute with width descriptors is inserted by the server, which does all of the heavy lifting of figuring out which image breakpoints to choose for each image (see the sketch after this list).
  • Content authors never worry about any of this. Their only responsibility is to upload the highest quality source available and let the image resizing service take care of the rest.
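
To make that division of labor concrete, here is a rough TypeScript sketch of the markup-generating piece. It is not any particular service’s API: the /images/{id}?w={width} URL scheme, the function name and the example breakpoints are all made up.

// Given an uploaded image and the breakpoint widths the resizing service chose
// for it, emit an <img> tag with width descriptors. The sizes value comes from
// the template author, as described above.
function responsiveImgTag(
  imageId: string,
  breakpointWidths: number[],
  sizes: string,
  alt: string
): string {
  const srcset = breakpointWidths
    .map((w) => `/images/${imageId}?w=${w} ${w}w`)
    .join(", ");
  const fallbackSrc = `/images/${imageId}?w=${Math.min(...breakpointWidths)}`;
  return `<img src="${fallbackSrc}" srcset="${srcset}" sizes="${sizes}" alt="${alt}">`;
}

// Example: the service settled on three breakpoints for a photo.
// responsiveImgTag("kettering", [320, 731, 990], "(min-width: 62em) 50vw, 100vw", "Morning in Kettering")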

This isn’t a far-fetched scenario. Many organizations already have image resizing services. And if your organization doesn’t, I maintain a spreadsheet of image resizing services and tools that you can consider (be sure to read the explanatory blog post as well).

And many content management systems are starting to look for ways to incorporate responsive images. The Responsive Images Community Group (RICG) maintains a WordPress plugin and they are currently looking at how to add it to WordPress core. Drupal 8 will ship with a responsive images module (more details).

The only thing these image resizing services need to add is support for figuring out how many image sources to supply for a given image and to output the proper markup for those image sources. They may not even need to worry about the markup if Client Hints takes off.

(More on Client Hints soon. This is a 201 topic.)

But regardless of how you automate it, I believe that centralizing image resizing and processing is essential to maintaining your sanity. When we talk to new companies exploring responsive images, one of the first things we assess is their image workflow and how much of it we can automate.

Future of responsive images

We’re just getting started when it comes to responsive images. We have thousands of sites to update to use the new image standards. Many organizations need to update how they handle images to centralize and automate what has until now been a manual process.

Even though there’s still a lot of work ahead of us, it feels like we’re finally on the downhill slope. We’re no longer struggling to find solutions that everyone can agree on. Implementations are landing in browsers. We have Picturefill to help us fill the gaps.

And now the wider web development community is beginning to look at how they are going to implement these new standards which means that we can start learning from each other.

If you’ve read this entire 101 series, you have everything you need to get started with responsive images. I can’t wait to see what you do with the new standards. Please share what you learn!

Thank you for reading.


Responsive Images 101 Series
  1. Definitions
  2. Img Required
  3. Srcset Display Density
  4. Srcset Width Descriptors
  5. Sizes
  6. Picture Element
  7. Type
  8. CSS Responsive Images
  9. Image breakpoints
  10. Conclusion

by Jason Grigsby at August 26, 2015 03:03 PM

August 25, 2015

Cloud Four Blog

Responsive Images 101, Part 9: Image Breakpoints

I’ve dreaded writing this installment of the Responsive Images 101 series. Selecting image breakpoints is something everyone will face, and frankly, I have no good answers for you.

But sooner or later, we will all face the image breakpoints koan. We might as well start now.

What are responsive image breakpoints?

In our responsive layouts, breakpoints refer to the viewport sizes at which we make changes to the layout or functionality of a page. These typically map to media queries.

Responsive image breakpoints are similar, but slightly different. When I think about image breakpoints, I’m trying to answer two questions:

  • How many image sources do I need to provide to cover the continuum of sizes that this image will be used for?
  • Where and when should the various image sources be used?

The answers to these questions lead to different breakpoints than the criteria we use to select breakpoints for our responsive layouts. For our layouts, we follow Stephen Hay’s advanced methodology: We resize the browser until the page looks bad and then BOOOOM, we need a breakpoint.

With the exception of art direction, the main reason why we need multiple image sources has nothing to do with where the images look bad. We want to provide multiple image sources because of performance concerns, different screen densities, etc.

So we can’t simply reuse our responsive layout breakpoints for our images. Or I guess we can, but if we do so, we’re not really addressing the fundamental reasons why we wanted responsive images in the first place.

Image breakpoints for art direction are relatively easy

In situations where the image falls under the art direction use case, the art direction itself will often tell us how many image sources we need and when they should be used.

If you think back to the Nokia browser site example, we can tell when the image switches from landscape to portrait mode. When that switch occurs, we know we’re going to need a new source image.

However, this may only be part of the picture. What if one of the art-directed images covers a large range of sizes? We may find that we still need multiple sources that don’t map to the art direction switch.

You can see an example of this in the Shopify homepage that we looked at in Part 8.

Shopify home page animated

Despite the fact that the image only has one major art direction change—from the full image to the cropped one—Shopify still provided six image sources to account for file size and display density.

<picture>
  <source srcset="homepage-person@desktop.png, homepage-person@desktop-2x.png 2x"       
          media="(min-width: 990px)">
  <source srcset="homepage-person@tablet.png, homepage-person@tablet-2x.png 2x" 
          media="(min-width: 750px)">
  <img srcset="homepage-person@mobile.png, homepage-person@mobile-2x.png 2x" 
       alt="Shopify Merchant, Corrine Anestopoulos">
</picture>

So knowing that an image falls under the art direction use case can give us some clues, but it doesn’t answer all of our questions about the necessary image breakpoints.

What about resolution switching breakpoints?

This is where things really get tricky. At least art direction provides us with some hints about how many image sources might be needed.

So long as we’re downscaling flexible images, they will always look good. We can’t rely on them looking bad to tell us when we need to change image sources.

Let’s take a look at a resolution switching example:

Photo of Michelle Obama at three sizes

In this example, we have a photo of Michelle Obama where the image in the page is 400 x 602 pixels for the current viewport size. The largest size that the image is ever displayed at is 2000 x 3010 pixels. That large file is 250K.

We can simply shrink that 2000-pixel image, and it will look good. But it would be unnecessarily large. It would be better if we provided a smaller version, like the 800 x 1204 image shown in the example. That image is only 73K.

We can all agree that when the image in the page is only 400 x 602 pixels in size, providing an image that is 800×1204 and 73K is better than having people download the largest version of the image.

But why stop at 800×1204?

Michelle Obama example with a fourth size at 600px wide

If we provided another image source that was 600×903 pixels wide, it would only be 42K. That saves us 31K (42%) from the 800×1204 image.

Well shoot. A savings of 42% is a big deal. Maybe we should keep going. 500 pixels wide? 450 pixels wide?

Photo of Michelle Obama with examples at 450 and 500 pixels wide

Each smaller image source offers the potential for substantial savings over the previous size. If we keep on this track, we eventually end up with an image source that is the exact size of the image in the page.

So here’s the question that has vexed me about image breakpoints. How do I know when an image source is too big for the size that the image is being used in the page?

The answer is that unless the image source matches exactly the size that the image is being displayed in the page, it is always going to be too big. There is always going to be an opportunity to optimize it further by providing a smaller image.

Why not provide the exact size of the image?

At this point, you may be wondering why we don’t simply provide images at the exact size they will be used in the page.

First, the whole point of flexible images in responsive design is to provide images that scale as the size of the viewport changes. If we provided images that were exactly the size used in the page, we’d likely need to download new images whenever the viewport size changes or the device was rotated.

Second, it is unrealistic to provide images at any size imaginable. Yes, we can dynamically resize images, but when we resize images, the server needs to do that work which slows down delivery of that image to the browser.

For this reason, most larger sites cache images on content delivery networks (CDN). Caching every image size possible on the CDN would be incredibly expensive.

Finally, the browser doesn’t know the exact size of the image in the page when it starts downloading. That’s what got us to new responsive images standards in the first place!

Possible ways to pick image breakpoints

As I mentioned at the beginning, I have no rock solid solutions for how to pick the number of image sources that you need. Instead, I want to describe some different ways of looking at the problem that may help inform your decisions.

Winging it (aka, matching your layout breakpoints)

Someone on your team says, “Hey, how many source images do you think we need for these product photos?”

You ponder for a moment and say, “Hmm… how about three? Small, medium and large.”

Don’t be ashamed if you’ve done this. I’m pretty sure almost every person working on responsive images has done this at some point.

Perhaps your organization still thinks about mobile, tablet and desktop, which makes small, medium and large seem sensible.

Or maybe you take a look at the range that the image will be displayed and make your best guess. Perhaps you simply look at the number of major layout breakpoints and decide to do the same for your image breakpoints.

I completely understand. And this is better than providing one huge image for all viewports.

But it sure would be nice to have more logic behind our decisions.

Testing representative images

If guessing doesn’t seem like a sound strategy, then let’s insert a little science into the art of picking image breakpoints. We can take a look at some representative images and figure out how many breakpoints they need.

The hardest part of doing this is determining representative images, or figuring out if you have any at all.

For some sites, all the photographs may have a particular style dictated by the brand. If that is the case, finding representative images is easy. Pick a few images and then resize them and save them at sizes ranging between the largest and the smallest images until you feel like you’ve got decent coverage.

Of course, if your site has a diversity of image styles, finding representative images can be nearly impossible.

Memory usage influencing the distribution of image breakpoints

Earlier this summer, Tim Kadlec gave a presentation on Mobile Image Processing. In that presentation, he took a look at the memory usage of flexible images in responsive designs.

What Tim showed is that as an image gets bigger, the impact of resizing an image gets larger.

Memory usage of two different images

In the example above, reducing a 600×600 pixel image by 50 pixels in each direction results in 230,000 wasted bytes versus the 70,000 wasted bytes caused by reducing a 200×200 image by 50 pixels in the exact same way.
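
Those numbers line up with what you would expect if decoded images cost roughly 4 bytes per pixel in memory. That per-pixel figure is my assumption (a typical RGBA bitmap), not something taken from Tim’s slides, but the arithmetic works out:

// Wasted bytes when a square source image is displayed 50px smaller in each
// direction, assuming 4 bytes per pixel for the decoded image (an assumption).
const bytesPerPixel = 4;
const wastedBytes = (sourceSide: number, displayedSide: number): number =>
  (sourceSide * sourceSide - displayedSide * displayedSide) * bytesPerPixel;

console.log(wastedBytes(600, 550)); // 230000
console.log(wastedBytes(200, 150)); // 70000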

Knowing this tells us a bit about how we should pick our breakpoints. Instead of spacing out breakpoints evenly, we should have more breakpoints as the image gets larger.

graph showing more breakpoints at large sizes

Unfortunately, while this tells us that we should have more breakpoints at larger sizes, it doesn’t tell us where those breakpoints should be.

Setting image breakpoints based on a performance budget

What if we applied the idea of a performance budget to responsive images? What would that look like?

We’d start by defining a budget for the amount of wasted bytes that the browser would be allowed to download above what is needed to fit the size of the image in the page.

So say that we decided that we had a performance budget of 20K for each responsive image. That would mean that we would need to make sure that the various sources that we’ve defined for the image are never more than 20K apart.
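
Here is a rough TypeScript sketch of how a tool might apply that budget. The measuring function is a stand-in for whatever resize-and-measure step your image pipeline provides; this is an illustration of the idea, not a real tool.

// Scan candidate widths and add a breakpoint whenever the next candidate would
// be more than the budget beyond the last chosen source, so that consecutive
// sources stay roughly budgetBytes apart.
type MeasureFn = (width: number) => number; // bytes for the image resized to that width

function pickBreakpoints(
  minWidth: number,
  maxWidth: number,
  budgetBytes: number,
  fileSizeAtWidth: MeasureFn,
  step = 10
): number[] {
  const breakpoints = [minWidth];
  let lastSize = fileSizeAtWidth(minWidth);

  for (let w = minWidth + step; w < maxWidth; w += step) {
    const size = fileSizeAtWidth(w);
    if (size - lastSize > budgetBytes) {
      breakpoints.push(w);
      lastSize = size;
    }
  }

  breakpoints.push(maxWidth);
  return breakpoints;
}

// pickBreakpoints(320, 990, 20 * 1024, measureJpeg) might return many widths for a
// visually busy photo and little more than the two endpoints for a simple graphic.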

When we do this, we find that the number of image breakpoints changes wildly based on the visual diversity of the image and the compression being used.

Let’s take a look at three sample images.

Times Square — 8 Image Breakpoints

Times Square

This image has a lot of visual diversity. The variations in colors and textures mean that JPEG’s lossy compression cannot do as much without damaging the image quality.

Because of this, there are eight image breakpoints—set at 20k intervals—between the smallest size of the image (320×213) and the largest size of the image (990×660).

Breakpoint #   Width   Height   File Size
1              320     213      25K
2              453     302      44K
3              579     386      65K
4              687     458      85K
5              786     524      104K
6              885     590      124K
7              975     650      142K
8              990     660      151K

Morning in Kettering — 3 Image Breakpoints

Morning in Kettering

Unlike the Times Square image, this image has a lot of areas with very similar colors and little variation. Because of this, JPEG can compress the image better.

On an image that can be compressed better, our 20K budget goes farther. For this image, we only need three image breakpoints to cover the full range of sizes that the image will be used at.

Breakpoint #   Width   Height   File Size
1              320     213      9.0K
2              731     487      29K
3              990     660      40K

Microsoft Logo — 1 Image Breakpoint

Microsoft Logo

This is a simple PNG8 file. At its largest size (990×660), it is only 13K. Because of this, it fits into our 20K budget without any modifications.

Breakpoint #   Width   Height   File Size
1              990     660      13K

Take a look at the other images on a sample page we created. See how the number of breakpoints varies even though all the images have the same resolution end points.

Now, I’m not suggesting that you manually decide on image breakpoints for every single image. But I can envision a future where you might be able to declare to your server that you have a performance budget of 20K for responsive images and then have the server calculate the number of image sources on a per image basis.

I’ve written in more detail about performance budgets for responsive images in the past. If you end up implementing this approach, please let me know.

Setting image breakpoints based on most frequent requests

At a recent Responsive Images Community Group (RICG) meeting, Yoav Weiss and Ilya Grigorik discussed a different way of picking image breakpoints based on the most frequently requested image sizes.

For both Yoav, who works at Akamai, and Ilya, who works at Google, one of the problems they see with multiple image sources is storing all of those sources on edge servers where storage is usually more limited and costs are higher.

Not only do companies like Akamai and Google want to reduce the number of images stored at the edge, but the whole purpose of their content delivery networks is to reduce the amount of time it takes for people to render a web page.

Therefore, if they can cache the most commonly requested image sizes at the edge, they will deliver the fastest experience for the majority of their users.

These organizations can tie their image processing and breakpoint logic to their analytics and change the sizes of the images over time if they find that new image sizes are being requested more frequently.

When combined with the new HTTP Client Hints feature that Ilya has championed, servers could get incredibly smart about how to store images in their CDNs and do so in a way that requires little to no decision-making by designers and developers.
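
As a loose sketch of the serving side of that idea (not anything Akamai or Google has described, just an illustration):

// Serve the smallest cached rendition that is at least as wide as the width the
// client reports (for example via a Width client hint), and count requests so
// that frequently requested sizes can be promoted into the edge cache later.
const requestCounts = new Map<number, number>();

function pickCachedWidth(requestedWidth: number, cachedWidths: number[]): number {
  requestCounts.set(requestedWidth, (requestCounts.get(requestedWidth) ?? 0) + 1);

  const bigEnough = cachedWidths.filter((w) => w >= requestedWidth).sort((a, b) => a - b);
  // Fall back to the largest cached rendition if nothing is wide enough.
  return bigEnough[0] ?? Math.max(...cachedWidths);
}

// pickCachedWidth(420, [320, 640, 1280]) -> 640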

Humans shouldn’t be doing this

I believe that in a few years time, no one will be talking about how to pick responsive image breakpoints because no one will be doing it manually.

Sure, we may still make decisions for images that fall into the art direction use case, but even then, we’re probably not going to make finite decisions about every image source. We’ll handle the places that require our intervention and let our image processing services handle the rest.

There is a ton of benefit to picking image sources based on either a performance budget or the frequency with which different sizes of the image are requested. But either of these solutions is untenable as part of a manual workflow.

In the future, our typical workflow will be that we upload the highest quality image source into our content management system or image processing system and never have to think about it again.

Part 10 of a 9-part series

This started as a 9-part series, but there is always more to discuss when it comes to responsive images. Read on for the conclusion of this series, where I’ll provide some essential resources and talk about what the future holds for responsive images.


Responsive Images 101 Series
  1. Definitions
  2. Img Required
  3. Srcset Display Density
  4. Srcset Width Descriptors
  5. Sizes
  6. Picture Element
  7. Type
  8. CSS Responsive Images
  9. Image breakpoints
  10. Conclusion

by Jason Grigsby at August 25, 2015 03:15 PM

August 24, 2015

London Calling

The power of #SocialSerendipity (again)

As many London Calling readers know, I am a big believer in the power of social serendipity.

My “one tweet that got me to IBM” story is something I often repeat to IBM Clients as well as audiences around the world.

Today while waiting to board my British Airways flight to New York from Heathrow, I noticed a tweet from JP Rangaswami, Chief Data Officer at Deutsche Bank (@Jobsworth) showing him stuck in the traffic on the approach to Heathrow Terminal 5.

Having experienced exactly the same view only moments before, I opined that perhaps he was also travelling with British Airways from Terminal 5.

Checking my Facebook feed (JP and I are friends on Facebook, and I have known him for a number of years, from when he was Chief Scientist at BT), I saw that JP had mentioned he was sipping tea before his flight to JFK.

Screenshot of JP’s Facebook post

 

What are the chances that he would be on the same flight, BA 117 to JFK?

Settling in on board, I realised JP was indeed on the flight, so I messaged him on Facebook, including my seat number.

Screenshot of my Facebook message to JP

 

 

We pushed back, and I lost data connectivity so had no idea if JP had received my message and seat number.

I did however receive a tweet from Connor Ogle (@cmogle) saying he was also on the flight.

Imagine my surprise when, a few hours into the flight, who should breeze through the dividing curtain but JP himself! He had cleared it with the cabin crew for us to stand near one of the emergency exits and catch up.

While JP and I were talking, one of his colleagues, Nick Doddy (Chief Innovation Officer for Deutsche Bank), stopped by and the three of us had an interesting chat. JP is such an amazing storyteller, and he recounted to Nick and me how he used the power of serendipity when he was working for a well-known consulting firm and trying to get the attention of one of the UK’s largest banks.

I will let JP tell his own story (if you haven’t met him yet then you need to) but it is all about being in the right place at the right time.

Social allows you to do just this as my post has shown.

Will you let #socialserendipity put you in the right place at the right time?

 


by Andrew Grill at August 24, 2015 05:01 PM

Brad Frost » Brad Frost | Web Design, Speaking, Consulting, Music, and Art

Unfinished Business 115 ‘Extreme tool anxiety’ with Brad Frost and Stephen Hay

I had a great time talking with Andy Clarke and Stephen Hay about tools, complexity, and making stuff for the Web. When Andy reached out to us about what to talk about on the show, I suggested this would be a great topic to tackle since Andy wrote a fantastic post a while ago and Stephen focuses a lot on simplicity in design and process.

I will say the podcast description language here is a little vague and maybe a little inflammatory:

…we get right down to it and discuss why it’s dangerous to bring computer science principles and heavy development tools into web design.

We end up explaining that computer science principles aren’t dangerous, but that there’s a time and place for translating things from the traditional programming world.

There’s a lot that can be learned from computer science principles that have been around forever. Like, Stephen, you were just talking about the single responsibility principle. That’s a great computer science principle which predates the Web by a long shot. The notion of object-oriented programming, that’s great and we should be adopting that stuff. But at the same time, there’s a lot of misconceptions about what the Web is, and what it all does, and how it all works at a very fundamental level, and it’s dangerous to have people coming in from some other traditional programming language and just go “I don’t understand why it’s not like Java, so let’s just rearrange everything and make it like Java.” That’s dangerous.

We end up talking about frameworks, buzzwords, and dealing with complexity. If you want to skip over all the stuff about mugs (which Andy insists is the best part) you can get to the meat of the conversation at 33 minutes.

by Brad Frost at August 24, 2015 02:56 PM

August 21, 2015

Cloud Four Blog

The Death of Progressive Enhancement(?) and Responsive Animation

We have two more exciting Responsive Field Day topics to announce this week including our first potential “hot drama”1 discussing the recent controversies over progressive enhancement.

Tom Dale: Progressive Enhancement is Dead, Long Live Progressive Enhancement

Some say that using a JavaScript framework means sacrificing core web principles—universal access, graceful degradation—in exchange for developer convenience. Are JavaScript peddlers like me the Pied Pipers of doom, leading the community astray from web righteousness? Let’s look at why devs are drawn to the benefits of JS frameworks—and discuss whether the rampant hand-wringing about progressive enhancement is deserved or a relic from an older age.

Val Head: Animation in Responsive Design

Animation often needs some space to move in, but with responsive design we know the space we have is ever-changing. Balancing those two factors can lead to some tricky design problems for web animation, especially when you throw performance and design concerns into the mix! Val will break down examples and show you how to design animations that work well at all viewport sizes without driving yourself crazy.

Responsive Field Day is now only five weeks away! Don’t miss your chance to hear Tom Dale, Val Head and our other fantastic speakers on Friday, September 25th in Portland’s Revolution Hall. Get your tickets while you can!


Footnotes
  1. “Hot drama” trademark Shop Talk Show. ;-)

by Jason Grigsby at August 21, 2015 09:29 PM

Dare to Repeat Yourself (At First)

It was mid-afternoon on a Wednesday when my team started finding strange bugs in older versions of Internet Explorer. At first these appeared to be unrelated… until we noticed seemingly random chunks of style appeared to be missing entirely. What was going on?

After some digging, we found the issue: Our project had exceeded old IE’s infamous CSS selector limit. Weeks prior, I’d lost an argument to resist including a sizable framework in the project. Mentally, I was already patting myself on the back. “I told you so,” I practiced saying in my own head.

Then I looked at the compiled CSS, and realized it was actually my fault.

Whoops.

I’d designed a custom interface element that was pretty complex. Because we were using Sass, I used some fancy mixins and loops to avoid repetition between a handful of breakpoint-specific modifiers. It was easy to read, maintain and modify.

It was so easy, in fact, that I failed to notice that the compiled CSS made up about 25% of the total project’s styles! Even more embarrassingly, I discovered that I could replicate the exact same functionality without most of the loops. I ended up reducing the selector count for that component from 1,207 to just 42 (seriously).

While it was great to find and fix the problem, it shook me up a little. Sass didn’t write crap code; I did. I was so focused on automating my repetitive solution that I hadn’t stopped to ask myself if it was even the right solution.

We recently started using PostCSS for a few of our projects. Every PostCSS feature is a plugin, which we include as needed. So far, we’ve yet to include plugins for nesting, mixins or loops.

Every time we’ve thought to include those features, we’ve instead found a simpler way to do the same thing. Nesting gives way to descendent class names, mixins become utilities, loops are questioned entirely. The initial pain of having to repeat ourselves motivates us to approach the problem in a different way. Repetitive selectors that survive this process are intentional, because a human being actually wrote them.

I know that’s probably silly. It’s definitely not DRY. But there’s a fine line between “smarter stylesheets” and “dumber designer.” Embracing painful repetition by nerfing my preprocessor (especially in combination with analysis tools like Parker) helps me draw that line.

by Tyler Sticka at August 21, 2015 05:11 PM

August 20, 2015

Brad Frost » Brad Frost | Web Design, Speaking, Consulting, Music, and Art

Eating Chicken

“There’s only one way to eat chicken: pick the damn thing up and eat it.” (Grandpa Frost)

by Brad Frost at August 20, 2015 03:27 PM

Martin's Mobile Technology Page

Updating Owncloud Has Become Really Scary

For the third time in a row I ran into major difficulties when upgrading Owncloud. I really love this project and I can't overstate its importance to me, but in its current state it won't be attractive to a lot of non-technical people, for whom a robust upgrade process is essential because they can't fix things that break through no fault of their own. So instead of just helping to fix the individual issues as I've done in the past, I've decided to describe my latest upgrade experience to an open audience in the hope that some people working on Owncloud realize the state the project is in and finally take some real countermeasures.

I run a pretty standard and small Owncloud instance on a standard Ubuntu 14.04 LTS server installation on an Intel platform. Nothing fancy: just a few users and only a minimal number of apps installed, such as calendar and contacts. I also have some of the built-in apps enabled, such as the PDF viewer, gallery and the 'inside document' search. That's it. So I would expect that when my Owncloud instance informs me with a nice yellow banner at the top that an update is available and that I should just press the update button, everything works smoothly. Boy, was I in for a surprise.

When upgrading from Owncloud 8.0.4 to 8.1.1 as suggested by the updater, the upgrade failed with an error message. A look at the issues list on Github revealed a mile-long thread of other people who had also gotten the message. Eventually I figured out that I just needed to reload the update page so the update would run through again, and this time it succeeded...

Next, all external apps such as calendar and contacts were disabled and had to be re-enabled. This is a 'nice' feature that Owncloud picked up along the way after version 6.0 and that nobody understands from a user point of view. From what I've seen, quite a number of people have voiced their concerns over this, and it's at the top of the wish list of things to change for an upcoming version.

Unfortunately, the calendar and contacts apps were no longer in the list of apps that can be activated. What!? After a lot of research I found out that this was related to an error message in the admin screen saying that Owncloud doesn't have access to the Internet. How so, I wondered; Internet access was working fine. Again I had a look at the Owncloud core issue list on Github and found a mile-long thread from other people who had the same issue. Somewhere in the middle of the thread I read that putting an SSL certificates file, which one can download from the web, into the Owncloud config directory fixes the issue. Unbelievable but true: it fixed the issue, there were no more complaints about missing Internet connectivity, and the calendar and contacts apps showed up again in the list of apps that can be activated.

The next challenge I met was that when activating the calendar in the app menu I got an update error message. Now what, again no further advice!? Clicking somewhere in the web interface popped up the system updater page again which informs me that the calendar is now updated. Unfortunately that fails again with a strange error message. Again, the issue list on Github tells me I'm not the only one and rinse, wash, repeat will fix the issue. I saw the updater page again when activating a number of other apps, this time fortunately without further issues after the initial error message.

I also had trouble getting the Lucene search app updated. When doing that via the "updater button" on the web interface, Owncloud completely fell over and wouldn't even show anything in the web browser anymore. Again, the issues list on Github told me I wasn't the only victim and that deleting the Lucene app directory on the server and re-installing the app fixes the issue. And indeed it did.

After that, Owncloud was finally working again for me. But I can't believe that a simple update results in such chaos!? Really, dear Owncloud community, of which I feel I'm a (frustrated) part, you have to get your act together and fix the update process. IT MUST NOT FAIL UNDER ANY CIRCUMSTANCES if you want this project to thrive in the future. Forget new features, forget slick UI changes, FIX THE UPDATE PROCESS...

And fix it to the point where an update via 'apt-get update' from the command line does everything, including re-activating and updating any Owncloud apps. Until that works, I won't even bother recommending Owncloud to my less technical friends, let alone installing instances for them.

by mobilesociety at August 20, 2015 06:55 AM

August 19, 2015

Martin's Mobile Technology Page

The Recovery Partition Is Key To Android Flashing

Every now and then I flash CyanogenMod on an Android device and as there are usually weeks or months between two sessions I frequently have to look up the instructions again. To complicate matters, the procedure for flashing an alternative Android on a device also depends on the device manufacturer. Altogether a very confusing procedure if you don't do it all the time. But things get much simpler once one has understood what lies at the core of the different procedures and all the steps required.

The Recovery Partition

No matter how different the procedures are, the core is always the same: When Android starts, the bootloader on any device has several choices it can make. Usually, the bootloader just boots Android. Some manufacturers have included a proprietary flashing software that the bootloader can run instead of starting Android when the user presses a certain key combination during power up. This is proprietary, however, and is not directly related to Android. And finally, the bootloader can also start the software in Android's recovery partition. And this recovery partition is what is used on all devices to install an alternative ROM from a zip file that contains the complete Android installation.

Software For The Recovery Partition

So the first trick is to get alternative recovery software onto the recovery partition. A well-known alternative recovery software is ClockworkMod (CWM). This is the key to all flashing procedures I have come across so far. Exactly how this can be achieved depends on the device and manufacturer, and procedures vary wildly. I'll give a couple of examples below. For now, the key thing to remember is that alternative recovery software such as CWM has to end up on the recovery partition in some way.

Getting The Alternative Android Software On The Device

Once the recovery software is on the recovery partition it can then be used to flash the alternative Android system into the system partition. Before that is done the device has to be wiped with the recovery software as configuration settings of the 'old' Android system might not be compatible with the new system. The second big task in flashing a new Android system is to get the zip file that contains the new Android system into the device so the recovery software can flash it. The easiest variant is to put it on a removable memory card. Another variant some devices support is to upload the ZIP file to the data partition of the device over USB while it is in recovery mode. One way or another, once the Android system ZIP file is on the device the alternative recovery software (e.g. CWM) is then used to install it. In the case of CyanogenMod I usually also transfer a second ZIP file to the device that contains a part of the Google software to be able to access the Play store. Transferring and flashing the second ZIP file works the same way as for the main Android system ZIP file.

So Why Are The Procedures Different For Different Device Manufacturers?

So far so good. In practice, however, different manufacturers have chosen different ways to configure the different Android storage partitions and startup procedures. Also, some companies such as Sony, for example, lock the bootloader and won't allow an alternative software on the recovery partition before the bootloader is unlocked. This is done via a code that can be obtained from their unlock website. The Samsung devices I have flashed so far do not require this step, their boot loaders are open. However, there seem to be some US network operators that have the bootloader of their device variants locked. On Sony devices the recovery software has to be sideloaded via Android debug tools. On Samsung devices, Samsung specific tools are required. And, not to forget, the key combinations to make the phone boot into the flashing software in Samsung's case or the recovery partition are also different on a device and manufacturer basis. Just like the keys to enter a PC's BIOS settings...

But in the end, all these different procedures have one goal: to get alternative recovery software onto Android's recovery partition. Keep that in mind, and the confusing multi-stage instructions for how to do it might seem a little bit less confusing.

 

by mobilesociety at August 19, 2015 06:36 AM

August 18, 2015

Brad Frost » Brad Frost | Web Design, Speaking, Consulting, Music, and Art

Surfacing Invisible Elements

There’s a National Geographic article that’s been stuck in my mind for many years now. It discusses how overfishing will destroy the richness and diversity of ocean life if we don’t radically alter our behavior. The post focuses on the decimation of the giant bluefin tuna, and the significant challenges around raising awareness about the issue.

If the giant bluefin lived on land, its size, speed, and epic migrations would ensure its legendary status, with tourists flocking to photograph it in national parks. But because it lives in the sea, its majesty—comparable to that of a lion—lies largely beyond comprehension.

In other words, out of sight, out of mind. The impact of humans on our oceans’ ecosystems goes by mostly unnoticed, largely because the entire marine world lives beneath a seemingly endless blue sheen of water. Our surface-level visibility into our planet’s underwater world leads to surface-level response to the crisis.

Tracking scripts and other invisible elements

Recently there have been a slew of posts regarding tracking scripts and their detrimental effects on web experiences. In addition to the larger privacy implications around tracking scripts, including these scripts can also have a damaging effect on the end user experience by bogging down the page with a load of unnecessary bullshit.

Despite the performance cost to web experiences, the business side of organizations (which can be susceptible to tracking vendors’ sales teams) pushes to heap on script after script in an effort to “track user engagement” and other such business speak. All of the Web teams I’ve ever talked to have been opposed to force-feeding this crap to their users, but they often throw up their hands and say, “But what can you do?”

Making the invisible visible

I do a lot of workshops with organizations large and small, and I’ll often have teams conduct an interface inventory to round up all the UI elements that make up their web experiences. It’s a fun-yet-sobering exercise that helps teams establish a common vocabulary and get excited about establishing a pattern library for their organization. While the exercise is really helpful at documenting visible UI elements like buttons, accordions, hero areas, etc, it’s also tremendously helpful for documenting invisible elements like animations and 3rd party scripts.

It’s extremely important to account for these invisible elements in our experiences, and that’s why I think it’s worth visualizing them in the context of a pattern library, right alongside the rest of the UI elements. Here’s what that could look like:

Hypothetical tracking scripts viewed in a pattern library

In order to save the bluefin tuna and the rest of the planet’s marine life, we need to make visible the underwater world that is in critical danger. That’s why aquariums, documentaries, articles, conventions, etc are so important for bringing awareness to the issue. In the land of a thousand tracking scripts, it’s up to us to make visible these tracking scripts to ensure the Web stays healthy and fast.

by Brad Frost at August 18, 2015 07:40 PM

August 17, 2015

London Calling

Internal collaboration expert Silvia Cambie joins @IBM Social Consulting

I am delighted to announce that Silvia Cambie joins IBM Social Consulting today.

Silvia joins a growing team of global practitioners with deep experience and expertise in social business. Having helped transform companies from “doing social” to becoming true social businesses, Silvia will help key IBM clients understand the huge business and people benefits of social.

I have known Silvia for some years, from when she was involved with the SMILE social business conferences in London, and I had the pleasure of speaking at one of the conferences she chaired. I am delighted that she has decided to join the Social Consulting team at IBM.

Her background includes financial journalism, reporting from Central and Eastern Europe, from a base in Prague in the early 1990s, for major German and British print media (including Euromoney and Handelsblatt), as well as external and internal communication for Brussels-based international trade associations.

She is a recognized public speaker and has addressed audiences worldwide including South Africa, Russia, Malaysia, Dubai, Chile, the US, Jordan, Saudi Arabia, Spain, France and the UK. Silvia has lectured at the Sorbonne University (Paris), London Metropolitan University and San Sebastian University (Santiago, Chile).

Silvia is also the author of “International Communications Strategy – Developments in cross-cultural communications, PR and social media” published by Kogan Page.

She speaks five languages: English, German, French, Italian and Czech!

Silvia is part of a push at IBM to recruit our social business A team, and you can expect many more strategic hires such as Silvia to be announced very soon.

Silvia, welcome to IBM. I am so glad you decided to join the team!


by Andrew Grill at August 17, 2015 05:26 AM

August 14, 2015

Cloud Four Blog

Meaningful metrics and designing globally: two more Responsive Field Day topics

By now, you’ve probably read about Responsive Field Day’s amazing speaker line up and may be wondering just what these paragons of responsive design are going to talk about. We’re happy to announce two more speaker topics:

Sophie Shepherd: Designing for a global audience (and how a pattern library can help)

The Ushahidi platform is a data collection tool that has been used by over 18 million people in 150 countries, and translated into 49 different languages. As a designer on Ushahidi, Sophie will talk about the challenges of designing for users around the world with varying devices and connections, languages, and digital experiences, and will explain how a pattern library has made this process easier.

Steve Souders: Metrics that matter

Until browsers add mind-reading event handlers, we have to search for an alternative way to measure how fast users think our sites are. For decades, the go-to number has been window.onload, but modern and responsive techniques have weakened the relevance of that metric. What are the metrics that DO have meaning for what your users are experiencing?

This summer has flown by and now Responsive Field Day is somehow happening next month! Don’t miss your chance to hear Sophie Shepherd, Steve Souders and our other fantastic speakers on Friday, September 25th in Portland’s Revolution Hall. Get your tickets while you can!

by Aileen Jeffries at August 14, 2015 10:00 PM

Martin's Mobile Technology Page

Old 'Byte' Magazine At Archive.Org To Experience The Past

Every now and then I read books about different aspects of computer history. Good books on the topic obviously rely heavily on original sources and interviews and weave them into a good narrative and summary. Digging deeper into specific aspects requires getting hold of the original sources. Fortunately, quite a few of them are available online now, such as scanned originals of 'Byte' magazine, which covered 'microcomputer' topics in great depth and technical detail from 1975 to 1998.

One issue I recently took a closer look at was from August 1985, i.e. exactly 30 years ago, as it contains a preview of the upcoming Amiga 1000. What I find very interesting when reading original sources is how new developments were perceived at the time and how they compared with existing technology. I had to smile, for example, when comparing the graphics and multitasking capabilities the Amiga introduced to Jerry Pournelle's ravings in the same issue about a program that 'can have WordStar, DOS, and Lotus 1-2-3 running all at the same time' on his text-based Zenith ZP-150 IBM-PC clone, which would otherwise only have single-tasked.

Obviously that's just one of millions of stories to be discovered. For your own enjoyment, head over to  Archive.org and begin your own journey through computer (and other) history :-)

by mobilesociety at August 14, 2015 09:36 AM

This Year's Holiday Network Setup And Data Use In Austria - 18 GB

2015 data usage during the Austria vacation

I've been in Austria again this summer for a couple of weeks, and thanks to Drei.at's unlimited data bundle for 18 Euros a month on a prepaid SIM I didn't have to artificially limit my data use. Last year I ran through 16 GB of data during my three-week stay, and I slightly topped that this time around with 18 GB because I was in the country for a couple of days longer.

I suspect it would have been much more, as I mostly streamed HD content this time around, if it weren't for the fact that, unlike last year, the sun was shining a lot and there was therefore less time for bandwidth-intensive applications, read: streaming video. While LTE is now ramping up in Austria as well, my prepaid SIM was still limited to their 3G network, but there was no congestion and the network felt snappy, with sustained data rates usually well beyond the 10 Mbit/s mark.

Like last year, I had a Raspberry Pi with me, this time a Pi 2, to serve as a Wi-Fi access point and VPN client gateway: the data traffic of all my devices was tethered through a Samsung Galaxy Express smartphone and tunneled to a VPN server before being released to the Internet. Yes, I like my privacy and I don't want to be DPI'd. Unlike last year, however, I didn't use my VPN server at home in Germany, which is limited to a throughput of 5 Mbit/s by my VDSL line's uplink speed. Instead, I used my VPN server in Paris, which is connected to a fiber line, to be able to make full use of the available 3G bandwidth.

For streaming videos to a tablet instead of to a PC like last year, I had a second Raspberry Pi with me so I could run another Wi-Fi network and VPN tunnel to another destination to cater for regional content limitation, i.e. geoblocking. I could of course have changed the VPN endpoint on the main Pi as necessary and I have scripts to do that on the fly, but a Pi is light and small enough to fit into the suitcase so why limit myself?

The usage graph screenshot is quite interesting. Unlike last year, when the curve was pretty much a straight line, the curve this year is a bit different. During the first week I was traveling through the country a lot and the curve is quite flat, i.e. little video consumption. The pattern changes quite a bit about a third into the trip, and one can plainly see the difference in Internet use with and without online video viewing.

I very much enjoyed the trip and wish there were more network operators like Drei in Austria that are a bit more generous with their mobile data buckets than just a couple of GB per month.

by mobilesociety at August 14, 2015 06:40 AM

August 13, 2015

Martin's Mobile Technology Page

Did You Know That The BIOS Can Contain Windows Executables?

Wow, I'm flabbergasted. Did you know that the BIOS can contain Windows executables that are run at every Windows system startup, that the user can't delete, and that won't even go away when you re-install Windows unless the feature is deactivated in the BIOS before re-installing?

I didn't until I read this Ars Technica security article about a well-known company that misused the 'feature', which is supposed to help with wipe-after-theft and other security scenarios, to deploy un-deletable crapware to buyers of its machines. Another one of those 'the road to hell is paved with good intentions' features that got hijacked. Am I glad that I'm running a Linux OS on all my machines, whose developers would never even dream of supporting such dangerous BIOS behavior.

by mobilesociety at August 13, 2015 05:08 PM

Chinese Companies Are Cutting Jobs Now, Too!

In the past 10 years I guess I wasn't the only one observing that lots of European and US tech companies and network equipment vendors were laying off people and selling off parts of their technology, and wondering how the story would continue. In many cases, Chinese companies picked up the parts, and at least in Europe quite a lot of people in the tech business are now working for Chinese companies. Companies regarded as 'Asian upstarts' by many in the industry only 10 years ago have become quite successful, often coming to market these days with new features well before the incumbent competition, especially in the LTE domain.

So far this has been a continuing trend, but I'm wondering if we are perhaps about to run into saturation here as well!? Today I saw the first article I can remember about a big Chinese tech company, Lenovo in this case, having a massive dip in earnings and thus announcing that it will fire 3,200 people, or about 5% of its employees, as a result. You can of course read this in many ways, or perhaps it's only an isolated case, but I wonder if the tech honeymoon in China is about to end as well.

by mobilesociety at August 13, 2015 04:49 PM