March 05, 2015

Brad Frost

Designing Pattern Flexibility

When discussing a pattern-based design workflow, one knee-jerk reaction I hear from visual designers is that a design system will stifle creativity, leaving everything looking homogeneous and bland. I think there is some validity to this concern, and this sentiment can’t simply be chalked up to failed-artist designers who always want to ignore constraints and paint outside the lines.

It’s essential to build flexibility into interface design systems in order to make dynamic sites that promote consistency while still looking great and meeting organizational needs.

Jason Grigsby recently published a fantastic article about responsive hero images and how challenging it can be to design high-impact marketing images for responsive sites. Jason explains many scenarios where art direction of hero areas requires careful consideration in responsive environments. The whole post is well worth a read, but what really stood out to me was this image:

[Image: Jason's diagram of the hero area as a box]

When designers express their concern about pattern libraries limiting creativity, I describe a box very similar to what Jason's drawn. It's not that designers can't color outside the lines and produce really high-impact designs in a pattern-based workflow; it's about figuring out where it's appropriate to do so.

A Time and a Place

Design systems should deliberately establish patterns that provide more creative and organizational freedom, and hero areas are great examples of these looser patterns. These patterns establish parameters around when and where it makes sense to deviate from the rest of the design system and create something unique.

These more flexible patterns also alleviate Special Snowflake Syndrome, where certain departments in the organization think that they have unique problems and therefore the rules of the design system don’t apply to them. By establishing patterns that have a little more wiggle room, you can appease those Special Snowflakes in those areas while leaving the rest of the design system intact and untouched.

Doin’ Business

You can divide interface design patterns into two categories: blank slate patterns, and doin’ business patterns.

Doin’ business patterns should remain consistent and cohesive across the entire site. Buttons, form elements, and other key interface patterns should feel the same across the entire experience. The majority of an experience should consist of doin’ business patterns if you’re to create a maintainable, scalable system.

Blank slate patterns, on the other hand, can and should feel unique, one-offish, bespoke even (I hate that word). It's in these areas where the organization can paint outside the lines and flex its editorial muscles.

BUT. Even though blank slate patterns can be unique, it’s still extremely helpful to try to create reusable patterns within those editorial areas. One client I’m working with has many editorial areas with a lot of high-impact imagery that changes on a near-daily basis. If they were to design and build unique designs every day, they’d burn out immediately. By establishing a handful of unique hero patterns, they can swap in new text and images and be sure it works across the entire resolution spectrum. And if they find their existing solutions don’t work for a particular scenario, they can establish yet another hero pattern, throw it on the pile, and make use of it later on down the road.

At the end of the day, design systems should help promote consistency while still providing design flexibility and accounting for organizations’ diverse needs.

by Brad Frost at March 05, 2015 07:29 PM

Where Atomic Design Fell Short

Despite the doom-and-gloom title, this is a good article explaining how important it is to settle on a vocabulary that makes sense for the team.

I will say one thing in defense of the naming structure of atomic design: atoms, molecules, and organisms have a built-in hierarchy to them. I'll discuss it further in my book, but my issue with terms like "modules" and "components" is that they don't carry any sense of hierarchy, and become these amorphous clusters of interface.

 

by Brad Frost at March 05, 2015 03:25 AM

March 03, 2015

Eurotechnology.japan

Japanese acquisitions in Europe total € 6 billion in 2015

Overcoming Japan's "Galapagos syndrome": globalizing Japan's high-tech industries. At the Bank of Kyoto's New Year celebration meeting, Japan's stagnation and need for globalization were the center of discussion – despite the focus on globalization, the present author was more or less the only non-Japanese invited and attending(!). The Chairman of Japan's industry federation KEIDANREN, Mr Sakakibara (Chairman … Continue reading Japanese acquisitions in Europe total € 6 billion in 2015

The post Japanese acquisitions in Europe total € 6 billion in 2015 appeared first on Eurotechnology Japan.

by fasol@eurotechnology.com (Gerhard Fasol) at March 03, 2015 06:38 AM

March 02, 2015

mobiForge blog

AngularJS to Opera Mini: You're just not worth it!

Google-backed AngularJS is a popular web application framework providing a client-side MVC architecture. It has been criticised in the past for its performance, particularly on mobile. As observed by Peter-Paul Koch, it's odd that Google was pushing a mobile-challenged framework back in 2012 when it must have been obvious that Android was going to be pretty important to it as a company. Perhaps those who knew weren't those who were pushing AngularJS.

by ruadhan at March 02, 2015 07:45 PM

March 01, 2015

Martin's Mobile Technology Page

Owncloud Benchmarking On A New Raspberry Pi 2 vs. The Original One vs. a NUC

A month ago, the new Raspberry Pi 2 was released and needless to say I couldn't wait to get my hands on one to see by how much the new quad core processor and the 1 GB of RAM would speed up Owncloud.

Back in September 2014 I ran a similar benchmark, comparing Owncloud on a Raspberry Pi Model B, a BananaPi and an Intel Celeron based NUC. While the BananaPi wasn't as fast as the NUC, it nevertheless ran Owncloud much more quickly than the original Raspberry Pi. With the new Pi 2's hardware specifications now closely matching or perhaps even exceeding those of the BananaPi (the Pi 2 has 4 processor cores vs. the BananaPi's 2), it was time for another benchmark.

The use cases I used for this benchmark are the same as those used previously. The results are not fully comparable, however, as I have upgraded Owncloud from version 7 to version 8 in the meantime. Also, I have upgraded the NUC from Ubuntu 12.04 to 14.04, and the Truecrypt container was replaced by a dm-crypt partition. And finally, I've decided to run all tests over Wi-Fi instead of over an Ethernet cable, as that's how I access my servers anyway.

An interesting thing to mention at this point is that after a software and kernel upgrade, my Raspbian/Owncloud image on an SD card runs on both the old and the new Pi. For the benchmark I used the same SD card in both Raspberry Pis, which excludes differences due to different software installations and flash speeds. The Owncloud installation is also identical on the NUC and the two Raspberry Pis, as I used rsync to copy the Owncloud installation (/var/www/owncloud) and Owncloud's data directory from the NUC to the SD card that I then used to boot both Raspberry Pis.

Login Test

As in the earlier benchmark, the first test was about how quickly I could access my Owncloud account after typing in username and password:

  • NUC: 3 seconds
  • Raspi 2: 5 seconds
  • Raspi: 18 seconds.

Displaying Address Book Entries

Getting my 300 address book entries out of the database and onto a web page took the following time on the three devices:

  • NUC: 5 seconds
  • Raspi 2: 9 seconds
  • Raspi: 26 seconds

Opening the Calendar

Entries from 5 different calendars are displayed on my calendar page, and the times until all entries were shown on the web page differ significantly:

  • NUC: 4 seconds
  • Raspi 2: 13 seconds
  • Raspi: 104 seconds

Picture Uploads

For this test I uploaded 28 JPG images, with a total file size of 71 MB, into a new folder on my Owncloud instance. After each picture was uploaded, a thumbnail was generated on the server and shown on the web page. And here's how the three systems fared:

  • NUC: 45 seconds
  • Raspi 2: 75 seconds
  • Raspi: 509 seconds

This is almost a 7 times speedup between the old and the new Raspberry Pi, due to the 4 CPU cores that are used simultaneously during the process. The two screenshots below (click to enlarge) show CPU usage on the old Raspberry Pi with a single processor and on the new Raspberry Pi with 4 processors.

[Screenshot: CPU usage on the old Raspberry Pi during the upload] On both systems, several Apache web server tasks are actively working on different parts of the multi-file upload. On the old Raspi they all have to share a single CPU core, and each can therefore only use around 20% of the CPU's capacity.

[Screenshot: CPU usage on the Raspberry Pi 2 during the upload] On the Raspberry Pi 2 the picture looks completely different. Instead of sharing a single CPU core, several Apache web server tasks run simultaneously and independently on several cores, as can be seen in the four bar graphs that represent CPU activity and the amount of processor time (CPU%) used by each. No wonder it is so much faster!

Show Link

The final test I ran was how long it takes to show the page with all the pictures I had just uploaded when it is accessed via a "sharing link". This is the typical "I take pictures, upload them to the cloud and then share them with others" scenario. As it takes some time to generate and display the thumbnails of the uploaded pictures, I have two results per system below: one for the page to show up in the web browser, and one for the time it takes until all thumbnails are loaded:

  • NUC: 2 seconds (7 seconds until thumbnails are loaded)
  • Raspi 2: 5 seconds (16 seconds until thumbnails are loaded)
  • Raspi: 14 seconds (85 seconds until thumbnails are loaded)

Summary

In all scenarios the new Raspberry Pi 2 ran significantly and noticeably faster than the old Pi. While it is not as fast as the much more expensive NUC system, I can fully recommend the new Pi as an Owncloud server for home use. It's still possible to run Owncloud on a previous-generation Pi, but it takes patience. As there is almost no price difference between the new and the old Pi, setting up an Owncloud server at home today on very inexpensive hardware yields much better results and usability than just a year ago. Back then I spent around 200 euros to move my Owncloud instance from an old Pi to a NUC; with the Pi 2 now available, I'm not sure I would do it again for that reason.

by mobilesociety at March 01, 2015 01:54 PM

Open Gardens

Book review: About Time Series Databases and a New look at Anomaly detection by Ted Dunning and Ellen Friedman

Introduction

This blog post is a review of two books, both available for free from the MapR site, written by Ted Dunning and Ellen Friedman (published by O'Reilly): Time Series Databases: New Ways to Store and Access Data and A New Look at Anomaly Detection.

The MapR platform is a key part of the Data Science for the Internet of Things (IoT) course at the University of Oxford, and I shall be covering these issues in the course.

In this post, I discuss the significance of time series databases from an IoT perspective, based on my review of these books. Specifically, we discuss classification and anomaly detection, which often go together in typical IoT applications. The books are easy to read, with analogies like HAL (from 2001: A Space Odyssey), and I recommend them.

 

Time Series data

The idea of time series data is not new. Historically, time series data could be stored even in simple structures like flat files. The difference now is the huge volume of data and the future applications possible by collecting this data, especially for IoT. These large-scale time series databases and applications are the focus of the book. Large-scale time series applications typically need a NoSQL database like Apache Cassandra, Apache HBase, or MapR-DB. The book's focus is on Apache HBase and MapR-DB for the collection, storage and access of large-scale time series data.

Essentially, time series data involves measurements or observations of events as a function of the time at which they occurred. The airline "black box" is a good example of time series data. The black box records data many times per second for dozens of parameters throughout the flight, including altitude, flight path, engine temperature and power, indicated air speed, fuel consumption, and control settings. Each measurement includes the time it was made. The analogy applies to sensor data. Increasingly, with the proliferation of IoT, time series data is becoming more common and universal. The data acquired through sensors is typically stored in time series databases. A TSDB (time series database) is optimized for queries based on a range of time.

 

Time series data applications

Time series databases apply to many IoT use cases, for example:

  • Trucking, to reduce taxes according to how much trucks drive on public roads (which sometimes incur a tax). It’s not just a matter of how many miles a truck drives but rather which miles.
  • A smart pallet can be a source of time series data that might record events of interest such as when the pallet was filled with goods, when it was loaded or unloaded from a truck, when it was transferred into storage in a warehouse, or even the environmental parameters involved, such as temperature.
  • Similarly, commercial waste containers, called dumpsters in the US, could be equipped with sensors to report on how full they are at different points in time.
  • Cell tower traffic can also be modelled as a time series, and anomalies like flash crowd events can be detected to provide early warning.
  • Data center monitoring can be modelled as a time series to predict outages and plan upgrades.
  • Similarly, satellites, robots and many more devices can be modelled as sources of time series data.

From these readings captured in a time series database, we can derive analytics such as:

  • Prognosis: What are the short- and long-term trends for some measurement or ensemble of measurements?
  • Introspection: How do several measurements correlate over a period of time?
  • Prediction: How do I build a machine-learning model based on the temporal behaviour of many measurements correlated to externally known facts?
  • Introspection: Have similar patterns of measurements preceded similar events?
  • Diagnosis: What measurements might indicate the cause of some event, such as a failure?

 

Classification and Anomaly detection for IoT

The books give examples of the use of anomaly detection and classification for IoT data.

For time series IoT readings, anomaly detection and classification go together. Anomaly detection determines what normal looks like, and how to detect deviations from normal.

When searching for anomalies, we don't know what their characteristics will be in advance. Once we know the characteristics, we can use a different form of machine learning, i.e. classification.

Anomaly in this context just means different than expected; it does not refer to desirable or undesirable. Anomaly detection is a discovery process to help you figure out what is going on and what you need to look for. The anomaly-detection program must discover interesting patterns or connections in the data itself.

Anomaly detection and classification go together when it comes to finding a solution to real-world problems. Anomaly detection is used first, in the discovery phase, to help you figure out what is going on and what you need to look for. You could use the anomaly-detection model to spot outliers, then set up an efficient classification model to assign new examples to the categories you've already identified. You then update the anomaly detector to consider these new examples as normal, and repeat the process.

The book goes on to give examples of the use of these techniques on EKG data.

For example, for the challenge of finding an approachable, practical way to model normal for a very complicated curve such as the EKG, we could use a type of machine learning known as deep learning.

Deep learning involves letting a system learn in several layers, in order to deal with large and complicated problems in approachable steps. Curves such as the EKG have repeated components separated in time rather than superposed. We can take advantage of the repetitive and separated nature of an EKG curve to accurately model its complicated shape and detect normal patterns using deep learning.

The book also refers to a data structure called t-digest for accurate calculation of extreme quantiles. t-digest was developed by one of the authors, Ted Dunning, as a way to accurately estimate extreme quantiles for very large data sets with limited memory use. This capability makes t-digest particularly useful for selecting a good threshold for anomaly detection. The t-digest algorithm is available in Apache Mahout as part of the Mahout math library. It's also available as open source at https://github.com/tdunning/t-digest

 

Anomaly detection is a complex field and needs a lot of data.

For example: what happens if you only save a month of sensor data at a time, but the critical events leading up to a catastrophic part failure happened six weeks or more before the event?

IoT from a large scale Data standpoint

To conclude, much of the complexity of IoT analytics comes from the management of large-scale data.

Collectively, Interconnected Objects and the data they share make up the Internet of Things (IoT).

Relationships between objects and people, between objects and other objects, conditions in the present, and histories of their condition over time can be monitored and stored for future analysis, but doing so is quite a challenge.

However, the rewards are also potentially enormous. That’s where machine learning and anomaly detection can provide a huge benefit.

For time series, the book covers themes such as:

  • Storing and Processing Time Series Data
  • The Direct Blob Insertion Design
  • Why Relational Databases Aren't Quite Right
  • Architecture of OpenTSDB
  • Value Added: Direct Blob Loading for High Performance
  • Using SQL-on-Hadoop Tools
  • Using Apache Spark SQL
  • Advanced Topics for Time Series Databases (Stationary Data, Wandering Sources, Space-Filling Curves)

For anomaly detection:

  • Windows and Clusters
  • Anomalies in Sporadic Events
  • Website Traffic Prediction
  • Extreme Seasonality Effects
  • Etc.

 

Links again:

Time Series Databases: New Ways to Store and Access Data and A New Look at Anomaly Detection, by Ted Dunning and Ellen Friedman (published by O'Reilly).

Also see the link for the Data Science for the Internet of Things (IoT) course – University of Oxford, where I hope to cover these issues in more detail in the context of MapR.

by ajit at March 01, 2015 10:24 AM

February 24, 2015

mobiForge blog

Emoji set to live long and prosper, thanks to Unicode

You've probably seen them. Your mom probably uses them to sign off her texts, and your teenage cousin has likely abandoned the Roman alphabet altogether in their favour. Emoji are everywhere, and love them or loathe them, they can't be ignored.

by ruadhan at February 24, 2015 02:46 PM

Cloud Four Blog

Responsive Hero Images

Hero images present unique challenges for responsive designs. During a recent responsive images audit, we found a solution that I wanted to share.

What are hero images?

Until a couple of years ago, I was unfamiliar with the term hero image. A friend who worked for a large agency used the term, and I had to ask what it meant. I don't know if it is a common description and I was simply living under a rock, or if it is agency jargon.

But just in case I'm not the only one who doesn't know: a hero image is a large promotional image, like the one from Target below:

Target.com hero image

Responsive hero images?

Hero images often present unique problems for responsive designs. Many hero images have text in the image itself. When text is in an image, it often means that the responsive image falls into the art direction use case instead of the easier-to-solve resolution switching use case.
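For reference, resolution switching needs no art direction at all: the browser simply picks the most appropriate size of the same image. A minimal sketch of that syntax, with hypothetical filenames and widths (not markup from any site discussed here):

    <!-- Resolution switching: the same image at several sizes, browser picks one -->
    <img src="hero-540.jpg"
         srcset="hero-540.jpg 540w, hero-780.jpg 780w, hero-1080.jpg 1080w"
         sizes="100vw"
         alt="Hero promotion">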

We can see an example of why the art direction is important by looking at the CB2 site and one of its hero images.

CB2 hero image with text

This image contains three photographs, two logos with type in them, and a stamp and text that both use thin strokes. If we simply resized this image to 320 pixels wide, the text would be too small to be readable.

CB2 hero image with text shrunk to 320 pixels making the text unreadable.

CB2 doesn’t currently have a responsive site, but it does have a mobile web site where we can see how they handle this hero image on small screens.

CB2 mobile hero image

In order to make the image work on small screens, CB2 does the following:

  • Changes from three photographs to two
  • Removes text
  • Modifies the aspect ratio to make the image taller
  • Redesigns the layout of the image

As you can see, the changes necessary to make a hero image work on small screens can be substantial.
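Expressed with the new responsive images syntax, that kind of small-screen redesign is what the picture element's art direction markup handles. A rough sketch, with hypothetical filenames and a hypothetical breakpoint (CB2 actually serves a separate mobile site instead):

    <!-- Art direction: a recropped, redesigned image on small screens -->
    <picture>
      <source media="(min-width: 600px)" srcset="hero-wide.jpg">
      <img src="hero-small.jpg" alt="Hero promotion">
    </picture>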

Requirements for hero images

Our usual process for responsive designs is to understand the goals, needs, analytics, and user feedback for each pattern in the design. We use this information to understand the requirements for the pattern and to prioritize what the small screen version needs to accomplish.

The full requirements for hero images can be summed up as:

A box for marketing.

In my experience, any attempt to narrow down what can go into the box will meet resistance. The people responsible for marketing understandably don’t want their future options limited.

Ideal world solutions

As we were brainstorming ideas on what to do for this client, we kept finding ourselves referring to what we would do in an ideal world.

In an ideal world, we’d:

  • Build complex HTML5-based animations like Apple.

    Apple has created rich pages that react as you scroll and otherwise interact with them. They make the old days of Flash animation look quaint.

    They are also one of the wealthiest companies on the planet and only release new products once a year. You may not have similar resources.

  • Remove text from images, put it in HTML, and use CSS to overlay the image.

    This makes a ton of sense. If we separate the text from hero images, then we can adjust the placement of the text as the viewport changes (see the sketch after this list).

    Sites like Crate and Barrel have established a specific text treatment and placement for all promo images.

    Crate and Barrel Hero Image Example

    However, this again is an ideal world situation. All of the photography must have areas that are designed to accommodate the text. You can see how Crate and Barrel must ask all of their photographers to keep this in mind.

    That solution may not work depending on the requirements of the brand and how frequently the images are being updated.
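For what it's worth, the text-overlay approach is just markup plus CSS positioning. A minimal sketch; class names, copy and style values are made up for illustration:

    <div class="hero">
      <img src="hero.jpg" alt="">
      <!-- Text lives in HTML, overlaid on the image with CSS -->
      <p class="hero-text">Free shipping on everything</p>
    </div>
    <style>
      .hero { position: relative; }
      .hero img { width: 100%; display: block; }
      .hero-text { position: absolute; top: 10%; left: 5%; }
    </style>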

Real world conditions

We’re often not working in an ideal world. Take Free People for example:

[Image: Free People "Casual Monday" hero image]

Free People has a strong artistic vision for their site. The combination of type and imagery matters. And they update the images daily.

If you’re updating hero images on a daily or weekly basis, many of the ideal world solutions are impractical. Not to mention the fact that the people responsible for creating the hero images may be graphic designers, not web designers or developers.

Give them <picture> and let them have their box

After striking out on the idealistic solutions, we started looking at ways to give the designers as much control as they might need.

We thought, “We should use the picture element to give them a box that marketing can use. Then their designers would have complete control to decide how many image sources they need and where the breakpoints should be.”

Responsive image breakpoints FTL!

Doing this would have been easier for us, but it would have been a jerk move.

Imagine what this would have meant for the designers who create these images. Not only would the responsive design mean that they have to create multiple versions of each hero image, but we'd also be asking them to figure out how many versions they need and where the image breakpoints should be. And they'd need to figure this out every day, for every image they create.

Like I said: a jerk move. Don’t do this.

Using hero image text to determine breakpoints

After striking out on several different solutions, we realized that the text might be the key. If it weren't for the text in these hero images, they would fall under the resolution switching use case instead of art direction.

We wondered if there was a way to look at how well the text resized and to determine the breakpoints based on when the text became illegible.

This was the point at which we discovered that there were in fact a few requirements for the client’s hero images:

  • The images must use the brand’s chosen typeface.
  • The typeface could not be smaller than 18pt.

All of the images must follow these rules. So we set out to find out how well the typeface resized.

We started by creating a canvas in Photoshop that matched the largest size that the hero images would ever be used at. We filled that canvas with the chosen typeface in various sizes and weights.

Below is an example of what that canvas would have looked like if the typeface was Myriad and brand color was black.

Myriad Type Sample at 1080x432

After we saved the image as a PNG, we opened it in Chrome and started resizing it until the text was unreadable.

We determined that the 18pt italics weight became unreadable at around 774 pixels. So we created a new image that started at that size and repeated the experiment.

Animated gif showing resizing of image with type samples

The new image could span from 780 to 540 pixels wide before it became unreadable. So we then made a third image that started at 540 pixels wide. The third image worked at our smallest supported size, 320 pixels, so we stopped there.

Adjusting for ease of implementation

Once we knew where the type was no longer readable, we made some minor adjustments. We changed the image breakpoints from the arbitrary measurements we arrived at in our experiment to numbers that were more easily divisible and, where possible, matched our grid.

So instead of using 774 pixels as the point at which we should switch from the 1080 image to a different one, we decided on 780 pixels.

We then took several of the existing hero images and attempted to make smaller versions of them using the new image sizes. We found, similar to the CB2 example above, that we needed to adjust the aspect ratio of the hero images in order to give us more vertical real estate on small screens.

After we had completed all of our tweaks and had new sizes for the responsive versions of the hero image that we thought would work, we used our type image resizing technique to verify that the typeface would hold up across the range of sizes we were going to recommend.

Type-based guidelines for responsive hero images

When we had completed our research, we had a simple set of guidelines that we could give to the designers responsible for the hero images.

Image Breakpoints (dimensions in pixels)

Name   | Width | Height | Max Width | Min Width
Large  | 1080  | 360    | n/a       | 781
Medium | 780   | 320    | 780       | 541
Small  | 540   | 270    | 540       | n/a
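Assuming the hero spans the full viewport width, those guidelines translate to picture markup along these lines (filenames are placeholders):

    <picture>
      <source media="(min-width: 781px)" srcset="hero-large.jpg">  <!-- 1080 x 360 -->
      <source media="(min-width: 541px)" srcset="hero-medium.jpg"> <!-- 780 x 320 -->
      <img src="hero-small.jpg" alt="Hero promotion">              <!-- 540 x 270 -->
    </picture>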

So long as the designers didn't use anything smaller than 18pt and continued to use only the typeface that the brand specified, the three sizes of hero images that we specified would work.

I know it seems suspicious that we ended up with small, medium and large images when so much of our industry is focused on mobile, tablet and desktop.

But we didn’t pick three image sources ahead of time. We let the type tell us how many breakpoints were needed.

In fact, we did a test and found that if the client wanted to use 16pt in their typeface of choice, it would have required four breakpoints. And if they changed fonts, a new experiment would be needed.

Start with an audit and let content be your guide

This system worked for one client on one project. It may not work for your project.

But whatever you do, remember that finding a solution starts with a responsive images audit. And whenever possible, we should let the content dictate how our responsive design will respond.

by Jason Grigsby at February 24, 2015 12:45 AM

February 22, 2015

Open Gardens

Data Science for Internet of Things (IoT) course – University of Oxford

I am pleased to announce a unique course  – Data Science for the Internet of Things (IoT) course – University of Oxford

We are launching first with very limited places. We are already collaborating with MapR, Sigfox, Hypercat, Red Ninja and many others, so the course will be based on practical insights from current systems.

Everyone finishing the course will receive a University of Oxford certificate showing that they have completed the course.

The course is fully online.

Have a look at the Data Science for the Internet of Things (IoT) course – University of Oxford page for more.

Feedback is welcome, and I will update a lot more over the next few weeks.

If you want to avail of this unique certification, please email me for more information: ajit.jaokar at futuretext.com

by ajit at February 22, 2015 11:52 AM

Martin's Mobile Technology Page

OCSP, Stapling And Android That Doesn't Care

When surfing to an https protected website, most desktop browsers today make use of the Online Certificate Status Protocol (OCSP) to check the validity of the authentication certificate sent by the web site. There is lots of debate about whether this feature is useful or not, but there's also a privacy aspect to it. Let me quote from Wikipedia:

"OCSP checking also impairs privacy, since it requires the client to contact a third party (the CA) to confirm certificate validity. A way to verify validity without disclosing browsing behavior would be desirable for some groups of users."

I guess I'm part of this group which is why I had a closer look at the OCSP Stapling feature after upgrading my Owncloud server to Ubuntu 14.04 which included an Apache web server update that supports the feature.

What is OCSP Stapling And How Is It Configured in Apache and Nginx?

In short, OCSP stapling means that the web server requests the OCSP information from the CA's OCSP server and then includes it as part of the TLS session establishment when a web browser sends a request for an https encrypted page. The advantage is that the web browser no longer has to send a request to the Certificate Authority to check the validity of the certificate that it has received from the website which in turn protects my privacy. Agreed, this one's part of the last 5% when it comes to privacy protection but every bit counts... Configuring OCSP stapling is actually quite straight forward and this post over at Digitalocean goes into the details including how to verify that everything is working.
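For reference, the Apache 2.4 side of it boils down to two mod_ssl directives. A minimal sketch; the cache location and size are just examples, and the vhost is reduced to the relevant lines:

    # The stapling cache must be defined outside any <VirtualHost>
    SSLStaplingCache "shmcb:${APACHE_RUN_DIR}/ssl_stapling(32768)"

    <VirtualHost *:443>
        SSLEngine on
        # ... certificate and key directives as before ...
        SSLUseStapling on
    </VirtualHost>

To verify, "openssl s_client -connect example.com:443 -status" should print an "OCSP Response Status: successful" block once stapling works.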

Works Well On The Desktop But Android Doesn't Care

On the desktop, both Firefox and Thunderbird, the two programs I use most together with my Owncloud at home, make use of the feature and no longer reach out to the Certificate Authority. A Wireshark trace nicely showed how the OCSP information is included during TLS session establishment. Mission accomplished.

On the mobile side, Android doesn't seem to care at all about OCSP. That should probably not be very surprising, as Google disabled OCSP checking in its Chrome desktop browser back in 2012 as well. No privacy issues here, good.

An interesting twist is the Opera Mobile browser on Android: when accessing my website it requests the OCSP status information during TLS session establishment and receives it. When going to another https site that does not supply stapled OCSP information, however, it does not perform a separate OCSP check either. That kind of defeats the purpose. But at least there's no privacy issue here.

by mobilesociety at February 22, 2015 09:08 AM

February 21, 2015

Eurotechnology.japan

i-Mode was launched February 22, 1999 in Tokyo – birth of mobile internet

The mobile internet was born 16 years ago in Japan. Galapagos syndrome: NTT Docomo failed to capture global value. On February 22, 1999, the mobile internet was born when Mari Matsunaga, Takeshi Natsuno and Keiichi Enoki launched Docomo's i-Mode to a handful of people who had made the effort to attend the Press Conference introducing Docomo's new … Continue reading i-Mode was launched February 22, 1999 in Tokyo – birth of mobile internet

The post i-Mode was launched February 22, 1999 in Tokyo – birth of mobile internet appeared first on Eurotechnology Japan.

by fasol@eurotechnology.com (Gerhard Fasol) at February 21, 2015 03:13 PM

Volker on Mobile

TEDx Barcelona ED – My Talk…

I did a talk at TEDxBarcelonaED on “Learning for the Unknown”. Quite daunting. Quite exciting. I think it worked. Do you agree? Watch it here:


by Volker at February 21, 2015 03:09 AM

February 20, 2015

Martin's Mobile Technology Page

Android USB Tethering to Connect a Raspberry Pi to the Internet

[Screenshot: Android USB tethering menu] These days the easiest way to connect a PC, or a Raspberry Pi for that matter, to the Internet via a smartphone is to use Wi-Fi tethering. But there are still scenarios in which Wi-Fi is not ideal, e.g. when the air is already pretty "busy". Another way that I recently discovered is USB tethering, which many Android phones support in addition to the ubiquitous Wi-Fi tethering. The screenshot on the left shows CyanogenMod's Android menu to activate "USB Tethering". Luckily, Raspbian already comes with drivers for it, so the new network interface is recognized immediately. The only thing required is an entry in /etc/network/interfaces so that the usb0 interface gets an IP address from the phone:

# Bring the interface up automatically and request an address from the phone via DHCP
auto usb0
iface usb0 inet dhcp

by mobilesociety at February 20, 2015 06:50 AM

February 19, 2015

Cloud Four Blog

Responsive Images Audits

When you start incorporating the new responsive images standards across your site, the task can seem daunting. How do you know which standards to use and where? How will you maintain the new markup and image sizes?

We have found that a good first step is a responsive images audit.

Much like a content audit, a responsive images audit is designed to help you take stock of the images you have on your site and how they are being utilized.

The output of the audit is often a spreadsheet describing different types of images, the templates they are used on, and what solution you plan on implementing for those images.

Sample questions to ask about images

Here are some sample questions you should consider asking about the images on your site:

  • Where are the source files and what is the process for publishing?

    This may seem unrelated to the technical questions of how the responsive images will be implemented, but my experience has been that knowing where the images are coming from, where they are stored, and if anyone needs to modify the image before it is posted can dictate what solutions are practical.

    Consider this your reminder that people are still involved and you need to understand the people part first.

  • Is there a big difference between smallest and largest image size?

    We had a client who had badges for awards and types of recognition. These badges were displayed at a small size even when the page was large. We found that even at the largest size possible at retina density, the image files were essentially the same size. So why provide multiple image sizes?

    In addition, the images came from third parties and the client didn't have the source files, which was another good reason to simply use the largest image and have the browser resize it.

  • Are the images resolution switching or art direction?

    Knowing which use case you’re solving for will help determine which syntax makes the most sense.

  • Can we use SVG?

    If the images will work well as vector, then we may be able to use a single image source.

  • Are there representative images we can use to find sensible jumps in file sizes for our image breakpoints?

    Picking responsive image breakpoints remains a tough problem. It can be easier if all the imagery we’re using are things we control and have a distinct style.

    If there are representative images, then we can save them at various sizes in the range they will be used in the template and make a determination of how many image sources we need to provide.

    In situations where images come from user generated content, it can be harder to find representative images from which you can determine image breakpoints. In that case, we know we’re going to have to make an educated guess about what will work.

  • Do we want to support multiple image formats?

    Perhaps we’re using a lot of alpha channel images and want to provide JPEG 2000 for those whose browser supports that image format. Or perhaps we want to provide WebP alternatives.

    If we determine there is a good reason to do this for a set of images, then that will mean we want to use the <picture> element even if we aren't doing art direction; see the sketch just after this list.
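Type switching like this is a use of <picture> with no media attributes at all: the browser takes the first source whose type it supports. A minimal sketch, assuming a WebP alternative exists for each JPEG (filenames hypothetical):

    <picture>
      <source type="image/webp" srcset="promo.webp">
      <img src="promo.jpg" alt="Promotion">
    </picture>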

Sample responsive images audit results

We conducted a responsive images audit for a large site. The end result was a spreadsheet that looked like the table below.

Image Description | Format | Size | Markup | Notes
Property logos | PNG8 (future SVG) | Regular, Retina | <img> | Little variance between the wide and small screen image sizes.
Partner logos | PNG8 (future SVG) | Regular, Retina | <img> | Little variance between the wide and small screen image sizes.
Iconography | SVG | n/a | <img> |
Brand logos | PNG8 (future SVG) | Regular, Retina | <img> | Assumes little variance between the wide and small screen image sizes.
Property photography | JPG (conditional WebP) | Dynamically resized and compressed | Non-art-direction <picture> | Templates specify breakpoints.
Promo images w/ text (art direction) | TBD | n/a | <picture> | Content producer defines images and breakpoints in CMS.

We did this audit a couple of years ago, so we might have different answers today than we did then.

Also, it is worth noting that the property photography represented over ninety percent of the images on the site. We have found this to be common on the sites we’ve worked with.

Combine the audit with image services

Once you have your audit complete, you need to combine it with image resizing and processing services so you know how you will implement what your audit recommended. For more on that, check out:

by Jason Grigsby at February 19, 2015 11:47 PM

February 18, 2015

mobiForge blog

Using the Google Maps API to display mobile-friendly maps on all devices

In this article we show how to embed a Google Map in a web page so that it will be mobile-friendly and work on all devices, including low-end devices without JavaScript support. To do this, we'll use the Google Maps API for high-end devices that can handle JavaScript, and for low-end we make use of the simpler Google Static Maps API.
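The shape of that approach, as a rough sketch rather than the article's actual code (API key, map options and error handling omitted; the coordinates are arbitrary):

    <div id="map" style="width:100%; height:300px">
      <noscript>
        <!-- Low-end fallback: a plain image from the Static Maps API -->
        <img src="https://maps.googleapis.com/maps/api/staticmap?center=53.35,-6.26&zoom=13&size=320x240"
             alt="Map centered on Dublin">
      </noscript>
    </div>
    <script src="https://maps.googleapis.com/maps/api/js"></script>
    <script>
      // High-end path: an interactive map for browsers that run JavaScript
      new google.maps.Map(document.getElementById('map'), {
        center: {lat: 53.35, lng: -6.26},
        zoom: 13
      });
    </script>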

by ruadhan at February 18, 2015 06:34 PM

Getting Ready For HTTP 2.0

HTTP is the underlying mechanism that runs the web. It is the language spoken by browsers and web servers to communicate, download webpage elements and upload user data. The version we currently use is 1.1, a specification that is now almost 15 years old.

by mark.anderson at February 18, 2015 09:10 AM

February 17, 2015

Eurotechnology.japan

Google Play Japan – top grossing Android Apps ranking

Google Play Japan: top grossing apps (Feb. 18, 2015). Android smartphone app ranking in Japan by gross revenues. AppAnnie showed that in terms of combined iOS App Store + Google Play revenues, Japan is No. 1 globally, spending more than the USA. Therefore Japan is naturally the No. 1 target globally for many mobile game … Continue reading Google Play Japan – top grossing Android Apps ranking

The post Google Play Japan – top grossing Android Apps ranking appeared first on Eurotechnology Japan.

by fasol@eurotechnology.com (Gerhard Fasol) at February 17, 2015 05:56 PM

February 16, 2015

Cloud Four Blog

Responsive Images Are Here. Now What?

Responsive images have landed in Chrome and Opera. They are in development for Firefox and WebKit. They are under consideration for Internet Explorer.

This is an amazing accomplishment. To get here, the following happened including many firsts:

  • The Responsive Images Community Group was formed. It is now cited as a model for how W3C Community Groups can inform the standards process.
  • There have been four major specifications (picture, srcset, src-n, and the final picture specification), along with many minor iterations along the way.
  • An Indiegogo campaign funded Yoav Weiss's implementation of the feature in Blink, the first crowd-sourced funding of a browser feature ever.
  • Volunteers and browser makers put in hours of time to make sure these standards would work and to implement them in browsers.

After nearly four years and a ton of work, we finally have responsive images.

Now the hard work begins.

Responsive images will now go from the limited number of people in the Responsive Images Community Group to the web at large.

Many people will struggle to learn the new tools and to ascertain when it makes sense to use each. Not to mention navigating the thorny, unsolvable problems of responsive image breakpoints.

For the last few weeks, I’ve been working on ways to help people learn responsive images. The first output of that work will start tomorrow with a presentation called “Responsive Images Are Here. Now What?” at An Event Apart Atlanta. I’m repeating the talk at AEA Seattle and San Diego.

If you want a deeper dive, I'm giving a full-day workshop at UX Mobile Immersion, When Responsive Web Design Meets the Real World, which will cover images in detail.

The research I did to prepare for those talks has created a backlog of articles that I want to write. Watch this space. They are coming soon.

So responsive images are here, and they are going to be a big deal for the web in 2015. It’s time to prepare for them; to understand how to use them; and to start tackling the tough challenges of integrating them into our sites.

I can’t wait to see how people use these new browser features.

P.S. If you attend AEA or UXIM, you can use these discount codes to save money. ‘AEAGRIG’ will save you $100 on any AEA event. ‘UXIMSPK’ will save you $300 on the UXIM workshops.

by Jason Grigsby at February 16, 2015 09:19 PM

Open Gardens

Content and approach for a Data Science for IoT course/certification
UPDATE: 

Feb 15: Applications are now open for the Data Science for IoT professional development short course at Oxford University - more coming soon. Any questions, please email me at ajit.jaokar at futuretext.com

We are pleased to announce support from MapR, Sigfox, Hypercat and Red Ninja for the course. Everyone finishing the course will receive a University of Oxford certificate showing that they have completed it. Places are limited, so please apply soon if interested.

In a previous post, I mentioned that I am exploring creating a course/certification for Data Science for IoT. Here are some more thoughts.

I believe that this is the first attempt to create such a course/program.

I use the phrase "Data Science" to collectively mean machine learning and predictive analytics.

There are of course many machine learning courses – the most well known being Andrew Ng's course at Coursera/Stanford – and the domain is complex enough as it is. Thus, creating a course/certification covering both machine learning/predictive analytics and IoT can be daunting.

However, the sector-specific focus gives us some unique advantages.

Already at UPM (Universidad Politécnica de Madrid) I teach machine learning/predictive analytics for the Smart Cities domain through their citysciences program (the remit there being to create a role for the Data Scientist for a Smart City). So this idea is not totally new for me.

Based on my work at UPM (for Smart Cities), teaching Data Science for a specific domain like IoT has challenges but also some unique advantages.

The main challenge: there is an extra level of complexity to deal with (teaching IoT along with predictive analytics).

But the advantages are:

a) The IoT domain focus allows us to be more pragmatic by addressing unique Data Science problems for IoT.

b) We can take a context-based learning approach – a technique more common in Holland and Germany for teaching engineering disciplines – which I have used in teaching computer science to kids at feynlabs.

c) We don't need to cover the maths upfront.

d) The participant can be productive faster and apply ideas to industry faster.

Here are my thoughts on the elements such a program could cover based on the above approach: 

1) Unique characteristics – IoT ecosystem and data

2) Problems and datasets. This would cover specific scenarios and datasets needed (without addressing the predictive aspects)

3) An overview of Machine learning techniques and algorithms (Classification, Regression, Clustering, Dimensionality reduction etc) – this would also include the basic Math techniques needed for understanding algorithms

4) Programming: Python and scikit-learn

5) Specific platforms/case studies, for example:

  • Time series data (MapR)
  • Sensor fusion for IoT (Camgian's Egburt)
  • NoSQL data for IoT (e.g. MongoDB for IoT)
  • Managing very high volume IoT data (e.g. MapR loading a time series database at 100 million points per second)
  • Image processing with sensors/IoT (e.g. surveillance cameras)

Hence, examples could include:

  • IBM: detecting skin cancer more quickly with visual machine learning
  • Real-time face recognition using deep learning algorithms
  • Combining the Internet of Things with deep learning/predictive algorithms (@numenta)

To conclude: the above approach for teaching a course on Data Science for IoT would help ground machine learning/predictive algorithms in real-life problem-solving scenarios for IoT.

Comments welcome.

You can sign up for more information at futuretext, and also follow me on twitter @ajitjaokar

Image source: wired

by ajit at February 16, 2015 07:13 PM