May 22, 2015

Martin's Mobile Technology Page

Past the “Peak Telephony” In Germany

Recently, Dean Bubley wrote an interesting blog post about how most industrialized nations are past “peak telephony”, i.e. the combined number of voice minutes in fixed-line and mobile networks is decreasing. When the German regulator published its report for 2014 a couple of days ago, I took a closer look to see what the situation is in Germany. And indeed, we are clearly past peak telephony here as well.

And here are the numbers:

In 2014, fixed-line networks in Germany saw 154 billion outgoing minutes, which is 9 billion minutes less than the year before. On the mobile side, outgoing minutes increased by 1 billion. In total that's 8 billion minutes less than the previous year, or about -3%. The trend has been going on for quite a while now: in 2010, combined fixed and mobile outgoing voice minutes stood at 295 billion, compared to 265 billion in 2014. That's roughly 10% less over that time frame.
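To double-check those figures, here's a quick back-of-the-envelope calculation in Python (a minimal sketch; the 2013 total and the 2014 mobile figure are derived from the stated deltas rather than quoted directly from the regulator's report):

```python
# All values in billions of outgoing voice minutes.
fixed_2014 = 154        # fixed-line minutes in 2014
fixed_delta = -9        # fixed-line change vs. 2013
mobile_delta = +1       # mobile change vs. 2013
total_2014 = 265        # combined fixed + mobile in 2014
total_2010 = 295        # combined fixed + mobile in 2010

mobile_2014 = total_2014 - fixed_2014                    # about 111 billion
total_2013 = total_2014 - (fixed_delta + mobile_delta)   # 273 billion

yoy = (total_2014 - total_2013) / total_2013 * 100         # about -2.9%
since_2010 = (total_2014 - total_2010) / total_2010 * 100  # about -10.2%

print(f"Mobile minutes 2014: {mobile_2014} billion")
print(f"Change vs. 2013: {yoy:.1f}%")
print(f"Change vs. 2010: {since_2010:.1f}%")
```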

A question the numbers can't answer is where those voice minutes have gone. Have they been replaced by the ever-growing traffic of instant messaging apps such as WhatsApp, or have they been replaced by Internet-based IP voice and video telephony such as Skype? I'd speculate that it's probably both, to a similar degree.

by mobilesociety at May 22, 2015 05:50 AM

May 20, 2015

Brad Frost » Brad Frost | Web Design, Speaking, Consulting, Music, and Art

Making a difference with performance

Here’s Jaime Caballero with some fantastic advice on how to improve your experience’s performance.

by Brad Frost at May 20, 2015 02:31 PM

May 19, 2015

Brad Frost » Brad Frost | Web Design, Speaking, Consulting, Music, and Art

Head Meets Wall

The trick is knowing when bashing your head against a wall repeatedly will lead to a breakthrough instead of you lying unconscious in a pool of blood.

by Brad Frost at May 19, 2015 01:06 PM

May 18, 2015

Brad Frost » Brad Frost | Web Design, Speaking, Consulting, Music, and Art

Web Conference at PSU

I’m thrilled to be speaking at the Web Conference at Penn State alongside so many other great speakers (including apparently Tim Gunn of Project Runway fame. Life achievement unlocked?). Registration closes in just a few days, so I’d look into snagging a ticket if I were you.

by Brad Frost at May 18, 2015 08:44 AM

Project Hub

A while back I wrote about the concept of project hubs for 24 Ways. I’m thrilled to see people finding the concept of a simple timeline for project milestones useful, and am even more thrilled to see tools emerge to make project hubs easy to set up and manage.

Gunther Groenewege created a tool called Kirby Project Hub, which uses the lightweight Kirby CMS to create and manage project hubs. I’m excited to check it out!

by Brad Frost at May 18, 2015 07:22 AM

May 16, 2015

London Calling

Evan Yee: Startup Exhibition in New York


Walking around New York over the weekend after a week of business meetings, I stumbled across a fantastic exhibition at Gallery 151 in Chelsea titled “Startup” by Evan Yee.

The exhibition brilliantly pokes fun at today’s technology culture, as the pictures below show.

If you’re in New York between now and June 15, it is worth a look.

From the publicity for the exhibition

Evan Yee : StartUp

On View May 14- June 15, 2015 at Gallery 151

132 West 18th Street, New York, NY 10011

Opening Reception Thursday May 14, 7-9pm

Gallery 151 is pleased to present StartUp, a conceptual mobile and technology installation by Evan Yee. In StartUp, the artist creates and displays physical interpretations of phone applications as well as reappropriated tech items, creating an experiential commentary on the digital generation. In the artist’s audio statement, Siri dictates the phrase “iThink, if Utopia holds convenience, then the future holds Utopia.” StartUp encourages the viewer to evaluate obsessions with contemporary design, convenience and the future.

With the explosion of app creation over the past six years, virtual environments have eclipsed “mechanical” utilitarian objects. Apps, and the platforms and technology that host them, are at the forefront of commercialism, function and design. They blur the boundaries between the physical and digital worlds. Yee has created objects that are whimsical in their nature and scale, yet dystopic in their implication.

Evan Yee’s StartUp is designed with the contemporary feel of Apple’s retail environment, displaying physical iterations of apps as art objects. Presented with a dark underlying humor reminiscent of Kurt Vonnegut and Stanley Kubrick, the works confront us with the paradox of applications that we perceive to have functionality but may not perform any function at all.

Evan has created a number of fake products, all of which poke fun at their real equivalents. They are beautifully presented in a way that makes you think you could even be in an Apple store …

Click on the images below to read a caption about each one, or read more on Evan’s website.

To receive future posts you can subscribe via email or RSS, download the android app, or follow me on twitter @andrewgrill.

by Andrew Grill at May 16, 2015 10:39 PM

May 15, 2015

Martin's Mobile Technology Page

Skype Still Supports Linux - But I Got Rid Of It On The PC Anyway

Despite my fears last year that Skype, which is owned by Microsoft these days, might cease to support PC Linux at some point and leave me stranded, it hasn't happened yet. Last year I speculated that should this happen, I would probably just move Skype to an Android tablet and be done with it. As I remarked at the time, this would have the additional benefit of reducing the exposure of my private data to non-open-source programs. Between then and now, I went ahead and tried out Skype on a tablet and a smartphone despite its ongoing support for Linux on PCs and found that it's even nicer to use on these platforms than on the PC. During video calls I can even walk around now without cutting multiple cords first. And since I otherwise only use that tablet for ebook reading, a non-open-source program no longer has access to the rest of my private information. I'm glad tablets have become so cheap that one can have several of them, each dedicated to a few specific purposes. That ties in well with my thoughts on the Macbook 2015 becoming the link between Mobiles and Notebooks.

by mobilesociety at May 15, 2015 06:01 AM

Brad Frost » Brad Frost | Web Design, Speaking, Consulting, Music, and Art

Rebecca’s Gift

Rebecca’s Gift provides a space away from the everyday rhythm of life for families who have experienced the death of a child; to assist them in moving forward in their healing process through the opportunity to reconnect, rebuild, and relax by providing that significant first vacation wherein new family dynamics can be developed and bonds strengthened.

Eric Meyer and his family tragically lost their daughter to a brain tumor almost a year ago, and this non-profit has been set up to raise money to give families who have lost a child a much-needed vacation.

by Brad Frost at May 15, 2015 01:12 AM

May 14, 2015

mobiForge blog

Nokia at 150

Nokia hasn't always been a phone manufacturer. The company dabbled in paper products, footwear and tires before it became involved in the wireless industry. To celebrate its 150th birthday, the Finnish company, which began in 1865 as a pulp mill, released a video detailing its long history.

by tomwryan at May 14, 2015 03:44 PM

May 13, 2015

Brad Frost » Brad Frost | Web Design, Speaking, Consulting, Music, and Art

The Why & How of Successful Style Guides

I’ll be giving a virtual seminar for UIE all about style guides: what they are, what kinds of style guides exist, how to make them, and how to make them work for your organization. Feel free to tune in!

by Brad Frost at May 13, 2015 10:10 PM

mobiForge blog

Google Mobile Friendly #Fail

Yes. I am a curmudgeonly old web dev. I remember when marquee tags were all the rage, but even back when I had hair, we paid attention to how heavy we made our web pages. Image optimization tools were invented to make sure we could shave as much off our web pages as we didn’t off our chins. The beeps and pings of the dial-up world did not a happy bedfellow make with heavyweight web pages.

by thecurmudgeon at May 13, 2015 04:01 PM

May 12, 2015

Brad Frost » Brad Frost | Web Design, Speaking, Consulting, Music, and Art

Strip District company pushes back against ‘terrible T-shirts’ of the world

As someone who almost exclusively lives in the many gorgeous, comfortable t-shirts I’ve acquired from Cotton Bureau, it’s great to see them featured in the Pittsburgh Post-Gazette. Great people doing great work in Pittsburgh, Pennsylvania.

by Brad Frost at May 12, 2015 01:52 PM

Open Gardens

An Introduction to Deep Learning and its role for IoT / future cities

Note: the paper below is best read as a PDF, which you can download for free below.

 

An Introduction to Deep Learning and its role for IoT / future cities

By Ajit Jaokar

@ajitjaokar

Please connect with me on LinkedIn if you want to stay in touch and receive future updates.

Background and Abstract

This article is part of an evolving theme. Here, I explain the basics of Deep Learning and how Deep Learning algorithms could apply to IoT and Smart city domains. Specifically, as I discuss below, I am interested in complementing Deep Learning algorithms using IoT datasets. I elaborate these ideas in the Data Science for Internet of Things program, which enables you to work towards being a Data Scientist for the Internet of Things (modelled on the course I teach at Oxford University and UPM – Madrid). I will also present these ideas at the International Conference on City Sciences at Tongji University in Shanghai and at the Data Science for IoT workshop at the IoT World event in San Francisco.


Deep Learning

Deep Learning is often thought of as a set of algorithms that ‘mimic the brain’. A more accurate description would be algorithms that ‘learn in layers’. Deep Learning involves learning through layers, which allows a computer to build a hierarchy of complex concepts out of simpler concepts.

The obscure world of Deep Learning algorithms came into the public limelight when Google researchers fed 10 million random, unlabeled images from YouTube into their experimental Deep Learning system. They then instructed the system to recognize the basic elements of a picture and how these elements fit together. The system, comprising 16,000 CPUs, was able to identify images that shared similar characteristics (such as images of cats). This canonical experiment showed the potential of Deep Learning algorithms. Deep Learning algorithms apply to many areas including computer vision, image recognition, pattern recognition, speech recognition, behaviour recognition, etc.

 

How does a Computer Learn?

To understand the significance of Deep Learning algorithms, it’s important to understand how computers think and learn. Since the early days, researchers have attempted to create computers that think. Until recently, this effort was rules-based, adopting a ‘top-down’ approach. The top-down approach involves writing enough rules for all possible circumstances. But this approach is obviously limited by the number of rules and by its finite rule base.

To overcome these limitations, a bottom-up approach was proposed. The idea here is to learn from experience. The experience is provided by ‘labelled data’: labelled data is fed to a system and the system is trained based on the responses. This approach works for applications like spam filtering. However, most data (pictures, video feeds, sounds, etc.) is not labelled, and if it is, it’s not labelled well.

The other issue is in handling problem domains which are not finite. For example, the problem domain in chess is complex but finite because there is a finite number of primitives (32 chess pieces) and a finite set of allowable actions (on 64 squares). But in real life, at any instant, we have a very large, potentially infinite, number of alternatives. The problem domain is thus very large.

A problem like playing chess can be ‘described’ to a computer by a set of formal rules. In contrast, many real-world problems are easily understood by people (intuitive) but not easy to describe (represent) to a computer. Examples of such intuitive problems include recognizing words or faces in an image. Such problems are hard to describe to a computer because the problem domain is not finite. Thus, the problem description suffers from the curse of dimensionality, i.e. as the number of dimensions increases, the volume of the space increases so fast that the available data becomes sparse. Computers cannot be trained on sparse data. Such scenarios are not easy to describe because there is not enough data to adequately represent the combinations of the dimensions. Nevertheless, such ‘infinite choice’ problems are common in daily life.

How do Deep learning algorithms learn?

Deep Learning addresses ‘hard/intuitive’ problems which have few or no rules and high dimensionality. Here, the system must learn to cope with unforeseen circumstances without knowing the rules in advance. Many existing systems, like Siri’s speech recognition and Facebook’s face recognition, work on these principles. Deep Learning systems are possible to implement now for three reasons: more CPU power, better algorithms and the availability of more data. Over the next few years, these factors will lead to more applications of Deep Learning systems.

Deep Learning algorithms are modelled on the workings of the brain. The brain may be thought of as a massively parallel analog computer which contains about 10^10 simple processors (neurons), each of which requires a few milliseconds to respond to input. To model the workings of the brain, in theory, each neuron could be designed as a small electronic device which has a transfer function similar to a biological neuron. We could then connect each neuron to many other neurons to imitate the workings of the brain. In practice, it turns out that this model is not easy to implement and is difficult to train.

So, we make some simplifications in the model mimicking the brain. The resulting neural network is called a “feed-forward back-propagation network”. The simplifications/constraints are: we change the connectivity between the neurons so that they are in distinct layers, each neuron in one layer is connected to every neuron in the next layer, and signals flow in only one direction. Finally, we simplify the neuron design so that it ‘fires’ based on simple, weight-driven inputs from other neurons. Such a simplified network (a feed-forward neural network model) is more practical to build and use.

Thus:

a) Each neuron receives a signal from the neurons in the previous layer.
b) Each of those signals is multiplied by a weight value.
c) The weighted inputs are summed and passed through a limiting function which scales the output to a fixed range of values.
d) The output of the limiter is then broadcast to all of the neurons in the next layer.

Image and parts of the description in this section adapted from the Seattle Robotics site.
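To make steps a) to d) concrete, here is a minimal sketch of that computation for a single layer, using NumPy and a sigmoid as the limiting function (the weights and input values are made up for illustration, not taken from any trained network):

```python
import numpy as np

def sigmoid(x):
    """Limiting function that scales the summed input to the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

# Step a): example signals arriving from 3 neurons in the previous layer
inputs = np.array([0.5, 0.1, 0.9])

# Step b): one weight per connection, for each of the 2 neurons in this layer
weights = np.array([[0.2, -0.4, 0.7],
                    [0.6,  0.1, -0.3]])

# Steps c) and d): weighted sum, pass through the limiter, broadcast onward
layer_output = sigmoid(weights @ inputs)
print(layer_output)   # these values become the inputs to the next layer
```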

The most common learning algorithm for artificial neural networks is called Back Propagation (BP), which stands for “backward propagation of errors”. To use the neural network, we apply the input values to the first layer, allow the signals to propagate through the network and read the output. A BP network learns by example, i.e. we must provide a learning set that consists of some input examples and the known correct output for each case. So, we use these input-output examples to show the network what type of behaviour is expected. The BP algorithm allows the network to adapt by adjusting the weights, propagating the error value backwards through the network. Each link between neurons has a unique weighting value. The ‘intelligence’ of the network lies in the values of the weights. With each iteration of the errors flowing backwards, the weights are adjusted. The whole process is repeated for each of the example cases. Thus, to detect an object, programmers would train a neural network by rapidly sending across many digitized versions of data (for example, images) containing those objects. If the network did not accurately recognize a particular pattern, the weights would be adjusted. The eventual goal of this training is to get the network to consistently recognize the patterns that we recognize (e.g. cats).
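As an illustration of that training loop, here is a minimal sketch of a tiny two-layer feed-forward network learning XOR with back-propagation (plain NumPy, sigmoid activations, a fixed learning rate and made-up hyperparameters; a toy example rather than anything resembling the Google experiment):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Learning set: input examples and the known correct output for each (XOR)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))   # input (+ bias) -> hidden layer weights
W2 = rng.normal(size=(5, 1))   # hidden (+ bias) -> output layer weights
lr = 1.0                       # learning rate
ones = np.ones((X.shape[0], 1))

for _ in range(20000):
    # Forward pass: signals flow in one direction, layer by layer
    hidden = sigmoid(np.hstack([X, ones]) @ W1)
    output = sigmoid(np.hstack([hidden, ones]) @ W2)

    # Backward pass: the output error is propagated back through the layers
    out_delta = (output - y) * output * (1 - output)
    hid_delta = (out_delta @ W2[:4].T) * hidden * (1 - hidden)

    # Adjust every weight a little in the direction that reduces the error
    W2 -= lr * np.hstack([hidden, ones]).T @ out_delta
    W1 -= lr * np.hstack([X, ones]).T @ hid_delta

print(np.round(output, 2))   # should approach the XOR targets 0, 1, 1, 0
```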

How does Deep Learning help to solve intuitive problems?

The whole objective of Deep Learning is to solve ‘intuitive’ problems, i.e. problems characterized by high dimensionality and no rules. The above mechanism demonstrates a supervised learning algorithm based on a limited modelling of neurons – but we need to understand more.

Deep learning allows computers to solve intuitive problems because:

  • With Deep Learning, computers can not only learn from experience but can also understand the world in terms of a hierarchy of concepts – where each concept is defined in terms of simpler concepts.
  • The hierarchy of concepts is built ‘bottom up’, without predefined rules, by addressing the ‘representation problem’.

This is similar to the way a child learns what a dog is, i.e. by understanding the sub-components of the concept, e.g. the behaviour (barking), the shape of the head, the tail, the fur, etc., and then putting these concepts together into one bigger idea, i.e. the dog itself.

The (knowledge) representation problem is a recurring theme in Computer Science.

Knowledge representation incorporates theories from psychology which look to understand how humans solve problems and represent knowledge. The idea is that if, like humans, computers were to gather knowledge from experience, this would avoid the need for human operators to formally specify all of the knowledge that the computer needs to solve a problem.

For a computer, the choice of representation has an enormous effect on the performance of machine learning algorithms. For example, based on the pitch of a sound, it is possible to know if the speaker is a man, woman or child. However, for many applications, it is not easy to know what set of features represents the information accurately. For example, to detect pictures of cars in images, a wheel may be circular in shape – but actual pictures of wheels have variants (spokes, metal parts, etc.). So, the idea of representation learning is to find both the mapping and the representation.

If we can find representations and their mappings automatically (i.e. without human intervention), we have a flexible design to solve intuitive problems. We can adapt to new tasks and we can even infer new insights without observation. For example, based on the pitch of a sound, we can infer an accent and hence a nationality. The mechanism is self-learning. Deep Learning applications are best suited for situations which involve large amounts of data and complex relationships between different parameters. Training a neural network involves repeatedly showing it that “given an input, this is the correct output”. If this is done enough times, a sufficiently trained network will mimic the function you are simulating. It will also ignore inputs that are irrelevant to the solution. Conversely, it will fail to converge on a solution if you leave out critical inputs. This model can be applied to many scenarios, as we see below in a simplified example.

An example of learning through layers

Deep learning involves learning through layers which allows a computer to build a hierarchy of complex concepts out of simpler concepts. This approach works for subjective and intuitive problems which are difficult to articulate.

Consider image data. Computers cannot understand the meaning of a collection of pixels. Mappings from a collection of pixels to a complex Object are complicated.

With deep learning, the problem is broken down into a series of hierarchical mappings – with each mapping described by a specific layer.

The input (representing the variables we actually observe) is presented at the visible layer. Then a series of hidden layers extracts increasingly abstract features from the input, with each layer concerned with a specific mapping. Note, however, that this process is not predefined, i.e. we do not specify what the layers select.

For example:

  • From the pixels, the first hidden layer identifies the edges
  • From the edges, the second hidden layer identifies the corners and contours
  • From the corners and contours, the third hidden layer identifies the parts of objects
  • Finally, from the parts of objects, the fourth hidden layer identifies whole objects

Image and example source: Yoshua Bengio’s book, Deep Learning.
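As a rough sketch of what such a stack of layers can look like in code, here is a small image classifier written against a modern Keras-style API (the library choice, the layer counts and the layer sizes are mine, picked purely to mirror the edges → contours → object parts → objects progression above; in practice the features each layer picks out are discovered during training, not prescribed):

```python
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential([
    # Visible layer: raw pixels, e.g. 64x64 grayscale images
    # 1st hidden layer: tends to learn simple local features such as edges
    Conv2D(16, (3, 3), activation='relu', input_shape=(64, 64, 1)),
    MaxPooling2D((2, 2)),
    # 2nd hidden layer: combines edges into corners and contours
    Conv2D(32, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    # 3rd hidden layer: combines contours into parts of objects
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Flatten(),
    # 4th hidden layer: combines object parts into whole objects
    Dense(64, activation='relu'),
    # Output: probability that the image contains the object (e.g. a cat)
    Dense(1, activation='sigmoid'),
])

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.summary()
```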

Implications for IoT

To recap:

  • Deep Learning algorithms apply to many areas including computer vision, image recognition, pattern recognition, speech recognition, behaviour recognition, etc.
  • Deep Learning systems are possible to implement now for three reasons: more CPU power, better algorithms and the availability of more data. Over the next few years, these factors will lead to more applications of Deep Learning systems.
  • Deep Learning applications are best suited for situations which involve large amounts of data and complex relationships between different parameters.
  • Solving intuitive problems: training a neural network involves repeatedly showing it that “given an input, this is the correct output”. If this is done enough times, a sufficiently trained network will mimic the function you are simulating. It will also ignore inputs that are irrelevant to the solution. Conversely, it will fail to converge on a solution if you leave out critical inputs. This model can be applied to many scenarios.

In addition, we have limitations in the technology. For instance, we have a long way to go before a Deep Learning system can figure out that you are sad because your cat died (although it seems CogniToys, based on IBM Watson, is heading in that direction). The current focus is more on identifying photos or guessing a person’s age from photos (as with Microsoft’s Project Oxford API).

And we do indeed have a way to go, as Andrew Ng reminds us when he compares Artificial Intelligence to building a rocket ship:

“I think AI is akin to building a rocket ship. You need a huge engine and a lot of fuel. If you have a large engine and a tiny amount of fuel, you won’t make it to orbit. If you have a tiny engine and a ton of fuel, you can’t even lift off. To build a rocket you need a huge engine and a lot of fuel. The analogy to deep learning [one of the key processes in creating artificial intelligence] is that the rocket engine is the deep learning models and the fuel is the huge amounts of data we can feed to these algorithms.”

Today, we are still limited by technology from achieving scale. Google’s cat-recognizing system ran on 16,000 processors. In contrast, a human brain has an estimated 100 billion neurons!

There are some scenarios where back-propagation neural networks are well suited:

  • A large amount of input/output data is available, but you’re not sure how to relate the inputs to the outputs. Thus, we have a large number of “given an input, this is the correct output” examples which can be used to train the network, because it is easy to create a number of examples of correct behaviour.
  • The problem appears to have overwhelming complexity. The complexity arises from a low rules base, high dimensionality and data which is not easy to represent. However, there is clearly a solution.
  • The solution to the problem may change over time, within the bounds of the given input and output parameters (i.e., today 2+2=4, but in the future we may find that 2+2=3.8), and outputs can be “fuzzy” or non-numeric.
  • Domain expertise is not strictly needed because the output can be purely derived from the inputs. This is controversial because it is not always possible to model an output based on the input alone. However, consider the example of stock market prediction: in theory, given enough cases of inputs and outputs for a stock value, you could create a model which would predict unknown scenarios if it was trained adequately using Deep Learning techniques.
  • Inference: we need to infer new insights without observation. For example, based on the pitch of a sound we can infer an accent and hence a nationality.

Given an IoT domain, we could consider the top-level questions:

  • What existing applications can be complemented by Deep Learning techniques by adding an intuitive component (e.g. in smart cities)?
  • What metrics are being measured and predicted? And how could we add an intuitive component to the metric?
  • What applications exist in computer vision, image recognition, pattern recognition, speech recognition, behaviour recognition, etc. which also apply to IoT?

Now, extending more deeply into the research domain, here are some areas of interest that I am following.

Complementing Deep Learning algorithms with IoT datasets

In essence, these techniques/strategies complement Deep learning algorithms with IoT datasets.

1) Deep Learning algorithms and time series data: Time series data (coming from sensors) can be thought of as a 1D grid taking samples at regular time intervals, while image data can be thought of as a 2D grid of pixels. This allows us to model time series data with Deep Learning algorithms (most sensor/IoT data is time series). It is still relatively uncommon to combine Deep Learning and time series, but there are already some instances of this approach (e.g. Deep Learning for Time Series Modelling to predict energy loads using only time and temp data); see the windowing sketch after this list.

2) Multiple modalities: Multimodality in Deep Learning algorithms is being explored, in particular cross-modality feature learning, where better features for one modality (e.g. video) can be learned if multiple modalities (e.g. audio and video) are present at feature learning time.

3) Temporal patterns in Deep Learning: In their recent paper, Ph.D. student Huan-Kai Peng and Professor Radu Marculescu, from Carnegie Mellon University’s Department of Electrical and Computer Engineering, propose a new way to identify the intrinsic dynamics of interaction patterns at multiple time scales. Their method involves building a deep-learning model that consists of multiple levels; each level captures the relevant patterns of a specific temporal scale. The newly proposed model can also be used to explain the possible ways in which short-term patterns relate to long-term patterns. For example, it becomes possible to describe how a long-term pattern on Twitter can be sustained and enhanced by a sequence of short-term patterns, including characteristics like popularity, stickiness, contagiousness, and interactivity. The paper can be downloaded HERE.
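As a minimal illustration of point 1), the sketch below turns a 1D sensor reading into the kind of regular (samples × features) grid that deep-learning code expects, by slicing the series into fixed-length windows paired with the value to predict (the temperature series and the window length are made up for illustration):

```python
import numpy as np

# A made-up 1D sensor series: hourly temperature readings from one IoT device
rng = np.random.default_rng(42)
temps = 20 + 5 * np.sin(np.linspace(0, 12 * np.pi, 500)) + rng.normal(0, 0.3, 500)

def make_windows(series, window=24, horizon=1):
    """Slice a 1D time series into (input window, future value) training pairs,
    treating each window as a small 1D 'grid' of regularly spaced samples."""
    X, y = [], []
    for start in range(len(series) - window - horizon + 1):
        X.append(series[start:start + window])
        y.append(series[start + window + horizon - 1])
    return np.array(X), np.array(y)

X, y = make_windows(temps)
print(X.shape, y.shape)   # (476, 24) input windows and (476,) targets
```

Each (window, target) pair can then be fed into the same kind of training loop sketched earlier, e.g. to predict the next reading from the previous 24.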

Implications for Smart cities

I see Smart cities as an application domain for the Internet of Things. Many definitions exist for Smart cities/future cities. From our perspective, Smart cities refer to the use of digital technologies to enhance performance and wellbeing, to reduce costs and resource consumption, and to engage more effectively and actively with their citizens (adapted from Wikipedia). Key ‘smart’ sectors include transport, energy, health care, water and waste. A more comprehensive list of Smart City/IoT application areas is: intelligent transport systems (including automatic and autonomous vehicles), medical and healthcare, environment, waste management, air quality, water quality, accident and emergency services, and energy including renewables. In all these areas we could find applications to which we could add an intuitive component based on the ideas above.

Typical domains will include computer vision, image recognition, pattern recognition, speech recognition and behaviour recognition. Of special interest are new areas such as self-driving cars (e.g. the Lutz pod) and even larger vehicles such as self-driving trucks.

Conclusions

Deep Learning involves learning through layers, which allows a computer to build a hierarchy of complex concepts out of simpler concepts. Deep Learning is used to address intuitive applications with high dimensionality. It is an emerging field, and over the next few years, due to advances in technology, we are likely to see many more applications in the Deep Learning space. I am specifically interested in how IoT datasets can be used to complement Deep Learning algorithms. This is an emerging area with some examples shown above. I believe that it will have widespread applications, many of which we have not fully explored (as in the Smart city examples).

I see this article as part of an evolving theme. Future updates will explore how Deep learning algorithms could apply to IoT and Smart city domains. Also, I am interested in complementing Deep learning algorithms using IoT datasets.

I elaborate these ideas in the Data Science for Internet of Things program (modelled on the course I teach at Oxford University and UPM – Madrid). I will also present these ideas at the International Conference on City Sciences at Tongji University in Shanghai and at the Data Science for IoT workshop at the IoT World event in San Francisco.

Please connect with me on LinkedIn if you want to stay in touch and receive future updates.

by ajit at May 12, 2015 12:12 PM

May 11, 2015

Brad Frost » Brad Frost | Web Design, Speaking, Consulting, Music, and Art

Conducting an Interface Inventory

United homepage button styles

Conducting an interface inventory is a critical first step for establishing an effective interface design system. I’ve written about what interface inventories are and why they’re important, so here’s a recap:

An interface inventory is similar to a content inventory, only instead of sifting through and categorizing content, you’re taking stock and categorizing the components making up your website or app.

In order to present your product or service in a cohesive and consistent manner across a slew of properties, browsers, devices, and environments, it’s essential to take stock of what your interface is composed of. Documenting unique UI patterns highlights inconsistencies and sets the stage for a broader conversation about establishing a pattern-based workflow.

So how do you go about conducting an interface inventory? What are the considerations to keep in mind? Here are the steps to conduct a successful interface inventory:

  1. Round up the troops
  2. Prepare for screenshotting
  3. Screenshot exercise
  4. Present findings
  5. Regroup and establish next steps

Step 1: Round up the troops

At many workshops and consulting engagements, I’ve encountered folks who say something along the lines of “Oh yeah Sarah started doing an inventory of our UI.” While it’s great one member of the team is thinking systematically, it’s absolutely essential to get everyone in an organization to participate in the interface inventory exercise.

Gather a cross-disciplinary group of folks – designers, developers, project managers, business owners, QA, and really anyone that touches the experience – in a room to conduct the interface inventory exercise together. After all, one of the most crucial results of this exercise is to establish a shared vocabulary for everyone in the organization, and that requires input from everyone.

Step 2: Prepare for screenshotting

This exercise requires generating a ton of screenshots, so naturally you’ll need screenshotting software. Ultimately, it doesn’t really matter which tool you use, but everyone involved in the exercise should agree on a tool to make it easier to combine the results at the end. A few options:

Lately I’ve found Google Slides to be my go-to recommendation for conducting interface inventories. It provides a canvas for freeform positioning of images, it’s chunked out into slides for better categorization, and it’s web-based so it can be shared with ease. I’ve created a template for you to use if you’re interested.

Step 3: Screenshot exercise

And now for the main event, which is to document, name, and categorize unique UI patterns across the entire experience.

I typically break people up into pairs and assign them a UI category or categories, but this all depends on how many people are participating. I try to timebox the screenshotting exercise to avoid going down a rabbit hole that ends up lasting all day. The amount of time to allocate will vary depending on how many people are participating, but I find between 30 and 90 minutes to be sufficient for a first pass of the interface inventory.

So what categories of interface elements should be captured? Obviously the interface element categories are going to vary from experience to experience, but here are a few categories to potentially start with:

Some Interface Inventory Categories

  • Global – things like headers and footers and other global elements that are shared across the entire UI
  • Navigation – primary navigation, footer navigation, pagination, breadcrumbs, interactive component controls, and basically anything that’s used to navigate around a UI
  • Image types – Logos, heroes, avatars, thumbnails, backgrounds, etc., and any other unique image pattern that shows up in the UI. I’ve found this one of the most challenging categories to round up.
  • Icons – a special type of image worthy of its own category. Capture magnifying glasses, social icons, arrows, hamburgers, spinners, favicons, and every other interface icon across the experience
  • Forms – Inputs, text areas, select menus, checkboxes, switches, radio buttons, sliders, and other forms of user input
  • Buttons – The quintessential UI element. Capture all the unique button patterns throughout an experience: primary, secondary, big, small, disabled, active, loading, etc
  • Headings – h1, h2, h3, h4, h5, h6 and variations of typographic headings. This can be another challenging category as many elements might be considered headings
  • “Blocks” – I call collections of headings and/or images and/or excerpts “blocks”.  These are relatively simple clusters of interface that are built for reuse (see Nicole Sullivan’s write-up about the Media Object)
  • Lists –  Unordered, ordered, definition, bulleted, numbered, lined, striped, etc. Any collection of elements presented in a list-type format
  • Interactive Components – Accordions, tabs, carousels, and other modules with moving parts
  • Media – Video players, audio players and other rich media elements
  • 3rd Party – Widgets, iframes, stock tickers, social buttons, anything that isn’t hosted on your domain
  • Advertising – A special kind of 3rd party category which includes all ad formats and dimensions
  • Messaging – Alerts, success, errors, warnings, validation, in-progress, popups, tooltips, 404s, etc. This is a challenging category as these messages often require user action to expose, but it’s essential to get messaging right.
  • Colors – Capture all unique colors presented in the interface. This category can be aided by fantastic tools like CSS Stats and Stylify Me (a quick extraction sketch follows this list).
  • Animation – This is a special category as it involves capturing UI animation. This requires using screen recording software such as Quicktime to capture motion (you can capture screens in Quicktime with File > New Screen Recording).
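For the colors category in particular, a rough first pass can even be scripted. Here’s a minimal sketch in Python (my own quick hack, not part of CSS Stats or Stylify Me): it only catches hex and rgb()/rgba() values, and the stylesheet path is a made-up example.

```python
import re

# Pull every hex and rgb()/rgba() color out of a stylesheet so the unique
# values can be pasted onto the Colors slide of the interface inventory.
color_pattern = re.compile(r"#[0-9a-fA-F]{3,8}\b|rgba?\([^)]*\)")

with open("styles.css") as f:    # made-up path: point it at your own CSS
    css = f.read()

colors = sorted(set(match.lower() for match in color_pattern.findall(css)))
for color in colors:
    print(color)
```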

Again, these categories aren’t set in stone and will vary based on the nature of the project. Once these patterns are documented, the person (or pair of people) should drop them into Google Slides and cluster them together. Now the fun part: naming these patterns. Obviously it’s important to use existing conventions wherever possible, but you’ll quickly find out many UI patterns either don’t have names or have conflicting/confusing names.

Step 4: Present Findings

Screenshotting and naming can be exhausting and overwhelming, so be sure to take a break after the gathering exercise is complete. Get some food, grab some coffee, and stretch your legs.

Once everyone’s recharged, reconvene and spend about 10-15 minutes per category presenting the findings to the group. This is where things get interesting. Presenting to the group allows the team to start discussing the naming conventions and rationale for these UI patterns.  As I do more of these exercises with teams, it’s fascinating to hear that designers, developers, and product owners often have different names for the same UI pattern.

Once every category has been presented and discussed, have all the participants send their slides to the exercise leader, who merges everything into one uber-document.

Step 5: Regroup and Establish Next Steps

The hard work is done, so now what? This exercise should be used as a conversation starter to get the entire organization on board with crafting an interface design system.

The uber-document can be shopped around to stakeholders to get buy-in for establishing an interface design system. The beautiful thing about seeing all the disparities of an interface laid bare for everyone to see is that it becomes crystal clear something needs to be done about it.

In addition to selling the idea, the interface inventory should be used as the groundwork for a future pattern library. Gather a smaller cross-disciplinary team together to go through the uber-document and have some conversations about it. Some important questions for this group to cover:

  • What names should we settle on?
  • What patterns should stay, and which should go?
  • Can we merge patterns together easily?
  • How do developers, designers, and managers begin to utilize this shared vocabulary?
  • How do we translate this exercise into a living pattern library?

A Necessary First Step

Interface inventories are a crucial first step for establishing deliberate design systems. While they don’t guarantee long-term pattern library success, the exercise surfaces crucial conversations and can help create advocacy for pattern-based thinking across the organization.

by Brad Frost at May 11, 2015 04:46 PM

mobiForge blog

Measuring page weight

It used to be so easy. Measuring the weight of a web page in the early days of the web was merely a matter of waiting for the page to finish loading and then counting up the size of its constituent resources.
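For old times’ sake, here’s a rough sketch of that “count up the constituent resources” approach in Python, using the requests and BeautifulSoup libraries (it only follows images, scripts and stylesheets referenced directly in the HTML, so it misses anything loaded later by JavaScript, and the URL is just an example):

```python
import requests
from urllib.parse import urljoin
from bs4 import BeautifulSoup

def page_weight(url):
    """Naive page-weight estimate: the HTML plus every image, script and
    stylesheet it references directly (ignores JavaScript-loaded resources)."""
    page = requests.get(url)
    soup = BeautifulSoup(page.text, "html.parser")

    resources = set()
    for tag in soup.find_all(["img", "script"]):
        if tag.get("src"):
            resources.add(urljoin(url, tag["src"]))
    for tag in soup.find_all("link", rel="stylesheet"):
        if tag.get("href"):
            resources.add(urljoin(url, tag["href"]))

    total_bytes = len(page.content)
    for resource in resources:
        try:
            total_bytes += len(requests.get(resource).content)
        except requests.RequestException:
            pass   # skip resources that fail to load
    return total_bytes

print(page_weight("https://example.com/") / 1024, "KB")   # example URL
```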

by ronan at May 11, 2015 03:09 PM

May 05, 2015

mobiForge blog

Google's Mobile-Friendly Test: can a Spruce Goose really fly?

There has been a lot written recently about Google's mobile-friendly search algorithm update, starting first with SEO expert blogs, and ending up being covered by mainstream media sites. The update promised to penalise sites for not being mobile-friendly.

by ruadhan at May 05, 2015 04:57 PM

Martin's Mobile Technology Page

My Gigabit/s Fiber in Paris Is Already Outdated - Say Hello to 10 and 40 Gbit/s PON

Since I know how a gigabit GPON fiber link feels and performs, and since it is being deployed significantly in some countries, I can't help but wonder when telecom operators in other countries will stop praising DSL vectoring, with its 100 Mbit/s downlink and a few Mbit/s in the uplink, as the technology of the future and become serious about fixed-line optical network deployment. Having said that, I recently noticed that the Gigabit Passive Optical Network (GPON) I have in Paris, with a line rate of 1 Gbit/s, is actually quite out of date already.

10G-PON, already specified since 2010, is the successor technology and, as the abbreviation suggests, offers a line rate of 10 Gbit/s. According to the Wikipedia article, that line rate can be shared by up to 128 users. And thankfully, PON networks are upgradable to 10G-PON, as the fiber cable is reused and only the ONT is changed. Backwards compatibility is ensured because 10G-PON uses a different wavelength than GPON, so both can coexist on the same fiber strand, allowing a gradual upgrade of subscribers by first changing the optical equipment in the distribution cabinet and subsequently the fiber devices in people's homes.
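To put that 128-user figure into perspective, here's a quick back-of-the-envelope split of the line rates mentioned in this post (a minimal sketch; applying the 128-way split to all three generations is my assumption, since only 10G-PON quotes that number, and real deployments are oversubscribed, so actual per-user throughput is normally much higher than this worst case):

```python
# Worst-case per-subscriber share of the downstream line rate if a PON tree
# were fully loaded and the capacity split evenly (ignores framing overhead
# and the statistical multiplexing that boosts real-world throughput).
line_rate_gbps = {"GPON (1 Gbit/s, as deployed here)": 1,
                  "10G-PON": 10,
                  "NG-PON2 (downstream)": 40}
subscribers = 128   # the maximum split quoted for 10G-PON

for name, rate in line_rate_gbps.items():
    per_user_mbps = rate * 1000 / subscribers
    print(f"{name}: about {per_user_mbps:.0f} Mbit/s per subscriber")
```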

But that's not all, as standardization of the successor to the successor is already in full swing. NG-PON2 is the new kid on the block and will offer 40 Gbit/s downlink speeds on several wavelengths over a single fiber cable and 10 Gbit/s in the uplink direction. For details, have a look at the ITU G.989.1 document, which contains the requirements specification, and G.989.2 for the physical layer specification.

So who's still talking about a measly 100 Mbit/s in the downlink?

by mobilesociety at May 05, 2015 06:18 AM

May 04, 2015

Brad Frost » Brad Frost | Web Design, Speaking, Consulting, Music, and Art

Sparkbox Labs

The fine folks at Sparkbox created a hub for all the little tools and documents they use in their process. Some of it is code, some of it is documentation, but it's all helpful. These resources (as well as Filament Group's code hub) look like a great way to brand community contributions and would certainly look attractive to potential employees. Really great stuff.

by Brad Frost at May 04, 2015 04:00 PM