
October 30, 2014

mobiForge blog

Why Swift Flies for iOS Developers

Now that the dust has settled somewhat on Swift, the new language on the block for developing iOS and OS X applications, we take a look at its impact and its improvements over its predecessor, Objective-C. Apple claims Swift is a modern, safe, and powerful language for developing for iOS and OS X. Just how powerful is Swift compared to the venerable Objective-C? And how does it make developing applications easier and safer?

by weimenglee at October 30, 2014 09:42 AM

October 29, 2014

London Calling

Twitter and IBM join forces

SAN FRANCISCO and ARMONK, NY – 29 Oct 2014: Twitter and IBM (NYSE: IBM) today announced a landmark partnership that will help transform how businesses and institutions understand their customers, markets and trends – and inform every business decision. The alliance brings together Twitter data that distinctively represents the public pulse of the planet with IBM’s industry-leading cloud-based analytics, customer engagement platforms, and consulting services.

The collaboration will focus on three areas:

Integration of Twitter data with IBM analytics services on the cloud: IBM plans to offer Twitter data as part of select cloud-based services, including IBM Watson Analytics, a new cognitive service in the palm of your hand that brings intuitive visualization and predictive capabilities to business users; and a cloud-based data refinery service that enables application developers to embed data services in applications. Entrepreneurs and software developers will also be able to integrate Twitter data into new cloud services they are building with IBM’s Watson Developer Cloud or IBM Bluemix platform-as-a-service.

New data-intensive capabilities for the enterprise: IBM and Twitter will deliver a set of enterprise applications to help improve business decisions across industries and professions. The first joint solution will integrate Twitter data with IBM ExperienceOne customer engagement solutions, allowing sales, marketing, and customer service professionals to map sentiment and behavior to better engage and support their customers.

Specialized enterprise consulting: IBM Global Business Services professionals will have access to Twitter data to enrich consulting services for clients across business. Additionally, IBM and Twitter will collaborate to develop unique solutions for specific industries such as banking, consumer products, retail, and travel and transportation. The partnership will draw upon the skills of tens of thousands of IBM Global Business Services consultants and application professionals including consultants from the industry’s only integrated Strategy and Analytics practice, and IBM Interactive Experience, the world’s largest digital agency.

“Twitter provides a powerful new lens through which to look at the world – as both a platform for hundreds of millions of consumers and business professionals, and as a synthesizer of trends,” said Ginni Rometty, IBM Chairman, President and CEO. “This partnership, drawing on IBM’s leading cloud-based analytics platform, will help clients enrich business decisions with an entirely new class of data. This is the latest example of how IBM is reimagining work.”

“When it comes to enterprise transformation, IBM is an undisputed global leader in enabling companies to take advantage of emerging technologies and platforms,” said Dick Costolo, Twitter CEO. “This important partnership with IBM will change the way business decisions are made – from identifying emerging market opportunities to better engaging clients, partners and employees.”

With the development of new solutions to improve business decisions across industries and professions, IBM and Twitter will be able to enrich existing enterprise data streams to improve business decisions. For example, the integration of social data with enterprise data can help accelerate product development by predicting long-term trends or drive real-time demand forecasting based on real-time situations like weather patterns.

“IBM brings a unique combination of cloud-based analytics solutions and a global services team that can help companies utilize this truly unique data,” said Chris Moody, Vice President of Twitter Data Strategy. “Companies have had successes with Twitter data – from manufacturers more effectively managing inventory to consumer electronic companies doing rapid product development. This partnership with IBM will allow faster innovation across a broader range of use cases at scale.”

IBM has established the world’s deepest portfolio in big data and analytics consulting and technology expertise based on experiences drawn from more than 40,000 data and analytics client engagements. This analytics portfolio spans research and development, solutions, software and hardware, and includes more than 15,000 analytics consultants, 4,000 analytics patents, 6,000 industry solution business partners, and 400 IBM mathematicians who are helping clients use big data to transform their organizations.

For more information regarding the new Twitter and IBM collaboration, please visit www.ibm.com/IBMandTwitter or https://blog.twitter.com/ibm, and follow the conversation at #IBMandTwitter.


To receive future posts you can subscribe via email or RSS, download the Android app, or follow me on Twitter @andrewgrill.

by Andrew Grill at October 29, 2014 04:13 PM

Martin's Mobile Technology Page

Another LTE First For Me: Intercontinental Roaming

I've had quite a few LTE and roaming firsts this year and, as I described in this post, 2014 is the year when affordable global Internet roaming finally became a reality. Apart from having used a couple of LTE networks in Europe over the last few months, I can now also report my first intercontinental LTE experience. When I recently traveled with my German SIM card to the United States, I was greeted by an LTE logo on both the T-Mobile US and AT&T networks. Data connectivity was quick (though I didn't run speed tests, so I can't give a number), and with the 20 bands supported by my mobile device I could detect quite a number of LTE networks at the place in Southern California where I stayed for a week:

  • Verizon was active in band 13 (700 MHz)
  • Metro-PCS in band 4 (1700/2100 MHz)
  • AT&T was available in band 4 (1700/2100 MHz) and band 17 (700 MHz)
  • Sprint had a carrier on air in band 25 (1900 MHz, FDD) and band 41 (2500 MHz, TDD)
  • T-Mobile US had a carrier on air in band 4 (1700/2100 MHz)

And in case you wonder how you can find LTE transmissions without special equipment, have a look here. It's not quite straightforward to map transmissions to network operators, but it's not impossible with a bit of help from Wikipedia (see here and here) and 3GPP's band plan, which shows the uplink and downlink frequencies of the different bands.
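Once you know the downlink frequency of a carrier, mapping it back to a band is a simple table lookup. A sketch in Python (band data condensed from the 3GPP TS 36.101 band tables; the helper name is my own):

```python
# A few 3GPP E-UTRA bands and their downlink ranges in MHz,
# condensed from the 3GPP TS 36.101 band tables.
LTE_BANDS_DL_MHZ = {
    4:  (2110, 2155),   # AWS 1700/2100, FDD
    13: (746, 756),     # 700 MHz, FDD
    17: (734, 746),     # 700 MHz, FDD
    25: (1930, 1995),   # 1900 MHz, FDD
    41: (2496, 2690),   # 2500 MHz, TDD
}

def candidate_bands(dl_mhz):
    """Return the bands whose downlink range contains the given frequency."""
    return [band for band, (lo, hi) in LTE_BANDS_DL_MHZ.items()
            if lo <= dl_mhz <= hi]

print(candidate_bands(751))   # a 751 MHz downlink carrier -> band 13
print(candidate_bands(2120))  # a 2120 MHz downlink carrier -> band 4
```

From there, combining the band with the published spectrum licenses (e.g. from Wikipedia) narrows the carrier down to an operator.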

by mobilesociety at October 29, 2014 06:47 AM

October 28, 2014

Volker on Mobile

How Not To Do it: the Fallacy of Big Data & CRM (@slideshare @linkedin)

So today I receive an email, subject line "your expertise is requested". The sender? Slideshare. Now, if you read this blog regularly (and, yes, I know that I haven’t been blogging...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]

by Volker at October 28, 2014 02:42 PM

October 27, 2014

Cloud Four Blog

The Forgotten Responsive Images Spec: image-set()

Now that responsive images have landed in Chrome and Opera, I’ve started working on a flowchart to help people decide how to use these new features.

This work led me to wonder what ever happened to the image-set() specification.

For those who haven’t heard of the image-set() specification, it was a precursor to srcset which is now part of the responsive images specification. It was originally proposed in February 2012, and WebKit-based browsers shipped prefixed support for it in August of the same year.

There are a few differences between srcset and image-set(), but the biggest one is that image-set() deals with CSS images whereas srcset is an attribute on the <img> element.

How we forgot about image-set()

In 2012, the image-set() spec was still under development and we were cautioned against using it at the time. Because media queries were available in CSS, handling CSS images in responsive designs wasn’t as difficult as handling responsive images in HTML itself.

So the Responsive Images Community Group focused on how to solve the <img> problem. And I gradually forgot about image-set() thinking that it was moving forward in the CSS Working Group and browsers.

It seems that I may not have been the only one who forgot about image-set() because, despite being two years older than <picture>, it is still only supported behind a vendor prefix in Chrome and Safari. Worse, it isn't on the roadmap for either Internet Explorer or Firefox.

Why we need image-set()

We need image-set() for the exact same reasons we need srcset and sizes. Whenever we are dealing with a CSS image that fits the resolution switching use case instead of the art direction use case, we should be using image-set() instead of media queries.

In fact, you can take nearly everything I wrote in Don’t Use <picture> (most of the time) and substitute image-set() for srcset and media queries for <picture> and the same logic applies.

We need image-set() for resolution switching because it gives browsers the choice of what size image source will work best. And in cases where all we are dealing with is resolution switching, the browser is better informed than we are as web authors.
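As a rough illustration (the selector and file names are made up for this sketch), a resolution-switching CSS image would look something like this, with the browser free to pick the best candidate:

```css
.hero {
  /* Fallback for browsers with no image-set() support at all */
  background-image: url("photo-1x.jpg");
  /* Prefixed form shipped by WebKit-based browsers since 2012 */
  background-image: -webkit-image-set(
    url("photo-1x.jpg") 1x,
    url("photo-2x.jpg") 2x
  );
  /* Unprefixed form from the specification */
  background-image: image-set(
    url("photo-1x.jpg") 1x,
    url("photo-2x.jpg") 2x
  );
}
```

Compare that with the media-query equivalent, which forces us, not the browser, to decide exactly when the 2x asset is used.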

Help get image-set() into browsers

We need your help to make sure that image-set() is implemented in browsers. Help us by voicing your support and ask browsers to prioritize this feature:

by Jason Grigsby at October 27, 2014 04:06 PM

October 24, 2014

Brad Frost Web | Web Design, Speaking, Consulting, Music, and Art

BDConf Sketch

The fine folks at BDConf whipped up this really cool sketch of me for the upcoming Orlando event. Color me flattered.


by Brad Frost at October 24, 2014 07:52 PM

Responsive Images

There are three topics I avoid discussing: religion, politics, and responsive images. But now that the responsive images dust is settling, I figured it’s time to face the music and actually learn this stuff.

So here’s how I’m going to write 95% of my responsive images:

<img src="small.jpg" srcset="small.jpg [smallwidth]w, large.jpg [largewidth]w" alt="Alt Text" />

That’s it. See a demo.

My biggest priority is avoiding sending gigantic images to small screens, and this does the trick. The [smallwidth] and [largewidth] are the widths of the image assets, which’ll change per use case. I’ll use the Picturefill polyfill to make it all work. Most of my images won’t need to be swapped out, just the big honkin’ hero image types.

I could further explain this, but in an effort to keep things simple I’ll stop writing now.

by Brad Frost at October 24, 2014 03:41 PM

October 22, 2014

mobiForge blog

Detecting Language Preference from the Browser with Accept Header

Some time ago I was on a trip to Germany for the Smashing Mag event. Several websites I visited (including the world’s largest search engine) asked me to confirm my language preferences based on my current physical location. This struck me as a rather inefficient approach to setting language preferences, especially given that the language of the browser is readily available to web publishers as part of the HTTP request.
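Concretely, the browser sends its preferences in the Accept-Language request header, which a server can parse in a few lines. A sketch in Python (the header value shown is a hypothetical example):

```python
# Parse an Accept-Language header into (language-tag, quality) pairs,
# ordered by preference. Entries without an explicit q-value default to 1.0.
def parse_accept_language(header):
    langs = []
    for part in header.split(","):
        piece = part.strip()
        if not piece:
            continue
        if ";q=" in piece:
            tag, q = piece.split(";q=", 1)
            try:
                weight = float(q)
            except ValueError:
                weight = 0.0
        else:
            tag, weight = piece, 1.0
        langs.append((tag.strip(), weight))
    # Highest q-value first; Python's sort is stable, so ties keep header order
    return sorted(langs, key=lambda item: -item[1])

print(parse_accept_language("de-DE,de;q=0.9,en;q=0.7"))
```

A site could serve the first supported language from this list and only fall back to geolocation when the header is absent.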

by mclancy at October 22, 2014 11:49 AM

Martin's Mobile Technology Page

Opera Turbo Turned Off After 30 Seconds

Opera and its server-side compression have helped me a lot over the years to overcome issues like slow connections or strange operator proxies blocking access to websites, such as the strange case I came across back in 2008. Fortunately, networks have become faster and other strange effects caused by meddling with data have also receded, so these days I usually use the full Opera browser instead of Opera Mini or the Opera Turbo functionality. But every now and then I end up in a GSM-only place, and so far the server-side compression has always helped. Well, up until now.

When I recently wanted to use Opera Turbo again to browse my favorite websites in a bandwidth-starved area, it took a long time because all the advertisements I can so conveniently block locally with a modified hosts file had to be loaded again. Not only did the pages take long to load due to the advertising, but splash screens and other intrusive ads are just not my cup of tea. So after about 30 seconds I switched off Opera Turbo again and resorted to a non-proxied connection, which was no slower for my favorite pages than using server-side compression, as all the advertising was stripped out. And not only was it not slower, I also didn't have to put up with splash-screen ads. So for me, the days of using server-side compression to speed up my web experience in bandwidth-limited areas are definitely over...

by mobilesociety at October 22, 2014 06:32 AM

October 21, 2014

Open Gardens

Predictive Analytics as a service for IoT


This post is a personal viewpoint based on my teaching (IoT and Machine Learning) at the City sciences program at UPM in Madrid – Technical University of Madrid and at Oxford University (with a mobile perspective).

Predictive analytics is critical for IoT, but most companies do not have the skillsets to develop their own predictive analytics engine. The objective of this effort is to provide a predictive analytics interface for Hypercat. We aim to provide a solution accessed through a Hypercat API and a library. Whenever possible, we will use Open Source. We will also encapsulate industry best practices into the solution. The post is also related to extending the discussions at the event Smart cities need a Trusted IoT foundation.

Data and Analytics will be the key differentiator for IoT.

A single sensor collecting data at one-second intervals will generate 31.5 million datapoints a year (source: Intel/WindRiver). However, the value lies not just in one sensor’s datapoints, but rather in the collective intelligence gleaned from thousands (indeed millions) of sensors working together.

As I discuss below, this information (and more specifically the rate of IoT based sensor information and its real time nature) will make a key difference for IoT and Predictive analytics.

IoT and predictive analytics will change the nature of decision making and the competitive landscape of industries. Industries will have to make thousands of decisions in near real time. With predictive analytics, each decision will improve the model for subsequent decisions (also in near real time). We will recognize patterns, make adjustments, and improve performance based on data from multiple people and sensors.

IoT and predictive analytics will enable devices to identify, diagnose, and report issues more precisely and quickly as they occur. This will create a ‘closed loop’ model where the predictive model improves with experience. We will thus go from identifying patterns to making predictions – all in real time.

However, the road to this vision is not quite straightforward. The two worlds of IoT and predictive analytics do not meet easily.

Predictive analytics needs the model to be trained before it makes a prediction. Creating a model and updating it on a continuous, real-time basis with streaming IoT data is a complex challenge. It also does not fit the traditional MapReduce model and its inherently batch-oriented nature. This challenge is already being addressed (Moving Hadoop beyond batch processing and MapReduce) but will become increasingly central as IoT becomes mainstream.


IoT and Predictive analytics – opportunities

For IoT and predictive analytics, processing will take place both in the Cloud and closer to the edge. Not all data will be sent to the Cloud at all times. The newly launched Egburt from Camgian Microsystems is an example of this new trend. Some have called this trend ‘data gravity’, where computing power is brought to the data as opposed to processing data in a centralized location.

In addition, the sheer volume of IoT data leads to both challenges and opportunities. For example, 100 million points per second in a time series is not uncommon. This leads to specific challenges for IoT (Internet of Things – time series data challenge).

Here are some examples of possible opportunities for IoT and Predictive analytics where groups of sensors work together:

  • We could undertake system-wide predictive maintenance of offshore equipment like wind farms with multiple turbines (i.e. the overall system as opposed to a specific turbine). If we predict a high likelihood of failure in one turbine, we could dynamically reduce the load on that turbine by switching it to a lower performance mode.
  • Manage the overall performance of a group of devices – again, for the wind farm example, individual turbines could be tuned together to achieve optimal performance, where individual pieces of equipment have an impact on the overall performance.
  • Manage the ‘domino effect’ of failure – as devices are connected (and interdependent), failure of one could cascade across the whole network. By using predictive analytics, we could anticipate such cascading failures and reduce their impact.

IoT and Predictive analytics – challenges

Despite the benefits, the two worlds of IoT and predictive analytics do not meet very naturally.

In a nutshell, predictive analytics involves extracting information from existing data sets to identify patterns that help predict future outcomes and trends for new (unseen) scenarios. This allows us to predict what will happen in the future with an acceptable level of reliability.

To do this, we must:

a) Identify patterns from existing data sets

b) Create a model which will predict the future


Doing these two steps in real time is a challenge. Traditionally, data is fed to a system in a batch. But for IoT, we have a continuous stream of new observations arriving in real time, and the outcome (i.e. the business decision) also has to be produced in real time. Today, some systems like credit card authorization perform some real-time validations, but for IoT the scale and scope will be much larger.


So, this leads to more questions:

a) Can the predictive model be built in real time?

b) Can the model be updated in real time?

c) How much historical data can be used for this model?

d) How can the data be pre-processed, and at what rate?

e) How frequently can the model be retrained?

f) Can the model be incrementally updated?
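As a toy illustration of question f) – incremental updates – here is a sketch (not tied to Hypercat or any particular product) of a model that learns and scores in a single streaming pass, using Welford's online algorithm so no historical batch is needed:

```python
# A streaming anomaly model: each new sensor reading both updates the model
# and can be scored against it, so training and prediction happen in one pass.
class StreamingAnomalyModel:
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations (Welford's algorithm)

    def update(self, x):
        """Incorporate one new observation in O(1) time and memory."""
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def score(self, x):
        """Z-score of a new reading against everything seen so far."""
        if self.n < 2:
            return 0.0
        std = (self.m2 / (self.n - 1)) ** 0.5
        return abs(x - self.mean) / std if std else 0.0

model = StreamingAnomalyModel()
for reading in [20.1, 19.8, 20.3, 20.0]:
    model.update(reading)
# A reading far outside the learned distribution gets a large score
outlier_score = model.score(35.0)
```

The same shape generalizes: replace the running mean/variance with any incrementally updatable model and the decision loop stays real time.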


There are also many architectural changes for real time, e.g. in-memory processing, stream processing, etc.



According to Gartner analyst Joe Skorupa: “The enormous number of devices, coupled with the sheer volume, velocity and structure of IoT data, creates challenges, particularly in the areas of security, data, storage management, servers and the data center network, as real-time business processes are at stake.”

Thus, IoT will affect many areas: security, business processes, consumer privacy, data storage management, server technologies, the data center network, etc.

The Hypercat platform provides a mechanism to manage these complex changes.

We can model every sensor + actuator and person as a digital entity. We can assign predictive behaviour to digital objects (a digital entity has processing power, an agenda, and access to metadata). We can model and assign predictive behaviour to multiple levels of objects (from the whole refinery down to a single valve).

We can model time-varying data and predict behaviour based on inputs at a point in time. The behaviour is flexible (resolved at run time) and creates a risk prediction and a feedback loop to modify behaviour in real time, along with a set of rules.

We can thus cover the whole lifecycle – starting with discovery of new IoT services in a federated manner, managing security and privacy, and ultimately creating autonomous, emergent behaviour for each entity.

All this in the context of a security and interoperability framework.


Predictive analytics as a service?

Based on the above, predictive analytics cannot be just an API – it would be more of a dynamic service which can provide the right data, to the right person, at the right time and place. The service would be self-improving (self-learning) in real time.

I welcome comments on the above. You can email me at ajit.jaokar at futuretext.com or post in the Hypercat LinkedIn forum






by ajit at October 21, 2014 09:43 AM

Brad Frost Web | Web Design, Speaking, Consulting, Music, and Art


I’ve really wanted to explore the Indie Web movement a lot more. Philosophically the whole thing really resonates with me, but initial cursory glances left me wondering where to start. Indiewebify.me looks like a great checklist/starting point.

by Brad Frost at October 21, 2014 07:42 AM

October 20, 2014

Brad Frost Web | Web Design, Speaking, Consulting, Music, and Art

“I Don’t Know”

I think it’s high time that we got rid of the stigma attached to “I Don’t Know”. This is especially relevant in the web development industry – where the technologies we use come and go as fast as the speed of light.

My friend and fellow Pittsburgher Jason Head hits the nail on the head.

Get comfortable not knowing everything, because increasingly there’s just too much to know.

by Brad Frost at October 20, 2014 08:44 PM

Kai Hendry's blog

Experiencing CoreOS+Docker


Once upon a time there was chroot (notice, it's just 50 LOC?!). chroot was a simple way of sandboxing your application. It didn't really work as well as some people wanted, and along came Docker, which is a front end to LXC. It works, though it has a LOT of SLOC/complexity/features. Docker is monolithic and depends on Linux.

Today we have general packaged distributions like Debian & Archlinux. Their main fault was probably being too general, with poor abilities to upgrade and downgrade. Along comes CoreOS, a lightweight Linux OS with systemd (a modern init) & Docker. CoreOS is also monolithic and depends on Linux.

I've attempted to understand CoreOS before, though since I needed to move Greptweet to a VPS with more disk space... quickly... I "deep dived" into CoreOS & Docker, and here is my write-up of the experience. Tip #1: the default user for CoreOS is "core", so once you get, for example, your CoreOS droplet going, ssh in as core@.


The 20-LOC Greptweet Dockerfile took me almost all day to create, though it was my favourite accomplishment. I studied other Archlinux and Ubuntu Dockerfiles on GitHub for guidance on how to achieve this.

So now I have a succinct file that describes the environment Greptweet needs to function. I found it astonishing that the container for running a PHP app on Nginx is almost 1GB!

Yes, I need to re-implement greptweet in Golang to greatly reduce this huge bloat of a dependency!

Read only filesystem on CoreOS means no superuser or what?

I was hoping CoreOS would do away with root altogether. I'm seriously tired of sudo. I noticed the read-only mounts whilst trying to disable password ssh logins to avoid loads of:

Failed password for root from $badman_IP port $highport ssh2

in my journalctl. OK, I thought, if they are going to fix the sshd_config, maybe they would do away with root?! PLEASE.

Haunting permissions

I hate UNIX permissions, hate hate hate. So with Docker your data is mounted on the host IIUC and your app stays firmly containerized.

But when your app writes data out onto a mount point, what THE HELL should the permissions be? I ended up just running chmod -R 777 on my volume's mountpoint, though I should probably have used setfacl. What a mess!

User/group 33

How am I supposed to log CoreOS/Docker?!

I'm confused about Volume mounts. I run Greptweet like so: /usr/bin/docker run --name greptweet1 -v /srv/www/greptweet.com:/srv/http/u -p 80:80 greptweet, and /srv/http/u/ is where the data lives. But HOW am I supposed to get at my container's logs? Another volume mount?

How does CoreOS envision managing httpd logs? I don't understand. And how am I supposed to run logrotate!? "One step forward, two steps back" is playing in my mind.


A jarring thing is that when you run a Docker container, you are (IIUC) expected to run one process, i.e. the httpd.

Unfortunately with nginx, to get PHP working you need to run a separate (FastCGI) PHP process next to the nginx httpd, hence the Greptweet Dockerfile uses Python's supervisor daemon to manage both processes. Urgh. I copied this paradigm from another Dockerfile. Tbh I was expecting to manage the processes with systemd inside the container. Now I have Python crapware in my container for managing the nginx/php processes. Suck.
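The pattern looks roughly like this (a hypothetical sketch, not Greptweet's actual Dockerfile; the package names assume an Ubuntu base image):

```dockerfile
# supervisord runs in the foreground as the container's single top-level
# process and keeps both nginx and php5-fpm alive.
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y nginx php5-fpm supervisor
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
EXPOSE 80
CMD ["/usr/bin/supervisord", "-n"]
```

The supervisord.conf then declares one [program:...] section per process, which is exactly the multi-process workaround the one-process-per-container philosophy pushes you into.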

NO Cron

Greptweet used cron to create backups, relay stats and generate reports. Now AFAICT I don't have the basic utility of cron in my container. Now what?!



As mentioned in my previous blog post on CoreOS, I was quite excited about having "free" updates to my core host system. Sadly, after looking at the logs, I'm not impressed.

There is little visibility into the actual update. I have recently found https://coreos.com/releases/ but it uses some horrible XML manifest to layer on the updates. Why can't the whole rootfs just be in git, ffs?

Furthermore I noticed locksmithd which I think reboots the machine, but I'm not sure.

Oct 18 03:11:11 dc update_engine[458]: <request protocol="3.0" version="CoreOSUpdateEngine-" updaterversion="CoreOSUpdateEngine-0
Oct 18 03:11:11 dc update_engine[458]: <os version="Chateau" platform="CoreOS" sp="444.5.0_x86_64"></os>
Oct 18 03:11:11 dc update_engine[458]: <app appid="{e96281a6-d1af-4bde-9a0a-97b76e56dc57}" version="444.5.0" track="stable" from_track="
Oct 18 03:11:11 dc update_engine[458]: <ping active="1"></ping>
Oct 18 03:11:11 dc update_engine[458]: <updatecheck targetversionprefix=""></updatecheck>
Oct 18 03:11:11 dc update_engine[458]: <event eventtype="3" eventresult="2" previousversion=""></event>
Oct 18 03:11:11 dc update_engine[458]: </app>
Oct 18 03:11:11 dc update_engine[458]: </request>

I've glanced over https://coreos.com/using-coreos/updates/ several times now and it's still not clear to me. As an operating system maintainer myself, I can say that our gitfs upgrade system for Webconverger updates is MUCH CLEARER than how CoreOS updates are handled. I wonder when Docker 1.3 is going to hit CoreOS stable.

Keeping my Archlinux container up to date is also a bit of a mystery to me...

CoreOS packaging is just WEIRD

It took me way too long to figure out how to enter a Docker 1.2 container and have a look. nsenter will be replaced by Docker 1.3's docker exec, but the way it is installed was very intriguing.

In fact, package management in CoreOS's eyes, I think, means starting a shared/privileged container and mapping it back to the host system. That's a pretty darn wild way of doing things imo.

I've been BATTLING TO GET TMUX running. It was suggested that this screen CoreOS install guide might help me. There is also an odd "toolbox" alias to a Fedora container with tools, but it doesn't map back to the host. All this for a terminal multiplexer. OMG.

Starting your Docker container in CoreOS was non-trivial

Here is Greptweet's service file.

CoreOS's launch guide was a bit strange to me. Am I supposed to publish my Greptweet image so that the docker pull works? It could be a lot simpler, I feel. I.e. why doesn't the docker daemon manage/start the containers itself!?


I think the basic idea of a lightweight host OS (CoreOS) and containers (Docker) has legs. I just wish it was as simple as chroot. Now I'm left wondering how Ansible/Chef/Puppet/Vagrant did such a bad job compared to the Dockerfile. Or perhaps blaming VPS hosters, who never really got a decent API together to move/expand/inspect their VPS volumes.

Gosh, how did we get into this mess?!

So now system administrators run hypervisors (aka CoreOS) and spin up VPSes (aka Docker containers) all by themselves. It seems like another level of abstraction that empowers system administrators, but at the same time there is going to be a raft of bugs/pain to enjoy with this "movement". It's also slightly concerning that CoreOS/Docker seems to fly in the face of the Unix philosophy.

October 20, 2014 05:50 AM

October 19, 2014

Brad Frost Web | Web Design, Speaking, Consulting, Music, and Art

Atomic Design at Webdagene

Webdagene is an absolutely phenomenal conference. This year they went all out and held the conference at the massive Oslo Spektrum.

I had the pleasure to give a workshop, give a talk about atomic design, and close out the conference talking about being more open.

Here’s the video:

And the slides:

by Brad Frost at October 19, 2014 11:46 AM

Martin's Mobile Technology Page

My First Prepaid LTE Experience

It's taken a long time, and still today, at least in Germany, most network operators reserve their LTE networks for their postpaid customers. In recent months this has somewhat changed in Germany, with the fourth network operator also starting LTE operations and allowing their prepaid customers access from day one. These days their LTE network is also available in Cologne, so I had to take a closer look, of course with a prepaid SIM and a €2 per 24 hours data option that gave me up to 1 GB of unthrottled data.

Data rates I could achieve were not stellar, but not really bad either. Under very good signal conditions I got close to 30 Mbit/s in the downlink direction and about 10 Mbit/s in the uplink direction. Closer examination revealed that they are using a 10 MHz carrier in the 1800 MHz band, which should allow, under ideal conditions, up to 75 Mbit/s in the downlink direction (have a look here if you'd like to know how you can find out which band and bandwidth your LTE network operator is using). But no matter what I did and where I went in the city, 30 Mbit/s was the magical limit. I don't think the air interface is the limit; the bottleneck must be somewhere else. Under other circumstances I would probably be ecstatic about such speeds, but compared with the 100 Mbit/s+ other operators achieve easily, the 30 Mbit/s pale in comparison.

In a recent network test I reported on, the CS fallback voice call establishment times of that network operator were reported to be pretty bad. I can't confirm this, however, so perhaps they have changed something in their network in the meantime. What's a bit unfortunate, however, is that after a voice call the mobile stays on 2G or 3G for a long time before returning to LTE. Other network operators are more advanced and redirect their mobiles back to LTE right after the call, which makes for a much better experience. Also, I noticed that there's a 2-3 second interruption in the data traffic while switching between UMTS and LTE. That means they must still be using a rather crude 'release with redirect' to UMTS procedure rather than a much smoother PS handover.

While the above is perhaps still excusable, there's one thing they should have a look at quickly: whenever the mobile switches from 2G or 3G back to LTE, the PDP context is lost. In other words, I always get a new IP address when that happens, which kills, for example, my VPN tunnel every time. Quite nasty, and that's definitely a network bug. Please fix!

In summary, the network speed is not stellar compared to what others offer today, and some quirks in the network still have to be fixed. On the other hand, you can pick up a prepaid SIM in a supermarket and get LTE connectivity without a contract.

by mobilesociety at October 19, 2014 02:14 AM

October 18, 2014

Brad Frost Web | Web Design, Speaking, Consulting, Music, and Art

Prepping the right thing

Emil Björklund wrote a very thoughtful follow-up post to my Primed and Ready to Go post about front-end prep work.

Emil clarifies that while it’s important to prepare as much as possible, it’s dangerous to jump in and start building CMSs and frameworks without knowing the project requirements. Developers have a tendency to get a bit too excited about New and Shiny Tools, and that can lead them astray.

I agree completely, and I suppose it’s worth noting that most projects I’ve worked on have included high-level tech requirements in the proposal and statement of work. Usually I know before the project kicks off that the organization is migrating to WordPress, or that an e-commerce site will be built in Magento. It’s important to have the right up-front discussions about which technologies, platforms, and tools will be important to the success of the project.

by Brad Frost at October 18, 2014 10:32 AM

This Is My Jam: Rock ‘n’ Roll Lifestyle by Cake

I’ll never forget the first time I heard this song (which was also my introduction to the band). It was played during the credits of an episode of Daria, and I was totally blown away.

But this was before everything was a Google search away, and definitely before Shazam/SoundHound. The moment was fleeting, and I still recall that frustrated feeling of having something truly unique and original slip away into a commercial break.

It wasn’t until sometime later (months? years?) that the track crossed my path again. This time I made certain not to let the name of the band slip by. After many years, many Napster/Kazaa/Limewire downloads, many CD purchases, and (only) one live show, Cake remains one of my absolute favorite bands.

Thanks, Daria.

by Brad Frost at October 18, 2014 10:18 AM

October 16, 2014

Brad Frost Web » Brad Frost Web | Web Design, Speaking, Consulting, Music, and Art

Primed and Ready to Go

It’s absolutely essential to treat front-end development as part of the design process. However, the (foolish, artificial) line between design and development “phases” gets in the way of true collaboration between disciplines.

This often isn’t due to any malicious intent, but rather because archaic processes and mental models keep disciplines out of sync with each other and prevent teams from working together in a meaningful way.

There are loads of things teams can do to address this issue, but I’m going to focus on what developers can do to make themselves more useful earlier in the design process.

Front-end Prep Chefs

“Welluh boss, nobody gave me any designs to build out so I’muh just gonna sit here on my hands until they uh send me the designs.”
—Lazy, foolish developer

This pisses me off to no end.

The role of a prep chef is essential to the cooking process. A prep chef’s responsibilities include chopping vegetables and preparing ingredients so that when the rest of the cooking staff gets into work they can collectively spend their time pursuing the art of cooking instead of tediously chopping peppers.

It is developers’ responsibility to do the work of the prep chef. If developers aren’t busy from Day 1 of your project, there’s something broken with the process. Because boy, there’s plenty of work that needs done: setting up GitHub repos, dev and production server setup, installing CMSs, setting up development tools, etc.

Sure, that stuff’s important, but it’s not like we can start coding immediately, right? Wrong. Get to work. Establish patterns. Chuck in your CSS reset. Set up some atoms and molecules. Set up shell page templates.

While you won’t know what the design will look like, you can cover a lot of ground. Making an e-commerce site? You can set up the site search, shopping cart table, and shell PDP, homepage, and checkout pages. Making a “web app?” Start marking up the login form, forgot-password flow, and dashboard.

Of course this stuff is all subject to change, but prepping it ahead of time frees you up to work with (rather than after) designers. Developers can help validate UX design decisions through conversations and working prototypes, can help visual designers better understand source order & web layout, and can quickly produce a fledgling codebase that will evolve into the final product.

Front-end developers need to work with designers for the benefit of everyone involved in the project. Failure to do the appropriate prep-chef work ahead of time compresses the development cycle and leaves you spending late nights and weekends at the office for the duration of the project’s final phase. You deserve better than that.

So get in early and start chopping those peppers.

by Brad Frost at October 16, 2014 02:06 PM

October 15, 2014