January 31, 2015

Open Gardens

Data Science at the command line – Book and workshop ..

I am reading a great book called Data Science at the Command Line.

The author, Jeroen Janssens, is running a workshop in London on Data Science at the Command Line, which I am attending.

Here is a brief outline of some of the reasons why I like this approach ..

I have always liked the Command line .. from my days of starting with Unix machines. I must be one of the few people to actually want a command line mobile phone!

If you have worked with command-line tools, you already know that they are powerful and fast.
For data science especially, that's relevant because of the need to manipulate data and to work with a range of products that can be invoked through a shell-like interface.
The book is based on the Data Science Toolbox, an open-source tool created by the author, and is brief and concise (187 pages). It focuses on specific commands and strategies that can be linked together using simple but powerful command-line interfaces.
Examples include:

  • tools such as json2csv, the tapkee dimensionality-reduction library, and Rio (created by the author); Rio loads CSVs into R as a data.frame, executes given commands, and returns the output as CSV or PNG
  • run_experiment, a scikit-learn command-line utility for running a series of learners on datasets specified in a configuration file
  • tools like topwords.R
  • and many others
By coincidence, I read this as I was working on this post: command line tools can be 235x faster than your Hadoop cluster
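The flavour of such pipelines can be sketched with standard Unix tools alone (a minimal illustration; the book's own examples rely on tools like json2csv and Rio, which are not assumed here):

```shell
# Top words in a text stream, built only from standard Unix tools --
# the kind of small, composable pipeline the book advocates.
echo "to be or not to be" |
  tr ' ' '\n' |   # split into one word per line
  sort |          # group identical words together
  uniq -c |       # count occurrences of each word
  sort -rn |      # order by count, most frequent first
  head -n 2       # keep the top two
```

Each stage does one small job and streams its result to the next, which is exactly why these pipelines stay fast on surprisingly large inputs.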

I recommend both the book and the workshop.


a) I have been informed that there is a 50% discount offered for students, academics, startups and NGOs for the workshop
b) Jeroen says that:  The book is not really based on the Data Science Toolbox, but rather provides a modified one so that you don’t have to install everything yourself in order to get started. You can download the VM HERE

by ajit at January 31, 2015 10:52 AM

January 28, 2015

mobiForge blog

Large-screen iPhones see Apple's Q1 2015 revenue skyrocket

Apple sold a record-breaking 74.5 million iPhones during Q1 2015, according to its recently published financial report. The company’s net profit hit a whopping $18 billion. Read on to find out more on Apple's financial performance and iPhone-driven mobile web usage in different countries.

Apple’s fiscal year ends in late September, and thus the Q1 2015 figures cover the period roughly from October to December. It is worth noting that these results include iPhone 6 and iPhone 6 Plus released on 19 September, and they also cover the holiday period.

by pawelpiejko at January 28, 2015 04:21 PM

Martin's Mobile Technology Page

Another VPN Use: Route Around Youtube Proxy Overloads

These days I seem to experience Youtube issues more and more frequently when I'm at home, especially in the evenings. Sometimes, some videos just don't stream very well while others work OK. It seems this is related to the content distribution server Youtube selects for my location, or to the link to it, which seems to be overloaded from time to time. Case in point: I can get around the issue by opening a VPN tunnel to a VPN gateway located in another country. When I do, a video that was struggling just seconds before plays fine after re-opening the browser and going back to it. While this workaround is certainly far from ideal, I've found another use for my VPN.

by mobilesociety at January 28, 2015 04:23 AM

January 26, 2015

Open Gardens

Call for Papers Shanghai, 4-5 June 2015 – International Conference on City Sciences (ICCS 2015): New architectures, infrastructures and services for future cities

Call for Papers from the International Conference on City Sciences (ICCS 2015): New architectures, infrastructures and services for future cities, co-organized by City Sciences, where I teach.

Call for Papers: Shanghai, 4-5 June 2015

International Conference on City Sciences (ICCS 2015): New architectures, infrastructures and services for future cities

The new science of cities stands at a crossroads. It encompasses rather different, or even conflicting, approaches. Future cities place citizens at the core of the innovation process when creating new urban services, through “experience labs”, the development of urban apps or the provision of “open data”. But future cities also describe the modernisation of urban infrastructures and services such as transport, energy, culture, etc., through digital ICT technologies: ultra-fast fixed and mobile networks, the Internet of things, smart grids, data centres, etc. In fact, during the last two decades local authorities have invested heavily in new infrastructures and services, for instance putting online more and more public services and trying to create links between still-prevalent silo approaches, with the citizen taking an increasingly centre-stage role. However, so far the results of these investments have not lived up to expectations, and in particular the transformation of the city administration has not been as rapid or as radical as anticipated. Therefore, it can be said that there is an increasing awareness of the need to deploy new infrastructures to support updated public services, and of the need to develop new services able to share information and knowledge within and between organizations and citizens. In addition, urban planning and the urban landscape are increasingly perceived as a basic infrastructure, or rather a framework, on which the rest of the infrastructures and services rely.
Thus, as an overarching consequence, there is an urgent need to discuss among practitioners and academicians successful cases and new approaches able to help build better future cities.

Taking place in Shanghai, a paradigm of the challenges facing future cities and itself a crossroads between East and West, the International Conference on City Sciences responds to these and other issues by bringing together academics, policy makers, industry analysts, providers and practitioners to present and discuss their findings. A broad range of topics related to infrastructures and services in the framework of city sciences are welcome as subjects for papers, posters and panel sessions:

  • Developments of new infrastructures and services of relevance in an urban context: broadband, wireless, sensors, data, energy, transport, housing, water, waste, and environment
  • City sustainability from infrastructures and services
  • ICT-enabled urban innovations
  • Smart city developments and cases
  • Citizen-centric social and economic developments
  • Renewed government services at a local level
  • Simulation and modelling of the urban context
  • Urban landscape as new infrastructure

Additional relevant topics are also welcome.

Authors of selected papers from the conference will be invited to submit to special issues of international peer-reviewed academic journals.

Important deadlines:

  • 20 February: Deadline for abstracts and panel session suggestions
  • 30 March: Notification of acceptance
  • 30 April: Deadline for final papers and panel session outlines
  • 4-5 June: International Conference on City Sciences at Tongji University in Shanghai, PR China

Submission of Abstracts:

Abstracts should be about 2 pages (800 to 1000 words) in length and contain the following:

  • Title of the contribution
  • A research question
  • Remarks on methodology
  • Outline of (expected) results
  • Bibliographical notes (up to 6 main references used in the paper)

All abstracts will be subject to blind peer review by at least two reviewers.

Conference link: International Conference on City Sciences (ICCS 2015): New architectures, infrastructures and services for future cities

by ajit at January 26, 2015 10:47 PM

January 25, 2015

Brad Frost » Brad Frost | Web Design, Speaking, Consulting, Music, and Art

Melissa Frost on the Creative Briefs Podcast

My wife Melissa appeared on AIGA Pittsburgh’s Creative Briefs podcast, talking about launching her new jewelry studio, Frost Finery, working on the Pittsburgh Food Bank’s website in the open, working on a redesign of Time Inc., and a whole lot more. I’m absolutely honored to call this woman my wife.

by Brad Frost at January 25, 2015 11:13 PM

Open Gardens

Data Science for IoT: The role of hardware in analytics

This post leads up to a vision for a Data Science for IoT course/certification. Please sign up at the link if you wish to know more when it launches in February.

Often, Data Science for IoT differs from conventional data science due to the presence of hardware. Hardware could be involved in integration with the Cloud or in processing at the Edge (which Cisco and others have called Fog computing). Alternatively, we see entirely new classes of hardware specifically involved in Data Science for IoT (such as the SyNAPSE chip for deep learning).

Hardware will increasingly play an important role in Data Science for IoT. A good example is from a company called CogniMem, which natively implements classifiers (unfortunately, the company no longer seems to be active, going by its Twitter feed).

In IoT, speed and real time response play a key role. Often it makes sense to process the data closer to the sensor. This allows for a limited / summarized data set to be sent to the server if needed and also allows for localized decision making.  This architecture leads to a flow of information out from the Cloud and the storage of information at nodes which may not reside in the physical premises of the Cloud.
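To make the idea concrete, here is a hypothetical sketch of edge-side summarization using nothing but awk; the readings and summary format are illustrative and not taken from any vendor stack mentioned here. Instead of shipping every raw sensor reading to the cloud, the edge node reduces a window of readings to a single summary line before transmission:

```shell
# Hypothetical edge node: collapse a window of raw sensor readings
# into one summary line (count, mean, max) before sending upstream.
printf '10\n20\n30\n40\n' |
  awk '{ s += $1; if ($1 > max) max = $1 }
       END { printf "n=%d mean=%.2f max=%.1f\n", NR, s/NR, max }'
```

Only the one-line summary leaves the device; the raw stream stays local, which is exactly the bandwidth and latency win described above.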

In this post, I try to explore the various hardware touchpoints for Data analytics and IoT to work together.

Cloud integration: Making decisions at the Edge

The Intel Wind River edge management system is certified to work with the Intel stack and includes capabilities such as data capture, rules-based data analysis and response, configuration, file transfer and remote device management.

Integration of Google Analytics into Lantronix hardware allows sensors to send real-time data to any node on the Internet or to a cloud-based application.

Microchip's integration with Amazon Web Services uses an embedded application with the Amazon Elastic Compute Cloud (EC2) service, based on the Wi-Fi Client Module Development Kit. Languages like Python or Ruby can be used for development.

The integration of Freescale and Oracle consolidates data collected from multiple appliances from multiple Internet of Things service providers.


Libraries are another avenue for analytics engines to be integrated into products, often at the point of creation of the device. Xively Cloud Services is an example of this strategy, through the Xively libraries.


In contrast, keen.io provides APIs for IoT devices to create their own analytics engines (for example, the Pebble smartwatch's use of keen.io) without locking equipment providers into a particular data architecture.

Specialized hardware

We see increasing deployment of specialized hardware for analytics, for example Egburt from Camgian, which uses sensor fusion technologies for IoT.

In the deep learning space, GPUs are widely used, and more specialized hardware is emerging, such as IBM's SyNAPSE chip. Even more interesting hardware platforms are appearing, such as Nervana Systems, which creates hardware specifically for neural networks.

Ubuntu Core and IFTTT Spark

Two more initiatives on my radar deserve a space of their own, even though neither currently has an analytics engine: Ubuntu Core (Docker containers plus a lightweight Linux distribution as an IoT OS) and the IFTTT Spark initiative.

Comments welcome

This post leads up to a vision for a Data Science for IoT course/certification. Please sign up at the link if you wish to know more when it launches in February.

Image source: CogniMem

by ajit at January 25, 2015 10:03 PM

January 23, 2015

Martin's Mobile Technology Page

Are We Headed For A New Crypto War? How Would This Affect The Mobile World?

After the recent terrorist attacks in Paris, a lot of high-ranking government officials and even prime ministers are calling for new laws to allow them to decrypt any kind of communication if it is deemed necessary. That makes me wonder if we are headed for another crypto war!?

I find it highly disconcerting that governments of liberal and democratic countries are seriously considering outlawing private communication, a basic human right, in a feeble attempt to improve security. Perhaps this thinking still comes from the days when wiretapping was the main means to intercept communication. Still today, a court order can get you a tap on anyone's phone line or mobile phone in the country, and conversations can be recorded and listened to in real time. It was a different world then: no mobile computers, dumb 'terminals', you had to use the fixed infrastructure that was in place, and encryption systems for the masses were non-existent. From that point of view I can understand the push to get the same means for the other forms of communication that have sprung up in recent years. But the world has changed dramatically over the past decades. Networks and services have split, dumb 'terminals' for fixed-line networks with voice-only capabilities have become smartphones, and strong encryption is used everywhere and is the foundation of our global economy today. Applying the principle of wiretapping to other forms of communication would effectively spell the end of free and democratic societies as we know them today and would have a profound impact on everybody's lives, even for those who claim that they have nothing to hide. So here are a couple of points on why attempting to increase security by requiring a second key for governments is hopelessly useless and has become impossible to implement:

Classic Wire Tapping And Crypto Phones

To stay with the classic wiretapping example: there's nothing to stop people from using crypto-phones today to encrypt a phone conversation. This is very different from 30 years ago, when such technology was simply not available to everyone. Government officials have a need for this today to keep their conversations private; people working for companies around the world have a need for it because they must keep sensitive information private. As a consequence, people like you and me, who are no less important and who have the same rights, should also have the right to encrypt our phone calls without anyone being able to tap in somewhere in between, not least because privacy is a basic human right. The proposals above would mean that such crypto-systems have to have a second key that the government can access. So who produces the crypto equipment and software, and how do you ensure that no foreign governments or other entities eventually get the key? That makes me wonder which government should get the key in the first place. And what if I travel abroad with my mobile phone, should the government of the country I travel to get the key as well? If not, how could you stop two foreigners in a country from calling each other using cryptography that has a second key for their home country but not for the country they have traveled to?

Instant Messaging

Let's venture a bit further out to instant messaging. Let's say Google, Apple, Microsoft, Facebook and all the others are suddenly required by law to give governments (plural!) access to private conversations and to prevent people from using end-to-end encryption. But how would they stop people from using a further layer of encryption on top of their government pseudo-crypto? They can't. Governments could outlaw such overlays, but that again would violate my human right to privacy.

Next example: today, I'm using a private instant messaging server at home and end-to-end encryption for communication with close relatives and friends. With a crypto-intercept law in place, would I have to give a second key from all clients to the government? Or would there be an exception because I'm not a commercial service provider? And if so, what keeps the bad guys from simply not being a commercial service provider themselves? Further along those lines, what keeps anyone from using an instant messaging service whose server is located in a country that is not on good terms with the government of the country you currently reside in? Does that mean that ISPs will be required to block their users from using such services? And how exactly should that work? Clever protocols would just look for a way around.

Secure HTTP

Another example: I have a web server at home and access it using HTTPS. On my devices I use Certificate Patrol to ensure that a certificate change, which would be required for interception, is indicated and communication is aborted. Would crypto-intercept laws mean that programs like Certificate Patrol are outlawed? And if so, what keeps me from installing it anyway? As it's a passive method to ensure privacy, there's not even a way to detect it from the outside. Or would such a law require me to give my private SSL key to the government? And what if I travel from Germany to Austria, would that mean I had to send my private SSL key to the Austrian government as well? Doing so would require an encrypted connection. But then the German government needs to listen in. So would the Austrian government have a second key for the German government, and for all other governments of nations from which people come to visit Austria? And what about the transit countries over which the encrypted communication is transported? It gets absurd pretty quickly.

Secure Shell

Yet another example: to administer my servers at home I use the Secure Shell protocol (SSH), like millions of other system administrators. It uses perfect forward secrecy, certificates for the server and the client, and strong public/private keys. Unlike secure HTTP, where man-in-the-middle attacks with government-signed certificates are possible, SSH is bulletproof in this respect. Does that mean that I have to give the government a second key whenever I set up a new server or change my certificates? What happens when I travel to France or Russia, do I have to give those governments my keys in advance? Or maybe a law should be in place to require ISPs to block all ciphered communication across country borders for which no second key is available to all the governments over whose territories the data packets are sent!? Good luck working out a mechanism for that.

The only way to enforce this is to ban the use of any crypto-system that does not contain a second key for the government of the country you currently reside in. That will make traveling with computing equipment across national borders pretty difficult to impossible unless you come up with a system where governments around the world can get a key for your communication. Does anyone really want that!? Would it even be possible?

Would Second Keys To Intercept The Traffic of Large Internet Companies Change Anything For The Bad Guys?

These are just a couple of thoughts that show how ridiculous it would be to require big Internet companies to give second keys to governments. The overhead to play this game with 200 countries is ridiculous, the potential for fraud enormous, and instead of a billion ways to communicate securely, the bad guys would be left with only 999,999,999.

Less Is More

In the end, the only way is to tap the bad guys at the source, before data is encrypted. That is not trivial, but it shouldn't be, as otherwise governments would just spy on everyone; after the Snowden revelations there is little if any doubt about that. Looking at terrorist incidents, I can't find a single one after which it wasn't discovered that the terrorists were already known to the authorities but the manpower was missing to take a closer look. There is no need to ban encryption to get even more data; police can't even handle the data they already have access to. So in my opinion they should be required to collect less data rather than more, for their own sake.

by mobilesociety at January 23, 2015 07:31 AM

Brad Frost » Brad Frost | Web Design, Speaking, Consulting, Music, and Art

Over It

Two days ago, I got a call from a friendly guy from Microsoft. He told me they were planning on announcing a new browser at their big Windows 10 unveiling event. We had a long chat about the new browser, their new direction, and what they had planned. It was a nice chat, but could have been a lot shorter. It could have been:

Microsoft: “We’re making a new browser!”

Me: “Oh, that’s nice.”

Of course I’m happy Microsoft is creating an evergreen, standards-based browser unencumbered by legacy rot. But the truth is I don’t care.

My device fatigue has been actin’ up something fierce lately. My ability to get excited over any one company’s initiatives, device-specific features, or clever browser stuff has diminished rapidly over the last few years. I don’t think I’m alone, either.

I really noticed this when the iPhone 6 was released. In my 5 years of working with mobile Web stuff, I've watched a lot of Apple keynotes. When the iPhone 4 was announced with its fancy Retina screen, everybody freaked out. Ohmagod what are we gonna do?! When the iPhone 5 was announced with its taller screen, everybody freaked out. Ohmagod what are we gonna do?! When the iPhone 6 and 6 Plus were released, everybody…didn't freak out.

Over the past few years, we as a Web community have learned to manage a plethora of viewports, capabilities, and environments. That's nothing but a Good Thing.

We’re reaching a point where instead of thinking about this:

Website sent to a bunch of different browsers

We’re starting to think like this:

Website going to stuff that can access the Web

My remedy for device fatigue has been to take a step back and let my eyes go unfocused. Much like a Magic Eye, I can then see the hidden pictures behind the stippled noise that is the device landscape. This remedy helps me cope, gets me to stop caring about things that don't really matter, and gets me to care about the broader trends the Magic Eye unveils.

Differences in browsers and devices are always going to be there, but the time has come to stop freaking out over them. We have bigger fish to fry. Of course I’m still thankful that there are people like PPK who go in with a scalpel and detail the nuances of every major and not-so-major platform out there. And of course we’ll have to wrestle those annoying little quirks from time to time. But the time has come to let go and focus on making great experiences.

So keep things simple. Build to standards. Use progressive enhancement. Don’t try to send wheelbarrows full of JavaScript down the pipes unless you have to. Don’t make assumptions. Save the stress for more important things.

And do yourself a favor and go read or re-read Trent Walton's magnificent piece called Device Agnostic.

by Brad Frost at January 23, 2015 06:34 AM

January 22, 2015

Brad Frost » Brad Frost | Web Design, Speaking, Consulting, Music, and Art

Atomic Design Newsletter #2

I just sent out the second edition of the Atomic Design newsletter, where I’m posting progress on the book as well as resources related to style guides and pattern libraries.

If you’re interested, you can sign up here. You can also check out the first edition, where I talk about how I’m setting up the website and am planning to post progress.


by Brad Frost at January 22, 2015 11:15 PM

mobiForge blog

App deep linking: Do we really need Facebook App Links and similar services?

It seems odd that in 2015 we must address ourselves to the problem of linking resources across a network, but in the version of 2015 we're lumbered with, we live in an appified world, so address ourselves we must. While linking has formed the backbone of the web since the demise of Compuserve and AOL's walled gardens in the mid-nineties, the apps that populate our smartphone home screens are about as interlinked as Compuserve's forums in the early 1990s; which is to say, not very interlinked at all.

by ruadhan at January 22, 2015 07:59 PM

Brad Frost » Brad Frost | Web Design, Speaking, Consulting, Music, and Art

Style Guide Podcast

Style Guide Podcast

I’m thrilled to announce that Anna Debenham and I are making a podcast dedicated to all things related to style guides and pattern libraries.

Why We’re Making It

Anna and I are firm believers in style guides and think they’re essential tools for sanity maintenance and organizational success. But the conversations around style guides are scattered, and often appear as side notes in other conversations. And they’re certainly not happening as frequently as we’d like. We want to change that!

The first step was to put together Styleguides.io, which collects style guide articles, tools, examples, and other resources under one roof. The second step now is to start talking to ultra-smart people who build and maintain these design systems, style guides, and pattern libraries.

Who We’re Talking To

We’re going to be talking to a healthy blend of people who successfully create and maintain style guides. We’re covering a whole host of topics, including selling style guides to stakeholders, creating maintainable style guides, designing reusable patterns, establishing a pattern-based workflow, and much much more.

Since Anna has written a book on the topic of style guides and I’m in the process of writing one myself, we’re super excited to talk to other people about the challenges and opportunities around style guides.

The Format

The Style Guide Podcast is going to be a short-run, “small batch” podcast consisting of about a dozen or so episodes. We’re striving to keep things on topic and under half an hour in length.

So here we go! You can listen to (or read the handy transcript of) our very first episode with the extremely talented Jina Bolton, who’s been creating style guides and advocating for them for a long time now. You can subscribe to the podcast feed, and hold tight for the podcast to be available in iTunes.


by Brad Frost at January 22, 2015 05:23 PM

January 21, 2015

Brad Frost » Brad Frost | Web Design, Speaking, Consulting, Music, and Art

Pattern Man

Pattern Man is an atomic-design-style approach to Middleman, brought to you by those brainiacs at Bearded.

I really like the nested include approach that allows you to keep your patterns DRY, although I think I still very much prefer how Pattern Lab handles dynamic data. Keeping the structure and the data entirely separate has proved to be really helpful.

by Brad Frost at January 21, 2015 01:35 PM

January 20, 2015

Cloud Four Blog

When to use <picture> for resolution switching

I wrote recently about why you shouldn’t use <picture> for responsive images most of the time.

In short, my argument is that most responsive images fall under the resolution switching use case and that <picture> is best used for art direction.

There is one resolution switching use case where <picture> makes sense—when you want to provide different file formats using the type attribute.

If that is the case, then you should use <picture> without the media attribute.

Most of the syntax examples for <picture> include the media attribute, but it isn’t required. You can do something like:

   <picture>
     <source type="image/svg+xml" srcset="logo.svg" />
     <source type="image/webp" srcset="logo.webp" />
     <source type="image/png" srcset="logo.png" />
     <img src="logo.gif" alt="Company logo" />
   </picture>

That is a simple example with a single file per source element, but there is no reason you can’t use the full power of the srcset attribute to provide multiple files per file type. You can even add the sizes attribute to give you more control.
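For instance, a sketch with hypothetical file names (the width descriptors and sizes values here are illustrative, not from the article), where each source lists multiple candidates:

```html
<picture>
  <source type="image/webp"
          srcset="logo-small.webp 320w, logo-large.webp 1024w"
          sizes="(min-width: 40em) 50vw, 100vw" />
  <img src="logo.png"
       srcset="logo-small.png 320w, logo-large.png 1024w"
       sizes="(min-width: 40em) 50vw, 100vw"
       alt="Company logo" />
</picture>
```

The browser still picks the first source whose type it supports, then chooses among that source's candidates based on the rendered size and display density.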

So long as you don’t use the media attribute, you’re still giving the browser the information it needs to pick the right image source without dictating to it that it must use a specific one.

And unless you’re doing art direction, you should be striving to provide the browser with options, but letting the browser pick which source will work best.

(Thanks to Kevin Lozandier for reminding me that I need to write this up, and to Brett Jankord and Wesley Smits for raising this point in the comments on my previous article.)

by Jason Grigsby at January 20, 2015 09:46 PM

January 19, 2015

Brad Frost » Brad Frost | Web Design, Speaking, Consulting, Music, and Art

Brad Frost’s survival guide for the modern web worker

I’m really excited to be the opening keynote speaker at Generate Conf in New York City. I’ve been thinking a lot about how not to drown in a sea of devices, technologies, and opinions, and hope to dive into that topic a bit more.

by Brad Frost at January 19, 2015 06:43 PM

mobiForge blog

Standards and browser compatibility

Browser compatibility is hard. Especially on mobile. If you thought things were difficult 10 years ago when there were only a handful of browsers to contend with, then thinking about the situation for mobile may make you dizzy or depressed. For now we live in a world of tens of thousands of devices of wildly variable shapes and sizes and capabilities. And we have to make the web work on all of them.

by ruadhan at January 19, 2015 09:41 AM

January 18, 2015

London Calling

Social Buying report from IDC says trusted networks improve the purchase experience

At the excellent LinkedIn London Sales Connect event in October 2014, where I delivered a keynote on social eminence, there was a presentation from IDC (sponsored by LinkedIn) by Alys Woodward on the results of a recent study on social buying.

Alys provided an overview of a whitepaper titled “Social Buying Meets Social Selling: How Trusted Networks Improve the Purchase Experience”.

You can view and download the paper at the end of this post.

As this was a social selling event, it was fascinating to hear the primary research from a social buying perspective.

The whitepaper findings can be summarised as follows:

  • To increase trust and confidence in making high-stakes company purchase decisions, B2B buyers leverage their professional networks
  • Online social networks play a vital role in the purchase process of 84% of the most senior B2B buyers
  • In the final stage of the purchasing process, when stakes are highest, online professional networks are the number one information preference of buyers
  • 75% of B2B buyers and 84% of C-level/vice president (VP) executives surveyed use social media to make purchasing decisions
  • The average B2B buyer who uses social media for buying support is more senior, has a bigger budget, makes more frequent purchases, and has a greater span of buying control than a buyer who does not use social media
  • B2B buyers find the greatest benefit of social media is gaining greater confidence in and comfort with their decisions

For these reasons, sales professionals (and anyone reading this post) need to rethink how they use social media in their selling and marketing efforts.

As a refresher, I invite you to read my blog post on the importance of being eminent.

Key findings from the report

The Most Senior and Influential B2B Buyers Use Social Media for Their Purchase Process. This may be a surprise to some people who think the C-suite and senior management don't use social media.

While they may not be using social media to post their every thought or move, when it comes to large and strategic purchasing decisions, they want to use all of the intelligence at their disposal to make the best decision.

The report found that social media, especially online professional networks, plays a vital role for senior executives making company purchases.

A large majority, 84%, of C-level and VP-level buyers use social media for purchasing. Overall, 75% of B2B buyers consult social media when making purchasing decisions.

The figure below from the report highlights this fact.

As the report was sponsored by LinkedIn, the business network comes in for special attention: over half of the B2B buyers surveyed have used LinkedIn to support their purchase process in the last 12 months.

For me, one of the most interesting parts of the study is the fact that B2B buyers who are active on social media represent a more senior and influential segment than those who do not use social media.

Social buyers, those who use online connections to support the purchase process:

1. are more senior

2. have a greater span of purchasing control than non-social users

3. spend 84% more per purchase

4. make 61% more purchase decisions

Implications for sales professionals

So if your job involves selling (and that doesn't just mean you are in sales, as we are all selling in some way), then this report should be a wake-up call if you are not leveraging social media and not active on it.

Back in 2011, I coined the phrase “the art of the e-tap” to describe how you can “tap someone on the shoulder” virtually using social media to make them aware of you, your company and the way they can help you. Have a look at a short video below where I explain how this can work, and you can also read my blog post on this.

The IDC study has some key recommendations – namely:

Sales professionals who are not active social media users are missing an important opportunity to connect. Salespeople have long leveraged offline social networks for recommendations, referrals, and revenue because the strategy is so successful. All things being equal, people prefer to buy from those they know and trust. Now, sales professionals selling high-impact products or services (e.g., complex, expensive, important to the buyer) need to replicate their networking strategy online because social media is where peer conversations are happening. Salespeople active with online networks will gain additional benefits. Large numbers of relationships can be maintained more efficiently online. It takes less time to keep up with customer news and changes. Contact is made more quickly. Social media gives a sales professional’s positive actions more visibility and a longer shelf life. Online social networks may make it easier to build relationships with senior, influential buyers because the constraints of time and location are reduced for all parties.

One of the best soundbites from the report came in the latter section: "Opting out of social media may cause real damage. Salespeople should recognize that they can't opt out of social media if their buyers are there. Evidence of non-participation is just as visible as presence. Salespeople will inadvertently deliver a negative message with their absence."

Put simply, sales professionals must answer their social phones.

This is so true. I have made sure for the last 10 years that my “social phone” has always been on, and I continue to find success as a result.

Another key finding in the report was the fact that buyers place a great level of trust in their professional networks, and hence their professional social networks, such as LinkedIn.

Quoting from the report:

Complex problems with complex solutions are fraught with risk. B2B buyers reduce that risk by practicing social buying. They leverage professional networks for buying support in order to increase their confidence in decision making. Buyers place greater trust in commercial relationships that have the stamp of approval from their professional networks.

You can delegate tasks but you cannot delegate trust

The report highlights that while time is scarce, trust and confidence can be even rarer; buyers making high-impact decisions will gravitate toward methods that make confidence building easier.

To get more time, executives can reprioritise tasks and use productivity-enhancing tools, but they cannot delegate trust.

This is an incredibly important part of the buying process, especially when million-dollar sums are at stake.

If you will let me be indulgent for a moment, some time ago there was a saying “no-one ever got fired buying IBM” – most probably referring to the trust in the relationship when dealing with a company as large and established as IBM. [disclaimer – unless you’ve been under a rock, you would know that I am a Partner at IBM].

The study also looks at how social media drives confidence and credibility in the purchase decision.

Quoting from the report:

Buyers need trust to buy confidently and would like to ensure this trust is maintained online. When asked about concerns they may have about social media, respondents answered that their top concern is that vendors and sales professionals would not be authentic.

Of those survey participants who acknowledged concern about social media, 62.2% are concerned or strongly concerned about the authenticity of online profiles.

“There are a lot of false promises and profiles [online], that’s why references are key for us.”

In spite of this concern, respondents were overwhelmingly willing to use social media for purchasing support. 75% of B2B buyers surveyed have already used it, and of the remaining 25% who have not yet used it, only 5.2% are unwilling to try.

Making the right connection online

Those of you reading this post who are relatively new to social media should not go out and start spamming and tweeting and Linking-In to all of your connections!

The report highlights the need to respect the connection.

Sales professionals can certainly use social media for direct outreach to buyers who they don’t know. In the IDC study, many buyers were open to this contact under the right circumstances. 62.6% agreed or strongly agreed with the following statement: “I appreciate being contacted by vendors at the right time with relevant information and opportunities.”

This is a very key point – “at the right time with relevant information and opportunities”.

Social is therefore not a substitute for a good account plan and knowing who, when and why to contact someone about a sales opportunity.

While those that have been used to cold calling might see social as an opportunity to “cold tweet”, this is simply the wrong approach.

Making a connection, then leveraging that connection at the right time later on, is the best way to use social media in the sales cycle.

Timing is everything

The IDC report suggests that when sales professionals interact via social media, it’s better if they approach prospects through a mutual connection.

Even when a connection isn’t possible, sales professionals must boost buyers’ confidence by being visibly present online in an authentic, transparent, and reasonably complete way.

This doesn’t happen overnight. If you are in sales and have a very low presence on social media, it will show.  In contrast, if you google my name or search on social media, you will find a large amount of information, recommendations endorsements, and presentation videos produced over the last 15 years or so.

The report also breaks down the purchase cycle into 3 key stages and explains how social can be used in each.

  • The earliest stage of your purchase process includes investigating how you can improve business and/or productivity, determining whether your problem(s) is important enough to invest in a solution, and investigating possible alternatives (features you may need, etc.).
  • The middle stage of your purchase process includes constructing a “short list” of specific brands and products and determining implementation challenges and solutions.
  • The final stage of your purchase process includes getting answers to final questions, finalizing decisions, and negotiating terms and conditions.

Key recommendations

The report finishes with some key recommendations, many of which I have been blogging about for the last few years – so good to see that IDC agree with me!

1. Increase social proximity

  • Find the social connectors (people in your industry with strong social networks and influence), and try to get to know them.
  • Grow social networks to be closer to more people/right people.
  • Be at the intersection of conversations, as presence alone helps build familiarity and eventually trust.

2. Improve social presence

  • Be present in the right way. Buyers will want to get to know a sales professional in advance of a deeper relationship, and people who may serve as possible references will also be looking. Manage a professional identity (trusted personal brand). Be credible, authentic, accurate, information rich, and service oriented.
  • When sharing thought leadership or expertise, consider the interests of potential buyers with respect to the purchase process stage. For example, buyers who engage with content intended for final-stage decision making may be primed to purchase, which is a signal for salespeople to increase attention and outreach.
  • Engage earlier and with a lighter touch. Approach people with a “warm” introduction — through their valued social network.

3. Build social capital (build up a reservoir of “like” and trust)

  • Conduct research before making sales calls. Review the prospective buyer’s profile, follow the individuals and companies of interest, and investigate group memberships and other social media activity to ensure relevancy. Salespeople with knowledge of the person/situation are more likely to be able to serve as a trusted advisor.
  • Facilitate peer-to-peer recommendations. Make others proud to be a reference. Make it easy to share information.
  • Be a good guy. Send thank you notes, share knowledge freely, facilitate exchange between peers (help them help each other), provide referrals, and rarely ask for favors (social capital is built when people give without demanding immediate exchange).

This is, in my mind, one of the most comprehensive and insightful reports on social buying I have read, and I encourage you to download it and read it in more detail.

If you enjoyed this blog post you may like other related posts listed below under You may also like ...

To receive future posts you can subscribe via email or RSS, download the android app, or follow me on twitter @andrewgrill.

You may also like ...

by Andrew Grill at January 18, 2015 06:46 PM

January 16, 2015

Martin's Mobile Technology Page

China Only Seems To Use High Frequency Bands For LTE So Far

When recently contemplating the use of frequency bands for LTE in different parts of the world, I realized that there have been different approaches: in the US, network operators opted for deploying LTE in the 700 MHz range and have only recently started to use higher bands such as the 1700/2100 MHz band for LTE services. In Europe most carriers started with the 1800/2600 MHz bands but quickly also opted for the 800 MHz band to push nationwide coverage. When I was in China recently, I noticed that I could only trace deployments in the 1900+ MHz range (e.g. TDD bands 39-41 as described on Wikipedia) but nothing below that. In retrospect, I find it quite surprising that they haven't started their rollout on lower frequencies to get a large footprint in their not so small country.

by mobilesociety at January 16, 2015 07:09 AM

January 14, 2015

Martin's Mobile Technology Page

LTE in the 5 GHz Wi-Fi Band – What's The Fuzz About?

When it comes to spectrum, current cellular networks that are designed well can still keep up with the ever rising demand for bandwidth. Not all of the spectrum below 3 GHz that has been assigned to cellular operators is used, so capacity can be further increased by adding additional carriers per base station for a few years. In addition, there is still spectrum below 3 GHz that could be assigned to cellular networks in the future, but compared to what's already assigned it is not that much. So sooner or later, alternatives are required to keep up, should bandwidth demands continue to grow at current rates of 50-80% per year.

Small Cells to Escape The Bandwidth Crunch On High Frequencies

There are a couple of options to escape the dilemma: smaller cell sizes and spectrum in higher frequency bands. Smaller cell sizes and the use of licensed spectrum allow a higher re-use factor and hence increase the overall capacity of the network. There is also no way around very small cells when licensed or unlicensed spectrum beyond 3 GHz is considered for future use, as the higher signal attenuation limits practical cell sizes to a few tens of meters from the cell's center. So one way or another, going beyond current network capacity by an order of magnitude requires smaller cells. How that can be done economically is a matter of debate and I won't dwell on this particular point in this blog post.

What I would like to take a closer look at today, however, is the potential use of the license free 5 GHz band, which is pretty much exclusively used by Wi-Fi today. So while the Wi-Fi camp is probably not very happy about the LTE camp considering this band, it is a logical evolution of the LTE ecosystem, and the 300+ MHz of available spectrum in the band at zero cost is quite irresistible as well.

A number of industry players are doing quite a lot to push the idea in the media at the moment. Even 3GPP published an article back in September 2014 on their web site stating that LTE in the license free 5 GHz band, referred to as LAA-LTE (License Assisted Access LTE), is seen as a major RAN feature for 3GPP Release 13. [http://www.3gpp.org/news-events/3gpp-news/1628-rel13]

What is License Assisted Access?

So why this strange name “License Assisted Access”? The 3GPP Study Item on the topic (see links below) is quite clear that the aim, at least initially, is not to have independent LTE cells in the 5 GHz band. Instead, a primary cell transmitting on a carrier in a licensed chunk of spectrum a network operator has paid for is complemented with an additional LTE carrier in the unlicensed 5 GHz band. The Study Item even goes as far as saying that in a first instance, the 5 GHz band shall only be used for downlink transmission, i.e. as a Secondary Cell (SCell) in a classic Carrier Aggregation (CA) configuration. All signaling and control information is only sent on the Primary Cell (PCell), which is operated in a licensed band. I'm not quite sure why that limitation has, at least for now, been put into place, but it sounds like there is a fair amount of politics involved to appease some players with a vested interest in Wi-Fi. What such a setup does, of course, is keep a cellular signaling link in place at all times, so an ongoing data transfer can quickly be pulled away from a LAA-LTE carrier back to the primary carrier, which potentially covers a larger area, when the signal deteriorates.

And this immediately brings us into the domain of the various HetNet (Heterogeneous Network) and CoMP (Coordinated Multipoint) features that have been specified in recent 3GPP releases but are not used in practice so far. So perhaps at the beginning small eNodeB devices may transmit both the PCell and SCell parts. But I am sure at some point the industry will get more daring and also think about a split, with the PCell being transmitted in licensed spectrum from an overlay macro cell while the SCell using unlicensed spectrum is transmitted from a small cell. An interesting aspect of such a scenario is that a small cell that only uses unlicensed spectrum is easy to set up, as the site does not have to be registered as a cellular transmitter. After all, it's only using unlicensed spectrum and has to adhere to the same transmit power limitations and regulations as Wi-Fi access points.

Dealing with Interference and Being a Fair Player

Obviously, as someone operating a Wi-Fi access point in the 5 GHz band at home for high bandwidth media streaming, I'd be very unhappy if a nearby LAA-LTE cell significantly interfered with my transmissions. Fortunately there is enough space in the 5 GHz band, at least until 160 MHz Wi-Fi channels defined by 802.11ac replace the 40 MHz and 80 MHz channels used by 802.11n and 802.11ac products today, so networks can stay out of each other's way. LAA-LTE carriers will be limited to 20 MHz channels, but Carrier Aggregation (CA) would allow bundling several channels in addition to the PCell channel in a licensed band. Today, 2x20 MHz Carrier Aggregation in licensed bands is used in practice and 3x20 MHz Carrier Aggregation is just around the corner. By the time LAA-LTE might be ready for deployment, perhaps in the 2018-2019 timeframe, it might be even more.
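The arithmetic behind these configurations is just aggregation of 20 MHz channels; a quick sketch of how the totals add up (the configurations are illustrative examples, following the figures mentioned above):

```python
# Sketch: how Carrier Aggregation adds up channel bandwidth across a licensed
# PCell and one or more SCells. The configurations below are illustrative.

def aggregated_bandwidth_mhz(pcell_mhz, scell_mhz_list):
    """Total bandwidth of a PCell plus any number of aggregated SCells."""
    return pcell_mhz + sum(scell_mhz_list)

# 2x20 MHz CA in licensed bands, as used in practice today:
assert aggregated_bandwidth_mhz(20, [20]) == 40

# A hypothetical LAA-LTE setup: a licensed 20 MHz PCell plus two
# unlicensed 20 MHz SCells in the 5 GHz band:
assert aggregated_bandwidth_mhz(20, [20, 20]) == 60
```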

Also, the Study Item promises to have a close look at how a “Listen before Transmit” scheme can be implemented for the LAA-LTE cell so it can detect Wi-Fi networks in the same spectrum and either change to a different section of the band, reduce its transmit power, or coordinate transmissions with the Wi-Fi networks it detects. The promise is that a LAA-LTE carrier would interfere with a Wi-Fi network no more than other Wi-Fi networks in the area would. A nice promise, but a heavily loaded nearby Wi-Fi network would still interfere quite a lot with my own Wi-Fi network.

It's going to be interesting to see how this particular part will be standardized. Today, an LTE cell does not look out for interference and uses the licensed spectrum assigned to a carrier whenever it likes. Wi-Fi, on the other hand, has an interference and collision detection scheme with backoff times and retries. So, if you will, LTE without any enhancements is not really a fair player when it comes to competing for the same spectrum with other transmitters in real time, because it has never had to compete for access to a channel. Also, a LAA-LTE cell has to take care that it doesn't interfere with a LAA-LTE cell in the 5 GHz band of another network operator that has also decided to put a small cell in the area.

Why Compare Spectral Efficiency of LTE vs. WiFi?

Sure, one could use Wi-Fi access as part of a cellular network, but so many approaches to include Wi-Fi access in a cellular network infrastructure have failed that it's not worth thinking about yet another flavor. What makes LTE so attractive over Wi-Fi for cellular use is not its spectral efficiency but that LTE is a cellular technology while Wi-Fi is a hotspot technology designed without mobility in mind. So while Wi-Fi is great for homes, hotels and offices for stationary or nomadic Internet use, it can't compete with LTE in full mobility scenarios in which it is important to have automatic subscriber authentication, integrated backhaul and seamless mobility to and from larger macro cells. Also, Wi-Fi is not specified by 3GPP, so they have little influence over potential enhancements required to fully integrate it into an LTE network. As I said above, it's been tried before...


To make sure I still remember how this activity started, here are some dates: from what I can tell, the Study Item started in December 2014, and Ericsson envisages the study to be finished by mid-2015, with the specification finished by mid-2016. Deployment usually takes 2-3 years after that.

Background Reading

And here are a number of great links to the details:

by mobilesociety at January 14, 2015 06:50 AM

January 11, 2015

Open Gardens

Understanding the nature of IoT data

This post is in a series Twelve unique characteristics of IoT based Predictive analytics/machine learning .

I will be exploring these ideas in the Data Science for IoT course /certification program when it’s launched.

Here, we discuss IoT devices and the nature of IoT data

Definitions and terminology

Business Insider makes some bold predictions for IoT devices:

  • The Internet of Things will be the largest device market in the world.
  • By 2019 it will be more than double the size of the smartphone, PC, tablet, connected car, and wearable markets combined.
  • The IoT will result in $1.7 trillion in value added to the global economy in 2019.
  • Device shipments will reach 6.7 billion in 2019, for a five-year CAGR of 61%.
  • The enterprise sector will lead the IoT, accounting for 46% of device shipments this year, but that share will decline as the government and home sectors gain momentum.
  • The main benefit of growth in the IoT will be increased efficiency and lower costs.
  • The IoT promises increased efficiency within the home, city, and workplace by giving control to the user.

And others say Internet of Things investment will run to $140bn over the next five years.


Also, the term IoT has many definitions, but it's important to remember that IoT is not the same as M2M (machine to machine). M2M is a telecoms term which implies that there is a radio (cellular) at one or both ends of the communication. IoT, on the other hand, simply means connecting to the Internet. When we speak of IoT (billions of devices), we are really referring to smart objects. So, what makes an object smart?

What makes an object smart?

Back in 2010, the then Chinese Premier Wen Jiabao said “Internet + Internet of Things = Wisdom of the Earth”. Indeed, the Internet of Things revolution promises to transform many domains. As the term implies, IoT is about smart objects.


For an object (say a chair) to be ‘smart’ it must have three things:

-       An identity (to be uniquely identifiable – via IPv6)

-       A communication mechanism (i.e. a radio) and

-       A set of sensors / actuators


For example – the chair may have a pressure sensor indicating that it is occupied.

Now, if it is able to know who is sitting in it – it could correlate more data by connecting to the person’s profile.

If it is in a cafe, whole new data sets can be correlated (about the venue, about who else is there etc).

Thus, IoT is all about data ..
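The three things listed above can be sketched as a minimal data model. This is a hypothetical illustration in Python, not an actual IoT framework; all names are made up:

```python
# Sketch of the three ingredients that make an object "smart": an identity
# (e.g. an IPv6 address), a radio, and a set of sensors/actuators.

from dataclasses import dataclass, field

@dataclass
class SmartObject:
    ipv6_address: str                            # unique identity
    radio: str                                   # communication mechanism, e.g. "BLE"
    sensors: dict = field(default_factory=dict)  # sensor name -> latest reading

    def is_smart(self) -> bool:
        # All three ingredients must be present.
        return bool(self.ipv6_address and self.radio and self.sensors)

chair = SmartObject(
    ipv6_address="2001:db8::1",
    radio="BLE",
    sensors={"pressure": 1},  # pressure sensor reading: occupied
)
assert chair.is_smart()
```

A real smart chair would then correlate its pressure reading with external data, such as the sitter's profile or the venue.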

How will Smart objects communicate?

How will billions of devices communicate? Primarily through the ISM band and Bluetooth 4.0 / Bluetooth Low Energy, and certainly not through the cellular network (hence the above distinction between M2M and IoT is important). Cellular will play a role in connectivity and there will be many successful applications / connectivity models (e.g. Jasper Wireless). A more likely scenario is IoT-specific networks like Sigfox (which could be deployed by anyone, including telecom operators). Sigfox currently uses the most popular European ISM band at 868 MHz (as defined by ETSI and CEPT), along with 902 MHz in the USA (as defined by the FCC), depending on specific regional regulations.

Smart objects will generate a lot of Data ..

Understanding the nature of IoT data

In the ultimate vision of IoT, Things are identifiable, autonomous, and self-configurable. Objects communicate among themselves and interact with the environment. Objects can sense, actuate and predictively react to events.

Billions of devices will create massive volumes of streaming and geographically-dispersed data. This data will often need real-time responses. There are primarily two modes of IoT data: periodic observations/monitoring and abnormal event reporting. Periodic observations present challenges due to their high volumes and storage overheads. Events, on the other hand, are one-off but need a rapid response. If we consider video data (e.g. from surveillance cameras) as IoT data, we have some additional characteristics.
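The two modes of IoT data can be sketched as a simple routing rule: periodic observations are batched for storage, while abnormal events trigger an immediate response. The threshold and field names below are illustrative assumptions, not part of any real pipeline:

```python
# Sketch: routing IoT readings into the two modes described above.
# Periodic observations are high volume and stored for later analysis;
# abnormal events are one-off and need a rapid response.

def route_reading(reading, threshold=100.0):
    """Classify a sensor reading as a routine observation or an abnormal event."""
    if reading["value"] > threshold:
        return ("event", reading)        # needs a rapid response
    return ("observation", reading)      # high volume, stored for later

batch, events = [], []
for r in [{"sensor": "temp", "value": 21.5}, {"sensor": "temp", "value": 130.0}]:
    kind, payload = route_reading(r)
    (events if kind == "event" else batch).append(payload)

assert len(batch) == 1 and len(events) == 1
assert events[0]["value"] == 130.0
```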

Thus, our goal is to understand the implications of predictive analytics for IoT data. This ultimately entails using IoT data to make better decisions.

I will be exploring these ideas in the Data Science for IoT course /certification program when it’s launched. Comments welcome. In the next part of this series, I will explore Time Series data


by ajit at January 11, 2015 08:00 PM

Martin's Mobile Technology Page

My Android Privacy Configuration

I'm always a bit shocked when I hear people saying that “Google and others track you anyway on your smartphone and there is nothing that can be done about it”. I sense a certain frustration on the part of the person making the statement, but I obviously beg to differ. As this comes up quite frequently, I decided to put together a blog post that I can then refer to, describing the things that I do on my Android based device to keep my private data as private as possible.

The Three Cornerstones to Privacy

In essence, my efforts to keep my private data private are based on three cornerstones:

  • As few apps as possible on the device that communicate with servers on the Internet without my consent.
  • Allowing access to private information such as location, calendar entries, the address book, etc. to specific apps while blocking access for all others by default.
  • Preventing unwanted communication with servers on the Internet by apps that I do want to use. Amazon's Kindle reader app is a prime example: it's a good app for reading books, but left on its own it's far too chatty for my taste.

In some cases, implementing these cornerstones is straightforward, while other things require a more technical approach. The rest of this blog entry looks at how I implement them in practice.


For a number of the blocking and monitoring approaches described below it's necessary to get root rights on the device. This is nothing bad, and there's a difference between rooting and jailbreaking, as I described in this blog post. One way to conveniently get root rights is to install an alternative Android OS on the device. CyanogenMod is my distribution of choice as it is mainstream enough for good support and stability and has no Google apps installed by default. As a user, I can then decide to install only those Google apps I require, such as the Google Play store. Apart from the app store, the Play Services framework and the Google keyboard, there's little else Google-specific I have installed. Sure, there's additional stuff in the basic OS that wants to talk to Google servers, but these parts can also be silenced as shown below.

Privacy Guard

By default, CyanogenMod ships with the 'Privacy Guard' feature, which allows access to many different sources of private information, such as the calendar and address book, the GPS receiver, etc., to be restricted on a per-app basis. So even if an app requires access to the address book to install properly, access can still be removed later on. A good example is the app of the German railway company, which wants access to my address book and my location so it can help me find nearby stations more easily. It's a noble goal, but I don't want it to have access to my private data. With Privacy Guard restricting access, the app still works; it just sees an empty address book and doesn't get a position fix when it asks for one, without crashing or acting up in strange ways. In practice I've configured Privacy Guard to ask me when newly installed apps want to access any kind of private information, so I can allow or deny it temporarily or permanently and won't be bothered by requests in the future. Also, I went through the list of already installed apps and removed access rights to information sources I don't want those apps to have access to.
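Conceptually, Privacy Guard maintains a per-app, per-source decision table where newly installed apps default to "ask". A minimal sketch of that idea follows; the app names and the three-way decision model are illustrative, not CyanogenMod's actual implementation:

```python
# Sketch of the Privacy Guard idea: per-app access decisions for private
# data sources, defaulting to "ask" so the user decides per request.

PERMISSIONS = {}  # (app, source) -> "allow" | "deny" | "ask"

def set_rule(app, source, decision):
    """Record a permanent decision for an app/source pair."""
    PERMISSIONS[(app, source)] = decision

def query(app, source):
    """Apps without a stored rule fall back to asking the user."""
    return PERMISSIONS.get((app, source), "ask")

# The railway app keeps working but gets no real private data:
set_rule("railway-app", "contacts", "deny")
set_rule("railway-app", "location", "deny")

assert query("railway-app", "contacts") == "deny"
assert query("weather-app", "location") == "ask"  # no rule yet: ask the user
```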

Using an Alternative Web Browser

Most people just use Google's default Chrome browser on an Android device and are thus freely sharing everything they do on the web from their mobile device with Google. And yet it is so easy to install an alternative browser such as Opera or Firefox. Personally I prefer Opera even though it is not open source. But at least it does not talk to Google all the time, and it has a great text-reflow feature that I miss in all the other alternatives I have tried so far.

Using an Alternative Search Engine

Most web browsers, even alternative ones, use Google for web searches by default, giving Google yet another way to collect private information about users. But it's easy to use other search engines such as DuckDuckGo, Yahoo or Bing. I'm still giving away private information through my search queries, but at least it ends up in other hands.

Blocking Unwanted Communication

Despite using Privacy Guard, alternative web browsers and search engines, there's still a lot of communication going on between apps and the servers of Google, Amazon and others. The only way to control these apps, short of not installing them, is to block communication to those servers. A convenient way to do this is via the 'hosts' file. Modifying the 'hosts' file requires a terminal program and root access, both of which come packaged with CyanogenMod. I've put my hosts file and scripts to activate and deactivate blocking on Github so you don't have to start from scratch. Have a look at this blog post for details. With this in place there's very little traffic going to servers I don't want it to go to, and further down in the post I describe how I occasionally check that this is still the case.
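As an illustration of the activate/deactivate idea (the hostnames below are made-up examples, not the actual blocklist from the Github repo), a hosts-file blocklist that can be added and cleanly removed again might look like this:

```python
# Sketch: maintain a marked blocklist section in a hosts file. Blocked hosts
# resolve to 0.0.0.0 so apps cannot reach them; removing the section restores
# the original file. Hostnames are illustrative examples.

BLOCKED = [
    "metrics.example-vendor.com",
    "ads.example-tracker.net",
]

MARKER_START = "# BEGIN blocklist"
MARKER_END = "# END blocklist"

def add_blocklist(hosts_text: str) -> str:
    """Append a marked blocklist section to the hosts file content."""
    lines = [MARKER_START]
    lines += [f"0.0.0.0 {host}" for host in BLOCKED]
    lines.append(MARKER_END)
    return hosts_text.rstrip("\n") + "\n" + "\n".join(lines) + "\n"

def remove_blocklist(hosts_text: str) -> str:
    """Strip the marked section, restoring the original hosts file."""
    out, skipping = [], False
    for line in hosts_text.splitlines():
        if line == MARKER_START:
            skipping = True
        elif line == MARKER_END:
            skipping = False
        elif not skipping:
            out.append(line)
    return "\n".join(out) + "\n"

original = "127.0.0.1 localhost\n"
blocked = add_blocklist(original)
assert "0.0.0.0 metrics.example-vendor.com" in blocked
assert remove_blocklist(blocked) == original
```

On a rooted device, the same add/remove logic would run against /system/etc/hosts from a root shell.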

GPS-only Location

While it is admittedly convenient to send information about Wi-Fi networks at my location and the current Cell ID to Google's location services for a quick location fix, it also allows Google to track me. Yes, they say the data is processed anonymously, but I still don't like it. And I don't have to, because GPS-only location provides a quick location fix as well these days. Unfortunately, some devices use Google's Secure User Plane Location (SUPL) server and include my subscriber id (IMSI) and location information in the request. Many devices, however, use other services to help the GPS chip find the satellites more quickly, which seem to be less nosy. For details have a look at the blog posts here and here.

Owncloud for Synchronization

Naturally I don't want to share my address book and calendar with Google, Microsoft or any other big company for that matter. As a consequence, I used to keep only a non-synchronized calendar on my PC and an address book on my smartphone. And then Owncloud came along and made me make peace with 'the cloud', as my cloud services are at home. Since setting up an Owncloud server at home, I enjoy calendar and address book synchronization between all my devices without the need to share my data with a cloud service provider.

Open Street Maps

Yes, and as you might have guessed by now, I also don't want Google to know where I'm going. I thus do not use Google Maps for street or car navigation. Instead I use OpenStreetMap for Android (Osmand). It's a great app and I have used it for car navigation for the past two years. Better still, all maps are stored on the device, so my smartphone doesn't have to frequently download map data from the web while driving. The downside is that I don't get traffic jam notifications, but I'm willing to trade that for my privacy.

Mobile eMail

Needless to say, Google is not my email provider either. Instead I use a medium-sized German company for that purpose and Thunderbird on the PC. On my Android smartphone I use the open source K9 email program to manage my emails. It's forked from Google's email program and works pretty much the same way, except it's not talking to Google, of course.

Music and Videos On My Mobile

Again, I prefer the open source VLC player to other pre-installed products, even to CyanogenMod's Apollo. It plays just about any audio and video format I throw at it without the need to talk to anyone on the web.

Seeing is Believing

And finally, it's important to check what kind of unwanted communication is still going on in the background after taking the steps above. The results can then be used to update the 'hosts' file or take further measures. Tracing IP traffic to and from the device can be done in several ways. One way is to use a Wi-Fi access point and connect it to the Internet via the Ethernet socket of a PC which is configured for Internet sharing. With Wireshark on the PC, all traffic from the smartphone via the Wi-Fi access point can then be monitored. Another approach is to use a Raspberry Pi as a Wi-Fi access point and then use tcpdump running on the Pi and Wireshark running on a PC that is also connected to the network to inspect the phone's data exchange. Another interesting option is to run tcpdump directly on the smartphone and inspect the resulting dump file with Wireshark on a PC. This way, one can even collect IP packets that are transferred over the cellular interface.
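A capture like this also lends itself to simple post-processing: compare the destinations seen in the trace against a personal allowlist and flag everything else. The sketch below uses a simplified, hypothetical log format and made-up hostnames, not actual tcpdump output:

```python
# Sketch: flag destination hosts seen in a (simplified) traffic log that are
# not on a personal allowlist. Hostnames and log format are illustrative.

ALLOWED = {"owncloud.home.example", "mail.example-provider.de"}

def unwanted_hosts(log_lines):
    """Return destinations in the log that are not explicitly allowed."""
    seen = set()
    for line in log_lines:
        # Each line: "<timestamp> <source> > <destination>:<port>"
        dest = line.split(" > ")[1].rsplit(":", 1)[0]
        if dest not in ALLOWED:
            seen.add(dest)
    return sorted(seen)

log = [
    "10:00:01 phone.lan > owncloud.home.example:443",
    "10:00:02 phone.lan > telemetry.example-vendor.com:443",
]
assert unwanted_hosts(log) == ["telemetry.example-vendor.com"]
```

Any host that keeps showing up here is a candidate for the hosts-file blocklist described earlier.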

Final Words

Purists will argue that even when all these measures are taken, things are far from perfect. I tend to agree, but there is far less of my private data being extracted from my smartphone without my consent or, at least, without my awareness.

by mobilesociety at January 11, 2015 11:51 AM