December 17, 2014

Martin's Mobile Technology Page

Upgrading Ubuntu With Minimal Downtime And A Fallback Option

When it comes to my notebook, which I use around 25 hours per day, I'm in a bit of a predicament. On the one hand, it must be stable and ultra-reliable. That means I don't install software on it that I don't really need, and I resort to virtual machines for such things. On the other hand, however, I also like new features of the OS, which means I had to upgrade my Ubuntu 12.04 LTS to 14.04 LTS at some point. But how can that be done with minimal downtime, without running the risk of embarking on lengthy fixing sessions after the upgrade, and without potentially having to find workarounds for things that don't work anymore?

When I upgraded from a 512 GB SSD to a 1 TB SSD and got rid of my Truecrypt partitions a few weeks ago, I laid the foundation for just such a pain-free OS update. The cornerstone was to have an OS partition that is separate from the data partition. This way, I was able to quickly create a backup of the OS partition with Clonezilla and restore the backup to a spare hard drive in a spare computer. And thanks to Ubuntu, the clone of my OS partition runs perfectly even on different hardware. And quick in this case really means quick: while my OS partition has a size of 120 GB, only 15 GB is used, so the backup takes around 12 minutes. In other words, the downtime of my notebook at this point of the upgrade was 12 minutes. Restoring the backup on the other PC took around 8 minutes.

On this separate PC I could then upgrade my cloned OS partition to Ubuntu 14.04, sort out small itches and ensure that everything was still working. As expected, a couple of things broke: my MoinMoin Wiki installation got a bit messed up in the process and Wi-Fi suspend/resume with my access point also got a bit bruised, but everything else worked just as it should.

Once I was satisfied that everything was working as it should, I used Clonezilla again to create a backup of the cloned OS partition and then restored this to my production notebook. That meant another 12-minute outage, plus an additional 3 minutes to restore the boot loader with a "Boot Repair" USB stick, as my older Clonezilla version could not restore an Ubuntu 14.04 GRUB boot loader installation after the restore process.

And that's it, Ubuntu 14.04 is now up and running on my production PC with as little as two 12-minute outages. In addition, I could try everything at length before I committed to the upgrade, and I still have the backup of the 12.04 installation that I could restore in 12 minutes should the worst happen and I discover a showstopper down the road.

So was it worth all the hassle, other than being able to boast that I have 14.04 up and running now? Yes, I think it was, and here's a list of things that have significantly improved for my everyday use:

  • Video playback is smoother now (no occasional vertical shear anymore)
  • The dock now shows the names of all LibreOffice documents
  • Newer VirtualBox, which seems to be faster (graphics, windows, etc.)
  • MTP on more phones is recognized
  • Can be booted with an external monitor connected without issues
  • Nicer fonts in Wine apps (Word, etc.)
  • Nicer animations/lock screen
  • Updated LibreOffice with improved .doc and .docx support
  • The 5-year support period starts from 2014
  • Better position to upgrade to 16.04 in 2 years
  • Menus in the header save space
  • VLC has more graphical elements now

by mobilesociety at December 17, 2014 07:45 AM

December 15, 2014

London Calling

How to avoid having your social media team become a “social switchboard”

I don’t know about you, but I haven’t called a switchboard for years.


Some of the millennial readers of London Calling may not even know what a switchboard is.

A quick primer: when telephone networks were first introduced, you couldn’t directly call a number, so you had to have your call manually connected by an “operator” who would literally patch your call from one phone to another, via the switchboard.

In business, the switchboard for a long time became the focal point of the company, and to reach anyone, you had to call a central number, then be transferred to the right “extension” by the switchboard operator.

I am sure that companies still maintain the switchboard function, but on pretty much every business card and email signature I come across these days, there is a direct desk number and a mobile number – even fax numbers seem to be disappearing [read what a fax machine is here].

Which brings me to the point of this post.

After many years, the telephone switchboard is becoming less useful; in the social media space, however, it seems like many companies are building a social media switchboard.

I believe the switchboard analogy is fair: if you tweet a company's @name, then in most instances a person on the "social media team" picks it up and then has to decide on the most appropriate person or department to respond to the tweet. In some cases the tweet is copied into an email (yes, really!) and sent on to another department for action.

Now I am sure that some people reading this will by now be screaming at the screen saying “you are so wrong, we answer all the tweets ourselves”.

But herein lies the problem

When a piece of social media content enters a company and is triaged by a single, central team, there is a risk that it either never gets to the right person or department at the right time, or that an opportunity for a detailed response is missed.

I have started to present my concept of how companies are investing in a social media switchboard rather than federating this amazing social media insight throughout the organisation.

Watch me explain the social media switchboard problem in this 1 minute video, part of my 30 minute keynote at the New York Brand Innovators conference in December 2014.

Who is missing out when a social media switchboard is in place?

New employees

Social media content that is not obviously actionable by the social team may be missed – and the next bright millennial you have been trying to find may go unnoticed, because the content they are directing at your company (or a competitor) isn't something the switchboard is set up to respond to, so it goes unanswered.

The same can be said for other areas of the business such as:

  • Supply chain
  • Finance
  • Logistics
  • Product development

In each of these areas of the business, it is unrealistic to expect the social media team to look out for every part of the business, then capture and relay the specific content that matters most to these teams.

My view is that for social to work inside large organisations and provide real business value, it has to be federated throughout the organisation.

And when we talk about companies graduating from “doing” social media and instead becoming a social business, it is worthwhile looking at the definition of a social business.


How does social media 2.0 work in an organisation?

Let’s look at the normal flow of social media content into an organisation.

Content comes in – either directly or indirectly addressed to an organisation (mentioning its @name directly, or via a relevant keyword picked up by a social media listening tool).

Instead of being intercepted by a human on the “social switchboard”, powerful text analytics or psycholinguistic analysis is applied and then the content is routed to the most appropriate person in the organisation – with full workflow management. If it is not actioned by someone in a timely manner, then an alert is fired off.
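As a purely illustrative sketch (my own addition, not part of the original post), the routing-plus-alert flow described above might look something like this in Python. The departments, keywords and two-hour response window are invented placeholders; a real system would replace the keyword matching with the text analytics or psycholinguistic engine mentioned above.

```python
from datetime import datetime, timedelta

# Invented routing rules -- in a real deployment these would come from a
# text analytics / psycholinguistic engine, not hand-written keyword lists.
ROUTING_RULES = {
    "supply chain": ["delivery", "shipment", "out of stock"],
    "finance": ["invoice", "refund", "billing"],
    "product development": ["feature request", "bug", "doesn't work"],
}
SLA = timedelta(hours=2)  # assumed response window before an alert fires


def route(mention: str) -> str:
    """Send the mention to the department whose keywords match best."""
    text = mention.lower()
    scores = {dept: sum(kw in text for kw in kws) for dept, kws in ROUTING_RULES.items()}
    best = max(scores, key=scores.get)
    # Fall back to the central team only when nothing matches.
    return best if scores[best] > 0 else "social media team"


def needs_alert(received_at: datetime, actioned: bool) -> bool:
    """Fire an alert if a routed item sits unactioned past the SLA."""
    return not actioned and datetime.utcnow() - received_at > SLA


print(route("Hi @acme, my shipment never arrived and it's been a week"))
# -> "supply chain"
```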


How would this work in practice?

Just as it works today, we know how to get our message direct to someone, either by their direct email address, direct line, twitter, LinkedIn profile or mobile number.

The intelligence (and the workload) is federated throughout the organisation, rather than relying on a “switchboard” to capture and process the message.

In this federated model, the right person gets the message, and a service like IBM Watson learns how the content is used, and smart analytics is applied to the content received, and how it is distributed is analysed to see that the right people are getting the right content.


Social media reports are not the answer

While we are discussing this, can we stop doing those weekly "social media reports" that get emailed around and that no-one actually reads?

I simply don't understand how you can produce a static report and expect people to action it.

I heard about a company the other day that emails (yes, emails!) thousands of people each week the "social media report". When asked how people collaborate around this insight, the response was "well, of course they can't".

My view is that teams that persist in this behaviour won't be around in a year's time, once the CEO sees that they have just become another expensive switchboard.

Do you agree? Have your say in the comments below or tweet me @AndrewGrill.

If you enjoyed this blog post you may like other related posts listed below under You may also like ...

To receive future posts you can subscribe via email or RSS, download the android app, or follow me on twitter @andrewgrill.



You may also like ...

by Andrew Grill at December 15, 2014 02:36 PM

Eurotechnology.japan

EU Japan FTA

Free Trade Agreement (FTA) and Economic Partnership Agreement (EPA) Preparations: EU Japan FTA trade negotiations initiated: At the 20th EU-Japan Summit of May 2011 the EU and Japan decided to start preparations for both a Free Trade Agreement (FTA) and a political framework agreement (Economic Partnership Agreement, EPA). For updates and further details see: http://eu-japan.com/eu-japan-agreements/eu-japan-trade-negotiations/ … Continue reading EU Japan FTA

The post EU Japan FTA appeared first on Eurotechnology Japan.

by fasol@eurotechnology.com (Gerhard Fasol) at December 15, 2014 12:32 PM

December 13, 2014

Martin's Mobile Technology Page

Walking Down Memory Lane - 10 Years Ago, My First 3G Mobile

Is 10 years a long or a short timeframe? It depends, and when I think back to my first UMTS mobile, which I bought 10 years ago to the day (I checked), the timeframe seems both long and short at the same time. It seems like an eternity from an image quality point of view, as is pretty much visible in the first picture on the left, which is the first photo I took with my first UMTS phone, a Sony Ericsson V800 (Vodafone edition). Some of you might see another UMTS phone on the table, a Nokia 6630, but that was a company phone so it doesn't count.

On the other hand, 10 years is not such a long time when you think about how far the mobile industry has come since. Back in 2004 I had trouble finding UMTS network coverage, as mostly only bigger cities (population > 500,000 perhaps) had 3G coverage at the time. Back then, that first UMTS phone was still limited to 384 kbit/s – no HSDPA, no dual-carrier, just a plain DCH. But it was furiously fast for the time, the color display was so much better than anything I had before and the rotating camera in the hinge was a real design highlight. Today, 10 years later, there's almost nationwide 3G and even better LTE coverage, speeds in the double-digit megabit/s range are common, and screen size, UI speed, storage capacity and camera capabilities are orders of magnitude better than at that time.

Even more amazing is that at the time, people in 3GPP were already thinking about the next step. HSDPA was not yet deployed in 2004 but had already been standardized, and meetings were already being held to define the LTE we are using today. Just to get you into the mindset of 2004, here are two statements from the September 2004 "Long Term Evolution" meeting in Toronto, Canada:

  • Bring your Wi-Fi cards
  • GSM is available in Toronto

In other words, built-in Wi-Fi connectivity in notebooks was not yet the norm, and it was still not certain that there would be GSM coverage in the places where 3GPP met. Note, it was GSM, not even UMTS...

I was certainly by no means a technology laggard at the time, so I can very well imagine that many delegates attending the Long Term Evolution meeting in 2004 still had a GSM-only device that could do voice and SMS, but not much more. And still, they were laying the groundwork for LTE, which was so far away from the reality at the time that it almost seems like a miracle.

I close for today with the second image on the left, which shows my first privately owned GSM phone from 1999, a Bosch 738, my first UMTS phone from 2004 and my first LTE phone, a Samsung Galaxy S4 from 2014 (again, I had LTE devices for/from work before, but this is the first LTE device I bought for private use). 15 years of mobile development side by side.

by mobilesociety at December 13, 2014 09:32 PM

Check The Hotel's Wi-Fi Speed Before Reserving

Whenever I make a hotel reservation these days I can't help wondering how good their Wi-Fi actually is, or whether it works at all. Most of the time I don't care because I can use my mobile data allowance anywhere in Europe these days. Outside of Europe, however, it's a different story, as data is more expensive, so there I still do care. Recently I came across HotelWifiTest, a website that focuses on the data rates of hotel Wi-Fi, based on hotel guests using the site's speed tester. Sounds like an interesting concept, and it promises good speeds for the next hotel I'm going to visit. So let's see...

by mobilesociety at December 13, 2014 03:57 PM

December 11, 2014

Martin's Mobile Technology Page

Smartphone Firmware Sizes Rival Those Of Desktop PCs Now

Here's the number game of the day: When I recently installed Ubuntu on a PC I noticed that the complete package that installs everything from the OS to the Office Suite has a size of 1.1 GB. When looking at firmware images of current smartphones I was quite surprised that the images are at least the same size or are even bigger!

If you want to check this yourself, search for "<smartphone name> stock firmware image" on the net. Incredible – there's as much software on mobile devices now as there is on PCs!

A lot of it must be crap- and bloatware, though, because Cyanogen firmware images have a size of around 250 MB. Add to that around 100 MB for a number of Google apps that need to be installed separately and you are still only at about a third of a manufacturer's stock firmware image size.

by mobilesociety at December 11, 2014 06:00 AM

December 10, 2014

London Calling

CIPR Podcast on Social Business

On Friday 5th December, I participated in a podcast to talk about Social Business.

Hosted by Russell Goldsmith, it featured Ben Smith from the RealPRMoment, and Emma Hazan, Deputy MD from Hotwire PR.

A replay of the podcast is available below, or you can subscribe and download from iTunes.

At just over 30 minutes, it is well worth a listen if you work in PR, or any aspect of social media.

If you enjoyed this blog post you may like other related posts listed below under You may also like ...

To receive future posts you can subscribe via email or RSS, download the android app, or follow me on twitter @andrewgrill.



You may also like ...

by Andrew Grill at December 10, 2014 01:38 PM

mobiForge blog

Free service to identify device type, browser and OS

If you are looking for an easy and reliable way to identify device type (mobile, tablet, desktop, TV etc), OS and browser in your web applications, then you may want to check out a new free tool for that express purpose released by DeviceAtlas.

by mclancy at December 10, 2014 12:46 PM

December 08, 2014

Open Gardens

Implementing Tim Berners-Lee’s vision of Rich Data vs. Big Data

INTRODUCTION:

In a previous blog post (Magna Carta for the Web), I discussed the potential of Tim Berners-Lee's vision of Rich Data.

When I met Tim at the EIF event in Brussels, I asked him about the vision of Rich Data. I also thought more about how this vision could actually be implemented from a Predictive/Machine learning standpoint.

To recap the vision from the previous post:

So what is Rich Data? It's Data (and Algorithms) that would empower the individual. According to Tim Berners-Lee: "If a computer collated data from your doctor, your credit card company, your smart home, your social networks, and so on, it could get a real overview of your life." Berners-Lee was visibly enthusiastic about the potential applications of that knowledge, from living more healthily to picking better Christmas presents for his nephews and nieces. This, he said, would be "rich data". (Motherboard)

This blog post explores a possible way this idea could be implemented. Perhaps I can implement it as part of an Open Data Institute incubated start-up.

To summarize my view here:

The world of Big Data needs to maintain large amounts of Data because the past is used to predict the future. This is needed because we do not voluntarily share data and Intent. Here, I propose that to engender Trust, both the Algorithms and the 'training' should be transparent – which leads to greater Trust and greater sharing. This in turn removes the need to hold large amounts of Data (Big Data) to determine Predictions (Intents). Instead, Intents will be known (shared voluntarily) by people at the point of need. This would create a world of Rich Data – where the Intent is determined algorithmically using smaller data sets (and without the need to maintain a large amount of historical data).

BACKGROUND AND CHALLENGES:

Thus, to break it down further, here are some more thoughts:

a) Big Data vs. Rich Data: To gain insights from data, we currently collect all the data we can lay our hands on (Big Data). In contrast, for Rich Data, instead of collecting all data in one place in advance, you need access to many small data sets for a given person and situation. But crucially, this 'linking of datasets' should happen at the point of need and dynamically. For example: personal profile, contextual information and risk profile, e.g. for a person who is at risk of Diabetes or a Stroke – only at the point of a medical emergency (vs. gathered in advance).

b) Context already exists: Much of this information exists already. The mobile industry has done a great job of capturing contextual information accurately – for example location – and tying it to content (geo-tagged images).

c) The 'segment of one' idea has been tried in many variants: Segmentation has been tried – with some success – in retail (The future of Retail is segment of One), in a BCG perspective paper (Segment of One marketing – pdf) and in Inc magazine (Audience segmenting – targeting your customers). Segmentation is already possible.

d) Intents are not linked to context: The feedback loop is not complete because, while context exists today, it is not tied to Intent. Most people do not trust advertisers and others with their intent.

e) Intent (Predictions) are based on the past: Because we do not trust providers with Intent, Intent is gleaned through Big Data. Intents are related to Predictions. Predictions are based on a large number of historical observations, either of the individual or of related individuals. To create accurate predictions in this way, we need large amounts of centralized data and any other forms of Data. That's the Big Data world we live in.

f) IoT: IoT will not solve the problem. It will create an order of magnitude more contextual information – but providers will not be trusted and datasets will not be shared. And we will continue to create larger datasets with bigger volumes.

CREATING A TRUST FRAMEWORK FOR SHARING DATA AT AN ALGORITHMIC LEVEL

To recap:

a) To gain insights from data, we currently collect all the data we can lay our hands on. This is the world of Big Data.

b) We take this approach because we do not know the Intent.

c) Rather, we (as people) do not trust providers with Intent.

d) Hence, in the world of Big Data, we need a lot of Data. In contrast, for Rich Data, instead of collecting all data in one place in advance, you need access to many small data sets for a given person and situation. But crucially, this 'linking of datasets' should happen at the point of need and dynamically. For example: personal profile, contextual information and risk profile, e.g. for a person who is at risk of Diabetes or a Stroke – only at the point of a medical emergency (vs. gathered in advance).

 

From an algorithmic standpoint, the overall objective is: to determine the maximum likelihood of sharing under a Trust framework. Given a set of trust frameworks and a set of personas (for example, a person with a propensity for a stroke), we want to know the probability of sharing information, and under which trust framework.

We need a small number of observations for an individual

We need an inbuilt trust framework for sharing

We need the Calibration of Trust to be ‘people driven’ and not provider driven

POSSIBLE ALGORITHMIC APPROACH

A possible way to implement the above could be through a Naive Bayes Classifier.

  • In machine learning, Naive Bayes classifiers are a family of simple probabilistic classifiers based on applying Bayes' theorem with strong (naive) independence assumptions between the features.
  • Workings: Let {f1, . . . , fm} be a predefined set of m features. A classifier is a function f that maps input feature vectors x ∈ X to output class labels y ∈ {1, . . . , C}, where X is the feature space. Our goal is to learn f from a labelled training set of N input-output pairs, (xn, yn), n = 1 : N; this is an example of supervised learning, i.e. the algorithm has to be trained.
  • An advantage of Naive Bayes is that it only requires a small amount of training data to estimate the parameters (means and variances of the variables) necessary for classification (see the sketch after this list).
  • This represents the basics of Naive Bayes. Tom Mitchell, in a Carnegie Mellon paper, says: "A hundred independently drawn training examples will usually suffice to obtain a maximum likelihood estimate of P(Y) that is within a few percent of its correct value when Y is a Boolean variable. However, accurately estimating P(X|Y) typically requires many more examples."
  • In addition, we need to consider feature selection and dimensionality reduction. Feature selection is the process of selecting a subset of relevant features for use in model construction. Feature selection is different from dimensionality reduction: both methods seek to reduce the number of attributes in the dataset, but a dimensionality reduction method does so by creating new combinations of attributes, whereas feature selection methods include and exclude attributes present in the data without changing them. Examples of dimensionality reduction methods include Principal Component Analysis (PCA).
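To make the "small training set" point concrete, here is a minimal scikit-learn sketch (my addition, not the author's; the features, numbers and persona labels are invented toy data): a Gaussian Naive Bayes model fitted on just six labelled observations and asked for class probabilities on a new one.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Invented toy features per observation: [age, systolic blood pressure, BMI]
X_train = np.array([
    [72, 165, 31], [68, 158, 29], [75, 170, 33],   # persona: at risk of stroke
    [34, 118, 22], [29, 121, 24], [41, 125, 23],   # persona: low risk
])
y_train = np.array(["stroke_risk", "stroke_risk", "stroke_risk",
                    "low_risk", "low_risk", "low_risk"])

clf = GaussianNB().fit(X_train, y_train)      # trains on very few examples
new_person = np.array([[70, 160, 30]])
print(dict(zip(clf.classes_, clf.predict_proba(new_person)[0])))
```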

IMPLEMENTATION

  • Thus, a combination of Naive Bayes and PCA may be a starting point for implementing Rich Data. Naive Bayes needs a relatively small amount of data; PCA will reduce dimensionality.
  • How to incorporate Trust? Based on the above, Trust becomes a feature (an input variable) to the algorithm with an appropriate weighting. The output is then based on the probability of sharing under a Trust framework for a given persona (see the sketch after this list).
  • Who calibrates the Trust? A related and bigger question is: how to calibrate Trust within the Algorithm? This is indeed the Holy Grail and underpins the foundation of the approach. Prediction in research has grown exponentially due to the availability of Data – but Predictive science is not perfect (good paper: The Good, the Bad, and the Ugly of Predictive). Predictive Algorithms gain their intelligence in two ways: supervised learning (like Naive Bayes, where the algorithm learns through training data) or unsupervised learning, where the algorithm tries to find hidden structure in unlabeled data.
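A possible wiring of these pieces, sketched under my own assumptions (the feature encoding, trust-framework ids and toy numbers are all invented for illustration): the trust framework becomes one more input column, PCA reduces dimensionality, and Naive Bayes outputs the probability of sharing for each candidate framework.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

# Invented encoding: columns = persona features (age, risk score) plus a
# trust-framework id (0 = anonymised research, 1 = named clinician, 2 = advertiser).
X = np.array([
    [72, 0.9, 0], [70, 0.8, 1], [68, 0.85, 2],
    [35, 0.2, 0], [40, 0.1, 1], [30, 0.15, 2],
])
y = np.array([1, 1, 0, 1, 0, 0])  # 1 = person agreed to share, 0 = declined

model = make_pipeline(PCA(n_components=2), GaussianNB()).fit(X, y)

# For a new persona, ask: under which trust framework is sharing most likely?
persona = [71, 0.88]
for framework in (0, 1, 2):
    p_share = model.predict_proba([persona + [framework]])[0][1]
    print(f"framework {framework}: P(share) = {p_share:.2f}")
```

In this sketch the "calibration" is simply whatever labelled sharing decisions the personas themselves contribute, which is in keeping with the people-driven calibration argued for below.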

 

So, if we have to calibrate trust for a Supervised learning algorithm, the workings must be open and the trust (propensity to share) must be derived from the personas themselves, e.g. people at risk of a stroke, the elderly, etc. Such an open algorithm that learns from the people and whose workings are transparent will engender trust. It will in turn lead to greater sharing – and to a different type of predictive algorithm, which will need smaller amounts of historical data but will track a larger number of Data streams to determine value at their intersection. This in turn will complete the feedback loop and tie intent to context.

Finally, I do not propose that a specific algorithm (such as Naive Bayes) is the answer – rather, I propose that both the Algorithms and the 'training' should be transparent, which leads to greater Trust and greater sharing. This in turn removes the need to hold large amounts of Data (Big Data) to determine Predictions (Intents). Instead, Intents will be known (shared voluntarily) by people at the point of need. This would create a world of Rich Data – where the Intent is determined algorithmically using smaller data sets (and without the need to maintain a large amount of historical data).

Comments welcome – at ajit.jaokar at futuretext.com 

by ajit at December 08, 2014 05:29 PM

December 05, 2014

mobiForge blog

M-commerce insights: Retailers in action

For the third part of our m-commerce series (see part 1, part 2), we decided to look more closely at two websites that have chosen different approaches to mobile. One, Maplin, has chosen an adaptive approach without a unique mobile URL, while the other, Currys, went with an RWD approach. Both are electronics retailers based out of the United Kingdom.

by ruadhan at December 05, 2014 12:25 PM

Kai Hendry's blog

Docker container update workflow

I set up Greptweet a few months ago on a DO "Droplet" aka VPS in London running CoreOS stable.

Over that period of time, there was at least one PHP update (as per bloody usual) and a cache bugfix (backstory) that I needed to roll out.

How do I test the fix?

This fix was to do with the nginx.conf, not the actual code, so it was easy to build the Docker image locally, e.g. sudo docker build -t g5 . && sudo docker run -v /srv/www/greptweet.com:/srv/http/u -p 80:80 -ti g5

NEED A SOLUTION: What I found difficult to do, however, is test any local code changes, like a Bootstrap update, since the Dockerfile checks the source code out from git but I need to test my working changes.

UPDATE: Neerav Kumar from DockerSG suggested I use ADD instead of git clone in my Dockerfile.

How did I deploy the fix?

On my CoreOS droplet, I built a new image (from scratch) with docker build --no-cache -t greptweet . from a git checkout. I wasn't too sure what was going to happen, since there was already an image called "greptweet" there and in fact running. The new build seemed to simply replace the currently running build, and all I then needed to do was sudo systemctl restart greptweet.service for the systemd service file to serve it.

NEED A SOLUTION: Er, so what happened to the old build of Greptweet? It seems to have been discarded by the build that replaced it. What happens if I want to downgrade – just git checkout oldversion and docker build from there?

UPDATE: People suggested tags and updating the service file, but I think checking out an older version with git is a better approach for me.

Gotchas

WTH: Initially I built like so: docker build -t greptweet . – and noticed no changes on restart and an old version number on the app. It seems that Docker's build cache can't tell when a step is actually likely to change (new changes in git) and invalidate it.

UPDATE: I'm told Docker can sense changes with ADD but not with RUN. So hopefully the change will make the builds better.

I also had some issues with an nginx configuration syntax error, independent of Docker.

December 05, 2014 03:28 AM

December 04, 2014

m-trends.org

Mobile World Congress 2015 Networking Events & Parties

Another year and just another couple of months before the mobile craze will hit Barcelona again at the Mobile World Congress - starting on March 2, 2015 - here’s my annual list of side events and *places to be*: the events and networking cocktails – next to the classic big industry players mega parties – where I’m involved with […]

by Rudy De Waele at December 04, 2014 04:56 PM

December 02, 2014

mobiForge blog

Using Objective-C and Swift together in iOS Apps

As an addendum to our previous article about the role of Swift in iOS app development, we now take a look at how Swift and Objective-C can be used together in iOS apps. Despite Apple's intention to replace Objective-C with Swift, this is not practical in the short term, simply because developers are deeply entrenched in Objective-C. Rather than force Swift down developers' throats, Apple has made it easy for Objective-C to interoperate with Swift.

by weimenglee at December 02, 2014 01:18 PM

November 25, 2014

mobiForge blog

M-commerce insights: Mobile users and context

Our last article on m-commerce looked at the effect of performance on conversions for e-retailers. In this second part, we take a look at how user context is used to optimise and adapt content by major e-retailers, and how this can affect conversions.

by ruadhan at November 25, 2014 11:50 AM

MobileMonday London

MoMoLo goes to Apps World, November 2014

Firstly, a massive thanks to Apps World 2014, who gave us twenty-four stands to offer out to startups from the community over the two-day conference – 12th and 13th November at ExCeL London.


Armed with my dictaphone, I interviewed our startups, and they all talked about the diverse range of people they met over the two days, including web developers, investors, universities, corporates, press and bloggers. There was also mention of lots of different sectors, such as music and fashion.

Douglas Robb of Scramboo, talked about two of his favourite conversations: "I met a major English football club that are very interested in the product and someone yesterday refused to tell me their identity but this morning I received a LinkedIn invite from them - turns out they were very senior in a large corporate...it was very worthwhile"



Of course, it was also a great opportunity for some of the startups to engage with end customers too: 5 Tiles (Jose and Michal pictured here) were testing the usability of their new keyboard for small devices on a Samsung smartwatch. They were delighted by how quickly people were able to grasp their new interface. They also found a whole new target group, as their product provides a great solution for those with reduced dexterity. Frederick of Adsy also said that he had met quite a few of their 23,000 users and it was great to get feedback.

Albert at Quiztix added how useful it was to meet other startups and celebrate how far the world of apps has come. He also said how useful it was for other team members to have the chance to meet customers face to face.

From our side, we had many of our friends from international Mobile Monday chapters pop along to see us - here we have (left to right) London, London, Singapore, Tel Aviv and Singapore!


We also had a great opportunity to take a ride on the new cable car over from ExCeL to the after party at the O2. For me, the whole experience reinforced how lucky we are to live in such a diverse and vibrant city – a hotbed of innovation that pulls people from all over the world.

Well done to the startups and thanks to those of you that came down over the two days to support us. 


Julia Shalet, Co-Organiser, Mobile Monday London

by Julia Shalet (noreply@blogger.com) at November 25, 2014 11:44 AM

November 19, 2014

mobiForge blog

M-commerce insights: Give users what they want, and make it fast

With Black Friday fast approaching, and this year’s predicted to be the busiest since 2006, we thought it an opportune time to take an in-depth look at e-commerce – specifically, at mobile e-commerce (m-commerce). M-commerce is a strand of online retail that’s nudging close to taking up 20% of total annual online sales; appreciating its business significance, and coming up with a viable mobile retail strategy, is, thus, crucial.

by ruadhan at November 19, 2014 04:34 PM

Cloud Four Blog

We’re Hiring: Front-end Designer

We’re growing! We’re searching for an enthusiastic and talented front-end designer to join our small team in downtown Portland.

We believe good designers are also educators and explainers. You should be comfortable leading design discussions and facilitating workshops with clients to gather requirements and establish direction.

While you’re comfortable with wireframes and comps, you’re also fluent in the language of the web and often prefer to go from sketching to designing in the browser. You’ll need a passion for HTML and CSS to build the complex responsive designs we specialize in.

Nearly every project we undertake we do as a team. We prefer frequent iterations and working collaboratively with our clients on designs. We need people who have empathy and communicate well.

We provide a positive and creative workplace where people can do their best work. We value the unique contributions of every member of the Cloud Four team. We welcome and seek diverse opinions and backgrounds.

We’re not interested in startup insanity. We support our families with reasonable hours, flexible schedules, and the ability to work from home when needed. We offer benefits including medical, dental, vision, and IRA.

We’re a small agency with big aspirations. We started Mobile Portland and host a community device testing lab. We speak at conferences and participate in web standards setting. We like exploring the frontiers of what’s possible on the web and sharing what we learn.

If this sounds like you, please send your resume and a cover letter explaining why you’re the right person to join our team to jobs@cloudfour.com.

by Lyza Gardner at November 19, 2014 04:02 PM

Martin's Mobile Technology Page

Some Musings About LTE on Band 3 (1800 MHz)

It's 2014 and there is no doubt that LTE on Band 3 (1800 MHz) has become very successful; the Global mobile Suppliers Association (GSA) even states that "1800 MHz [is the] Prime Band for LTE Deployments Worldwide". Looking back 5 years to 2009/2010, when the first network operators began deploying LTE networks, this was far from certain.

Quite the contrary: deploying LTE on 1800 MHz was seen by many I talked to at the time as a bit of a gamble. Back then, the general thinking, for example in Germany, was more focused on 800 MHz (band 20) and 2600 MHz (band 7) deployments. But as the GSA's statement shows, the gamble has paid off. Range is said to be much better compared to band 7, so operators who went for this band in auctions, or who could re-farm it from spectrum they already had for GSM, have an interesting advantage today over those who need to use the 2600 MHz band to increase their transmission speeds beyond the capabilities of their 10 MHz channels in the 800 MHz band.

To me, an interesting reminder that the future is far from predictable...

by mobilesociety at November 19, 2014 03:14 PM

November 17, 2014

Martin's Mobile Technology Page

A Capacity Comparison between LTE-Advanced CA and UMTS In Operational Networks Today

With LTE-Advanced Carrier Aggregation being deployed in 2014, it recently struck me that there's a big difference in deployed capacity between LTE and UMTS now. Most network operators have had two 5 MHz UMTS carriers deployed in busy areas for quite a number of years. In some countries, some carriers have more spectrum and have thus deployed three 5 MHz carriers; I'd say that's rather the exception, though. On the LTE side, carriers with enough spectrum have deployed two 20 MHz carriers in busy areas and can easily extend that with additional spectrum in their possession as required. That's also a bit of an exception, and I estimate that most carriers have deployed between 10 and 30 MHz of LTE today. In other words, at the top end it's 15 MHz of UMTS compared to 40 MHz of LTE. Quite a difference, and the gap is widening.
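For what it's worth, the back-of-the-envelope arithmetic behind that comparison, using only the carrier counts stated above, is simply:

```python
umts_mhz = 3 * 5   # three 5 MHz UMTS carriers in the best-equipped networks
lte_mhz = 2 * 20   # two aggregated 20 MHz LTE carriers
print(umts_mhz, lte_mhz, round(lte_mhz / umts_mhz, 1))  # 15 40 2.7
```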

by mobilesociety at November 17, 2014 05:54 PM

London Calling

Would you use “Facebook at Work” as your corporate social network?

I was reading an interesting article in the Financial Times this morning (registration required) about the fact that Facebook is apparently developing a corporate version called "Facebook at Work".


It got me thinking: as I spend most of my professional time now convincing corporates that they should be using an enterprise social network to collaborate, would I use (and trust) Facebook for my enterprise collaboration?

There are a number of existing enterprise-grade social networks such as IBM’s own IBM Connections (free trial here), Microsoft’s Yammer, Salesforce’s Chatter, and Jive.

All of these existing solutions offer a variety of features, from full file-sharing, document management and collaboration, through to communities, and real-time chat.

Many of these solutions can be hosted in the cloud, and some can also be hosted securely “on premise”, providing CIOs peace of mind that sensitive corporate data is secure on the company’s own network.


Now I’ve got my corporate social network, what next?

The issue I find my clients struggle with, regardless of the platform, is how to drive adoption. It is all very well to have rolled out a brand new corporate social network, but you need users to actually want to use it – and this requires a cultural change.

Cultural change is hard, so how do you encourage people to change their behaviour and share what they are doing?

My view of how we will treat the value of collaboration in the future can be summed up in one line:

“In the future, your value to an organisation won’t be what you know, it will be what you share.”

The analogy I use all the time with clients and at conferences is this:

“…ten (or 20) years ago, if I said to you that you had to carry a piece of plastic (a mobile phone) with you everywhere you went, and be available for calls on the weekend, you probably would have asked me what additional pay you would receive as a result”.

The thing is, in 2014, when we join a new company, one of the first things we ask about is our mobile phone. We need to get corporate social networks to the same "mobile phone" moment, where everyone is asking about access to the network, and if we took it away, there would be a riot.

How would you separate Facebook from FB@Work?

So back to the FT article, where they hint that the new Facebook network is aimed at competing with existing networks such as LinkedIn and Google Docs.

I am not sure I would want to use the same network for both personal and business use, and given that Facebook is driven by advertising, could I be assured that I would not see advertisements for the latest top-secret deal I am working on alongside my posts?

I am also not sure that a FB@Work would compete directly with networks such as LinkedIn.

LinkedIn works well as a directory of contacts, and also surfaces great business-related content from my contacts.

While I am sure LinkedIn is looking at how it might expand the site beyond being an excellent directory, the issue of adoption remains, and this is of course where companies such as IBM excel: taking an existing network and developing processes to ensure it is properly adopted.

How do you become a social organisation?

Below you can see a video that explains how IBM Interactive Experience social collaboration experts designed a program and process to help 320,000 Tesco Colleagues communicate, collaborate and reward great work in real-time right across the country using Yammer.

In the words of Alison Horner, the Group Personnel Director: “How do we make a big business feel smaller?”

What we did at Tesco was a year-long project and required more than just training people on how to use the platform; as in every implementation, the focus needs to be on the cultural changes, not just the technology. This is something I will focus on in an upcoming blog post.

I will be watching how FB@Work progresses, and whether larger, more risk-averse organisations take to it, or it remains the "free" model for small and medium companies.

What are your views – could Facebook make the jump to becoming your company’s internal social network? Please leave me a comment below or Tweet @AndrewGrill

If you enjoyed this blog post you may like other related posts listed below under You may also like ...

To receive future posts you can subscribe via email or RSS, download the android app, or follow me on twitter @andrewgrill.



You may also like ...

by Andrew Grill at November 17, 2014 10:45 AM