November 25, 2014

MobileMonday London

MoMoLo goes to Apps World, November 2014

Firstly, a massive thank you to Apps World 2014, who gave us twenty-four stands to offer to startups from the community over the two-day conference - 12th and 13th November at ExCeL London.


Armed with my dictaphone, I interviewed our startups, and they all talked about the diverse range of people they met over the two days, including web developers, investors, universities, corporates, press and bloggers. There was also mention of lots of different sectors, such as music and fashion.

Douglas Robb of Scramboo talked about two of his favourite conversations: "I met a major English football club that are very interested in the product and someone yesterday refused to tell me their identity but this morning I received a LinkedIn invite from them - turns out they were very senior in a large corporate...it was very worthwhile."



Of course, it was also a great opportunity for some of the startups to engage with end customers too: 5 Tiles (Jose and Michal, pictured here) were testing the usability of their new keyboard for small devices on a Samsung smartwatch. They were delighted by how quickly people were able to grasp their new interface. They also found a whole new target group: their product provides a great solution for people with reduced dexterity. Frederick of Adsy also said that he had met quite a few of their 23,000 users and it was great to get feedback.

Albert at Quiztix added how useful it was to meet other startups and celebrate how far the world of apps has come. He also said how useful it was for other team members to have the chance to meet customers face to face.

From our side, we had many of our friends from international Mobile Monday chapters pop along to see us - here we have (left to right) London, London, Singapore, Tel Aviv and Singapore!


We also had a great opportunity to take a ride on the new cable car over from ExCeL to the after party at the O2. For me, the whole experience reinforced how lucky we are to live in such a diverse and vibrant city - a hotbed of innovation that pulls people from all over the world.

Well done to the startups and thanks to those of you that came down over the two days to support us. 


Julia Shalet, Co-Organiser, Mobile Monday London

by Julia Shalet (noreply@blogger.com) at November 25, 2014 11:44 AM

November 23, 2014

Martin's Mobile Technology Page

How To Fix Ubuntu Wi-Fi Tethering Issues With Some Smartphones

I use smartphone Wi-Fi tethering every day to connect my notebook to the Internet. This mostly works out of the box. There are, however, a tiny number of smartphones with which I have problems: while the notebook connects just fine, ping times are very long and erratic, as shown in the screenshot on the left, and there's almost no data throughput. It took me a long time to figure out what the issue was, but at some point I realized that I only had the problems with a few particular devices when my notebook was not connected to the charger. Ah, many of you might say now, then it has something to do with power saving modes!

And indeed it has. By default, Ubuntu activates power save mode in the Wi-Fi chip when running on battery and deactivates it as soon as the notebook is connected to the mains again. While power save mode slightly increases ping times, it otherwise has no negative effects with 99% of the smartphones I try, except for the few on which it wreaks total havoc.

Fortunately, there's a simple way to disable power save mode. A simple "sudo iwconfig wlan0 power off" from a shell instantly fixes the problem. The "iwconfig" command without any parameters then shows that power save mode was switched off despite running on battery:

wlan2     IEEE 802.11bgn  ESSID:"martins-i-spot"  
          Mode:Managed  Frequency:2.462 GHz  Access Point: xx:xx  
          Bit Rate=57.8 Mb/s   Tx-Power=16 dBm   
          Retry  long limit:7   RTS thr:off   Fragment thr:off
          Power Management:off
          Link Quality=70/70  Signal level=-38 dBm  
          Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
          Tx excessive retries:0  Invalid misc:90   Missed beacon:0

While this is a good short-term fix, Wi-Fi power management is activated again after a reboot or after waking from sleep. To permanently disable Wi-Fi power save mode, a script that contains the command can be added to the power management configuration directory:

cd /etc/pm/power.d/
sudo touch wireless
sudo chmod 755 wireless
sudo nano wireless

And then paste the following two lines inside:

#!/bin/bash
/sbin/iwconfig wlan0 power off

That's it. Just one more thing perhaps: Use "ifconfig" to check if your Wi-Fi adapter is "wlan0" or if the OS has at some point assigned another name to it and adapt the command accordingly.

 

by mobilesociety at November 23, 2014 08:14 AM

November 19, 2014

mobiForge blog

M-commerce insights: Give users what they want, and make it fast

With Black Friday fast approaching, and this year’s predicted to be the busiest since 2006, we thought it an opportune time to take an in-depth look at e-commerce – specifically, at mobile e-commerce (m-commerce). M-commerce is a strand of online retail that’s nudging close to 20% of total annual online sales; appreciating its business significance, and coming up with a viable mobile retail strategy, is thus crucial.

by ruadhan at November 19, 2014 04:34 PM

Cloud Four Blog

We’re Hiring: Front-end Designer

We’re growing! We’re searching for an enthusiastic and talented front-end designer to join our small team in downtown Portland.

We believe good designers are also educators and explainers. You should be comfortable leading design discussions and facilitating workshops with clients to gather requirements and establish direction.

While you’re comfortable with wireframes and comps, you’re also fluent in the language of the web and often prefer to go from sketching to designing in the browser. You’ll need a passion for HTML and CSS to build the complex responsive designs we specialize in.

Nearly every project we undertake we do as a team. We prefer frequent iterations and working collaboratively with our clients on designs. We need people who have empathy and communicate well.

We provide a positive and creative workplace where people can do their best work. We value the unique contributions of every member of the Cloud Four team. We welcome and seek diverse opinions and backgrounds.

We’re not interested in startup insanity. We support our families with reasonable hours, flexible schedules, and the ability to work from home when needed. We offer benefits including medical, dental, vision, and IRA.

We’re a small agency with big aspirations. We started Mobile Portland and host a community device testing lab. We speak at conferences and participate in web standards setting. We like exploring the frontiers of what’s possible on the web and sharing what we learn.

If this sounds like you, please send your resume and a cover letter explaining why you’re the right person to join our team to jobs@cloudfour.com.

by Lyza Gardner at November 19, 2014 04:02 PM

November 18, 2014

Martin's Mobile Technology Page

Perhaps it's time for 3G to LTE Handovers Now?

While most networks still use the Radio Bearer "Release with Redirect" method to switch from LTE to 3G when necessary, some networks have started using a real LTE to 3G packet handover procedure that significantly reduces the outage time of the data bearer. So far so good. The problem is that once a device is on the 3G layer, there's no way for it today to get back to LTE until no data is transmitted anymore and the connection is put into Idle or Cell/URA-PCH state. This is especially problematic if a mobile device is used via tethering in combination with notebooks and other devices that send data all the time, as the switch back to LTE then never happens. Perhaps the time has come to change this?

Before I go on explaining why the time might have come for this to change, it's perhaps a good idea to have a quick look at the problem of a 3G to LTE handover. While active in UMTS, the mobile's transceiver is busy all the time, so it can't look at other channels and bands for a better radio technology. The only way to do this is for the network to schedule transmission gaps (the famous UMTS compressed mode) and to instruct the mobile device to look for LTE cells during those transmission and reception gaps. Obviously, such a radio reconfiguration has a significant drawback: the data rate goes down. This is perhaps acceptable if an LTE signal is found, but not very desirable if there is no LTE coverage to be found for some time. This is the reason why network operators have so far shied away from it. After all, 3G is quite a good technology for Internet access as well.

These days, however, LTE coverage has become a lot better, and when I look at network coverage maps there aren't a lot of places in many networks where 3G is deployed but LTE is not. In other words, if the unfortunate event occurs and the mobile is sent to 3G due to a lack of LTE network coverage, chances are very high that the user will be back in LTE coverage quite quickly. Therefore I think that with the LTE network coverage there is today, it would make sense to think about 3G to LTE handovers.

P.S.: And it's not as if changes from a slower RAT to a faster RAT while transferring data are unknown. This works great from GSM to UMTS, for example. As GSM/GPRS uses timeslots, a mobile device has ample time, even without network support, to search for UMTS while data is transferred. The same mechanism also works for switching from GPRS to LTE during a data transfer, but so far only few mobile devices have implemented this. Fortunately, the first devices are now showing up that can do GPRS to LTE reselections during packet data transfer. So when I'm connected on a train, I at least end up on LTE again if things get so bad for some time that my connectivity ends up on the GSM layer.

by mobilesociety at November 18, 2014 07:13 AM

First Carrier in Germany Starts LTE-Advanced Carrier Aggregation with 300 Mbit/s

In several European countries and elsewhere on the planet, a number of network operators have rolled out LTE-Advanced Carrier Aggregation in recent months. Most of them bundle a combination of 10, 15 or 20 MHz carriers. In Germany, the first mobile network operator has now also started Carrier Aggregation and has gone straight to the maximum possible today: two full 20 MHz carriers for a theoretical top speed of 300 Mbit/s with LTE Category 6 devices.

Nicely enough, the carrier has also enhanced its publicly available network coverage map to show where 2x20 MHz CA is available (click on the LTE 300 Mbit/s checkbox). At the nationwide zoom level there's not much to be seen, but when zooming in over big cities such as Cologne, Düsseldorf, Berlin and many others, you can see that these are quite well covered already. I'm looking forward to the first reports by the tech press on how much can be achieved in practice.

by mobilesociety at November 18, 2014 06:43 AM

November 17, 2014

London Calling

Would you use “Facebook at Work” as your corporate social network?

I was reading an interesting article in the Financial Times this morning (registration required) about the fact that Facebook is apparently developing a corporate version called “Facebook at Work”.

It got me thinking: as I spend most of my professional time now convincing corporates that they should be using an enterprise social network to collaborate, would I use (and trust) Facebook for my enterprise collaboration?

There are a number of existing enterprise-grade social networks such as IBM’s own IBM Connections (free trial here), Microsoft’s Yammer, Salesforce’s Chatter, and Jive.

All of these existing solutions offer a variety of features, from full file-sharing, document management and collaboration, through to communities, and real-time chat.

Many of these solutions can be hosted in the cloud, and some can also be hosted securely “on premise”, providing CIOs peace of mind that sensitive corporate data is secure on the company’s own network.

Now I’ve got my corporate social network, what next?

The issue I find my clients struggle with, regardless of the platform, is how to drive adoption. It is all very well to have rolled out a brand new corporate social network, but you need users to actually want to use it - and this requires a cultural change.

Cultural change is hard, so how do you encourage people to change their behaviour and share what they are doing?

My view of how we will treat the value of collaboration in the future can be summed up in one line:

“In the future, your value to an organisation won’t be what you know, it will be what you share.”

The analogy I use all the time with clients and at conferences is this:

“...ten (or 20) years ago, if I said to you that you had to carry a piece of plastic (a mobile phone) with you everywhere you went, and be available for calls on the weekend, you probably would have asked me what additional pay you would receive as a result.”

The thing is, in 2014, when we join a new company, one of the first things we ask about is our mobile phone. We need to get corporate social networks to the same “mobile phone” moment, where everyone is asking about access to the network, and if we took it away, there would be a riot.

How would you separate Facebook from FB@Work?

So back to the FT article, which hints that the new Facebook network aims to compete with existing networks such as LinkedIn and Google Docs.

I am not sure I would want to use the same network for both personal and business use, and given Facebook is driven by advertising, could I be assured that I would not see advertisements for the latest top-secret deal I am working on alongside my posts?

I am also not sure that a FB@Work would compete directly with networks such as LinkedIn.

LinkedIn works well as a directory of contacts, and also surfaces great business-related content from my contacts.

While I am sure LinkedIn is looking at how they might expand the site beyond being an excellent directory, the issue of adoption remains, and this is of course where companies such as IBM excel: taking an existing network and developing processes to ensure it is properly adopted.

How do you become a social organisation?

Below you can see a video that explains how IBM Interactive Experience social collaboration experts designed a program and process to help 320,000 Tesco Colleagues communicate, collaborate and reward great work in real-time right across the country using Yammer.

In the words of Alison Horner, the Group Personnel Director: “How do we make a big business feel smaller?”

What we did at Tesco was a year-long project that required more than just training people on how to use the platform; as in every implementation, the focus needs to be on the cultural changes, not just the technology. This will be something I focus on in an upcoming blog post.

I will be watching how FB@Work progresses, and whether larger, more risk-averse organisations take to it, or it remains the “free” model for small and medium companies.

What are your views – could Facebook make the jump to becoming your company’s internal social network? Please leave me a comment below or Tweet @AndrewGrill

If you enjoyed this blog post you may like other related posts listed below under You may also like ...

To receive future posts you can subscribe via email or RSS, download the Android app, or follow me on Twitter @andrewgrill.



You may also like ...

by Andrew Grill at November 17, 2014 10:45 AM

November 14, 2014

Martin's Mobile Technology Page

LTE Carrier Aggregation: Intra-Band Non-Contiguous

Apart from the LTE Carrier Aggregation used in practice today, which combines channels in different frequency bands for higher throughput, there are also CA combinations that bundle channels in the same frequency band that are not adjacent to each other. Such combinations are called Intra-Band Non-Contiguous. Quite a mouthful. Now what would they be good for?

I don't have any practical examples, but I think such combinations would make sense for network operators that have either received several chunks of spectrum in the same band over time or have acquired additional spectrum, e.g. through a merger with another network operator.

When looking at this carrier aggregation table, such combinations are foreseen for the US, Europe and China. In the US, the non-contiguous combination is foreseen in band 4 (1700/2100 MHz), which quite a lot of carriers seem to use. In Europe, band 3 (1800 MHz) and band 7 (2600 MHz) have such combinations defined as well. I wonder which carriers might want to use them in the near future. Any idea?

by mobilesociety at November 14, 2014 07:01 AM

November 11, 2014

mobiForge blog

HTML5 support in mobile devices

Now that the HTML5 set of standards has reached Recommendation status (the W3C's way of saying it's now a published standard) we thought we'd take a look at how support for HTML5 has grown in shipping mobile devices over the past few years.

by ronan at November 11, 2014 11:18 AM

November 09, 2014

Martin's Mobile Technology Page

The Next Step In LTE Carrier Aggregation: 3 Bands

The hot LTE topic of 2014 that made it into live networks is certainly Carrier Aggregation (CA). Agreed, there aren't too many devices that support CA at the end of 2014, but that's going to change soon. In the US, quite a number of carriers have deployed 10 + 10 MHz Carrier Aggregation to play catch-up with the 20 MHz carriers already used in Europe. In Europe, network operators will use 10 + 20 MHz aggregations and some even 20 + 20 MHz for a stunning theoretical peak data rate of 300 Mbit/s. So where do we go from here? Obviously, aggregating 3 bands is the next logical step.

And it seems 3GPP is quite prepared for it. Have a look at this page, which has an impressive list of all sorts of LTE carrier aggregation combinations and also shows the 3GPP spec version in which each was introduced.

For Europe, especially the 3A_7A_20A combination (20 + 20 + 10 MHz) is interesting as there are network operators that have spectrum in each of these bands. Peak data rates with 50 MHz of downlink spectrum, which some network operators actually own, would be 375 Mbit/s.
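
The 375 Mbit/s figure follows from simple scaling: Category 6 peak rates work out to roughly 150 Mbit/s per 20 MHz carrier, i.e. about 7.5 Mbit/s per MHz. A quick back-of-the-envelope check in shell (the per-MHz figure is an approximation derived from the 300 Mbit/s number above, not a spec value):

```shell
# Approximate LTE Cat 6 downlink peak: ~7.5 Mbit/s per MHz
# (expressed in tenths of Mbit/s to stay in integer arithmetic).
per_mhz_tenths=75
total=0
for mhz in 20 20 10; do            # the 3A_7A_20A carrier widths
    total=$(( total + mhz * per_mhz_tenths ))
done
echo "$(( total / 10 )) Mbit/s"    # prints: 375 Mbit/s
```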

For North America, there are literally dozens of potential combinations listed. Not sure which ones might actually be used. But I suspect it will be difficult to come up with 50 MHz of total aggregated bandwidth in this region, so Europe will continue to have an edge when it comes to speed.

by mobilesociety at November 09, 2014 07:33 PM

My Exodus from Truecrypt to DM-Crypt Is Complete

Back in August I wrote that I had started my exodus from Truecrypt, as the software is no longer supported by its authors. Over the months I've experimented a lot with dm-crypt on Linux to see if it is a workable alternative for me. As it turns out, dm-crypt works great, and here's how my migration went. It's a bit of a long story, but since I did a couple of other things along the way that are typical maintenance tasks when running out of disk space, I thought it a story worth telling, to pass on the tips and tricks I picked up from different sources.

Migrating My Backup Drives To DM-Crypt

At first I migrated my backup hard drives from Truecrypt to dm-crypt while staying with Truecrypt on my PC. Instead of using a dm-crypt container file, I chose to create a dm-crypt partition on my backup drives with Ubuntu's “Disk Utility”. Ubuntu automatically recognizes the dm-crypt partition when I connect the backup hard drives to the PC and asks for the password. Pretty much foolproof.
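
For those who prefer the command line over the Disk Utility GUI, the same result can be sketched with cryptsetup directly. Note that /dev/sdX1 is a placeholder for the backup drive's partition and this erases everything on it, so double-check the device name first:

```shell
# WARNING: destroys all data on the partition; /dev/sdX1 is a placeholder.
sudo cryptsetup luksFormat /dev/sdX1        # create the LUKS container
sudo cryptsetup luksOpen /dev/sdX1 backup   # unlock it as /dev/mapper/backup
sudo mkfs.ext4 /dev/mapper/backup           # create a filesystem inside
sudo cryptsetup luksClose backup            # lock it again
```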

Running Out Of Disk Space Faster Than I Thought

The next step came when my 500 GB SSD was close to becoming full and I had to get a bigger SSD. Fortunately, prices have come down quite a bit again over the last year, and a 1 TB Samsung 840 EVO was to be had for a little over 300 euros. I had some time to experiment with different migration options, as the 840 EVO had a firmware bug that decreased file read speeds over time, so I chose to postpone my migration until Samsung had a fix.

DM-Crypt Partitions Can Be Mounted During the Boot Process

A major positive surprise during those trial runs was that even my somewhat older Ubuntu 12.04 LTS recognizes the dm-crypt partition during the boot process when configured in the “fstab” and “crypttab” configuration files and asks for the password during the boot process before the user login screen is shown. Perfect!

Here's what my “/etc/crypttab” entry looks like:

# create a /dev/mapper device for the encrypted drive
data   /dev/sda3     none luks,discard

And here's what my “/etc/fstab” entry looks like:

# /media/data LUKS
/dev/mapper/data /media/data ext4 discard,rw 0 0

Sins Of The Past - Hard Disk Migration The Hard Way

When I initially upgraded from a 350 GB hard drive to a 500 GB SSD, I used Clonezilla to make a 1:1 copy of the hard drive to the SSD and used the extra space for a separate partition. After all, I couldn't imagine that I would run out of disk space on the initial 350 GB partition anytime soon. That turned out to be a bad mistake pretty quickly, as the virtual machine images on that partition soon grew beyond 200 GB. As a consequence, I moved my Truecrypt container file to the spare partition, but that only delayed the inevitable for a couple of months. In the end I was stuck with about 50 GB left on the primary partition and 100 GB on the spare partition, with the virtual machine images threatening to eat up the remaining space in the following months.

As a consequence, I decided that once I moved to a 1 TB SSD, I would change my partitions and migrate to a classic separation of the OS in a small system partition and a large user data partition. I left the system partition unencrypted as the temp directory is in memory, the swap partition is a separately encrypted partition anyway and the default user directories are file system encrypted. In other words, I decided to only encrypt the second partition with dm-crypt in which I would store the bulk of my user data and to which I would link from my home directory.

Advantages of a Non-Encrypted System Partition

There are a couple of advantages of a non-encrypted system partition. The first one is that in case something goes wrong and the notebook refuses to boot, classic tools can be used to repair the installation. The second advantage is that Clonezilla can back up the system partition very quickly because it can see the file system and hence only needs to read and compress the sectors of the partition that are filled with data. In practice my system partition contains around 20 GB of data which Clonezilla can copy in a couple of minutes even on my relatively slow Intel i3 based notebook. If I used dm-crypt for the system partition, Clonezilla would have to back up each and every sector of the 120 GB partition.

Minimum Downtime Considerations

The next exodus challenge was how to migrate to the 1 TB SSD with minimum downtime. As this is quite a time-intensive process during which I can't use the notebook, I played with several options. The first one I tried was to use Clonezilla to copy over only the 350 GB primary partition to the new SSD and then shrink it down to around 120 GB. This works quite well, but it requires shrinking the partition before recreating the swap partition and then manually reinstalling the boot sector. Reinstalling the boot sector is a bit tricky if done manually, but the Boot-Repair-Disk project pretty much automates the process. The advantage of copying only one partition is obviously that it speeds things up quite a bit. In the end, I chose another option when the time came: using Clonezilla to make a 1:1 copy of my 500 GB SSD, including all partitions, to the 1 TB SSD. This saved me the hassle of recreating the boot sector, and I had the time for it anyway as I ran the job overnight.

Tweaking, Recreating and Encrypting Partitions On The New SSD

Once that was done, I had a fully functional image on the 1 TB SSD with a working boot sector, and to continue the work I put it into another notebook. This way I could finish the migration while still being able to work on my main notebook. At this point, I deleted all data on the spare partition of the 1 TB SSD and also the virtual machine images on the primary partition, which left about 20 GB on the system partition. I then booted a live Ubuntu system from a CD and used “gparted” to shrink the system partition from 350 GB down to 120 GB and to recreate a Linux swap partition right after the new, smaller system partition. Like the 1:1 Clonezilla copy process earlier, this takes quite a while. That was not a problem, however, as I could still work on the 'old' SSD and even change data there, since migrating the data would only come later. Once the new drive was repartitioned, I rebooted into the system on my spare notebook and used Ubuntu's “Disk Utility” to create the dm-crypt user partition in the 880 GB of remaining space on the SSD.

Auto-Mounting The Encrypted Partition and Filling It With Data

As described above, it's possible to auto-mount the encrypted partition during the boot process so the partition is available before user login. As in my previous installation, where I mapped the “Documents” folder and a couple of other directories to the Truecrypt volume, I removed the logical links and created new ones pointing to empty directories on the new dm-crypt volume. Once that was done, it was time to migrate all my data, including the virtual machine images, to the new SSD. I did this by backing up all my data to one of my cold-storage backup disks as usual and restoring it from there to the new SSD. The backup only takes a couple of minutes, as LuckyBackup is pretty efficient, copying only new and altered files. To keep the downtime to a minimum, I swapped the SSDs after making the copy to the backup drives and started working with the 1 TB SSD in my production notebook. Obviously, I restored the email directory and the most important virtual machine images first so I could continue working with those while the rest of the data was copied over in the background.

Thunderbird Is A Special Bird

In my Truecrypt installation I used a logical link for the mail directory so I could have it on the Truecrypt volume while the rest of the Thunderbird installation remained in the user directory. At first I thought it would only be necessary to replace the logical link to the mail folder, but it turned out that Thunderbird also keeps the full path in its settings and doesn't care much about logical links. Fortunately, the full paths can be changed in "Preferences - Mail Setup".

Summary

There we go, this is the story of my migration away from Truecrypt, upgrading to a bigger SSD and cleaning up my installation at the same time. I'm glad I could try everything on a separate notebook first without Ubuntu complaining or making things difficult when it detected different hardware, as other operating systems perhaps would have. Quite a number of steps ended up as trial-and-error sessions that would have caused a lot of stress if I hadn't known about them during the real migration. It's been a lot of work, but it was worth it!

by mobilesociety at November 09, 2014 07:16 PM

Kai Hendry's blog

Vhost docker container

23:19 <hendry> biggest feature missing for me is dockers lack of vhosting support. hosting off a random port is a bit silly, no?
23:20 <dstufft> hendry: you can't generically do vhosting
23:24 <hendry> dstufft: not quite sure why vhosting is SUCH a hard feature
23:24 <niloc132> hendry: for a start, its http-specific
23:26 <dstufft> and even for the protocols that do support a vhost feature, there isn't a standard protocol agnostic way of getting that information
23:26 <exarkun> You can set up your own vhosting in a container.  Docker doesn't /have/ to support it.  So it's probably better if Docker doesn't, given the lack of a single, obvious, complete (ie, supporing all protocols) solution.
23:26 <exarkun> And you can find lots of http vhosting images out there now because people do want to do this and have already solved the problem.
23:27 <hendry> i don't want to solve it in the container. I guess I need to study some nginx reverse proxy thing
23:27 <dstufft> nginx can do it
23:27 <dstufft> or haproxy
23:27 <dstufft> or any number of things
23:28 <dstufft> I like haproxy for it, it's a pretty good tool
23:28 <exarkun> Do you have a reason that you don't want to solve it in a container?  The way you state it, it sounds like an arbitrary constraint.
23:28 <hendry> dstufft: why haproxy over nginx?
23:28 <hendry> exarkun: because i would be building more complexity in my container that i want to keep dead simple? or is the functionality running in another seperate container?
23:29 <dstufft> hendry: haproxy isn't HTTP specific, so if you find yourself wanting to do more you don't need to drop in another thing to handle it
23:29 <dstufft> it would be running in another seperate container
23:29 <hendry> dstufft: everything i do is HTTP.... (though does Websockets run over port 80/443?)
23:29 <exarkun> hendry: As dstufft said, *not* in your container.
23:30 <exarkun> That's the point.  Independent, composeable units.  Containers.
23:30 <hendry> exarkun: can you point to such a solution for a container to dispatch vhosts IIUC to another container ?
23:30 <exarkun> You can find one with two minutes of Googleing, I think.
23:31 <hendry> "vhost docker container" not looking good
23:33 <hendry> https://registry.hub.docker.com/search?q=vhost not looking great either
23:33 <hendry> exarkun: i give up
23:35 <exarkun> To your credit, you did spend three minutes.
23:35 <exarkun> I don't know what more effort anyone could be asked to expend than that.
23:36 <exarkun> (I'm certainly not going to!)
23:37  * hendry sighs
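
For reference, the separate-container approach suggested in the log usually boils down to an nginx (or haproxy) reverse proxy that dispatches on the HTTP Host header. A minimal sketch of the nginx side (a fragment for the http context; hostnames and the ports published by the backend containers are hypothetical):

```nginx
# Dispatch by Host header to ports published by individual containers.
server {
    listen 80;
    server_name app1.example.com;
    location / { proxy_pass http://127.0.0.1:8081; }  # container A's port
}
server {
    listen 80;
    server_name app2.example.com;
    location / { proxy_pass http://127.0.0.1:8082; }  # container B's port
}
```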

November 09, 2014 03:49 PM

November 08, 2014

Kai Hendry's blog

'invalid value for project' google compute engine

Google wasted my time by having a distinction between PROJECT NAME & PROJECT ID.

The SDK will ask you to set gcloud config set project NAME

NAME is the PROJECT ID

When things go wrong:

ERROR: (gcloud.compute.instances.create) Some requests did not succeed:
 - Invalid value for project: localkuvat

When things go right:

$ gcloud config set project numeric-rig-758
$ gcloud compute instances create outyet \
    --image container-vm-v20140925 \
    --image-project google-containers \
    --metadata-from-file google-container-manifest=containers.yaml \
    --tags http-server \
    --zone us-central1-a \
    --machine-type f1-micro
Created [https://www.googleapis.com/compute/v1/projects/numeric-rig-758/zones/us-central1-a/instances/lk].
NAME ZONE          MACHINE_TYPE INTERNAL_IP   EXTERNAL_IP    STATUS
lk   us-central1-a f1-micro     10.240.89.159 146.148.60.109 RUNNING

Hat tip: https://blog.golang.org/docker

November 08, 2014 12:19 PM

November 05, 2014

MobileMonday London

Next Event: 12th November 5pm - 8pm at Apps World, Free Developer Plus Pass!

Time is running out to register for our next event - a drinks reception at Apps World, ExCeL on 12th November from 5pm.

Registration for our reception also gives you:
* a free Developer Plus Pass (worth £59) which includes access to the keynote from Brian Cox
* an exhibition pass for both 12th and 13th November
* access to the after-party at the O2 on the 12th (a short hop across the river on the Emirates Air Line cable car)

Come and see us at the MoMoLo StartUp Zone during the day on Wednesday or Thursday - in our village we'll be in great company with around thirty of the most diverse StartUps from our community. From double dating to parking and app testing to gaming, they'd love your support and will also be there for the drinks on Wednesday 12th from 5-8pm.

In the Zone will be: Ripple Inc., Dna Dezign, Steer73, Smashed Crab Studio, Intoware, Mobikats Enterprise, Equaleyes Solutions, UCL Advances, Readly, ETAOI Systems, Techsis, eeGeo, AppyParking, Rounded Squarish, Triggertrap, Muzache, Playir, Mobileize, RefMe.co, Kites, QuizTix, Develapps, adsy.me, Swytch, Scramboo and Double.

If you also want to attend the various conference tracks you can enter MOMO15 and get a 15% discount on the ticket price. Check out the programme and book your tickets here.

Once more, do remember to register by 5pm Friday.
https://www.eventbrite.co.uk/e/momolo-goes-to-apps-world-registration-13270702027


Contribute to the Developer Economics Research Series on The App Economy and Developer Trends

While we’re here, and talking about Apps, we’re great supporters of the Developer Economics survey, which provides really important insights into what is happening and needs your support!

Now in its eighth iteration, the survey tracks responses from thousands of contributors across the globe. What do you think about the latest trends? Do you see more opportunity in Wearables, Smart Homes or Connected Cars? Who's making money from apps - and how? Are iOS developers adopting Swift? Is the web vs. native debate still relevant? It only takes ten minutes and you could win some great prizes.

We will send out a link to the free results of the survey in February. So do get involved here.

Looking forward to seeing you all next week.

by Julia Shalet (noreply@blogger.com) at November 05, 2014 05:05 PM

London Calling

Enterprise Social Affects More Than Just the Bottom Line

tighe-wall

By Guest Author Tighe Wall from IBM.

Over the past year many of my conversations with clients have turned the corner from “Why Social?” to “Which Social?” These organizations are realizing that the benefits of Social Enterprise tools extend far beyond simple measures of ROI. Though those benefits are famously difficult to measure, Social tools, when chosen and rolled out effectively, lower costs by reducing duplicate effort, grow revenue through more effective R&D, and reduce training and meeting costs through digital collaboration rather than face-to-face meetings.

My focus tends to be on the broad reaching positive benefits of Social Enterprise that are harder to measure, namely their impacts on employees and organizations. Of these, I’ve recently been spending time speaking and thinking about the following benefits of Enterprise Social tools.

Locating expertise. Last week I needed a project manager with experience in the fashion industry who speaks Turkish. I was able look through the skills, locations, experience, and languages spoken by the 430,000 people at IBM in a matter of seconds through our socially enabled Expertise Locator tool. This saved me the time of emailing and calling around my network, reduced my vetting time, and let me propose several candidates to my client that day rather than next week.

Reducing email. How many emails do you receive a day? How much time do you typically spend writing and responding to emails a day? Social tools limit the number of emails one receives and engage many more people than the typical one-to-one email dialogue.

Yes, a reduction in email is one of the most easily measured benefits of Social by lowering storage costs. But storage is incredibly cheap. Time isn’t.

Attracting top young talent. According to Nielsen, 74 percent of Millennials say technology makes their lives easier, and they spend an average of 21 hours a month using Social. How will the top young talent your firm is recruiting react when you tell them they won't be able to communicate and collaborate with their fellow employees using digital channels? Do you expect them to stick around? And if they do, do you expect them not to use these external channels to collaborate with their coworkers?

The truth is, they will use external channels, and if your current employees don't already have an internal Social platform, they are likely using external ones, too.

Flattened hierarchy. Not only do social platforms give entry-level employees the opportunity to engage and connect with business leaders, they also give those leaders, and in the case of IBM, the CEO, a channel with which to engage their employees. Sound scary?

According to the Harvard Business Review, flat organizations are more nimble than those with large hierarchies, tend to innovate more quickly, and more often have a shared purpose. Entrenched hierarchies, slow rates of innovation, and divergent purposes sound much, much scarier to me.

 

When considering which Social Enterprise tool to roll out, carefully weigh the capabilities of each option, link them to your business processes, and set benchmarks and targets for ROI calculations. But don’t neglect to consider the other wide-reaching benefits of your decision.

If you enjoyed this blog post you may like other related posts listed below under You may also like ...

To receive future posts you can subscribe via email or RSS, download the android app, or follow me on twitter @andrewgrill.



You may also like ...

by Tighe Wall at November 05, 2014 12:34 PM

Martin's Mobile Technology Page

A 2 Amp USB Charger Is Great - If A Device Makes Use Of It

The smallest 2 ampere USB charger I've come across so far is from Samsung, and my Galaxy S4 makes almost full use of its capabilities by drawing 1.6 amperes when the battery is almost empty. In case you are wondering how I know, have a look at the measurement tool I used for measuring the power consumption of a Raspberry Pi. What I was quite surprised about, however, was that all other devices I tried it with, including a new iPhone 6, only charge at 1 ampere at most. I wondered why, so I dug a bit deeper. Here's a summary of what I've found:

One reason for not drawing more than 1A out of the charger is that some devices simply aren't capable of charging at higher rates, no matter which charger is used. The other reason is that USB charging is only standardized up to 900 mA and everything above is proprietary. Here's how it works:

  • When a device is first connected to USB it may only draw 100 mA until it knows what kind of power source is behind the cable.
  • If it's a PC or a hub, the device can request more power and, if granted, may draw up to 450 mA out of a USB2 connector. And that's as much as my S4 will draw out of the USB connector of my PC.
  • USB3 connectors can supply up to 900 mA with the same mechanism.
  • Beyond the 450 mA USB2 / 900 mA USB3, the USB Charging Specification v1.1 that was published in 2007 defines two types of charging ports. The first is called Charging Downstream Port (CDP). When a device recognizes such a USB2 port it can draw up to 900 mA of power while still transferring data.
  • The second type of USB charging port defined by v1.1 of the spec is the Dedicated Charging Port (DCP). No data transfers are possible on such a port but it can deliver a current between 500 mA and 1.5A. On such a port the D+ and D- data lines are shorted together through a 200 Ohm resistor so the device can tell that it's not connected to a USB data port. Further, a device recognizes how much current it can draw out of such a port by monitoring the voltage drop when current consumption is increased.
  • With v1.2 of the charging specification, published in September 2010, a Dedicated Charging Port may supply up to 5A of current.
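The negotiation rules above boil down to a small lookup table. A minimal sketch in Python, using the current limits quoted in this post (the port-type names are my own illustrative labels, not official USB identifiers):

```python
# Maximum current (in mA) a device may draw, per the limits quoted above.
# Keys are illustrative labels for the detected port type.
MAX_CURRENT_MA = {
    "usb_unconfigured": 100,   # just plugged in, power source unknown
    "usb2_configured": 450,    # granted by a PC or hub over USB2
    "usb3_configured": 900,    # granted over USB3
    "cdp": 900,                # Charging Downstream Port, data still works
    "dcp_v1_1": 1500,          # Dedicated Charging Port, BC spec v1.1
    "dcp_v1_2": 5000,          # Dedicated Charging Port, BC spec v1.2
}

def max_charge_current(port_type):
    """Return the maximum current in mA a device may draw from a port."""
    # Unknown port: fall back to the safe 100 mA default
    return MAX_CURRENT_MA.get(port_type, 100)
```

This also shows why a 2A charger helps nothing by itself: the device only ramps up once it has recognized the port type.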

And that's as far as the standardized solutions go. In addition there are also some Apple and Samsung proprietary solutions to indicate the maximum current their chargers can supply:

  • Apple 2.1 Ampere
  • Apple 2.4 Ampere
  • Samsung 2.4 Ampere

There we go, quite a complicated state of affairs. No wonder only one of my devices makes use of the potential of my 2A travel charger. For more information, have a look at the USB article on Wikipedia, which also contains links to the specifications, and the external blog posts here, here and here.

by mobilesociety at November 05, 2014 06:50 AM

November 03, 2014

mobiForge blog

HTML5 best practice Web apps: Quartz

David Jensen is the head of development at popular UK free newspaper Metro. He has overseen the migration of metro.co.uk to become a responsively-designed web app. With his first-hand experience of HTML5 and web apps, mobiThinking asked Jensen about his favourite web app (other than Metro).

by mobiThinking at November 03, 2014 02:22 PM

Martin's Mobile Technology Page

Power Cycling My Backup Router With My Raspi

I am quite unhappy to admit it, but when it comes to reliability, the LTE router I use for backup connectivity for my home cloud comes nowhere close to my VDSL router. Every week or so, after the daily power reset, the router fails to connect to the network for no apparent reason. Sometimes it connects but the user plane is broken: packets still go out, but my SSH tunnels do not come up and the authentication log on the other side shows strange error messages. The only way to get things back on track is to reboot the LTE router or to power cycle it. Rebooting can only be done from inside the network, so when I'm traveling and the network needs to fall back to the backup link, there's nothing I can do should that fail.

When I recently stumbled over the 'EnerGenie EG-PM2' power strip, which has power sockets switchable via a built-in USB interface, I knew the time had come to do something about this. At around 30 euros it's quite affordable, and the software required on the Raspberry Pi, Ubuntu or Debian side is open source and already part of the software repositories. A simple 'sudo apt-get install sispmctl' executed in a shell and the setup is up and running without further configuration. Individual power sockets are switched off and on via the following shell commands:

sudo sispmctl -f 3  # switches power socket 3 off

sudo sispmctl -o 3  # switches power socket 3 on

It couldn't be easier and I had the basic setup up and running in two minutes. As a next step I wrote a short Python script that checks whether Internet connectivity is available via the backup link and, if not, power cycles the LTE router. I noticed that there's a Python wrapper for 'sispmctl', but it's also possible to just execute a shell command from Python as follows:

import subprocess

# Switch power socket 4 on; the return code indicates success or failure
result_on = subprocess.call("sudo sispmctl -o 4", shell=True)

Perhaps not as elegant as using the wrapper but it works and the result variable can be checked for problems such as the USB link to the power strip being broken.
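The watchdog script itself can stay small. Here is a sketch of the approach described above; the socket number, ping target and delay are assumptions, and error handling is kept minimal:

```python
import subprocess
import time

def sispm_cmd(action, socket):
    # Build the sispmctl command line: -f switches a socket off, -o on
    flag = {"off": "-f", "on": "-o"}[action]
    return ["sudo", "sispmctl", flag, str(socket)]

def link_is_up(host="8.8.8.8", run=subprocess.call):
    # One ping with a short timeout; exit code 0 means the link works
    return run(["ping", "-c", "1", "-W", "5", host]) == 0

def power_cycle(socket, delay=10, run=subprocess.call):
    # Switch the socket off, wait for the router to discharge, switch it on
    off = run(sispm_cmd("off", socket))
    time.sleep(delay)
    on = run(sispm_cmd("on", socket))
    return off == 0 and on == 0
```

Run from cron, the whole check is then just: if not link_is_up(): power_cycle(3), where 3 is whichever socket the LTE router is plugged into.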

by mobilesociety at November 03, 2014 07:08 AM

Raising the Shields - Part 14: Skype Jumps Into My VPN Tunnel Despite The NAT

According to public wisdom, the days when Skype was secure are long gone and I use my own instant messaging server to communicate securely when it comes to text messaging. When it comes to video calling, however, there are few alternatives at the moment that are as universal, as easy to use and with a similar video quality. Under normal circumstances Skype video calls are peer to peer, i.e. there is no central instance on which the voice and video packets can be intercepted. That's a good thing and Skype has many ways to find out if a direct link between two Skype clients can be established.

And here's a really interesting scenario: when I'm traveling, Skype can even figure out that a direct link is possible through the VPN tunnel I establish to my VPN server at home, despite a NAT sitting between the VPN link and the local home network. That means Skype packets are routed directly between the Skype client running on a PC at home and the Skype client on my notebook, which is connected to my home network over the VPN tunnel. At no time do such Skype packets traverse a link on the Internet outside the VPN tunnel. In other words, potential attackers who can passively collect packets between where I am and my home network can't get at my Skype traffic, even if they had the ability to decrypt it.

Sure, Skype and anyone who has access to Skype can still find out if and when I'm online, probably even where I'm online and when and to whom I make calls. The call content, however, can't be intercepted without me noticing, because I would see the traffic suddenly stop being peer-to-peer through the VPN tunnel. Far from perfect, but something to work with for the moment.

by mobilesociety at November 03, 2014 06:45 AM

October 30, 2014

mobiForge blog

Why Swift Flies for iOS Developers

Now that the dust has settled somewhat on Swift, the new language on the block for developing iOS and OSX applications, we take a look at its impact and improvements over its predecessor, Objective-C. Apple claims Swift to be a modern, safe, and powerful language for developing for iOS and OSX. Just how powerful is Swift compared to the venerable Objective-C? And how does it make developing applications easier and safer?

by weimenglee at October 30, 2014 09:42 AM