And here is one more post about my 32C3 experience at the end of last year in Hamburg. This was the first conference I did not only “attend” but was actually “a part of”. There is a big difference between the two approaches: Normal conferences are fully organized and you go there to listen to the talks, to meet and talk to people you already know and perhaps, if you are the communicative type, to meet a few new people who share your interests. The annual CCC congresses are different in this respect because attendees are encouraged to help with many different aspects of the event, from checking tickets at the entrance to being part of the wardrobe team, operating a camera, helping people find their way around the congress, helping people with their network problems, and much more.
On the one hand this helps to keep the ticket prices down: the 1,500 volunteers who signed up as congress “angels” put in 10,000 work hours, all of them voluntarily and for free, and that saves a lot of money. Like me, many might not have altogether altruistic motives to volunteer. Apart from being happy to help, I became a congress angel to get a glimpse of how and by whom the event is organized and how things work behind the scenes. I signed up for a couple of camera shifts and in addition spent some time at the network help desk. Not only did I learn a lot about how the congress is run, I also met a lot of people during my network help desk shifts: people seeking help as well as the other network angels on the same shifts, who freely shared their ideas on the stuff they were having fun with during the less busy times. After all, this was a hacker conference, so there weren't too many people with network issues they couldn't figure out themselves. If I had just “attended” the congress I would never have met all these people and it wouldn't have been half the fun it was! In other words, I'm fully hooked on the concept!
The crucial thing about becoming an angel at the congress is that there is a system that makes volunteering easy and flexible in the extreme. The major idea is that nobody is assigned to do anything; everyone has complete control over what they want to do and when. The place where work and volunteers come together is the web-based “angel-system”, which works equally well on big and small devices. Here, one can pick tasks and two-hour timeslots before and during the conference that fit into one's overall schedule. I took camera shifts for presentations I wanted to attend anyway and network help desk duties at times when there was no talk I wanted to go to. During the congress my plans changed slightly and I could rearrange my shifts in the “angel-system” in a jiffy from my smartphone. A great system that gives the conference the volunteers it needs and the volunteers the freedom to assign themselves tasks and stay in control. Wonderful!
I'm totally hooked on the concept and next time I feel encouraged to be even more a part of the event rather than just attending. So if you plan to come to a CCC congress in the future, sign up as an “angel” before you arrive and have more fun!
In case you have missed the previous two parts on Private Mobile Radio (PMR) services on LTE, have a look here and here before reading on. In the previous post I've described the potential advantages LTE can bring to PMR services, and from the long list it seems to be a done deal. Unfortunately, there is an equally long list of challenges that PMR poses for the 2G legacy technology it uses today and that will not go away when moving on to LTE. So here we go, part 3 focuses on the downsides that show quite clearly that LTE won't be a silver bullet for the future of PMR services:
Glacial Timeframes: The first and foremost problem PMR imposes on the infrastructure is the glacial timeframe requirement of this sector. While consumers change their devices every 18 months these days and move from one application to the next, a PMR system is static; in the past, a timeframe of 20 years without major network changes was considered the minimum. It's unlikely this will significantly change in the future.
Network Infrastructure Replacement Cycles: Public networks including radio base stations are typically refreshed every 4 to 5 years because new generations of hardware are more efficient, require less power, are smaller, offer new functionality, can handle higher data rates, etc. In PMR networks, timeframes are much more conservative because additional capacity is not required for the core voice services and there is no competition from other networks, which in turn doesn't stimulate operators to make their networks more efficient or to add capacity. Also, new hardware means a lot of testing effort, which again costs money that can only be justified if there is a benefit to the end user. In PMR systems this is a difficult proposition because PMR organizations typically don't like change. As a result, the only reason for PMR network operators to upgrade their network infrastructure is that the equipment reaches 'end of life', is no longer supported by manufacturers and no spare parts are available anymore. The pain of upgrading at that point is even more severe because after 10 years or more technology has advanced so far that moving from very old hardware to the current generation creates many problems.
Hard- and Software Requirements: Anyone who has worked in both public and private mobile radio environments will undoubtedly have noticed that quality requirements are significantly different in the two domains. In public networks the balance between upgrade frequency and stability often tips toward the former, while in PMR networks stability is paramount and hence testing is significantly more rigorous.
Dedicated Spectrum Means Trouble: The interesting question, which will surely be answered in different ways in different countries, is whether a future nationwide PMR network shall use dedicated spectrum or shared spectrum that is also used by public LTE networks. If dedicated spectrum is used that is otherwise not used for public services, devices with receivers for that spectrum are required. In other words, no mass-market products can be used, which is always a cost driver.
Thousands, Not Millions of Devices per Type: When mobile device manufacturers think about production runs they think in millions rather than the few ten-thousands typical for PMR. Perhaps this is less of an issue today as current production methods allow the design and production run of 10,000 devices or even less. But why not use commercial devices for PMR users and benefit from economies of scale? Well, many PMR devices are quite specialized from a hardware point of view as they must be sturdier and have extra physical functionality, such as a big push-to-talk button and emergency buttons that can be pressed even with gloves. Many PMR users will also have different requirements compared to consumers when it comes to the screen of the devices, such as being ruggedized beyond what is required for consumer devices and being usable in extreme heat, cold and wetness, when chemicals are in the air, etc.
ProSe and eMBMS Not Used For Consumer Services: Even though they are also envisaged for consumer use, it is likely that group call and multicast services will in practice be limited to PMR use. That will make them expensive, as the development costs will have to be shouldered by PMR users alone.
Network Operation Models
As already mentioned above there are two potential network operation models for next generation PMR services, each with its own advantages and disadvantages. Here's a comparison:
A Dedicated PMR Network
A Commercial Network Is Enhanced For PMR
So, here we go, these are my thoughts on the potential problem spots for next generation PMR services based on LTE. Next up is a closer look at the technology behind it, so it might take a little while before I can publish a summary here.
Over the weekend I wanted to set up a cloud-based project management software and, after my default web hoster failed miserably, I took the opportunity to try a new hosting company I had heard of some time ago called Uberspace. This post is probably mostly interesting for German speakers because they only have a German web presence, sorry about that. So you might wonder why I am reviewing a web hoster, that's quite out of the ordinary for this blog!? Right, but this web hoster is, too!
Unlike other hosting services that I've been using for over a decade, which have become big behemoths that are only interested in the masses and offer a very limited feature set and not a tiny bit more, Uberspace offers a lot of features, online documentation that is very nerdy and fun to read, and maximum freedom for my web hosting requirements. For starters, they don't want money up front: you can try the service for free for a month. If you walk away during that time the account is simply deleted, no questions asked. If you want to stay around, you decide how much you want to pay per month. They give some guidance on what they think this should be (€5) and a detailed overview of their own costs, from power consumption to hardware purchases. I like transparency and details! They also point out that in case you are cash-starved you can pay less. Paying more is also possible, of course. A wonderful approach that seems to work, they've been around for a while.
Apart from the easy signup process all the rest is pretty straightforward as well. After FTPing the project management software to my virtual web server and creating a MySQL database via the mySQLAdmin web frontend, I could immediately start working with it and could access it over both HTTP and HTTPS with the default domain name given to my account. Adding my own domains for the web space is simple as well: a single command in the shell and it is done. Afterwards, the IPv4 and (optionally) the IPv6 address of the site need to be provisioned in a DNS server, which by the way they don't provide, so you can and have to bring your own domain names. It worked like a charm both for IPv4 and IPv6. Wonderful!
To use HTTPS with my own domain name an SSL certificate is required, and Uberspace offers two ways of getting one. The old-fashioned way is to get an SSL certificate somewhere and then import the certificate and key files to the web space. The cool way, available since last December, is to use Let's Encrypt, and Uberspace is probably one of the first web hosters to have integrated it. It took about two minutes and three commands on the shell to request the generation and installation of the certificate. It was so simple I couldn't believe it until I checked that the Let's Encrypt certificate was actually used when I browsed to my site. Awesome!
Freedom, IPv6, Let's Encrypt, a great nerdy online documentation and my website was up and running with my own domain and https in less than an hour, Uberspace certainly got me hooked!
Like every year, Vodafone has released numbers on mobile network usage during New Year's Eve between 8 pm and 3 am, as this is one of the busiest times of the year. This year, Vodafone says that 185 TB were used during those 7 hours. Let's say downlink and uplink are roughly 9:1, which would result in a total of 166.5 TB downloaded during that time. Divided by 7 hours, 60 minutes and 60 seconds, and then multiplied by 8 to get bits instead of bytes, this results in an average downlink speed at the backhaul link to the wider Internet of 53 Gbit/s. An impressive number, so a single 40 Gbit/s fiber link won't do anymore (if they only had a single site and a single backhaul interconnection provider, which is unlikely). Back in 2011/2012 the same number was 'only' 7.9 Gbit/s.
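The back-of-the-envelope math above can be checked in a few lines of Python (assuming decimal units, i.e. 1 TB = 10^12 bytes, as operators usually report):

```python
# Average downlink rate from Vodafone's New Year's Eve numbers.
# Assumes 1 TB = 10**12 bytes and a 9:1 downlink:uplink traffic split.

total_tb = 185                # total traffic between 8 pm and 3 am, in TB
downlink_tb = total_tb * 0.9  # 9:1 split -> 166.5 TB downlink
seconds = 7 * 3600            # 7 hours

# bytes -> bits (x8), per second, then scale to Gbit/s
avg_gbit_s = downlink_tb * 1e12 * 8 / seconds / 1e9

print(f"{downlink_tb:.1f} TB downlink over 7 h -> {avg_gbit_s:.1f} Gbit/s")
# -> 166.5 TB downlink over 7 h -> 52.9 Gbit/s
```

Rounding up gives the 53 Gbit/s quoted above.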
On the other hand, when you compare the 53 Gbit/s for all Vodafone Germany customers to the 30 Gbit/s reached by the uplink traffic during the recent 32C3 congress, or to the sustained 3 Gbit/s downlink data rate generated by the 8,000 mobile devices on the congress Wi-Fi, the number suddenly doesn't look that impressive anymore. Or compare it to the 5,000 Gbit/s interconnect peaks at the German Internet Exchange (DE-CIX). Yes, it's a matter of perspective!
If you've come across similar numbers for other network operators please let me know, it would be interesting to compare!
Back in 2014, I came up with a project to use a Raspberry Pi as a Wifi access point in hotels and other places when I travel, to connect all my devices to a single Internet connection, which can be either over Wifi or over an Ethernet cable. As an added (and optional) bonus, the Raspberry Pi also acts as a VPN client and tunnels the data of all my devices to a VPN server gateway and only from there out into the wild. At the time I put my scripts and operational details on Github for easy access and made a few improvements over time. Recently I made a couple of additional improvements which became necessary as Raspbian upgraded its underlying source from Debian Wheezy to Debian Jessie.
One major change this has brought along is that IPv6 is now active by default. For this project, IPv6 needs to be turned off, as most VPN services only tunnel IPv4 but happily return IPv6 addresses in DNS responses, which makes traffic go around the VPN tunnel if the local network offers IPv6. For details see here and here. Another change is that the OpenVPN client, if installed, is now started during the boot process by default, which was not the case before, and it does so reliably. As a consequence I could put a couple of 'iptables' commands in the startup script to enable NATing to the OpenVPN tunnel interface straight away.
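As a rough sketch of the startup logic described above, the rules could be built like this in Python; the interface names (wlan0 for the access point side, tun0 for the OpenVPN tunnel) are typical defaults and not necessarily what the actual scripts on Github use:

```python
# Sketch of the startup steps: disable IPv6 (so AAAA answers can't bypass
# the IPv4-only VPN tunnel) and NAT the Wifi clients to the tunnel interface.
import subprocess

def nat_commands(lan_if="wlan0", vpn_if="tun0"):
    """Build the sysctl/iptables commands for routing clients via the VPN."""
    return [
        # IPv6 off: most VPN services only tunnel IPv4
        ["sysctl", "-w", "net.ipv6.conf.all.disable_ipv6=1"],
        # Masquerade everything leaving through the tunnel
        ["iptables", "-t", "nat", "-A", "POSTROUTING",
         "-o", vpn_if, "-j", "MASQUERADE"],
        # Forward access point traffic into the tunnel, replies back out
        ["iptables", "-A", "FORWARD", "-i", lan_if, "-o", vpn_if,
         "-j", "ACCEPT"],
        ["iptables", "-A", "FORWARD", "-i", vpn_if, "-o", lan_if,
         "-m", "state", "--state", "RELATED,ESTABLISHED", "-j", "ACCEPT"],
    ]

def apply_rules():
    """Actually run the commands; requires root, so not called here."""
    for cmd in nat_commands():
        subprocess.run(cmd, check=True)

for cmd in nat_commands():
    print(" ".join(cmd))
```

In the real startup script the same commands are simply run at boot, after the OpenVPN client has brought up the tunnel interface.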
In other words, things are better than ever and v1.41 on Github now reflects those changes. Enjoy!
Microsoft Windows 10 behaves like a spy under your fingertips these days, Apple gives you less and less freedom on its desktop OS, so there's never been a better time to regain privacy and freedom by switching to Linux on the desktop. Over the years I wrote many articles on this blog about my Linux delights but I haven't seen a better summary of why switching to Linux on the desktop full time is so great than Dan Gillmor's recent article on the topic. Highly recommended!
A date to remember for me: On 15th January 2016 I contacted my web server at home running Owncloud and Selfoss for the first time over IPv6. From an end user's point of view no difference is visible at all but from a technical point of view it's a great "first time" for me, made even sweeter by the fact that my PC was not connected to another IPv6-enabled fixed line but connected via tethering to a smartphone with dual-stack IPv4v6 cellular connectivity.
The Thing With Dynamic IPv6 Addresses for Servers
And it's been a bit of a struggle to put together, this IPv6 stuff is not as straightforward as I hoped it would be. For a crash course I wrote back in 2009, have a look here, here, here and here. The major challenge I had to overcome was to find a dynamic DNS service that can handle not only dynamic IPv4 addresses but also dynamic IPv6 addresses. Noip.com, where I host my domain and whose dynamic DNS service I use, can handle IPv6 addresses for my domain, but only as static entries. A support question about how to do dynamic IPv6 addresses with them resulted in the little informative answer that they are working on it but that no date has been announced for when this will be available. Hm, looking at their track record, they seem to have been working on IPv6 since 2011, so I won't get my hopes up that this will happen soon. Is it really that difficult? Shame on you!
O.k., another dynamic DNS service I use is afraid.org and they do offer dynamic DNS with IPv6. Unfortunately, they have a DNS entry time to live (TTL) for IPv6 of 3600 seconds, i.e. 1 hour. This is much too long for my purposes, as my IPv6 prefix changes once a day and any change must be propagated as quickly as possible, not only after an hour in the worst case. They offer a lower TTL with a paid account, but their idea and my idea of how much this may cost are too far apart. I've found a couple of other dynamic IPv6 services but they were not suitable for me either because their TTLs were also too long for my purpose.
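To put numbers on why the TTL matters: after the daily prefix change, resolvers may keep serving the stale address for up to one full TTL. A quick comparison of the two TTLs mentioned here:

```python
# Worst-case unreachability after a daily IPv6 prefix change:
# caching resolvers may serve the old AAAA record for up to one TTL.

def worst_case_downtime_share(ttl_seconds, changes_per_day=1):
    """Fraction of the day during which a stale address may still be served."""
    return ttl_seconds * changes_per_day / 86400  # 86,400 s per day

print(f"TTL 3600 s: up to {worst_case_downtime_share(3600):.1%} of the day stale")
print(f"TTL   60 s: up to {worst_case_downtime_share(60):.2%} of the day stale")
# -> TTL 3600 s: up to 4.2% of the day stale
# -> TTL   60 s: up to 0.07% of the day stale
```

So a one-hour TTL risks up to an hour of unreachability every day, while a one-minute TTL shrinks that window to something barely noticeable.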
One option I found that didn't have this restriction is dynv6.com. Their service is free and they do offer IPv4 and IPv6 dynamic DNS with a TTL of 1 minute but only for their own domain. Not an option for me either, I want to be reachable via my own domain. Kind of a deadlock situation...
But here's how I finally got it to work: The Domain Name System has a forwarding mechanism, the "Canonical Name Record" (CNAME). By using this mechanism, I can forward DNS queries for my domain hosted at noip.com (let's say it's called www.martin.com) to my subdomain at dynv6.com (let's say my domain there is called martin.dynv6.com). So instead of updating the DNS entry for www.martin.com when my IPv6 address changes once a day, I now update martin.dynv6.com, which has a TTL of 1 minute, while the CNAME forwarding at noip.com from www.martin.com to martin.dynv6.com remains static and unchanged.
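The indirection can be illustrated with a toy resolver; the domain names are the placeholders from the text and the addresses are made up from the IPv6 documentation prefix (2001:db8::/32):

```python
# Toy illustration of the CNAME indirection: the CNAME at noip.com is
# static, only the AAAA record at dynv6.com changes with the daily prefix.

dns = {
    "www.martin.com":   ("CNAME", "martin.dynv6.com"),  # static, long TTL is fine
    "martin.dynv6.com": ("AAAA",  "2001:db8::1"),       # dynamic, TTL 60 s
}

def resolve(name):
    """Follow CNAME records until an address record is reached."""
    rtype, value = dns[name]
    while rtype == "CNAME":
        rtype, value = dns[value]
    return value

def update_prefix(new_addr):
    """Daily prefix change: only the dynv6.com entry needs rewriting."""
    dns["martin.dynv6.com"] = ("AAAA", new_addr)

print(resolve("www.martin.com"))   # -> 2001:db8::1
update_prefix("2001:db8:beef::1")
print(resolve("www.martin.com"))   # -> 2001:db8:beef::1
```

The query for www.martin.com always follows the unchanged CNAME, so only the short-TTL record at dynv6.com ever needs a dynamic update.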
As a result, the web page name in the browser remains "www.martin.com" but I can use my dynamic IPv6 record at dynv6.com, where customer-specific domains are not offered. Not ideal, but it will do until NO-IP.com gets their act together.
LTE for Public Safety Services, also referred to as Private Mobile Radio (PMR), is making progress in the standards, and in the first part of this series I've taken a first general look. Since then I thought a bit about which advantages an LTE-based PMR implementation might offer over current 2G Tetra and GSM PMR implementations and came up with the following list:
Voice and Data On The Same Network: A major feature 2G PMR networks are missing today is broadband data transfer capability. LTE can fix this issue easily, as even the bandwidth-intensive applications public safety organizations have today can be served. Video backhauling is perhaps the most demanding broadband feature, but there are countless other applications for PMR users that will benefit from an IP based data channel, for example number plate checking and identity validation of persons, access to police databases, maps, confidential building layouts, and much more.
Clear Split into Network and Services: To a certain extent, PMR functionality is independent of the underlying infrastructure. E.g. the group call and push-to-talk (PTT) functionality is handled by the IP Multimedia Subsystem (IMS), which is mostly independent of the radio and core transport network.
Separation of Services for Commercial Customers and PMR Users: One option to deploy a public safety network is to share resources with an already existing commercial LTE network and upgrade the software in the access and core network for public safety use. More about those upgrades in a future post. The specific point I want to make here is that the IP Multimedia Subsystem (IMS) infrastructure for commercial customers and their VoLTE voice service can be completely independent from the IMS infrastructure used for the public safety services. This way, the two parts can evolve independently from each other, which is important as public safety networks typically evolve much more slowly and in fewer steps compared to commercial services, as there is no competitive pressure to evolve things quickly.
Apps vs. Deep Integration on Mobile Devices: On mobile devices, PMR functionality could be delivered as apps rather than built into the operating system. This allows the operating system and the apps to be updated independently, and even the use of the PMR apps on new devices.
Separation of Mobile Hardware and Software Manufacturers: By having over-the-top PMR apps it's possible to separate the hardware manufacturer from the provider of the PMR functionality, except for a few interfaces which are required, such as setting up QoS for a bearer (already used for VoLTE today, so that's already taken care of) or the use of eMBMS for a group call multicast downlink data flow. In contrast, current 2G group call implementations for GSM-R require deep integration into the radio chipset, as pressing the talk button requires DTAP messages to be exchanged between the mobile device and the Mobile Switching Center (MSC), which are sent in a control channel for which certain timeslots in the up- and downlink of a speech channel are reserved. Requesting the uplink in LTE PMR requires interaction with the PMR application server, but this happens over an IP channel that is completely independent of the radio stack; it's just a message contained in an IP packet.
Device to Device Communication Standardized: The LTE-A Pro specification contains mechanisms to extend the network beyond the existing infrastructure for direct D2D communication, even in groups. This was lacking in the 2G GSM-R PMR specification. There were attempts by at least one company to add such a “direct” mode to the GSM-R specifications, but there were too many hurdles to overcome at the time, including questions around which spectrum to use for such a mode. As a consequence these attempts did not lead to commercial products in the end.
PMR not left behind in 5G: LTE as we know it today is not likely to be replaced anytime soon by a new technology. This is a big difference to PMR in 2G (GSM-R) which was built on a technology that was already set to be superseded by UMTS. Due to the long timeframes involved, nobody seriously considered upgrading UMTS with the functionalities required for PMR as by the time UMTS was up and running, GSM-R was still struggling to be accepted by its users. Even though 5G is discussed today, it seems clear that LTE will remain a cornerstone for 5G as well in a cellular context.
PMR On The IP Layer and Not Part of The Radio Stack (for the most part): PMR services are based on the IP protocol, with a few interfaces to the network for multicast and quality of service. While LTE might gradually be exchanged for something faster, or new radio transmission technologies that are also interesting for PMR might be put alongside it in 5G, the PMR application layer can remain the same. This is again unlike 2G (GSM-R), where the network and applications such as group calls were a monolithic block and thus no evolution was possible, as the air interface and even the core network did not evolve but were replaced by something entirely new.
Only Limited Radio Knowledge Required By Software Developers: No deep and specific radio layer knowledge is required anymore to implement PMR services such as group calling and push to talk on mobile devices. This allows software development to be done outside the realm of classic device manufacturer companies and the select few software developers that know how things work in the radio protocol stack.
Upgradeable Devices In The Field: Software upgrades of devices have become a lot easier. 2G GSM-R devices, and perhaps also Tetra devices, can't be upgraded over the air, which makes it very difficult to add new functionality or to fix security issues in these devices. Current devices, which would be the basis for LTE-A Pro PMR devices, can easily be upgraded over the air as they are much more powerful and because there is a broadband network that can be used for pushing the software updates.
Distribution of Encryption Keys for Group Calls: This could be done over an encrypted channel to the group call server. I haven't dug into the specification details yet to find out if or how this is done, but it is certainly possible without too much additional work. That was not possible in GSM-R: group calls were (and still are) unencrypted. Sure, keys could have been distributed over GPRS to individual participants, but a service for such a distribution was never specified.
Network Coverage In Remote Places: PMR users might want to have LTE in places that are not normally covered by network operators because it is not economical. If they pay for the extra coverage, and in case the network is shared, this could have a positive effect for both consumer and PMR services. However, there are quite a number of problems with network sharing, so one has to be careful when proposing this. Another option, which has also been specified, is to extend network coverage by using relays, e.g. installed in cars.
I was quite amazed how long this list of pros has become. Unfortunately my list of issues existing in 2G PMR implementations today that a 4G PMR system still won't be able to fix is equally long. More about this in part 3 of this series.
There are two extremes in the popular cloud space when it comes to ease of updating: Wordpress and Owncloud...
On one side is Wordpress, which has about the simplest and most reliable update process there is: fully automatic for small upgrades, without even having to be triggered by the administrator, and just a single click when going from one major release to the next. It hasn't failed me once in the past five years. And then there is Owncloud, which is the exact opposite.
Over the past year it failed me during each and every update, with obscure error messages even for small security upgrades, broken installations and last-resort actions such as deleting app directories and simply ignoring some warnings and moving ahead despite them. If you think it can't be that bad, here's my account of one such update session last year. In the meantime I've become so frustrated and cautious that I clone my live Owncloud system and first try the update on a copy I can throw away. Only once I've found out how to run the upgrade process, which unfortunately changes every now and then as well, which things break and how to fix them, do I run the upgrade on my production system. But perhaps there is some hope in sight?
My last upgrade a couple of days ago worked flawlessly, apart from the fact that the update process has changed again and it's now mandatory to finalize the upgrade from a console. But at least it didn't fail. I was about to troll about the topic again, but this morning I saw a blog post over at the Owncloud blog in which they finally admit in public that their upgrade process leaves a lot to be desired and that they have started to implement a lot of things to make it more robust and easier to understand. If you have trouble updating Owncloud as well, I recommend reading the post, it might make you feel a bit better and give you some hope for the next update.
And to the Owncloud developers I would recommend going a bit beyond what they have envisaged so far: blinking lights, more robustness and more information about what is going on during an update are nice and will certainly improve the situation. In the end, however, I want an update process that is just like Wordpress': you wake up in the morning and have an email in your inbox from your Wordpress installation telling you that it has just updated itself, that all is well and that you don't have to do anything anymore! That's how it should be!