
Friday, July 29, 2011

What changes when "opened" vendor-specific technologies are better than "official" standards?

I've just been reading up on the history of PDF (Portable Document Format) on Wikipedia. A couple of lines to consider:


"PDF was originally a proprietary format controlled by Adobe, and was officially released as an open standard on July 1, 2008, and published by the International Organization for Standardization as ISO 32000-1:2008.......  granting a royalty-free rights for all patents owned by Adobe that are necessary to make, use, sell and distribute PDF compliant implementations"

"PDF's adoption in the early days of the format's history was slow. Adobe Acrobat, Adobe's suite for reading and creating PDF files, was not freely available..[....].... required longer download times over the slower modems common at the time; and rendering PDF files was slow on the less powerful machines of the day. Additionally, there were competing formats such as  [.....] Adobe soon started distributing its Acrobat Reader program at no cost, and continued supporting the original PDF, which eventually became the de facto standard for printable documents on the web"

Imagine, back in 1999, that you were a service provider, or the standardisation group for a number of SPs. And you'd just invented the concept of a "document conversion and viewing service". You'd created the .xyz document format, worked out the billing system and knew how much you wanted to charge to interconnect with the leading word processors and other applications of the day. You were going to sell monthly subscriptions to end users, allowing them to read web documents.

Sounds silly now, doesn't it? PDF instead took document viewing/creation down the route of being an application (free reader and paid authoring tool), then a feature of some web browsers, through to today's situation where PDF-ing something is a mere function on a menu, or a right-click-save-as. Early attempts to do PDF-creation-as-a-service disappeared.

I often use PDF as an example of the difference between delivering value as a service or as merely a feature/function of something else. This is hugely relevant in voice, and features in the Future of Voice Masterclass discussions around voice-enabled applications.

But this has also got me thinking about the general case of large technology companies releasing an existing successful or de-facto-standard technology as a fully open one, especially where it is better than an "official" standard developed through the usual committee-and-politics process.

What is the impact of this? Why would that company open up that standard in the first place - how do they monetise it? What's the other strategic value? My thoughts are that it:
  • Needs to be based on something so widespread already (eg PDF), or something so superior, that it can gain firm and enduring traction, even though it has a proprietary heritage.
  • Weakens any related technology that is rigidly dependent on the official standard, and which can't flex to accommodate the superior now-open one. This might be deliberate or an accidental side-effect.
  • Allows the original company to retain a strong share of the necessary software, even though it's free. And it can add in extra features or capabilities that help them monetise it via different products. For example, you don't need Adobe Reader to view PDFs, but most people have it anyway - and it also allows various still-proprietary technologies to be displayed.
  • Gets more developers involved in using that standard
  • Helps to commoditise part of the value chain, shifting value (implicitly) elsewhere
There's probably some more, but I've only just started thinking about this.

Now, why does this matter in mobile?

Three things come to mind:

  • Skype's release of the SILK codec for VoIP
  • Google's release of WebRTC for browser-based communications, which also includes the iSAC codec it obtained with its GIPS acquisition
  • Apple's release of HLS (HTTP Live Streaming)
There's also Google's release of the WebM video format, and Real's Helix technology a few years ago, plus others from Microsoft and probably a variety of others. Others, such as Jabber/XMPP [for IM interoperability], have started life as open-source and then been adopted by large companies like Google and Cisco. Many of these are around audio and video, for which it's necessary to have a good population of viewers/clients in the field to avoid chicken-and-egg problems with content developers.

What I've been trying to work out is the impact of all these new standards (or drafts) on "official" alternatives that are baked-in to some wireless network infrastructure offerings and standards.

So for example, quite a number of people seem to believe that SILK is better than the AMR-WB codec, which forms a core part of VoLTE for delivering telephony on LTE. Given that VoLTE is less flexible than various other OTT-style voice platforms, in terms of creating "non-telephony" voice applications, this might have a serious long-term strategic impact on the overall voice marketplace. Coupled with smart use of the ex-GIPS Google acoustic toolkit, this could mean that OTT-style VoIP on LTE might actually have better performance than the official, QoS-integrated, IMS-enabled version, at least in certain circumstances.

Apple HLS is another teaser. Along with a couple of other web-based streaming protocols, this is an "adaptive rate" video format that can vary the quality/bandwidth used based on prevailing realtime network throughput. In other words, it watches network congestion and cleverly self-adjusts the bitrate to minimise delays and stalls from buffering. As a result, it kills quite a lot of the touted benefits of so-called "transparent video optimisation" in the operator's network, not least because HLS is (indirectly) under the control and visibility of the video publisher.
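
To make the mechanism concrete, here's a minimal sketch of the client-side logic - the variant ladder, bitrates and safety margin below are illustrative assumptions of mine, not taken from Apple's spec or any real player:

```typescript
// Sketch of HLS-style adaptive bitrate selection. A master playlist
// advertises several variants of the same video at different bitrates;
// the player measures recent download throughput and switches variants.

interface Variant {
  bandwidthBps: number;  // advertised bitrate of this variant
  playlistUrl: string;   // URL of its chunk playlist
}

// Illustrative variant ladder, sorted from lowest to highest bitrate.
const variants: Variant[] = [
  { bandwidthBps: 200_000, playlistUrl: "low.m3u8" },
  { bandwidthBps: 800_000, playlistUrl: "mid.m3u8" },
  { bandwidthBps: 2_400_000, playlistUrl: "high.m3u8" },
];

// Pick the highest variant that fits within measured throughput,
// keeping headroom so transient congestion doesn't stall playback.
function pickVariant(measuredBps: number, headroom = 0.8): Variant {
  const affordable = variants.filter((v) => v.bandwidthBps <= measuredBps * headroom);
  return affordable.length > 0 ? affordable[affordable.length - 1] : variants[0];
}

// Example: the last few chunks arrived at ~1 Mbit/s, so the player
// settles on the 800 kbit/s variant rather than risking the 2.4 Mbit/s one.
console.log(pickVariant(1_000_000).playlistUrl); // "mid.m3u8"
```

The point is that the feedback loop lives at the endpoints: the publisher decides the ladder, the client reacts to congestion, and there's little left for an in-network optimiser to add.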

WebRTC and in-browser communications is probably the most direct analogy to PDF. Potentially, it turns voice (and that's voice generally, not just "telephony" as an application) into a function, rather than a service. Now clearly there may need to be other services at the back end for certain use cases (eg interconnect with the PSTN), but it has the potential to completely disrupt parts of the communications infrastructure and operator business model - because it doesn't need infrastructure. It does the whole thing "in the cloud" - not as a dedicated technology like Skype, but simply as an integral part of the web.
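
To illustrate quite how little machinery "voice as a web function" needs, here's a rough sketch using the standard WebRTC API shape (today's browser API has evolved well beyond the 2011 drafts, and the signalling channel between the two browsers is deliberately left as a placeholder):

```typescript
// Sketch: starting a voice call entirely in the browser - no plugin,
// no installed app, no operator infrastructure. How the offer reaches
// the other browser (signalling) is up to the website, so it's just a
// placeholder function here.

declare function sendViaAnySignallingChannel(offer: RTCSessionDescriptionInit): void;

async function startVoiceCall(): Promise<RTCPeerConnection> {
  // Grab the microphone.
  const mic = await navigator.mediaDevices.getUserMedia({ audio: true });

  const pc = new RTCPeerConnection({
    iceServers: [{ urls: "stun:stun.example.org" }], // hypothetical STUN server
  });
  mic.getTracks().forEach((track) => pc.addTrack(track, mic));

  // Codecs (iSAC and friends) are negotiated end-to-end in the SDP
  // offer/answer, not dictated by the network.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendViaAnySignallingChannel(offer);
  return pc;
}
```

Nothing in that flow touches the operator beyond plain IP connectivity - which is exactly the PDF-style shift from service to function.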

The open question is why Apple, Google and Skype are doing this. Apple is probably the easiest - HLS seems to be part of its anti-Adobe crusade, plus it helps it to perpetuate iTunes and potentially use it to sell to non-Apple devices. Google and Skype might be trying to run a "codec war" with each other with iSAC vs. SILK (why? I'm not sure yet), and might just take out AMR-WB (and by extension, VoLTE) as collateral damage.

This is an area I want to dig into more deeply - and please paste comments and theories here to support / attack / extend this argument, as it's still only part-formed in my mind.

Thursday, July 28, 2011

Deep inspection of Allot's mobile data trends report

Quite a few technology vendors put out interesting research reports on mobile data - the best-known probably being Cisco's VNI data and forecasts, which gets cited by about half the rest of the industry.

However, a number of the smaller DPI and policy vendors (and WiFi specialists) also put out papers and reports, sometimes based on observed data from their own implementations, and sometimes on commissioned surveys.

(I've got absolutely no problem with this in principle - I've done various reports and papers for companies myself, although typically they've been for those wanting a "thought leadership" position associated with my often-contrarian opinions and analyses).

Clearly, all these reports exist principally to raise awareness and act as marketing vehicles, by providing interesting, newsworthy soundbites and grabbing the attention of network purchasing folk at operators. But it's worth scrutinising them for what they say, what they don't say, and the methodology/assumptions involved.

One of these companies is Allot Communications, which has issued a series of reports on mobile data bandwidth use, applications and so forth. It's just published its H1 2011 report, downloadable here.

There's some good stuff in there, but also plenty which raises questions.


  • The source of the data for the report is Allot's own installed base of network elements, spanning "networks representing more than 250 million subscribers". It's entirely unclear which operators these are. That's critical, because it determines whether this is a representative sample of the world's 5 billion or so subscriptions, or whether it's somehow skewed because of particular operators' or countries' specific local circumstances. It's also unclear how many of the 250m are active data users at all - my reading is that that's the "potential" reach of those networks, not the current user base. This matters, because if the survey is based on (say) developing-world operators, you'd expect to see a much higher overall growth rate for data than in mature markets.
  • The most glaring omission is any reference to the volume of traffic from laptops (3G dongles) versus smartphones or other devices. This is hugely important in terms of interpreting the other statistics, as many dongles are sold as alternatives to fixed broadband, so you'd expect a broadly similar usage profile. It may well be that a large part of VoIP, P2P and mobile video streaming is consumed on PCs, which most operators cannot change easily - it's pretty hard to say that a USB modem service is "just like ADSL, except you can't use Skype. Or YouTube in HD" and remain competitive.
  • However, an important guide is filesharing. Generally, smartphones aren't significantly responsible for P2P traffic, as far as I know. That suggests that the bulk of the 29% will come from PCs - and therefore so will much of the video and web browsing, as almost nobody *just* uses a PC and dongle to swap files (as evidenced by Allot's chart on *fixed* broadband traffic). In other words, I expect that the contribution of PC mobile broadband is hugely skewing the overall study. I'm going to call out Allot and say it's trying to avoid this discussion - there is no mention of the words laptop, notebook, PC, modem or dongle in the whole document. The word "smartphone" appears 5 times.
  • There are no absolutes in terms of MB or GB, or in terms of actual numbers of unique users. So it's impossible to tell if growth is coming from more subscribers or more use per subscriber. I suspect that the average might actually be tending down as we see a shift from dongles to smartphones, and as late-adopters start getting smartphones.
  • It's unclear whether *all* data traffic gets funnelled through the Allot box in those networks. Does some get siphoned off "in front" of the box (eg telco-hosted data services, BlackBerry traffic or corporate VPNs)? Does some get injected deeper in the network (eg with a CDN)? Where there are video compression/optimisation boxes or caches, does the data show the compressed or uncompressed amounts? Are there any proprietary direct-tunnel or offload solutions involved that bypass the core network?
  • The report misuses the term "application" - video is not "an application" but a traffic type. An application (at a user level) can involve several different traffic types, for instance a web page with an embedded video advert or an audio plug-in.
  • Application-aware charging is something that most DPI vendors are huge fans of (unsurprisingly, as it typically needs DPI boxes), but which I'm a huge critic of. The study is based on analysis of operators' stated policies or tariffs on the web, but it's unclear exactly what "application" means here - for example, many operators have very different charging for M2M data devices and applications than for smartphones or dongles. It's not clear from the Allot survey that the reported 32% of operators using app-aware charging are doing this with DPI, rather than (say) using a separate APN for BlackBerries or Facebook Zero or whatever. (It's worth noting that essentially *all* operators zero-rate internal data traffic used for device management.)
  • Some of the definitions are pretty woolly. So-called VoIP traffic also includes video communications (eg Skype), and presumably also the associated IM and advertising data, although those are small at present. Given that a huge percentage of Skype calls are video-based (from laptops!), that's rather important. So we have the strange situation that some of the most-used mobile IM platforms - Skype, Facebook Chat and BlackBerry Messenger - don't appear in the chart at all, although BBM's absence could be attributed to a laptop-centric overall sample.
  • The word "signalling" doesn't appear at all
  • Neither does the word "encryption" or HTTPS - both of which are becoming increasingly important, and which are essentially opaque to most forms of DPI
Overall, there are some good data points there, but a "deep inspection" suggests that there's rather a lot that's not being said. In particular, the downplaying of the role of PC mobile broadband seems deliberate. Allot is also very keen to talk up "personalisation" in terms of "app-aware charging", but seems to have been pretty selective with the evidence to support its assertions.

Edit: it's just struck me that this is a piece of analysis that is based on interpretation and inferences, rather than direct collaboration or discussion with the content provider. Just like DPI and application-aware networking, in other words. (My self-determined acceptable rate for "false positives" on a piece of this type is 10%. If I've got more than that amount wrong, I'll do a re-write or retraction. I've yet to see a DPI vendor give a false-positive threshold).

Monday, July 11, 2011

Beware of traffic statistics....

One of the problems with Twitter is that it forces people to abbreviate important details. Compounded with multiple layers of interpretation, it's quite possible for information to get filtered & misrepresented.

A case in point - I've just seen a tweet linking to this blog post about "traffic" from mobile and "non-computer" devices hitting 6% of the total in the US. The blog post originated from this Comscore survey which looks like it's generating some interesting and useful data.

However, that data is very specifically about *web page viewing* traffic by device type.

The blog post and tweet don't really make it clear (a) that this conflates two definitions of "traffic" - one is essentially web page hits, the other is a measure of the volume of data being sent across the network; or (b) that this covers just the web, not the whole Internet (which also includes most sorts of streaming, email, VoIP, presumably a lot of non-HTTP app traffic and so forth).

A bit further in the future, HTML5 applications will add yet more confusion - what would constitute traffic / hits / consumption then?

I'll bet that over the next few days, we see that data recycled to suggest that mobile devices are generating 6% of overall GB / TB / EB of data "tonnage" across the generalised Internet, probably linking the story to cellular capacity crunches, offload, spectrum etc etc.

Incidentally, the red alarm for me when I spotted this was the lack of any mention of Internet-connected TVs and set-tops. If I had to guess which "non-computer" devices generated *bulk* data across the Internet at large, I would expect a relatively small number of Rokus, TiVos and similar TV-connected devices to consume huge volumes of video (HDTV = about 5GB per hour). There's also presumably a huge chunk of (non-web) Internet traffic which is server-to-server.
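
Some back-of-envelope arithmetic shows why that matters - all the inputs below are my own illustrative guesses, not Comscore's figures:

```typescript
// Rough monthly data "tonnage" comparison: one TV-connected box
// streaming HD video vs one device browsing the web. Inputs invented.

const hdtvGbPerHour = 5;     // ~5GB per hour of HD video, as noted above
const tvHoursPerDay = 2;
const tvGbPerMonth = hdtvGbPerHour * tvHoursPerDay * 30;          // 300 GB

const avgPageMb = 0.5;       // guessed average web page weight
const pagesPerDay = 100;
const browsingGbPerMonth = (avgPageMb * pagesPerDay * 30) / 1024; // ~1.5 GB

// One set-top box moves roughly 200x the bytes of a busy web browser,
// while registering far fewer "page view" hits.
console.log(Math.round(tvGbPerMonth / browsingGbPerMonth)); // ~205
```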

Edit - in future, it'll get even more complex because of things like adaptive rate streaming, which divides videos into "chunks" a few seconds long, typically each with a unique URL. Is each one a web-page hit?

Thursday, July 07, 2011

UK phone-hacking scandal - does this go beyond an issue about journalism?

Like everyone in the UK, I've been listening in horror to the recent reports that the News of the World's journalists have listened to the private voicemails not just of celebrities and politicians, but those of victims of crime and terrorism.

I certainly think that those responsible must face the force of both the law and public opprobrium.

But it's also made me think about the process they used. While dastardly, it doesn't sound that difficult - basically either guessing users' default voicemail PIN codes (0000 etc) or - allegedly - bribing somebody to divulge them.

This leads me to three conclusions:

  • I can't believe that the NoTW journalists were the only ones who invented and used this technique. Firstly, other journalists are probably equally implicated, as there's a lot of job mobility in that industry. But secondly, this technique has most probably also been used in other countries, and in other contexts. I've got to believe that this goes beyond news, and probably extends to industrial espionage, financial insider-dealing and assorted other forms of snooping and spying.
  • The mobile operators (and by implication their vendors/integrators) appear to have been seriously remiss about defining good practice and standards for voicemail security. This does not just extend to allowing default passwords to remain in use indefinitely; it also involves the accessibility of PINs to customer service or other staff. It seems that these PINs are much more weakly locked-down than banks' ATM codes. I also find it hard to believe that UK operators are uniquely lax about this - presumably it's an equal issue around the world.
  • Lastly, this is another example of the "cloud" failing in its security. Just because this involved some "social engineering" does not make voicemail hacking any less scary than Sony's loss of customer details or other recent failures. Maybe there should be questions about whether the network is the right default place to store voicemails, rather than downloading them to handsets when connectivity is available.
To my mind, the UK Information Commissioner needs to do a full review into how voicemail privacy and security is run in the telecoms industry. And other countries' authorities ought to be following suit. I think the unique intensity of the UK journalism / political sphere has broken the dam on this issue, but I'll be very surprised if one newspaper is the sole culprit when the rest of the story floods out.

EDIT: this blog post (found easily on Google) discusses voicemail snooping and vulnerabilities, specifically as related to US mobile operators. Apparently many voicemail services just use Caller ID to identify when an inbound call is coming from the subscriber's own handset - which is easily spoofed - and don't even use SIM-based authentication when calling from the phone itself.
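
In code terms, the weak pattern described there boils down to something like the sketch below - a deliberately simplified illustration of the flaw, not any operator's actual implementation:

```typescript
// Sketch of the flawed voicemail authentication pattern: Caller ID is
// an unauthenticated, spoofable field, yet it gets treated as proof of
// identity. PIN handling is shown only to make the sketch self-contained.

interface InboundCall {
  callerId: string;       // trivially spoofable by the caller
  mailboxNumber: string;  // the voicemail box being accessed
}

// Hypothetical PIN lookup - the "0000" default is the other attack path.
function lookupPin(mailboxNumber: string): string {
  return "0000"; // default PIN, never changed by the subscriber
}

function grantMailboxAccess(call: InboundCall, pinEntered?: string): boolean {
  if (call.callerId === call.mailboxNumber) {
    // "Must be the subscriber's own phone" - but nothing verified that,
    // since Caller ID carries no cryptographic or SIM-based proof.
    return true;
  }
  // Remote-access path: guarded only by a short, often-default PIN.
  return pinEntered === lookupPin(call.mailboxNumber);
}
```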

Friday, July 01, 2011

Zero-rating, sender-pays, toll-free data... the next business model for mobile broadband?

I've noticed a sudden upswing in discussion around the idea of "zero-rating" of mobile data traffic recently. This is where certain types of data - specific websites, apps, times of day, locations etc - do not count against the user's monthly data cap or prepaid quota. Clearly, zero-rating makes no sense if the user has a completely flat dataplan anyway.

Cisco has a blog post about the idea here, Andrew Bud of mBlox has been talking a good game on "sender-pays data" for some time, a company called BoxTop presented on its idea of "toll-free apps" at eComm, it's cropped up in numerous discussions with operators recently - and it's something I've been talking about for years in reports such as Mobile Broadband Computing (Dec 2008) and Telco 2.0 Fixed & Mobile Broadband Business Models (Mar 2010).

It's got the great advantage of being easy to understand - and there's often a zero-rate function built into existing billing systems anyway (eg to zero-rate internal "operational" data usage by the telcos for updates etc), so there isn't the headache of re-writing half the BSS/OSS stack that some other business models imply.
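
Conceptually, the zero-rate check is just a special case at the top of the rating logic, which is why it's so operationally cheap - a simplified sketch, with rule shapes invented for illustration:

```typescript
// Simplified sketch of where zero-rating sits in a rating engine.
// Real BSS/OSS rule sets are far richer; these shapes are invented.

interface UsageEvent {
  subscriberId: string;
  bytes: number;
  service: string;   // eg the resolved destination, APN or app category
}

// Traffic classes that never count against the user's cap or quota.
const zeroRated = new Set([
  "operator-device-management",  // internal "operational" traffic
  "promo-video-partner",         // promotional zero-rating
]);

function chargeableBytes(event: UsageEvent): number {
  if (zeroRated.has(event.service)) return 0;  // the zero-rate function
  return event.bytes;                          // normal metering
}
```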

But a major question remains in my mind. Yes, certain data will definitely be zero-rated to the end user, but will it be paid for by anyone else (ie an upstream party like an advertiser or app developer)? Or will the operator give away certain traffic "for free" as a marketing tool, or even as a way of (paradoxically) reducing its own costs?

Cisco's article points to advertisers as low-hanging fruit, something I wrote about myself last year. This is also a discussion I've had with companies such as Yospace in the mobile video arena - although when I put the notion of paying for bandwidth to an advertising agency at a recent mobile conference, the result was a look of bemusement.

However, there are some extra complexities to the model to consider:

- Excess usage and fraud risk / management. Would the upstream party effectively be signing a blank cheque for an unlimited amount of data use? I'm not sure how this works for 1-800 numbers, for example. (A sketch of one way to cap this exposure follows after this list.)
- Offload awareness. How does the model work for traffic which either does - or could - go via WiFi or femtocell access? Especially in the case where the data is backhauled through the operator core (femtos, or some new flavours of WiFi integration), I'd be mightily annoyed as the content provider if I were charged the same fee for data transmission even though the operator's costs were 10x lower.
- Is there any discrimination between data sent to busy cells during busy hour, vs. data sent during quiet periods?
- What happens with CDNs? Firstly, how do you account for and bill stuff routed via Akamai to a particular service provider? Secondly, what happens if content comes from an operator's cache?
- Do you charge for the amount of raw data sent by the content company, or for what comes out of the compression/optimisation box in the operator's network and is sent to the user?
- How do you deal with uplink traffic? And if the other party is paying, can I bankrupt the content company by emailing them a terabyte of random numbers?
- How do you sell and market this to media and content companies? How do you bill them? Do you need a completely new IT system to manage all of this?
- If the upstream company is paying, will it expect a strict SLA in terms of coverage and throughput rates - and evidence that the telco has delivered on its obligations?
- Roaming will need to be considered - few content companies will want to pay $20,000 for delivering a movie downloaded by a user on holiday.
- There are various problems in identifying unique traffic streams when all this runs inside an HTML5 browser. Web mashups generally will cause a problem, for example if a "free" website has a YouTube video embedded on a page. Who pays for the YouTube traffic?
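
As a thought experiment on the first two items above, a sender-pays rating function would need at least a spending cap (against the blank cheque) and access-type-aware pricing (against the offload problem). All the rates, caps and access types below are invented, purely for illustration:

```typescript
// Thought-experiment sketch of sender-pays ("toll-free") data rating.

type AccessType = "macro-cellular" | "femtocell" | "wifi-offload";

const centsPerMb: Record<AccessType, number> = {
  "macro-cellular": 2.0,  // scarce licensed-radio capacity
  "femtocell": 0.5,       // rides the customer's own backhaul
  "wifi-offload": 0.1,    // barely touches the operator network
};

interface SponsorAccount {
  monthlyCapCents: number;  // bounds the sponsor's exposure
  spentCents: number;
}

// Returns the cents charged to the sponsor; 0 means the cap is hit and
// the session should fall back to user-pays (or be blocked).
function chargeSponsor(acct: SponsorAccount, mb: number, access: AccessType): number {
  const cost = mb * centsPerMb[access];
  if (acct.spentCents + cost > acct.monthlyCapCents) return 0;
  acct.spentCents += cost;
  return cost;
}
```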

As a result, I expect that the short-term approach for zero-rating will be for those use cases where no money changes hands. Getting "cold hard cash" from this type of two-sided model is fraught with complexity. Instead, we'll see this type of zero-rating used mostly for promotional purposes - "1GB a month plus free zero-rated YouTube!" - or for zero-rating the operator's own content and apps, especially where they are done "telco-OTT style". For example, I'd expect Orange to zero-rate traffic for its 50%-owned DailyMotion Internet video arm to some subscribers.

We may also see some zero-rating done as a way of encouraging content providers to use local CDNs, especially if they are run by the operator itself. It would make sense for an Australian provider to tell Netflix that any content delivered from local servers (and therefore not needing gigabytes of data to be shipped needlessly across the Pacific by the operator) would be zero-rated to the end user. Obviously that would need to be set against radio and backhaul network load, and would probably be part of a wider partnership deal.

There is also a promotional angle to giving away a certain amount of usage to non-data subscribers, in the hope that some will see the value and sign up for a data plan at a later date. Facebook Zero seems to fall into this camp at the moment.

Maybe some companies would stump up for the equivalent of 1-800 numbers - an airline's app, perhaps, or a bank's? But in reality, unless the apps are really heavy and frequently used, the amounts of data involved are likely to be so small (maybe 1MB per user per month for an airline app?) that the cost of sale might outweigh the revenues.
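
The arithmetic is sobering - the user count and wholesale rate below are my guesses, purely for illustration:

```typescript
// Why toll-free data revenue per sponsor may be tiny. All inputs invented.
const appUsers = 100_000;          // active users of the airline's app
const mbPerUserPerMonth = 1;       // light, mostly-text app traffic
const wholesaleCentsPerMb = 1;     // guessed bulk data rate

const monthlyRevenueDollars = (appUsers * mbPerUserPerMonth * wholesaleCentsPerMb) / 100;
console.log(monthlyRevenueDollars); // $1,000 a month - before sales and billing costs
```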

Overall, I expect to see zero-rating becoming more important in various guises. But I'm doubtful that it's as easy to monetise as some seem to think.