I was at the Broadband Traffic Management conference in London last week, one of the largest events in the calendar for 3G data networks and policy/traffic management and charging solutions. I spoke to a wide range of vendors and operators, and moderated an afternoon stream about dealing with mobile video.
I came away from the event with a number of my beliefs about policy, WiFi offload, video optimisation and operator "politics" strengthened, plus a number of new insights and perspectives that I'll be sharing either on this blog, or in a report in early 2012. This particular post covers a couple of things about "service-based" or "application-based" charging and policy.
(As an aside: I'm going to be boycotting the BBTM event in 2012, for numerous reasons, not least of which was the ridiculous decision to host it in a place with no decent cellular coverage and £20 / day WiFi. I know from organising my own events that organisers have a lot of negotiating power with venues about the "delegate package". If the venue refuses because it has 3rd-party run WiFi with an inflexible contract [this venue used Swisscom] then go somewhere else. It's inexcusable).
I've said on numerous occasions before (eg here, here and here) that I don't believe that operators can (in general) successfully design mobile data or broadband services around application-specific policies and pricing. Despite continued hype from the industry and standards bodies, the network cannot, and never will be able to, accurately detect and classify traffic, applications or "services" on its own. With explicit cooperation from third parties, or sophisticated on-device client software hooked into the policy engine, there's a bit more of a chance.
But I continue to hear rhetoric from the network-centric side of the policy domain about creating "a Facebook data plan", or "charging extra for video", or "zero-rating YouTube". I'm a serious skeptic of this model, believing instead that policy will be more about location, time, speed, user, device, congestion and other variables, but not an attempt to decode packets/streams etc. in an effort to guess what the user is doing. However, lots of DPI and PCRF vendors have spent a lot of money on custom silicon and software to crunch through "traffic" and have promoted standards like 3GPP's new "Traffic Detection Function", and are now determined to justify the hype.
Much of the story fits with the usual attitude of punishing (or "monetising") the so-called OTT providers of applications and content, by enabling the network to act as a selective tollgate. On a road, it's easy to differentiate charges based on the number of wheels or axles a vehicle has, as you can count them. Not so true of mobile data - some of the reasons that I'm skeptical include mashups, encryption, obfuscation, offload, Web 2.0, M&A between service providers and so on. (And obviously, national or international laws on Net Neutrality, privacy, copyright, consumer protection and probably other bits of legislation).
But during the BBTM, I came to a neat way to encapsulate the problem: timing.
Applications change on a month-by-month or week-by-week basis. The Facebook app on my iPhone looks different to me (and the network) when it gets upgraded via the App Store. I talked to a network architect last night about the cat-and-mouse game he plays with Skype and its introduction of new protocols. Not only that, but different versions of the app, on different devices, on different OSes, all act differently. And according to someone I met at BBTM, different countries' versions of different phones might interact differently with the network too. And the OS might get updated every few months as well.
Operators can't work on timescales of weeks/months when it comes to policy and charging. The business processes can't flex enough, and neither can customer-facing T's and C's. How do you define "Facebook"? Does it include YouTube videos or web pages shared by friends viewed *inside the app*? What about plug-ins? Who knows what they're going to launch next week? What if they shift CDN providers so the source of data changes?
Unless you've got a really strong relationship with Facebook and hear about all upcoming changes under NDA, you'll only find out after it happens. And then how long will it take you to change your data plans, and/or change the terms of ones currently in force? What's the customer service impact when users realise they're charged extra for data they thought was zero-rated?
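To see how brittle this is in practice, here's a minimal sketch of the kind of hostname-based classifier a "zero-rate Facebook" plan implies (for example, keyed on the TLS SNI field). All hostnames and the rule list are illustrative assumptions, not any vendor's actual implementation:

```python
# Hypothetical sketch of a naive "zero-rate Facebook" classifier, keyed on
# observed server hostnames. The host list is an illustrative assumption.
ZERO_RATED_SUFFIXES = {"m.facebook.com", "graph.facebook.com", "fbcdn.net"}

def is_zero_rated(server_name: str) -> bool:
    """True if the host exactly matches, or is a subdomain of, a listed suffix."""
    return any(
        server_name == suffix or server_name.endswith("." + suffix)
        for suffix in ZERO_RATED_SUFFIXES
    )

# Works for today's traffic...
assert is_zero_rated("scontent.fbcdn.net")
# ...but the day the app shifts media to a new CDN (hypothetical hostname),
# the same "Facebook" traffic is silently billed as something else:
assert not is_zero_rated("fb-media.examplecdn.com")
```

The rule list has to be updated every time the application's server-side plumbing changes, and the operator only learns about the change after customers are mis-billed.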
And if you think that's bad, wait a year or two.
As we move towards HTML5 apps, I'd expect them to become more personalised. My Facebook app and your Facebook app might be completely different, just as my PC Facebook web page is. Maybe I've got encryption turned on, or maybe Mark Zuckerberg sets up the web server to put video ads on my wall, but not yours. Maybe I'm one of 5 million people out of 800 million who are testing a subtly different version of the app? Or one that has a Netflix integration? Websites do that all the time - they can compare responsiveness or stickiness and test alternative designs on the real audience. And because it's all web-based, or widget-based, much of that configuration may be done on the server, on the fly.
How are you going to set up a data plan & DPI that copes with the inherent differences between dean.m.facebook.com and personX.m.facebook.com? Especially when it changes on a day-by-day or session-by-session basis?
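The obvious workaround is a wildcard rule covering every per-user subdomain, but that only papers over the problem. A quick sketch (hostnames hypothetical):

```python
import fnmatch

# Hypothetical wildcard policy rule covering per-user subdomains.
RULE = "*.m.facebook.com"

def matches(server_name: str) -> bool:
    return fnmatch.fnmatch(server_name, RULE)

assert matches("dean.m.facebook.com")
assert matches("personx.m.facebook.com")
# The rule classifies both sessions identically, yet the server may put
# video ads on one user's page and not the other's. The "app" being billed
# is not the same app on any two devices, or on any two days.
```

The wildcard tells you nothing about what's actually inside each session, which is precisely where the personalisation (and the cost) lives.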
Yes, there will still be fairly static, "big chunks" of data that will remain understandable and predictable. If I download a 2GB movie, it's going to be fairly similar today and tomorrow. Although if I stream it with adaptive bitrate techniques, then maybe the network will find it harder to drive policy.
EDIT - another "gotcha" for application-based pricing is: How do you know that apps don't talk to each other? Maybe Facebook has a deal with Netflix to dump 8GB of movie files into my phone's memory (via the branded Facebook app & servers), which the Netflix app then accesses locally on the device? This is likely to evolve more in the future - think about your PC, and the way the applications can pass data to each other.
One last thing from the BBTM conference: we heard from several speakers (and I heard several private comments) that the big pain is still signalling load of various types, not application data "tonnage". I've yet to hear a DPI vendor talk convincingly about charging per-app or per-service based on signalling, especially as much of the problem is "lower down" the network and outside of the visibility of boxes at the "back" of the network.
Yes, we'll continue to see experiments like the Belgian zero-rating one I mentioned recently. But I expect them to crumble under the realities of what applications - defined in the user's eyes, not the network's - really are, and how fast they are evolving.
UNSUBTLE SALES PITCH: if you want a deeper understanding of how application changes will impact network policy, or the fit of traffic management with WiFi offload, CDNs, optimisation, devices and user behaviour, get in touch to arrange a private workshop or in-depth advisory project with Dean Bubley of Disruptive Analysis . Email information AT disruptive-analysis DOT com