I agree, but realise there's a flip side to this -- by standardising and making these functions explicit, we are encouraging their use, both tacitly (standardisation == approval) and by making the functionality more reliable (when implementations support those well-defined cowpaths). That effectively encourages the deployment of those enforcement functions.
The ever-so-careful line we need to walk in this work, I think, is to balance the damage caused by deploying these mechanisms against the risk of encouraging more deployment by making them more viable. Of course, that's a judgement call, but if we didn't have those, standards would be much quicker...
Well, I'd argue that the functions being performed -- redirect, drop, rate-limit, etc. (per flow) -- are all done today already. Sure, we perhaps unlock new 'features', but only because user-interaction problems are solved (which is the same goal of the API).
> By addressing the core issue of ‘signaling captivity’ at the network layer, we are just annotating (e.g. with ICMP feedback) what is already happening in the network, in real-time, transparently (as in full-disclosure). This, in of itself, does not add new captive portal capabilities - only better information for the UE to provide better user experiences.
As per above, I'd dispute the "only" -- there may not be new capabilities, but making existing capabilities work more reliably, predictably and with the "air cover" of standards will change their deployment patterns.
In other words, the implicit goal of most interoperability work is to encourage increased deployment. We try to make HTTP more interoperable so that it's easier for more people to use, so they use it more, so that we enjoy the increased network effects of its use. I don't think increasing deployment of captive portals is a good goal. Others may disagree, for various reasons, of course.
I do think that where a captive portal is deployed, making it less painful is a good goal. Hopefully we can all agree on that.
It sounds like you want to intentionally keep captive portals broken, at least enough to discourage their use... (isn't that exactly what we have today?) But many networks do not have a choice -- say, if local laws require certain things.
> What is concerning with the API and direction of the WG, is that we are defining a new form of captivity … “self enforced”. Which will lead to the UEs doing probing to “confirm” the API captivity matches the Network captivity. Networks with API captivity can even be put on top of previously Open WiFi networks (that have no Network captivity). This is new - even if the UE allows the user to ignore the API captivity.
> Having written the problem statement, what are your thoughts on the direction today?
I have lots of questions about how it's going to play out, but at first glance it looks interesting -- if only because it provides a way for some portals to be less intrusive on the network. For non-financial cases (e.g., T&Cs, ads), it *might* be enough to deploy a captive portal without any blocking. One could argue that it might encourage increased deployment, but *if* it works out, the less intrusive nature might balance that out.
Networks putting the "API captivity" on networks without any real network captive portal might sound like an improvement, but I think it will not play out that way...
For starters, not all networks *can* do that (legal requirements, etc.). And, as networks do deploy it on open networks, users (and UEs) will just be trained to ignore it -- a "skip the portal" button. However, the user/UE will also encounter networks that still have network captivity even after skipping the "API captivity"... and we are back to where we started: probing, HTTPS redirects, and UEs guessing.
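To make the "back to probing" point concrete: the probing UEs fall back on boils down to fetching a well-known URL and checking whether the response was tampered with. A minimal sketch (the probe URL and expected 204 status follow one common convention; real UE implementations vary and this is illustrative only):

```python
import urllib.request
import urllib.error

# Example probe endpoint following the common "generate_204" convention;
# real UEs use vendor-specific URLs and more robust checks.
PROBE_URL = "http://connectivitycheck.gstatic.com/generate_204"

def looks_captive(status: int) -> bool:
    """Classify a probe response: anything other than the expected
    204 No Content (e.g. a 302 redirect to a login page) suggests
    network captivity."""
    return status != 204

class _NoRedirect(urllib.request.HTTPRedirectHandler):
    # Don't follow redirects -- the redirect itself is the signal
    # that something in the network intercepted the request.
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None

def probe(url: str = PROBE_URL, timeout: float = 5.0) -> bool:
    """Return True if the network appears captive (or unreachable)."""
    opener = urllib.request.build_opener(_NoRedirect)
    try:
        with opener.open(url, timeout=timeout) as resp:
            return looks_captive(resp.status)
    except urllib.error.HTTPError as e:
        # A 3xx surfaced as an error (redirect suppressed above),
        # or some other unexpected status.
        return looks_captive(e.code)
    except urllib.error.URLError:
        # No connectivity at all; not confirmed open.
        return True
```

The point of the sketch is that nothing here consults the API's claimed captivity state -- the UE has to trust the wire, which is exactly the guessing game the API was meant to eliminate.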