Thanks for the reply... See one follow-up inline.
Thanks for the read-through. Some responses inline!
Some comments (will not be at meeting)
Section 1. The introduction states the purpose of the API as:
o An encrypted connection (TLS for both the API and portal URI)
o The state of captivity (whether or not the host has access to the Internet)
o A URI that a host's browser can present to a user to get out of captivity
The state of captivity is not all-or-nothing, and being detailed about "captivity" (e.g. listing walled-garden resources up front, down to protocols, ports, hostnames, etc.) will be complex and error-prone. The URI for the captive portal can already be obtained via RFC 7710. Captivity signaling should be granular: both because we can make it so, and because granularity covers more use cases.
Agreed. The fundamental thing about this API draft is that it is not the exhaustive set of information that the portal may want to communicate to the UE. The example of the state being captive or not is a very simple case, and there can certainly be more values to pass down.
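To make that concrete, here is a sketch of what a response with a non-boolean state could look like. The "permitted" key is the simple boolean case from the draft; the "state" key and its value are made up purely for illustration, not proposed names:

```python
import json

# Sketch of an API response body. "permitted" is the simple boolean
# state from the draft; "state" is a hypothetical extension key showing
# how more granular captivity information could be carried later.
payload = """
{
  "permitted": false,
  "state": "captive-with-free-services"
}
"""
info = json.loads(payload)
captive = not info["permitted"]          # the simple all-or-nothing view
detail = info.get("state", "unknown")    # finer-grained hint, if present
```

A UE that only understands the boolean key would still work; one that understands the extension key could tailor its UI.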
Section 2. The workflow is stated as:
1. Provisioning, in which a host discovers that a network has a
captive portal, and learns the URI of the API server
2. API Server interaction, in which a host queries the state of the
captive portal and retrieves the necessary information to get out
3. Enforcement, in which the enforcement device in the network
blocks disallowed traffic, and sends ICMP messages to let hosts
know they are blocked by the captive portal
My issue is with the ordering... At step 2, we have CAPPORT-compliant devices enforcing themselves, while non-CAPPORT devices wait for the network enforcement. We have the (likely) scenario of the API server saying one thing (implying "self-enforcement") while the network "Enforcement" is telling the UE something else... who is right? Answer: always the network.
This workflow is a summary of the CAPPORT architecture document. I completely agree that enforcement is handled by the network, which is why it is all in the third step. The API server does not imply enforcement; it's the way for the portal that is doing the enforcement to communicate information to the UE.
Non-CAPPORT devices currently do:
1. Provisioning, without knowing about captivity
3. Enforcement via blocking and redirects, in which UEs need to guess what happened
The enforcement is always the network, but the UE can provide a better user experience with 2.
Do you envision CAPPORT devices doing 1, 2, and 3 before notifying the user of captivity? Or, will CAPPORT devices just do 1 and 2 before notifying the user (or attempting to "login")? It is the latter that I would call 'self enforcement' because the client doesn't *know* there is captivity yet.
Okay, I understand your point better now. You're right that until something goes wrong (3 blocks us) we don't truly *know* that we're captive and traffic will be blocked. However, I'd argue that this doesn't need to be the central point of the user experience going forward.
Once the device has done 1 and 2, it can present the user with some UI with a login page, or other experience. All this guarantees to the device and the user is that they are on some network that provides a portal landing page experience. That landing page can describe what the contract of the network is—maybe it's totally captive, but maybe some services are allowed through without any extra login (like texting on airplanes, which I'm seeing these days). In the future, maybe some network won't even block traffic, but just use this landing page advertisement as a way to get some information to the user.
As an OS vendor implementor, I view the responsibility of the OS as making sure the user sees the portal landing page if it exists, so as to allow the user to interact with the portal to remove captivity. Enforcement, or making any guarantees about enforcement, is not the job of the UE; it's the job of the network. The OS can choose whether or not to make a network its default route for Internet traffic based on hints it has about captivity, as well as its experience of traffic working or not, but that's a reactive step, not one of enforcement.
Section 3.2. I think the JSON keys represent too narrow a view of walled gardens, and while simplicity is good, here it artificially and severely limits use cases.
Obviously, the "permitted" field is a boolean, which does not reflect the fact that the location very likely has freely available resources inside the walled garden.
The "expire-date" and "bytes-remaining" keys will be hard to synchronize with network enforcement... the enforcement function might not be counting *all* bytes (some might be "free"), and session expiry could happen because of time and/or data limits (possibly consumed by multiple concurrent devices/sessions), but also because of things like idle timeouts, software restarts (NAS / AP / etc.), or any number of other reasons.
As mentioned above, this is just the basic set of minimal keys. It is intended to be extendable as we flesh out the use cases.
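To illustrate the synchronization concern, here is a sketch of a UE consuming the minimal keys (key names as quoted above; the values and the date format are invented for the example). The point of the comment at the end is the one made above: these values can only ever be hints for the UI, since the network's own counters are authoritative:

```python
import json
from datetime import datetime, timezone

# Invented example values for the minimal key set discussed above.
payload = json.loads("""
{
  "permitted": true,
  "expire-date": "2018-07-01T12:00:00+00:00",
  "bytes-remaining": 1048576
}
""")

expires = datetime.fromisoformat(payload["expire-date"])
now = datetime(2018, 7, 1, 11, 0, tzinfo=timezone.utc)  # fixed "now" for illustration
seconds_left = (expires - now).total_seconds()

# The enforcement function may not count "free" bytes, may expire the
# session early (idle timeout, AP restart, shared quota), etc., so the
# UE should treat these values as UI hints, never as enforcement.
```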
What would be useful in the API, in my opinion, is for the API to provide a sort of validation, authentication, and additional information for and about network notifications.
Yes, I agree! This was partly discussed at the last meeting: when we had the HMAC key to validate network notifications, we decided to remove it until we had further specified the use cases for validating those notifications.
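For reference, the removed mechanism was along these lines (a sketch only; the key exchange, payload format, and hash choice here are all assumptions, not what the draft specified). The API hands the UE a per-session key, the network includes a tag with each notification, and the UE checks the tag before trusting it:

```python
import hmac
import hashlib

# Assumed: the API delivered this key to the UE over TLS.
session_key = b"example-key-from-api"
# Assumed: the byte content of a network notification.
notification = b"captive-state-changed"

# Network side: tag the notification.
tag = hmac.new(session_key, notification, hashlib.sha256).digest()

# UE side: recompute and compare in constant time.
def validate(key: bytes, payload: bytes, received_tag: bytes) -> bool:
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, received_tag)
```

Anyone on-path without the key cannot forge a valid tag, which is exactly the authentication-of-notifications property being asked for.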