
Re: [Captive-portals] Questions about PvD/API



No, that is more or less my point exactly... 

One problem people seem to have with ICMP, and for which they think PvD is the solution, is security. You brought up GEOPRIV as an example of that. However, as you said, "Those can (and should) be authenticated". In public access (where you can't punt security to "Lower-layer protections, such as Layer 2 traffic separation might be used to provide some guarantees."), maybe that statement would be upgraded to "Those MUST be authenticated". In RFC 5687 it says "The client MUST authenticate the discovered LIS." ... however, I'm not clear what "authenticate" means here. My assumption is that it means more than just a valid SSL cert for an unfamiliar hostname (like you would expect in public/guest access PvD).
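
To make that concrete, here is a minimal sketch (my own illustration, not anything from the drafts) of what "more than just a valid SSL cert" could mean: validate the presented certificate against the domain the discovery started from, not just whatever hostname happens to be in the returned URL:

    import socket
    import ssl

    def authenticate_discovered_server(url_host, discovery_input_domain, port=443):
        # Normal WebPKI chain validation...
        context = ssl.create_default_context()
        # ...but pin the expected identity to the discovery *input* domain
        # (the RFC 5986 excerpts below make the same point about U-NAPTR).
        try:
            with socket.create_connection((url_host, port), timeout=5) as sock:
                with context.wrap_socket(sock, server_hostname=discovery_input_domain):
                    return True   # chain is valid and the cert names the input domain
        except (ssl.SSLError, OSError):
            return False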

And, yes, I absolutely believe we will have a higher rate of misconfiguration using the L7/Application layer... To quote myself from earlier in the thread:  Vendors will likely have a "PvD URL: " configuration dialog ... and there will be many hotspot services companies updating their "Howto" instructions with their PvD URL info... pretty darn easy to misconfigure. Can you give an example of how ICMP could be misconfigured? 



On Tue, Aug 22, 2017 at 4:38 PM, Martin Thomson <[email protected]> wrote:
I wrote all of that text.  I'm not sure that I get your point though.
There's a discovery mechanism that is vulnerable to interception,
etc..., but that (mostly) happens using network-local mechanisms (DHCP
primarily).  That's the only weak link.  The identity is well known
and authenticated once later interactions happen (the text on
unsecured HTTP is a relic of a bygone era, no one actually implements
that).

In short, the design is to discover the identity of a service using
those lower-layer primitives and then use that to bootstrap into
something else.  If your assertion is that PvD - as a discovery
mechanism - is vulnerable, then the weakness is in the layer 3 parts,
not in the bits that use HTTP.  Those can (and should) be
authenticated.
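
A minimal sketch of that bootstrap shape, assuming the client has already pulled a URL out of DHCP or an RA (the function name is illustrative):

    import json
    import urllib.request

    def fetch_network_info(discovered_url):
        # The URL itself arrived over an untrusted lower-layer hint; the HTTPS
        # fetch is where the server's identity actually gets authenticated.
        with urllib.request.urlopen(discovered_url, timeout=5) as resp:
            return json.loads(resp.read().decode("utf-8"))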

Did I miss something?

On 23 August 2017 at 06:09, David Bird <[email protected]> wrote:
> I don't think that is a fair comparison...
>
> Geolocation isn't anything enforced by the network... and clearly not
> designed for public/guest access networks (as they exist today).
>
> You might re-read some of the Security Consideration in the docs surrounding
> GEOPRIV.
>
> HTTP-Enabled Location Delivery (HELD)
> https://datatracker.ietf.org/doc/html/rfc5985#page-22
>
>    HELD is a location acquisition protocol whereby the client requests
>    its location from a LIS.  Specific requirements and security
>    considerations for location acquisition protocols are provided in
>    [RFC5687].  An in-depth discussion of the security considerations
>    applicable to the use of Location URIs and by-reference provision of
>    LI is included in [RFC5808].
>
>    By using the HELD protocol, the client and the LIS expose themselves
>    to two types of risk:
>
>    Accuracy:  The client receives incorrect location information.
>
>    Privacy:  An unauthorized entity receives location information.
>
>    The provision of an accurate and privacy- and confidentiality-
>    protected location to the requestor depends on the success of five
>    steps:
>
>    1.  The client must determine the proper LIS.
>
>    2.  The client must connect to the proper LIS.
>
>    3.  The LIS must be able to identify the Device by its identifier (IP
>        address).
>
>    4.  The LIS must be able to return the desired location.
>
>    5.  HELD messages must be transmitted unmodified between the LIS and
>        the client.
>
>    Of these, only steps 2, 3, and 5 are within the scope of this
>    document.  Step 1 is based on either manual configuration or on the
>    LIS discovery defined in [RFC5986], in which appropriate security
>    considerations are already discussed.  Step 4 is dependent on the
>    specific positioning capabilities of the LIS and is thus outside the
>    scope of this document.
>
> Discovering the Local Location Information Server (LIS)
> https://datatracker.ietf.org/doc/html/rfc5986#page-11
>
>    The address of a LIS is usually well-known within an access network;
>    therefore, interception of messages does not introduce any specific
>    concerns.
>
>    The primary attack against the methods described in this document is
>    one that would lead to impersonation of a LIS.  The LIS is
>    responsible for providing location information, and this information
>    is critical to a number of network services; furthermore, a Device
>    does not necessarily have a prior relationship with a LIS.  Several
>    methods are described here that can limit the probability of, or
>    provide some protection against, such an attack.  These methods MUST
>    be applied unless similar protections are in place, or in cases --
>    such as an emergency -- where location information of dubious origin
>    is arguably better than none at all.
>
>    An attacker could attempt to compromise LIS discovery at any of three
>    stages:
>
>    1.  providing a falsified domain name to be used as input to U-NAPTR
>
>    2.  altering the DNS records used in U-NAPTR resolution
>
>    3.  impersonating the LIS
>
>    The domain name that is used to authenticate the LIS is the domain name
>    input to the U-NAPTR process, not the output of that process
>    [RFC3958], [RFC4848].  As a result, the results of DNS queries do not
>    need integrity protection.
>
>    An HTTPS URI is authenticated using the method described in Section
>    3.1 of [RFC2818].  HTTP client implementations frequently do not
>    provide a means to authenticate based on a domain name other than the
>    one indicated in the request URI, namely the U-NAPTR output.  To
>    avoid having to authenticate the LIS with a domain name that is
>    different from the one used to identify it, a client MAY choose to
>    reject URIs that contain a domain name that is different to the
>    U-NAPTR input.  To support endpoints that enforce the above
>    restriction on URIs, network administrators SHOULD ensure that the
>    domain name in the DHCP option is the same as the one contained in
>    the resulting URI.
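
A minimal sketch of that MAY-reject rule (names are illustrative): refuse a discovered URI whose host differs from the U-NAPTR input domain.

    from urllib.parse import urlparse

    def accept_discovered_uri(uri, unaptr_input_domain):
        # Only accept HTTPS URIs whose host matches the domain we started with.
        parsed = urlparse(uri)
        host = (parsed.hostname or "").lower()
        return parsed.scheme == "https" and host == unaptr_input_domain.lower()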
>
>    Authentication of a LIS relies on the integrity of the domain name
>    acquired from DHCP.  An attacker that is able to falsify a domain
>    name circumvents the protections provided.  To ensure that the access
>    network domain name DHCP option can be relied upon, preventing DHCP
>    messages from being modified or spoofed by attackers is necessary.
>    Physical- or link-layer security are commonly used to reduce the
>    possibility of such an attack within an access network.  DHCP
>    authentication [RFC3118] might also provide a degree of protection
>    against modification or spoofing.
>
>    A LIS that is identified by an HTTP URI cannot be authenticated.  Use
>    of unsecured HTTP also does not meet requirements in HELD for
>    confidentiality and integrity.  If an HTTP URI is the product of LIS
>    discovery, this leaves Devices vulnerable to several attacks.  Lower-
>    layer protections, such as Layer 2 traffic separation might be used
>    to provide some guarantees.
>
> Requirements for a Location-by-Reference Mechanism
> https://datatracker.ietf.org/doc/html/rfc5808#page-12
>
>    The method of constructing the location URI to include randomized
>    components helps to prevent adversaries from obtaining location
>    information without ever retrieving a location URI.  In the
>    possession model, a location URI, regardless of its construction, if
>    made publicly available, implies no safeguard against anyone being
>    able to dereference and get the location.  Care has to be paid when
>    distributing such a location URI to the trusted location recipients.
>    When this aspect is of concern, the authorization model has to be
>    chosen.  Even in this model, care has to be taken on how to construct
>    the authorization policies to ensure that only those parties have
>    access to location information that are considered trustworthy enough
>    to enforce the basic rule set that is attached to location
>    information in a PIDF-LO document.
>
>    Any location URI, by necessity, indicates the server (name) that
>    hosts the location information.  Knowledge of the server in some
>    specific domain could therefore reveal something about the location
>    of the Target.  This kind of threat may be mitigated somewhat by
>    introducing another layer of indirection: namely the use of a
>    (remote) presence server.
>
>    A covert channel for protocol message exchange is an important
>    consideration, given an example scenario where user A subscribes to
>    location information for user B, then every time A gets a location
>    update, an (external) observer of the subscription notification may
>    know that B has moved.  One mitigation of this is to have periodic
>    notification, so that user B may appear to have moved even when
>    static.
>
>
>
> On Sun, Aug 20, 2017 at 6:34 PM, Martin Thomson <[email protected]>
> wrote:
>>
>> Hi David,
>>
>> Can you explain more about why you believe that a lower-layer protocol
>> needs to be used?
>>
>> I remember a similar discussion about this about 10 years ago with
>> GEOPRIV.  Then it was asserted that DHCP was the only protocol that
>> could deliver location information to user equipment.  That discussion
>> took a long time, but ultimately ended with an HTTP-based protocol.  I
>> have no desire to repeat that experience.
>>
>> It seems like this is - at least in part - based on how this might be
>> configured.  That is, you believe that a lower-layer protocol offers
>> no option for misconfiguration.  Is that correct?  Have I missed
>> something?
>>
>>
>> On 20 August 2017 at 00:12, David Bird <[email protected]> wrote:
>> > HI Tommy,
>> >
>> > Agreed that RFC7710 is lacking notification of captive portal existence;
>> > it only provides configuration information. ICMP would provide the
>> > notification, as it does today for other forms of destination unreachable,
>> > port unreachable, etc. In your first reply, I thought you were suggesting
>> > that RFC7710 was at play alongside PvD DHCP/RA (and I wasn't clear what
>> > you meant by DHCP/RA).
>> >
>> > I think we both also agree that talking about both a Capport API *and*
>> > PvD is
>> > adding to the confusion and we ultimately will not want two "APIs".
>> >
>> > ICMP today delivers more than hints... it provides signaling that can
>> > directly influence traffic. Yes, there are security concerns around
>> > ICMP,
>> > which is why it is common for it to be filtered out of networks (which is a
>> > good thing for Capport ICMP, since it is only for the edge network).
>> >
>> > Regarding ICMP and the "content details of the network" ... I think that
>> > statement conflates policy and enforcement. ICMP provides (as today)
>> > notification of enforcement, and RFC7710 provides where to find out
>> > about
>> > the policy (ToS, etc).
>> >
>> > The JSON API becomes a "web service" when it has an http(s):// in front
>> > of it
>> > :) .. but, indeed, my concern is in the transport protocol. It being a
>> > URL
>> > signals that this is meant to be deployed alongside the portal, or
>> > otherwise
>> > 'remotely'... Vendors will likely have a "PvD URL: " configuration
>> > dialog
>> > ... and there will be many hotspot services companies updating their
>> > "Howto"
>> > instructions with their PvD URL info... it is a web service.
>> >
>> > I welcome suggestions that put that JSON API into a lower layer network
>> > protocol. We could stuff JSON into ICMP :)
>> >
>> > Maybe there is a way to merge ICMP and PvD -- to where ICMP provides the
>> > notification (with tokens) and PvD provides the policy (based on these
>> > tokens) (?)
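
A hypothetical sketch of that merge, purely to illustrate the idea above; no such ICMP extension exists today, and the token/query format here is invented:

    import json
    import urllib.request

    def handle_capport_notification(token_bytes, pvd_api_url):
        # The token arrived in-band and unauthenticated, so treat it only as an
        # opaque lookup key; the policy itself comes from the authenticated API.
        query = pvd_api_url + "?token=" + token_bytes.hex()
        with urllib.request.urlopen(query, timeout=5) as resp:
            return json.loads(resp.read())   # e.g. ToS URL, expiry, bytes remaining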
>> >
>> > With regard to vendor (NAS/UE) cooperation, perhaps PvD could be a new
>> > start, but thus far (as my quote of Cisco documentation suggests), it is
>> > more about what users/venues want. Cisco already today actively avoids
>> > iOS
>> > captive portal detection because the "pseudo-browser" (as they call it) does
>> > not work with their portal. That is a problem that could be solved today by
>> > Apple/Cisco, couldn't it? Just by not using the pseudo browser...
>> > introducing PvD doesn't resolve the core problem there, but does make it
>> > easier to avoid that pseudo browser.
>> >
>> > You also said "We can still get to a captive portal once the user goes
>> > into
>> > the browser." ... However, this is increasingly untrue as the web moves to
>> > https... So, doing this avoidance of detection will still be a problem.
>> >
>> > Cheers,
>> > David
>> >
>> >
>> > On Fri, Aug 18, 2017 at 7:23 PM, Tommy Pauly <[email protected]> wrote:
>> >>
>> >> Hi David,
>> >>
>> >> My thoughts with regard to RFC 7710 are that it is not deployed as far as
>> >> I know, and no client stack respects the value sent in 7710. Without
>> >> some
>> >> API extensions, it isn't directly better than what we currently have.
>> >> Ideally, this would not be an API that would get deployed if we were
>> >> also
>> >> using PvDs. My concern is that if PvDs are used for enterprise and
>> >> private
>> >> networks, we'll have a very similar but less complete path based on RFC
>> >> 7710. We could end up deprecating or replacing that RFC, which was
>> >> mentioned
>> >> in our last meeting. I don't think RFC 7710 can be used without a URL,
>> >> which
>> >> is why I think we need a solution that does a better job of indicating
>> >> the
>> >> lack of captive or other extended network info.
>> >>
>> >> I would hope that since both iOS and Android stack developers are
>> >> working
>> >> on the UE side, we would actually see UE deployment of PvDs before any
>> >> captive vendors adopt PvDs, and we'd be standardizing around Cisco/etc
>> >> enterprise deployments. By the time there were NAS vendors deploying,
>> >> they
>> >> would test with both iOS and Android devices to validate support.
>> >>
>> >> Basing our standards on the idea that devices (either networks or UE's)
>> >> may implement the RFCs incorrectly seems to be a difficult starting
>> >> point.
>> >>
>> >> I like the point you bring up of splitting network notifications from
>> >> web
>> >> APIs. There is a need to be judicious about what properties fall into
>> >> each
>> >> category. I think you're saying that the fact that there is a captive
>> >> network can be signaled via ICMP, etc, as a network-level property.
>> >> While
>> >> ICMP is a fine solution for giving the UE hints when something has
>> >> expired,
>> >> I am concerned that (possibly unsolicited) network signaling is not the
>> >> correct mechanism for the content details of the network, whether that
>> >> is
>> >> the enterprise network properties, or the captive network Terms &
>> >> Conditions, tokens, expiration timers, and URLs for various kinds of
>> >> user
>> >> interaction. A JSON API is one form of grabbing information—I don't
>> >> think
>> >> we should necessarily interpret that as something that is a high-level
>> >> Web
>> >> interaction. We could create some custom protocol over UDP like DNS
>> >> records
>> >> to get the information (that would be a lot of new protocol work here
>> >> that
>> >> people may not be willing to get into), but the key is that it needs to
>> >> be
>> >> the choice of a UE device that understands how to request and parse
>> >> content
>> >> that initiates a lookup, and can fetch information from the network
>> >> infrastructure.
>> >>
>> >> With regards to your assertion that we'll always revert to doing a
>> >> probe,
>> >> I still would like to believe that if we have a network that advertises
>> >> a
>> >> PvD with no extended information, or extended information that doesn't
>> >> include a captive portal, we can avoid the probe altogether. Will we
>> >> still
>> >> have networks that redirect HTTP requests? Yes. But that's no different
>> >> from
>> >> the scenario today in which a network whitelists our captive detection
>> >> probes. We can still get to a captive portal once the user goes into
>> >> the
>> >> browser. We can stop doing probes whenever the RA on the network
>> >> indicates
>> >> that it supports explicit signaling about network properties. If a
>> >> network
>> >> operator wants to invoke the system-level captive interaction, then
>> >> they
>> >> will follow the RFCs we come up with in the CAPPORT group as long as UEs end
>> >> up
>> >> deploying support first. If they want to avoid it, or they have a
>> >> broken
>> >> network, things will be like networks that whitelist our probes today.
>> >> Not
>> >> great, but still possible for the user to get through. My main goal in
>> >> these
>> >> standards is to make it possible for a network to give the user a good
>> >> experience; not to make it impossible for the user to have a sub-par
>> >> experience (since I don't think that goal is achievable).
>> >>
>> >> Best,
>> >> Tommy
>> >>
>> >>
>> >> On Aug 18, 2017, at 5:52 PM, David Bird <[email protected]> wrote:
>> >>
>> >> Thanks Tommy,
>> >>
>> >> I don't dispute that PvD provides an elegant set of solutions --
>> >> particularly in enterprise and other 'private' networks. I question,
>> >> however, the value in public(/guest) access -- where everyone wants you
>> >> to
>> >> access their network over others, for 'retail analytic' or
>> >> branding/attribution(/exploit) purposes.
>> >>
>> >> Another way to see the PvD integration/deployment:
>> >>
>> >> 1. Today, we join a network, always do a probe, which redirects to
>> >> captive
>> >> portal
>> >> 2. A PvD URL is provided, so a captive portal notification is generated
>> >> to
>> >> the user (is that what 'we just make a connection directly' means?)
>> >> 3. We may have also gotten RFC7710 URL, there are potentially two APIs
>> >> in
>> >> play at the same time, which is extra confusing (?)
>> >> 4. The first NAS vendor releases products with support, venues deploy
>> >> and
>> >> start 'fiddling' with the new feature and URL to PvD end-points
>> >> 5. The first UE vendor releases products with support, start using it
>> >> at
>> >> said venues... complain to vendor about problems unique to this new
>> >> device
>> >> 6. In some networks, users complain that *only* their new PvD device is
>> >> seeing a captive portal, while all their other devices do not. Staff at
>> >> the
>> >> coffee shop don't believe me; all their devices work too.
>> >>
>> >> I think there are fundamental issues in splitting what should be
>> >> 'network
>> >> notification' into web APIs....
>> >>
>> >> 1. Tomorrow, we join a network, always do a probe, which redirects to
>> >> captive portal
>> >>
>> >> It wasn't clear in your e-mail if RFC7710 can be used *without*
>> >> providing
>> >> a URL, or is there a PvD specific DHCP option?
>> >>
>> >> Thanks,
>> >> David
>> >>
>> >>
>> >> On Wed, Aug 16, 2017 at 9:20 AM, Tommy Pauly <[email protected]> wrote:
>> >>>
>> >>> Hi David,
>> >>>
>> >>> You mention in one of your emails that you'd expect there to be many
>> >>> "broken PvD" deployments, which would either necessitate ignoring PvD
>> >>> and
>> >>> using legacy mechanisms, or else having the user face a broken portal.
>> >>> My
>> >>> impression is that client-side deployments should fail closed—that is, if
>> >>> there is a PvD advertised, but it does not work well, then we treat the
>> >>> network as broken. If this client behavior is consistent from the start of
>> >>> deployment,
>> >>> then I would think that deployments would notice very quickly if they
>> >>> are
>> >>> broken. The fundamental part of the PvD being advertised is in the
>> >>> RAs—if
>> >>> our DHCP or RAs are broken on a network, we generally are going to be
>> >>> broken
>> >>> anyhow.
>> >>>
>> >>> As far as where the API resides, I appreciate your explanation of the
>> >>> various complexities. My initial take is this:
>> >>>
>> >>> - Where a PvD is being served is up to the deployment, and determined
>> >>> by
>> >>> the entity that is providing the RAs. To that end, the server that
>> >>> hosts the
>> >>> API for extended PvD information may be very different for
>> >>> enterprise/carrier scenarios than in captive portals for coffee shops.
>> >>> - For the initial take for Captive Portals, I would co-locate the "PvD
>> >>> API" server with the "Captive API" and "Captive Web Server".
>> >>> Presumably, the
>> >>> device that was previously doing the HTTP redirects would be able to
>> >>> do the
>> >>> similar coordination of making sure the PvD ID that is given out to
>> >>> clients
>> >>> matches the PvD API server (which is the same as the "Captive Web
>> >>> Server").
>> >>>
>> >>> For the captive use-case, I see the integration of PvDs as an
>> >>> incremental
>> >>> step:
>> >>>
>> >>> 1. Today, we join a network, always do a probe, which may get
>> >>> redirected
>> >>> to a captive web server
>> >>> 2. With RFC 7710, we would join a network and do the same as (1),
>> >>> unless
>> >>> the captive URL is given in the DHCP/RA and we just make a connection
>> >>> directly.
>> >>> 3. With the Captive API draft, we can interact with the portal other
>> >>> than
>> >>> just showing a webpage; but this may still be bootstrapped by 7710 if
>> >>> not
>> >>> using another mechanism
>> >>> 4. With PvDs, the mechanism in 7710 is generalized to support APIs
>> >>> other
>> >>> than just captive, and can indicate that no captive portal or other
>> >>> extended
>> >>> info is present; and the PvD API in this form is just a more generic
>> >>> version
>> >>> of the captive API that allows us to use the same mechanism for other
>> >>> network properties that aren't specifically captive (like enterprise
>> >>> network
>> >>> extended info, or walled gardens)
>> >>>
>> >>> Getting into the arms race of people avoiding the captive probes: if
>> >>> someone doesn't want to interact with the client OS's captive portal
>> >>> system,
>> >>> they can and likely will not change anything and just keep redirecting
>> >>> pages. Hopefully if a better solution becomes prevalent enough in the
>> >>> future, client OS's will be able to just ignore and reject any network
>> >>> that
>> >>> redirects traffic, and the only supported captive portals would be
>> >>> ones that
>> >>> interact in specified ways and advertise themselves as captive
>> >>> networks. In
>> >>> order to get to this point, there certainly needs to be a carrot to
>> >>> incentivize adoption. My goal with the more flexible interaction
>> >>> supported
>> >>> by PvD is that we will allow the networks to provide a better user
>> >>> experience to people joining their networks, and integrate with client
>> >>> OS's
>> >>> to streamline the joining process (notification of the network being
>> >>> available, who owns it, how to accept and how to pay), the maintenance
>> >>> process (being able to integrate time left or bytes left on the
>> >>> network into
>> >>> the system UI), and what is allowed or not on the network.
>> >>>
>> >>> Thanks,
>> >>> Tommy
>> >>>
>> >>>
>> >>> On Aug 16, 2017, at 6:51 AM, David Bird <[email protected]> wrote:
>> >>>
>> >>> My question about where the PvD API resides was somewhat rhetorical.
>> >>> In
>> >>> reality, I'm sure you will find all of the above - In the NAS (e.g.
>> >>> Cisco),
>> >>> at the hotspot services provider, and something hosted next to the
>> >>> venues
>> >>> website. It depends mostly on how this URL is configured, and by whom.
>> >>> (One
>> >>> could imagine people doing all sorts of things).
>> >>>
>> >>> My question more specifically for the authors is, how would Cisco
>> >>> implement PvD for Guest/Public access and would it actively stop
>> >>> avoiding
>> >>> Apple captive portal detection? Or, would turning on PvD just make
>> >>> that
>> >>> 'feature' easier to implement?
>> >>>
>> >>> On Tue, Aug 15, 2017 at 5:19 PM, Erik Kline <[email protected]> wrote:
>> >>>>
>> >>>> Randomly selecting Tommy and Eric so this bubbles up in their inbox.
>> >>>>
>> >>>> On 2 August 2017 at 10:36, David Bird <[email protected]> wrote:
>> >>>> > Could an author of PvD help me understand the following questions
>> >>>> > for
>> >>>> > each
>> >>>> > of the diagrams below I found on the Internet -- which represent
>> >>>> > some
>> >>>> > typical hotspot configurations out there...
>> >>>> >
>> >>>> > - Where would the API reside?
>> >>>> >
>> >>>> > - Who 'owns' the API?
>> >>>> >
>> >>>> > - How does the API keep in-sync with the NAS? Who's responsible for
>> >>>> > that
>> >>>> > (possibly multi-vendor, multi-AAA) integration?
>> >>>> >
>> >>>> > 1) Typical Hotspot service company outsourcing:
>> >>>> >
>> >>>> >
>> >>>> > http://cloudessa.com/wp-content/uploads/2013/08/shema-CaptivePortalSolution_beta2b.png
>> >>>> >
>> >>>> > 2) Same as above, except venue owns portal:
>> >>>> >
>> >>>> >
>> >>>> > http://cloudessa.com/wp-content/uploads/2013/07/solutions_hotspots-co-working-cloudessa_2p1.png
>> >>>> >
>> >>>> > 3) Now consider the above, but the venue has more roaming partners
>> >>>> > and
>> >>>> > multi-realm RADIUS setup in their Cisco NAS:
>> >>>> >
>> >>>> >
>> >>>> > http://www.cisco.com/c/en/us/td/docs/wireless/controller/8-3/config-guide/b_cg83/b_cg83_chapter_0100111.html
>> >>>> > describes many options -- including separate MAC authentication
>> >>>> > sources,
>> >>>> > optional portals for 802.1x (RADIUS) authenticated users, and so
>> >>>> > much
>> >>>> > more...
>> >>>> >
>> >>>> > "Cisco ISE supports internal and external identity sources. Both
>> >>>> > sources can
>> >>>> > be used as an authentication source for sponsor-user and guest-user
>> >>>> > authentication."
>> >>>> >
>> >>>> > Also note this interesting article:  the section Information About
>> >>>> > Captive
>> >>>> > Bypassing and how it describes how to avoid Apple captive portal
>> >>>> > detection!!! "If no response is received, then the Internet access
>> >>>> > is
>> >>>> > assumed to be blocked by the captive portal and Apple’s Captive
>> >>>> > Network
>> >>>> > Assistant (CNA) auto-launches the pseudo-browser to request portal
>> >>>> > login in
>> >>>> > a controlled window. The CNA may break when redirecting to an ISE
>> >>>> > captive
>> >>>> > portal. The controller prevents this pseudo-browser from popping
>> >>>> > up."
>> >>>> >
>> >>>> >
>> >>>> >
>> >>>> > _______________________________________________
>> >>>> > Captive-portals mailing list
>> >>>> > [email protected]
>> >>>> > https://www.ietf.org/mailman/listinfo/captive-portals
>> >>>> >
>> >>>
>> >>>
>> >>>
>> >>
>> >> _______________________________________________
>> >> Captive-portals mailing list
>> >> [email protected]
>> >> https://www.ietf.org/mailman/listinfo/captive-portals
>> >>
>> >>
>> >
>> >
>> > _______________________________________________
>> > Captive-portals mailing list
>> > [email protected]
>> > https://www.ietf.org/mailman/listinfo/captive-portals
>> >
>
>