IPv6 Network Device Numbering BP
On Nov 4, 2012, at 5:15 AM, Tore Anderson <tore.anderson at redpill-linpro.com> wrote:
> * Owen DeLong
>> On Nov 4, 2012, at 1:55 AM, Tore Anderson
>> <tore.anderson at redpill-linpro.com> wrote:
>>> * Owen DeLong
>>>> What do you get from SIIT that you don't get from dual stack in a
>>> In no particular order:
>>> - Single stack is much simpler than dual stack. A single stack to
>>> configure, a single ACL to write, a single service address to
>>> monitor, staff needs to know only a single protocol, development
>>> staff needs only to develop and do QA for a single protocol, it's a
>>> single topology to document, a single IGP to run and monitor, a
>>> single protocol to debug and troubleshoot, one less attack vector
>>> for the bad guys, and so on. I have a strong feeling that the
>>> reason why dual stack failed so miserably as a transition
>>> mechanism was precisely because of the fact that it adds
>>> significant complexity and operational overhead, compared to single
>>> stack.
>> Except that with SIIT, you're still dealing with two stacks, just
>> moving the place where you deal with them around a bit. Further,
>> you're adding the complication of NAT into your world (SIIT is a
>> form of NAT whether you care to admit that to yourself or not).
> The difference is that only a small number of people will need to deal
> with the two stacks, in a small number of places. The way I envision
> it, the networking staff would ideally operate SIIT as a logical
> function on the data centre's access routers, or in their backbone's
> core/border routers.
I suppose if you're not moving significant traffic, that might work.
In the data centers I deal with, that's a really expensive approach
because it would tie up a lot more router CPU resources that really
shouldn't be wasted on things end-hosts can do for themselves.
By having the end-host just do dual-stack, life gets a lot easier
if you're moving significant traffic. If you only have a few megabits
or even a couple of gigabits, sure. I haven't worked with anything
that small in a long time.
> A typical data centre operator/content provider has a vastly larger
> number of servers, applications, systems administrators, and software
> developers, than they have routers and network administrators. By making
> IPv4 end-user connectivity a service provided by the network, you make
> the amount of dual stack-related complexity a fraction of what it would
> be if you had to run dual stack on every server and in every application.
In a world where you have lots of network/system administrators that fully
understand IPv6 and have limited IPv4 knowledge, sure. In the real world,
where the situation is reversed, you just confuse everyone and make the
complexity of troubleshooting a lot of things that much harder because it
is far more likely to require interaction across teams to get things fixed.
> I have no problem admitting that SIIT is a form of NAT. It is. The "T"
> in both cases stands for "Translation", after all.
>>> - IPv4 address conservation. If you're running out of IPv4
>>> addresses, you cannot use dual stack, as dual stack does nothing
>>> to reduce your dependency on IPv4 compared to single stack IPv4.
>>> With dual stack, you'll be using (at least) one IPv4 address per
>>> server, plus a bit of overhead due to the server LAN prefixes
>>> needing to be rounded up to the nearest power of two (or higher if
>>> you want to accommodate for future growth), plus overhead due to
>>> the network infrastructure. With SIIT, on the other hand, you'll be
>>> using a single IPv4 address per publicly available service - one
>>> /32 out of a pool, with nothing going to waste due to aggregation,
>>> network infrastructure, and so on.
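[The address-overhead arithmetic described in the quote above can be sketched in a few lines. The numbers are hypothetical, purely for illustration; the rounding rule is the power-of-two subnet sizing the quote refers to.]

```python
import math

def dual_stack_v4_cost(servers_per_lan, growth_factor=2):
    # Sketch: a dual-stacked server LAN consumes a whole IPv4 prefix,
    # rounded up to a power of two, with a few extra addresses lost to
    # network/broadcast/gateway overhead.
    needed = servers_per_lan * growth_factor + 3  # +net/bcast/gateway
    return 2 ** math.ceil(math.log2(needed))

def siit_v4_cost(public_services):
    # With SIIT, only publicly reachable services consume IPv4:
    # one /32 each, drawn from a shared pool, no rounding waste.
    return public_services

# Hypothetical example: 100 servers, only 5 of them public services.
print(dual_stack_v4_cost(100))  # 256 addresses (a /24)
print(siit_v4_cost(5))          # 5 addresses
```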
>> Since you end up dealing with NAT anyway, why not just use NAT for
>> IPv4 conservation. It's what most engineers are already used to
>> dealing with and you don't lose anything between it and SIIT.
>> Further, for SIIT to work, you don't really conserve any IPv4
>> addresses, since address conservation requires state.
> Nope! The "S" in SIIT stands for "Stateless". That is the beauty of it.
Right? As soon as you make it stateless, you lose the ability to
overload the addresses unless you're using a static mapping of ports,
in which case, you've traded dynamic state tables for static tables
that, while stateless, are a greater level of complexity and create
even more limitations.
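[The "static mapping of ports" trade-off described above is essentially what A+P/MAP-style schemes do. A minimal sketch, with hypothetical range sizes chosen for illustration only: each host behind a shared IPv4 address gets a fixed, precomputed port range instead of a dynamic NAT binding, which is stateless but caps both ports per host and hosts per address.]

```python
# Static port partitioning on one shared IPv4 address (a sketch).
PORTS_PER_HOST = 1024   # hypothetical fixed allocation
RESERVED = 1024         # skip the well-known port range

def port_range(host_index):
    # Deterministic, stateless: the mapping is pure arithmetic.
    lo = RESERVED + host_index * PORTS_PER_HOST
    return (lo, lo + PORTS_PER_HOST - 1)

def host_for_port(port):
    # The reverse direction needs no state table either.
    return (port - RESERVED) // PORTS_PER_HOST

print(port_range(0))        # (1024, 2047)
print(host_for_port(5000))  # 3
```

The limitation the reply points at falls straight out of the arithmetic: with these numbers, no host can use more than 1024 concurrent ports, and at most 63 hosts fit behind one address.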
> NAT44, on the other hand, is stateful, a very undesirable trait.
> Suddenly, things like flows per second and flow initiation rate are
> relevant for the overall performance of the architecture. It requires
> flows to pass bidirectionally across a single instance - the servers'
> default route must point to the NAT44, and a failure will cause the
> disruption of all existing flows. It is probably possible to find ways
> to avoid some or all of these problems, but it comes at the expense of
> added complexity.
> SIIT, on the other hand, is stateless, so you can use anycasting with
> normal routing protocols or load balancing using ECMP. A failure is
> handled just like any IP re-routing event. You don't need the server's default
> route to point to the SIIT box, it is just a regular IPv6 route
> (typically a /96). You don't even have to run it in your own network.
> Assuming we have IPv6 connectivity between us, I could sell you SIIT
> service over the Internet or via a direct peering. (I'll be happy to
> give you a demo just for fun, give me an IPv6 address and I'll map a
> public IPv4 front-end address for you in our SIIT deployment.)
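[For reference, the stateless embedding the quote relies on is specified in RFC 6052: every IPv4 address corresponds 1:1 to an IPv6 address inside a /96 translation prefix, so the translator keeps no per-flow state. A short sketch; 64:ff9b::/96 is the well-known prefix, and the addresses below are documentation examples, not ones from this thread:]

```python
import ipaddress

# Well-known RFC 6052 translation prefix; a deployment may use its own.
PREFIX = ipaddress.IPv6Network("64:ff9b::/96")

def v4_to_v6(v4_str):
    # Embed the 32-bit IPv4 address in the low bits of the /96.
    v4 = ipaddress.IPv4Address(v4_str)
    return ipaddress.IPv6Address(int(PREFIX.network_address) | int(v4))

def v6_to_v4(v6_str):
    # The reverse mapping is just masking off the low 32 bits.
    v6 = ipaddress.IPv6Address(v6_str)
    return ipaddress.IPv4Address(int(v6) & 0xFFFFFFFF)

addr6 = v4_to_v6("198.51.100.10")
print(addr6)                 # 64:ff9b::c633:640a
print(v6_to_v4(str(addr6)))  # 198.51.100.10
```

The prefix embedding covers representing IPv4 clients inside IPv6; mapping a public IPv4 front-end address to an arbitrary IPv6 server address is, as I understand the SIIT-DC design, done with an explicit 1:1 mapping table instead (EAM, RFC 7757).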
Without state, how are you overloading the IPv4 addresses?
If I don't have a 1:1 mapping between public IPv4 addresses and IPv6
addresses at the SIIT box, what you have described doesn't seem
feasible to me.
If I have a 1:1 mapping, then, I don't have any address conservation
because the SIIT box has an IPv4 address for every IPv6 host that
sits behind it.
> Finally, by putting your money into NAT44 for IPv4 conservation, you
> have accomplished exactly *nothing* when it comes to IPv6 deployment.
> You'll still have to go down the dual stack route, with the added
> complexity that will cause. With SIIT, you can kill both birds with one
> stone.
But I'm not putting more money into NAT44? I'm deploying IPv6 on top
of my existing IPv4 environment where the NAT44 is already paid for.
>>> - Promotes first-class native IPv6 deployment. Not that dual stack
>>> isn't native IPv6 too, but I do have the impression that often,
>>> IPv6 in a dual stacked environment is a second class citizen. IPv6
>>> might be only partially deployed, not monitored as well as IPv4,
>>> or that there are architectural dependencies on IPv4 in the
>>> application stack, so that you cannot just shut off IPv4 and
>>> expect it to continue to work fine on IPv6 only. With SIIT, you get
>>> only a single, first-class, citizen - IPv6. And it'll be the only
>>> IPv6 migration/transition/deployment project you'll ever have to
>>> do. When the time comes to discontinue support for IPv4, you just
>>> remove your IN A records and shut down the SIIT gateway(s), there
>>> will be no need to touch the application stack at all.
>> Treating IPv6 as a second class citizen is a choice, not an inherent
>> consequence of dual-stack. IPv6 certainly isn't a second class
>> citizen on my network or on Hurricane Electric's network.
> Agreed, and I have no reason to doubt that HE does dual stack really
> well. That said, I don't think HE is the type of organisation for which
> SIIT makes the most sense - certainly not for your ISP activities. The
> type of organisation I picture using SIIT is one that operates a bunch
> of servers and application clusters, i.e., they are controlling the
> entire service stack, and making them available to the internet through
> a small number of host names. Most internet content providers would fall
> in this category, including my own employer.
We have a lot of customers and professional services clients that fit exactly
what you describe. Not one of them has chosen SIIT over dual stack.
Admittedly, they also aren't using NAT44, but that's because everything has
a public IPv4 address, as things should be for server clusters.