Modeling Cost Versus Flexibility in Optical Transport Networks
Pedro, J. M.
Pires, J. J. O.
Journal of Lightwave Technology, Vol. 37, No. 1, pp. 61-74, January 2019.
ISSN (print): 0733-8724
Journal Impact Factor: 2.567 (in 2015)
Digital Object Identifier: 10.1109/JLT.2018.2874058
Optical transport networks are progressively being designed around reconfigurability. Operators require an infrastructure capable of carrying large amounts of data, but also able to deliver that data when and where it is needed. While centralized control plane architectures based on software-defined networking are pushing flexibility in automation and interoperability, many of these objectives rely on a hardware-enabled flexible data plane. At the same time, reduced operating margins imply that capacity planning through resource overprovisioning is increasingly unsustainable. Therefore, a natural tradeoff between cost and flexibility emerges in many aspects of optical transport network architectures. More complex and reconfigurable hardware, such as flexible-rate transponders or switching fabrics, can be pitted against cheaper purpose-built modules as contending alternatives for incrementally deploying networks. The added value of adaptability to changing conditions must be weighed against its potential upfront cost and capacity overprovisioning risk. In this context, technoeconomic analysis based on multiperiod capacity optimization plays a pivotal role in identifying the target network scenarios for fixed and flexible hardware. In this paper, use cases where the cost/flexibility tradeoff emerges in optical transport scenarios are identified, such as in the design of line cards and multilayer grooming architectures, and multiperiod optimization frameworks based on integer linear programming (ILP) models are presented. More generally, this paper also discusses scalability issues affecting the use of ILPs for multiperiod capacity optimization and proposes some simple design and modeling guidelines to help overcome them. These include either reformulating the models themselves or identifying specific subproblems within the global framework that can be offloaded without undermining the validity of the results.
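To make the notion of multiperiod capacity optimization concrete, the following is a minimal illustrative ILP sketch, not the formulation used in the paper: all symbols ($x_{m,t}$, $c_m$, $b_m$, $d_t$, $\delta$) are assumed here for illustration. It captures the basic structure of incremental deployment, where equipment installed in earlier periods remains available in later ones:

```latex
\begin{align}
\min \quad & \sum_{t=1}^{T} \delta^{t} \sum_{m \in M} c_m \, x_{m,t} \\
\text{s.t.} \quad & \sum_{m \in M} b_m \sum_{\tau \le t} x_{m,\tau} \;\ge\; d_t, \qquad \forall t \in \{1,\dots,T\} \\
& x_{m,t} \in \mathbb{Z}_{\ge 0}, \qquad \forall m \in M,\; t \in \{1,\dots,T\}
\end{align}
```

Here $x_{m,t}$ is the number of modules of type $m$ (e.g., fixed-rate versus flexible-rate line cards) installed in period $t$, $c_m$ and $b_m$ are the module's cost and capacity, $d_t$ is the traffic demand in period $t$, and $\delta \le 1$ is an optional discount factor on future expenditures. The cost/flexibility tradeoff discussed in the abstract appears through the module set $M$: flexible modules typically have higher $c_m$ but reduce the overprovisioning risk when demand forecasts $d_t$ are uncertain.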