This article was originally published on SDxCentral.

As seen recently in Barcelona at Mobile World Congress 2016, there is no doubt that the telecom industry is in a full-fledged transition toward new software-defined architectures, due to the many benefits they deliver. Having made the fundamental decision to move forward, operators now place a priority on ensuring that the levels of service and the overall experience that customers have come to expect are maintained during the transition from the old world to the new.

Fortunately, while many telecom operators may view this evolution as taking them into uncharted waters, there are many lessons they can learn from a recent and similar shift made in the enterprise segment. Specifically, the widespread adoption of virtualization technologies in the enterprise data center about seven years ago sets a meaningful example for today’s telco transformation.

To compete effectively against over-the-top (OTT) players, telecom service providers must not only deliver a similar portfolio of service offerings; they must also leverage their inherent advantages over these players – their network assets and infrastructure – to ensure that the overall consumer experience is superior to what providers without network infrastructure of their own can achieve.

We believe there are four critical success factors that service providers should carefully consider adopting for the years to come.

Embrace DevOps

The goals of service providers match the typical goals of DevOps very well:

  • Improved deployment frequency – shorter time to market
  • Lower failure rate of new releases
  • Shortened lead time between fixes
  • Faster mean time to recovery in the event that a new release crashes

The concept of DevOps has been defined in many different ways, but one fundamental part of it is forming smaller, cross-functional teams that together hold all the competencies needed to take full responsibility for the lifecycle of a service. Effective teams include both development and operations resources.

There is clear evidence that companies that effectively incorporate DevOps practices get more done. According to Puppet Labs’ “2015 State of DevOps Report,” high-performing IT organizations that have embraced DevOps deploy 30 times more frequently with 200 times shorter lead times. They have 60 times fewer failures and recover 168 times faster. Such figures are a powerful incentive for organizations to go agile in order to compete effectively with the OTT players.

Closer inspection of companies doing this well reveals a number of best practices that enable such improvements. Among these are continuous integration, test automation, deployment automation, and version control.

Test Automation Is Key

Continuous integration and test automation are related practices, and they are key to achieving high efficiency and agility while maintaining high quality. It would be unthinkable, for example, to deploy a new software update to one of the virtual network functions (VNFs) in a service chain without testing it. However, one of the main drivers of network functions virtualization (NFV) is the desire to launch new services faster, and it is hard to see how that could be possible with manual testing.

From the telco perspective, the most important type of test is end-to-end service validation, encompassing the network and the complete service chain consisting of a number of VNFs. In other words, whenever a VNF vendor delivers a new software version to the service provider, the service provider must ensure that this does not break the end-to-end service in any way and that all components in the end-to-end service work together. Only after the service has been properly verified can the new software be deployed into the production network.
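To make this concrete, here is a minimal Python sketch of such a validation gate. Everything in it is illustrative: the service chain, the KPI thresholds, and the run_e2e_test helper are assumptions for the example, not a real product API.

    # Hypothetical sketch: gate a new VNF software version on an automated
    # end-to-end service validation before it reaches production.
    from dataclasses import dataclass

    @dataclass
    class TestResult:
        loss_percent: float   # packet loss measured across the service chain
        latency_ms: float     # round-trip latency in milliseconds

    # Acceptance thresholds for the end-to-end service (assumed values).
    MAX_LOSS_PERCENT = 0.1
    MAX_LATENCY_MS = 30.0

    def run_e2e_test(service_chain: list[str]) -> TestResult:
        """Send test traffic through the complete chain of VNFs and measure
        the KPIs. Stubbed with fixed values here; a real implementation
        would drive active test agents at the service endpoints."""
        return TestResult(loss_percent=0.05, latency_ms=22.0)

    def safe_to_deploy(service_chain: list[str]) -> bool:
        result = run_e2e_test(service_chain)
        return (result.loss_percent <= MAX_LOSS_PERCENT
                and result.latency_ms <= MAX_LATENCY_MS)

    if __name__ == "__main__":
        chain = ["vFirewall", "vDPI", "vCache"]  # example service chain
        verdict = "deploy" if safe_to_deploy(chain) else "hold back"
        print(f"New VNF version: {verdict}")

Only when the gate passes would the new software version move on toward the production network.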

There is one key difference here between OTT players and network-based telecom operators: while an OTT provider’s responsibility covers service production in a central location and delivery to consumers on a best-effort basis, a service provider’s responsibility spans a distributed, quality-assured network.

This is the main reason lab testing is not enough: it is impossible to simulate real, end-to-end networks in any lab – although many have tried, and failed. This means that when services are deployed in production environments, operators must also validate that the service works end-to-end from a customer perspective. This is known as a service activation test, and it will become much more important in future dynamic and programmable networks, where it provides real-time feedback to orchestrators that the changes they have made actually work.
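As a rough illustration of the idea, the sketch below probes a service endpoint from the outside, the way a customer would reach it, and produces a pass/fail verdict the orchestrator could act on. The endpoint, threshold, and TCP-connect probe are simplifications for the example; real service activation tests (for example, ITU-T Y.1564 methodologies) measure loss, delay, and throughput with dedicated test traffic.

    # Hypothetical sketch of a service activation test: after the
    # orchestrator changes the network, measure the service end-to-end
    # from the customer perspective and report pass/fail in real time.
    import socket
    import time

    def connect_latency_ms(host: str, port: int) -> float:
        """Crude probe: time a TCP connect to the service endpoint."""
        start = time.monotonic()
        with socket.create_connection((host, port), timeout=2.0):
            pass
        return (time.monotonic() - start) * 1000.0

    def activation_test(host: str, port: int, max_latency_ms: float) -> bool:
        try:
            return connect_latency_ms(host, port) <= max_latency_ms
        except OSError:
            return False  # endpoint unreachable: the change did not work

    # The orchestrator (integration details assumed) keeps the change on
    # True and rolls it back on False.
    print(activation_test("example.com", 443, max_latency_ms=200.0))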

To the network operator, each VNF (e.g., the vFirewall or vCache) can be seen as a unit or component, and, most importantly, each should be tested by the VNF vendor before delivery to the service provider.

Deploy Small Improvements Continuously 

Continuous delivery (CD) is a software engineering approach in which teams produce software in short cycles, ensuring that the software can be reliably released at any time.

Continuous deployment is the deployment or release of code to production as soon as it is ready.

While improvements can be continuously tested and delivered (CD), that does not mean they are instantly deployed into production. For continuous deployment to be practically achievable, two things must typically be in place:

  • Extensive test automation to ensure that new functionality does not result in incidents capable of degrading service levels or otherwise affecting customers
  • The necessary tools and architecture that enable the “recall” of new features when a defect has not been detected by automated tests

In other words, it is not possible to deliver new services into the network continuously without having a commitment to ongoing test automation. Nor is it possible to deploy services in the production network in an automated way unless the deployment, too, is tested in an automated way. This is critical to ensuring customer quality, as well as to protecting the reputation of the service provider when new services fail.
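Put together, the deployment step itself becomes code that is exercised and guarded like any other. A minimal sketch of that loop, with placeholder deploy and recall helpers standing in for real orchestration tooling, might look like this:

    # Hypothetical continuous-deployment gate with automated "recall":
    # deploy a release, run the automated post-deployment tests, and
    # roll back automatically if anything regresses.

    def deploy(release: str) -> None:
        print(f"Deploying {release}")      # placeholder for real tooling

    def rollback(release: str) -> None:
        print(f"Recalling {release}")      # placeholder for real tooling

    def post_deployment_tests_pass() -> bool:
        # Placeholder for the automated test suite run against the
        # production service (e.g., the activation test shown earlier).
        return True

    def continuous_deploy(release: str) -> bool:
        deploy(release)
        if post_deployment_tests_pass():
            return True
        rollback(release)                  # automated recall on failure
        return False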

If the orchestrator gets instant feedback after a change, it can also roll back the change if something goes wrong. NETCONF, a protocol commonly used by orchestrators for network configuration, defines a “Rollback-on-Error” capability for exactly this purpose.
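As a minimal sketch of how that can look from an orchestrator’s side, the example below uses ncclient, a widely used Python NETCONF client. The host, credentials, and configuration payload are placeholders; the error_option argument asks the server to undo the entire edit if any part of it fails.

    # Apply a configuration change over NETCONF, asking the server to
    # roll the whole change back if any part of the edit fails.
    from ncclient import manager

    CONFIG = """
    <config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
      <!-- device-specific configuration payload goes here -->
    </config>
    """

    with manager.connect(host="192.0.2.1", port=830,
                         username="admin", password="secret",
                         hostkey_verify=False) as m:
        # Only meaningful if the server advertises the capability.
        if ":rollback-on-error" in m.server_capabilities:
            m.edit_config(target="running", config=CONFIG,
                          error_option="rollback-on-error")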

Automate the Setup of Your Infrastructure

This is an area where the industry has already gained a lot of momentum, as can be seen from the continued adoption and support of OpenStack, Heat Orchestration Templates (HOT), service orchestrators, and more. More of this is certainly to be expected, and soon.
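As one small example of what this looks like in practice, the sketch below uses the openstacksdk library to hand Heat a tiny template: the infrastructure is described as data and created by code, so the same setup can be re-created on demand. The cloud name and template contents are placeholders for the example.

    # Create a minimal Heat stack: infrastructure setup as repeatable code.
    import openstack

    TEMPLATE = {
        "heat_template_version": "2016-04-08",
        "resources": {
            "test_net": {
                "type": "OS::Neutron::Net",
                "properties": {"name": "service-test-net"},
            },
        },
    }

    conn = openstack.connect(cloud="mycloud")  # credentials from clouds.yaml
    stack = conn.orchestration.create_stack(name="demo-stack",
                                            template=TEMPLATE)
    print(stack.id, stack.status)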

Summary

The global telecom industry is embarking on a journey toward software-defined and dynamic networks. The good news is that these waters are not necessarily uncharted. By learning from the enterprise segment, which has already made a similar transition, telecom operators can avoid making the same mistakes – giving themselves the best possible chances of a smooth and seamless evolution.

Marcus Friman, Chief Product Officer & co-founder, Netrounds