Thomas van Briel is SVP, Architecture & Strategy at Deutsche Telekom. He talked to Annie Turner about DT’s end-game ambitions – creating value for customers.
Thomas van Briel has worked in architecture-related roles at Deutsche Telekom (DT) in Germany for five years. His background is in software development, starting with Nortel in 1998, followed by a long stint at Swisscom.
“The common denominator that always draws me is a combination of technology development, organisational development and business development. That’s the heart of what I do and it fits very nicely with the topic of network automation,” he says.
This article first appeared on FutureNet World and is reproduced with kind permission.
Bridging the innovation gap
Reflecting on progress since he joined DT, van Briel comments, “DT has always been innovative and at the forefront of shaping technology, especially at group level, but we had something of a chasm between innovative parts of the organisation and the operational shopfloor. We have managed to bridge that gap quite a bit, [enabling] people to work together, to build new skills and bring new stuff into operations, leveraging DT’s excellent engineering and execution capabilities.”
One example is DT’s voice operation in Germany, which has been rebuilt, he says, “in a cloud-native manner and [we] brought in brutal automation to realise the ‘three-two-one-zero vision’. This sets the goal of three months to roll out a feature (compared to the typical one-to-two years in a traditional IMS environment), two days to deploy new software after it is released by a vendor, one day to implement a new bug fix or security patch, and zero ‘nightshifts’.”
Another example is the transport network and IP core, “where we’ve introduced orchestration solutions, for example to accelerate setting up new network links or to handle failover when a link goes down,” van Briel adds.
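To make the failover case concrete, here is a minimal sketch of what such event-driven link orchestration might look like. The controller interface, link model and actions are illustrative assumptions, not DT’s actual implementation.

```python
# Illustrative sketch of automated link failover. The controller API
# and link model are hypothetical stand-ins for an SDN controller and
# real-time inventory.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Link:
    name: str
    up: bool
    backup: Optional[str]  # name of a pre-computed backup path, if any

def handle_link_event(link: Link, controller) -> None:
    """React to a link-state change pushed by the network."""
    if link.up:
        return  # link is healthy, nothing to do
    if link.backup is None:
        controller.raise_alarm(f"{link.name} down, no backup path")
        return
    # Shift traffic to the backup path, then open a repair ticket
    # instead of paging an operator in the middle of the night.
    controller.activate_path(link.backup)
    controller.open_ticket(f"{link.name} failed over to {link.backup}")
```

The point is less the code than the operational shift: the reaction to a failure is codified once and executed in seconds, rather than rediscovered by a human on each incident.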
Organisational transformation is always challenging and difficult to set up properly. He explains, “To build or acquire competences, the way we work crosses functions and [involves] partners. We built cross-functional teams with internal people in other parts of the continent, like in St. Petersburg, to bring expertise in software development, modelling and orchestration to the existing operational units. That’s a very effective way to change the spirit in an organisation, to put teams [together] and persistently work on a subject…to make it work in practice.”
Making ONAP operational
DT has been intensively involved in the Open Network Automation Platform (ONAP) community for some time and originally had lab deployments before deciding, “that if we really want to give it a chance, we need operational scope. We combined it with our O-RAN efforts, to do O-RAN automation in a pilot production via ONAP,” van Briel says.
The agile delivery structure means operations in the O-RAN trial deployment were automated from the start, and DT is now focused on building and “delivering one use case after another in an independent service management and orchestration platform that can eventually scale. We make sure those innovation islands, ONAP is just one of them, have a degree of freedom so they can adopt new practices. That is the journey of the last couple of years,” he explains.
Drive network automation
Is there a grand plan to join up the islands? Van Briel says, “It’s where our vision comes in. We call it DNA for Drive Network Automation…It’s about customer experience, velocity and efficiency – to become an orchestrated network of networks and services”. How will this come about?
Van Briel says at domain level, DT uses a continuous integration and continuous delivery (CI/CD) pipeline, a test automation framework, real-time inventory, controllers for applications, cloud and infrastructure, as well as data analytics and messaging, security capabilities and, last but not least, intent-driven orchestration.
“Those are pretty much the components that we need, and most of them incorporate open source tools to some degree. The degree of open source utilisation varies – from vendors that integrate open source components in their automation products to complete open source approaches like ONAP. We pick and choose our approach from the complete range of options.
“Some components are reused across domains, but not individually instantiated, like a data lake, Kafka bus or common data ingest. Some parts of the CI/CD pipelines are engineered and operated centrally, but then you still instantiate your own pipeline locally: you build your own practices, your own models, and put the things together there.”
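As a rough illustration of “engineered centrally, instantiated locally”, the sketch below composes a domain pipeline from shared stage definitions plus a domain-specific step. The stage names and composition interface are invented for the example.

```python
# Hypothetical sketch: central teams publish reusable pipeline stages;
# each domain composes them with its own models and local steps.
from typing import Callable, Dict, List

def build(artifact: str) -> str:  return f"built:{artifact}"
def scan(artifact: str) -> str:   return f"scanned:{artifact}"
def deploy(artifact: str) -> str: return f"deployed:{artifact}"

# Centrally engineered and operated stage catalogue (illustrative).
CENTRAL_STAGES: Dict[str, Callable[[str], str]] = {
    "build": build, "scan": scan, "deploy": deploy,
}

def instantiate_pipeline(central: List[str],
                         local_steps: List[Callable[[str], str]]):
    """Compose shared stages and domain-specific steps into one pipeline."""
    stages = [CENTRAL_STAGES[name] for name in central] + local_steps
    def run(artifact: str) -> str:
        for stage in stages:
            artifact = stage(artifact)
        return artifact
    return run

# A transport-domain team adds its own model-validation step; the
# simplistic stage ordering here is for illustration only.
validate_models = lambda a: f"validated:{a}"
pipeline = instantiate_pipeline(["build", "scan", "deploy"], [validate_models])
print(pipeline("ip-core-config"))
```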
Each DevOps team is in charge of its own automation process. “They create a bounded context within a domain that can live on its own, adding resilience, independence and also a means to deal with complexity overall,” van Briel elaborates.
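The intent-driven orchestration mentioned above can be pictured as a reconciliation loop running inside each bounded context: the team declares the desired state and the domain converges on it. This is a generic sketch of the pattern with invented names, not DT’s platform.

```python
# Generic intent-reconciliation sketch: compare declared intent with
# observed state and act only on the difference.
intent = {"vpn-123": {"bandwidth_mbps": 500, "sites": 3}}

def observe_state() -> dict:
    """Stand-in for querying real-time inventory (assumed interface)."""
    return {"vpn-123": {"bandwidth_mbps": 200, "sites": 3}}

def reconcile(intent: dict, state: dict) -> list:
    actions = []
    for service, desired in intent.items():
        actual = state.get(service, {})
        for key, value in desired.items():
            if actual.get(key) != value:
                actions.append((service, key, value))
    return actions

# A domain controller would then execute the resulting actions.
print(reconcile(intent, observe_state()))
# -> [('vpn-123', 'bandwidth_mbps', 500)]
```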
Next, all the services are exposed from the domains, “So that they become usable by overarching services or by other domains – that’s where standards like ETSI’s Zero-touch network and service management (ZSM) and TM Forum’s Open APIs come in. We use [the APIs] for catalogue management, for configuration and lifecycle management, and wherever we find them useful,” he says, noting that as they are of necessity generic, it takes some effort to make them operational.
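For flavour, here is a hedged sketch of consuming one such standard interface. The path follows TM Forum’s published Service Ordering (TMF641) pattern, but the host, version and payload details are assumptions for illustration.

```python
# Sketch of placing a service order through a TM Forum-style Open API
# (TMF641 Service Ordering). Host and payload are illustrative only.
import json
import urllib.request

ORDER_URL = ("https://orchestrator.example.com"
             "/tmf-api/serviceOrdering/v4/serviceOrder")

order = {
    "description": "Activate 500 Mbps enterprise VPN",
    "serviceOrderItem": [{
        "id": "1",
        "action": "add",
        "service": {"serviceSpecification": {"id": "enterprise-vpn"}},
    }],
}

request = urllib.request.Request(
    ORDER_URL,
    data=json.dumps(order).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(request) as response:
    print(json.load(response)["state"])  # e.g. "acknowledged"
```

Because every domain speaks the same catalogue, ordering and lifecycle dialect, an overarching orchestrator can treat RAN, transport and core uniformly – which is the “bricks” argument van Briel makes next.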
“Having a set of common standards, guidelines, interfaces, practices etcetera applied, no matter where this automation is happening, will enable us to put the bricks together and make it work across a huge organisation – a challenge in itself,” he continues.
Lessons from virtualisation
Like most operators, DT learned hard lessons about automation through virtualisation efforts. Van Briel says, “When we tried to deal with [virtualised] workloads, we always ended up with specific infrastructure tightly mingled with the application at hand, and limited automation…We also tried a group-wide set of services to be operated in a cloudified fashion and that proved difficult, mainly because the available workloads and underlying technologies were immature.”
In short, often the technologies hadn’t been created for that purpose and DT is determined not to repeat the experience. Van Briel explains, “We developed standard test cases so if any vendor says it is cloud native, we go into the lab and have a look against the [test] cases because we were burned in the past. For me, that’s a foundation.”
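A flavour of what such a standard lab test could look like, sketched with the Kubernetes Python client: delete one pod of the vendor workload and assert the service keeps answering. The namespace, label selector and health URL are made up for the example.

```python
# Sketch of a "prove you are cloud native" lab check: kill one pod and
# verify the service survives. All names here are illustrative.
import time
import urllib.request
from kubernetes import client, config

def service_is_up(url: str) -> bool:
    try:
        return urllib.request.urlopen(url, timeout=2).status == 200
    except OSError:
        return False

config.load_kube_config()
v1 = client.CoreV1Api()

pods = v1.list_namespaced_pod("vendor-lab", label_selector="app=ims-core")
victim = pods.items[0].metadata.name
v1.delete_namespaced_pod(victim, "vendor-lab")  # simulate a failure

# A genuinely cloud-native workload should recover with no outage.
for _ in range(30):
    assert service_is_up("http://ims-core.vendor-lab/healthz")
    time.sleep(1)
print("survived pod loss without failing a health check")
```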
Creating value
Vendors are under pressure in another major area too: disaggregation. He states, “It is very important to us as a means to accelerate innovation; to make sure that we have sufficient choice within the vendor landscape and [room for] new players. This has some consequences: you must go deeper into value creation yourself, in integration, in testing, in automation, and adopt different practices to deal with that.”
He continues, “In that sense, there are three changes in the industry: cloudification that enables it all; disaggregation for smaller components to be handled and integrated; and automation to deal with the complexity and deliver services from all of it. This hat-trick is changing the way operations run.”
Having created some value through its islands of innovation, “Now it’s about making it broader, and making sure that things can be orchestrated across networks and domains, and across layers to deliver more complex solutions to the customer,” van Briel says.
He sees 5G as foundational to creating value, pointing out that the 5G Core architecture was designed to be cloud native and to support better automation. He says, “By having a decomposed architecture, it is possible to combine pieces from different vendors to bring something like slicing to life where we combine aspects of the RAN, transport and core networks to deliver end-to-end service to a customer.”
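Conceptually, that end-to-end slice is stitched together from per-domain pieces, each allocated by its own domain orchestrator. The toy sketch below shows the composition idea; the interfaces and the latency split are invented.

```python
# Toy sketch of composing an end-to-end slice from per-domain subnets.
# Each domain orchestrator allocates its share; the end-to-end layer
# only stitches the results together. Interfaces are invented.
from dataclasses import dataclass
from typing import Dict

@dataclass
class SliceSubnet:
    domain: str
    latency_ms: float

def allocate(domain: str, latency_budget_ms: float) -> SliceSubnet:
    """Stand-in for a per-domain orchestrator (RAN, transport, core)."""
    return SliceSubnet(domain, latency_budget_ms)

def compose_slice(latency_budget_ms: float) -> Dict[str, SliceSubnet]:
    # Split the end-to-end latency budget across domains; the
    # 50/30/20 split is purely illustrative.
    shares = {"ran": 0.5, "transport": 0.3, "core": 0.2}
    return {d: allocate(d, latency_budget_ms * s) for d, s in shares.items()}

for subnet in compose_slice(10.0).values():
    print(f"{subnet.domain}: {subnet.latency_ms} ms of the budget")
```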
How far are we from 5G slicing being a commercial product and a self-service one at that? Van Briel says, “This will evolve in steps. It’s a matter of how quickly the markets will adopt [them]…The technological capabilities are pretty much ready to go. The question now is how to bring it to the marketplace.”
Managing a network of networks
He adds, “I think we will first see some deployments in campuses and we are exploring other approaches like exposing APIs which you can use for slicing or, more generally, to control the quality of your connectivity and enable revenue streams that go beyond our own company through B2B2C models.”
In other words, an operator could expose its services to another operator to run over that second operator’s infrastructure, thereby extending the originator’s footprint to serve its own customers or its customers’ customers.
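Sketched from the operator’s side, such an exposed API might do little more than map a partner’s quality request onto an internal slice or QoS profile. The profile names and binding logic below are invented for illustration.

```python
# Invented sketch of the operator side of a quality-of-connectivity API:
# map a developer's request onto an internal slice/QoS profile.
QOS_TO_SLICE = {
    "low-latency": "slice-urllc-01",
    "high-throughput": "slice-embb-02",
    "default": "slice-best-effort",
}

def create_qod_session(device_ip: str, qos_profile: str) -> dict:
    """Handle a quality-on-demand request from a B2B2C partner."""
    slice_id = QOS_TO_SLICE.get(qos_profile, QOS_TO_SLICE["default"])
    # A real implementation would bind the subscriber's traffic to the
    # slice via the core's policy functions; here we only echo back.
    return {"device": device_ip, "qosProfile": qos_profile,
            "boundSlice": slice_id, "status": "active"}

print(create_qod_session("198.51.100.10", "low-latency"))
```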
Van Briel says, “That’s the end-game. We call it managing a network of networks. There are a number of flavours. One is being able to integrate a connectivity service from a partner to deliver – especially in the beginning – to B2B customers.”
The intention is to orchestrate the provisioning, “by combining overlay and underlay services on- and off-footprint, making sure they deliver a connectivity solution to the B2B customer. That’s the first goal”.
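A toy rendering of that first goal: choose an underlay per customer site (own footprint where available, a partner’s elsewhere) and bind a single managed overlay across all of them. Partner names and the selection rule are invented.

```python
# Toy "network of networks" sketch: pick an underlay per site and run
# one overlay service across on- and off-footprint sites. All names
# are invented for illustration.
from typing import Dict, List, Tuple

OWN_FOOTPRINT = {"DE", "NL", "PL"}            # countries served on-net
PARTNERS = {"US": "partner-a", "SG": "partner-b"}

def select_underlay(country: str) -> str:
    if country in OWN_FOOTPRINT:
        return "own-network"
    return PARTNERS.get(country, "internet-breakout")  # last resort

def provision_vpn(sites: List[Tuple[str, str]]) -> Dict:
    """Combine per-site underlays into one overlay VPN for a customer."""
    return {
        "overlay": "managed-sdwan-vpn",
        "sites": {site: select_underlay(cc) for site, cc in sites},
    }

print(provision_vpn([("berlin-hq", "DE"), ("chicago", "US")]))
```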
He agrees the end-game is ambitious and challenging but points out, “This is not just driven out of the networking department, but a strategic imperative we set ourselves. We have great visionaries…they are thinking on a scale up to global orchestration. We must be able to integrate our services with the multi-cloud approach from the customer’s perspective, and make it work.
“It might be to the point where we enable innovation by utilising our services sitting on top of our network capabilities…we are then integrated into the value chain without having the end-customer relationship in every single instance.”
Van Briel concludes, “If we don’t build the muscle to integrate all that as quickly and as flexibly as customers expect for their multi-cloud solutions and applications…over time, we will lose relevance and be pushed down the chain in value creation.
“We’re in a great position to ensure that we are the ones to help our customers solve their needs…As we reach out with our capabilities – be that in security or connectivity for underlay networks or other things – we will really shine.”