
Showing papers by "Deutsche Telekom" published in 2020


Journal ArticleDOI
TL;DR: The superheterodyne approach proves to be the most promising of all, being compliant with the new IEEE standard for 100 Gb/s wireless transmissions and showing compatibility with accessible, already available modems.
Abstract: This article presents a wireless communication point-to-point link operated in the low terahertz (THz) range, at a center frequency of 300 GHz. The link is composed of all-electronic components based on monolithic millimeter wave integrated circuits fabricated in an InGaAs metamorphic high electron mobility transistor technology. Different configurations and architectures are compared and analyzed. The superheterodyne approach proves to be the most promising of all, being compliant with the new IEEE standard for 100 Gb/s wireless transmissions and showing compatibility with accessible, already available modems. The first option for realizing the superheterodyne configuration is combining the 300-GHz transmitter and receiver with off-the-shelf up- and down-converters operating at a center frequency of 10 GHz. In this case, data rates of up to 24 Gb/s are achieved. The second option employs a fast arbitrary waveform generator that uses a carrier frequency to up-convert the baseband data. In this case, data rates of up to 60 Gb/s and transmission distances of up to 10 m are achieved with complex modulated signals like 16-QAM and 32-QAM. The baseband signal is composed of pseudo-random binary sequences and is analyzed offline using fast analog-to-digital converters. In the superheterodyne configuration, multichannel transmission is demonstrated. Channel data rates of 10.2 Gb/s using 64-QAM are achieved. The successful transmission of aggregated channels in this configuration shows the potential of THz communication for future high data rate applications.

63 citations
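
As a quick plausibility check on the rates quoted above, the gross bit rate of a single-carrier QAM signal is simply the symbol rate times log2 of the constellation size. A minimal sketch; the ~1.7 Gbaud symbol rate is inferred from the reported numbers, not stated in the abstract:

```python
from math import log2

def gross_bit_rate(symbol_rate_baud: float, qam_order: int) -> float:
    """Gross (uncoded) bit rate of a single-carrier QAM signal."""
    return symbol_rate_baud * log2(qam_order)

# 64-QAM at an assumed ~1.7 Gbaud reproduces the reported 10.2 Gb/s per channel.
print(gross_bit_rate(1.7e9, 64) / 1e9)  # -> 10.2 (Gb/s)
```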


Proceedings ArticleDOI
19 Jun 2020
TL;DR: An Android-based tool deployed on a 5G test platform is used to record radio link parameters in the up- and downlink; the uplink shows a significantly lower throughput than the downlink, comparable to 4G, at a few tens of Mbit/s.
Abstract: We perform experiments on the wireless communication between a drone flying at different heights and a commercial 5G base station. An Android-based tool deployed on a 5G test platform is used to record radio link parameters in the up- and downlink. In the downlink, measurements show a throughput of 600 Mbit/s on average with peaks above 700 Mbit/s. The uplink has a significantly lower throughput, comparable to 4G, with a few tens of Mbit/s.

28 citations


Proceedings ArticleDOI
TL;DR: In this article, a head motion prediction model is proposed to reduce the motion-to-photon (M2P) latency for different look-ahead times in volumetric videos.
Abstract: Volumetric video is an emerging key technology for immersive representation of 3D spaces and objects. Rendering volumetric video requires substantial computational power, which is particularly challenging for mobile devices. To mitigate this, we developed a streaming system that renders a 2D view from the volumetric video at a cloud server and streams a 2D video stream to the client. However, such network-based processing increases the motion-to-photon (M2P) latency due to the additional network and processing delays. In order to compensate for the added latency, prediction of the future user pose is necessary. We developed a head motion prediction model and investigated its potential to reduce the M2P latency for different look-ahead times. Our results show that the presented model reduces the rendering errors caused by the M2P latency compared to a baseline system in which no prediction is performed.

22 citations
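
The abstract does not specify the prediction model; as an illustration of the general idea, linear extrapolation of recent pose samples to the look-ahead time is a common baseline. A minimal sketch under that assumption (function and variable names are hypothetical):

```python
import numpy as np

def predict_pose(timestamps, poses, look_ahead_s):
    """Least-squares linear extrapolation of head pose, fitted per
    dimension (e.g. yaw, pitch, roll); poses has shape (n_samples, dims)."""
    t = np.asarray(timestamps, dtype=float)
    p = np.asarray(poses, dtype=float)
    A = np.vstack([t, np.ones_like(t)]).T          # design matrix [t, 1]
    coef, *_ = np.linalg.lstsq(A, p, rcond=None)   # slope and intercept per dim
    return coef[0] * (t[-1] + look_ahead_s) + coef[1]

# 100 ms of synthetic yaw samples at 100 Hz, predicted 60 ms ahead
ts = np.arange(0.0, 0.1, 0.01)
yaw = 2.0 * ts + 0.1                               # deg; true motion 2 deg/s
print(predict_pose(ts, yaw[:, None], 0.06))        # -> ~[0.4]
```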


Journal ArticleDOI
TL;DR: The paper presents a series of three new video quality model standards for the assessment of sequences of up to UHD/4K resolution, developed in a competition within the International Telecommunication Union (ITU-T), Study Group 12, in collaboration with the Video Quality Experts Group (VQEG).
Abstract: The paper presents a series of three new video quality model standards for the assessment of sequences of up to UHD/4K resolution. They were developed in a competition within the International Telecommunication Union (ITU-T), Study Group 12, in collaboration with the Video Quality Experts Group (VQEG), over a period of more than two years. A large video quality test set with a total of 26 individual databases was created, with 13 used for training and 13 for validation and selection of the winning models. For each database, video quality laboratory tests were run with at least 24 subjects each. The 5-point Absolute Category Rating (ACR) scale was used for rating, calculating Mean Opinion Scores (MOS) as ground-truth. To represent today’s commonly applied HTTP-based adaptive streaming context, the test sequences comprise a variety of encoding settings, bitrates, resolutions and framerates for the three codecs H.264/AVC, H.265/HEVC and VP9, applied to a wide range of source sequences of around 8 s duration. Processing was carried out with an FFmpeg-based processing chain developed specifically for the competition, and via upload and encoding through exemplary online streaming services. The resulting data represents the largest, lab-test-based dataset used for video quality model development to date, with a total of around 5,000 test sequences. The paper addresses the three models ultimately standardized in the P.1204 Recommendation series, resulting in different model types for different applications: (i) Rec. P.1204.3, no-reference bitstream-based, with access to encoded bitstream information; (ii) P.1204.4, pixel-based, using information from the reference and the processed video; and (iii) P.1204.5, no-reference hybrid, using both bitstream- and pixel-information without knowledge of the reference. The paper outlines the development process and provides holistic details about the statistical evaluation, test databases, model algorithms and validation results, as well as a performance comparison with state-of-the-art models.

19 citations
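
For reference, the ground-truth MOS values and the headline validation metrics used in such competitions are straightforward to compute. A minimal sketch with synthetic numbers:

```python
import numpy as np

def mos(acr_ratings):
    """Mean Opinion Score of one sequence from 5-point ACR ratings."""
    return float(np.mean(acr_ratings))

def pearson_and_rmse(mos_true, mos_pred):
    """Correlation and error metrics typically reported for quality models."""
    mos_true, mos_pred = np.asarray(mos_true), np.asarray(mos_pred)
    r = float(np.corrcoef(mos_true, mos_pred)[0, 1])
    rmse = float(np.sqrt(np.mean((mos_true - mos_pred) ** 2)))
    return r, rmse

print(mos([4, 5, 4, 3, 4]))                               # -> 4.0
print(pearson_and_rmse([1.8, 3.2, 4.5], [2.0, 3.0, 4.4]))
```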


Proceedings ArticleDOI
01 May 2020
TL;DR: A bitstream-based no-reference video quality model developed as part of the latest model-development competition conducted by ITU-T Study Group 12 and the Video Quality Experts Group (VQEG), “P.NATS Phase 2” is described.
Abstract: With the increasing requirement of users to view high-quality videos with a constrained bandwidth, typically realized using HTTP-based adaptive streaming, it becomes more and more important to determine the quality of the encoded videos accurately, to assess and possibly optimize the overall streaming quality. In this paper, we describe a bitstream-based no-reference video quality model developed as part of the latest model-development competition conducted by ITU-T Study Group 12 and the Video Quality Experts Group (VQEG), “P.NATS Phase 2”. It is now part of the new P.1204 series of Recommendations as P.1204.3. It can be applied to bitstreams encoded with H.264/AVC, HEVC and VP9, using various encoding options, including resolution, bitrate, framerate and typical encoder settings such as number of passes, rate control variants and speeds. The proposed model follows an ensemble-modelling-inspired approach with weighted parametric and machine-learning parts to efficiently leverage the performance of both approaches. The paper provides details about the general approach to modelling, the features used and the final feature aggregation. The model creates per-segment and per-second video quality scores on the 5-point Absolute Category Rating scale, and is applicable to segments of 5–10 seconds duration. It covers both PC/TV and mobile/tablet viewing scenarios. We outline the databases on which the model was trained and validated as part of the competition, and perform an additional evaluation using a total of four independently created databases, where resolutions varied from 360p to 2160p, and frame rates from 15–60 fps, using realistic coding and bitrate settings. We found that the model performs well on the independent dataset, with a Pearson correlation of 0.942 and an RMSE of 0.42. We also provide an open-source reference implementation of the described P.1204.3 model, as well as the multi-codec bitstream parser required to extract the input data, which is not part of the standard.

19 citations
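
The standardized P.1204.3 coefficients are not reproduced here; purely to illustrate the ensemble idea of blending a parametric core with a machine-learning part, a hedged sketch (the 0.5 weight is a placeholder, not the standardized value):

```python
def ensemble_score(parametric_mos: float, ml_mos: float,
                   w_parametric: float = 0.5) -> float:
    """Weighted blend of a parametric model output and an ML model output,
    clipped to the 5-point ACR scale."""
    blended = w_parametric * parametric_mos + (1.0 - w_parametric) * ml_mos
    return min(5.0, max(1.0, blended))

print(ensemble_score(3.8, 4.2))  # -> 4.0
```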


Journal ArticleDOI
01 Dec 2020
TL;DR: The SARGON ontology is presented, which extends SAREF with cross-cutting, domain-specific information representing the smart energy domain, covering building and electrical grid automation together; it is powered by smart energy standards and IoT initiatives as well as real use cases.
Abstract: The internet of things (IoT) is a paradigm marked by a fragmentation of standards, platforms, services, and technologies, often scattered among different vertical domains. The smart energy system is one of the vertical domains in which IoT technology is investigated. At the early stages of studying IoT domains that deal with big data and interoperability, a semantic layer can serve to address the difficulty of heterogeneity in information and data representation from IoT devices. In 2015, the smart appliance reference ontology (SAREF) was introduced to interconnect data of smart devices and facilitate the communication between IoT devices that use different protocols and standards. The modular design of SAREF permits the definition of any new vertical domain describing functions that the devices perform. In this study, SARGON, the SmArt eneRGy dOmain oNtology, is presented, which extends SAREF with cross-cutting, domain-specific information representing the smart energy domain and covers building and electrical grid automation together. The SARGON ontology is powered by smart energy standards and IoT initiatives, as well as real use cases. It involves classes, properties, and instances explicitly created to cover the building and electrical grid automation domain. This study exhibits the development of SARGON and demonstrates it through a web application.

18 citations
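
To give a flavour of what extending SAREF looks like in practice, a minimal rdflib sketch is shown below; the SARGON namespace IRI and the GridSensor class are illustrative stand-ins, not taken from the published ontology:

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS

SAREF = Namespace("https://saref.etsi.org/core/")
SARGON = Namespace("http://example.org/sargon#")   # illustrative IRI

g = Graph()
g.bind("saref", SAREF)
g.bind("sargon", SARGON)

# A smart-energy class extending a SAREF device, SARGON-style.
g.add((SARGON.GridSensor, RDF.type, RDFS.Class))
g.add((SARGON.GridSensor, RDFS.subClassOf, SAREF.Device))
g.add((SARGON.GridSensor, RDFS.comment,
       Literal("Sensor attached to electrical grid automation equipment.")))

print(g.serialize(format="turtle"))
```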


Book ChapterDOI
14 Sep 2020
TL;DR: DANTE is presented: a framework and algorithm for mining darknet traffic that learns the meaning of targeted network ports by applying Word2Vec to observed port sequences and uses a novel and incremental time-series cluster tracking algorithm on observed sequences to detect recurring behaviors and new emerging threats.
Abstract: Trillions of network packets are sent over the Internet to destinations which do not exist. This ‘darknet’ traffic captures the activity of botnets and other malicious campaigns aiming to discover and compromise devices around the world. In this paper, we present DANTE: a framework and algorithm for mining darknet traffic. DANTE learns the meaning of targeted network ports by applying Word2Vec to observed port sequences. To detect recurring behaviors and new emerging threats, DANTE uses a novel and incremental time-series cluster tracking algorithm on the observed sequences. To evaluate the system, we ran DANTE on a full year of darknet traffic (over three terabytes) collected by the largest telecommunications provider in Europe, Deutsche Telekom, and analyzed the results. DANTE discovered 1,177 new emerging threats and was able to track malicious campaigns over time.

18 citations
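
The core embedding step is standard Word2Vec applied to port sequences instead of word sequences. A minimal sketch with synthetic scan data (the port groupings are illustrative, not taken from the paper):

```python
from gensim.models import Word2Vec

# Each "sentence" is the sequence of ports one source probed on the darknet.
port_sequences = [
    ["23", "2323", "7547"],   # synthetic IoT-style scanning
    ["23", "2323", "5555"],
    ["445", "3389", "139"],   # synthetic Windows-service scanning
    ["445", "139", "3389"],
]

model = Word2Vec(sentences=port_sequences, vector_size=32,
                 window=3, min_count=1, sg=1, epochs=50)

# Ports scanned in similar contexts end up with similar embeddings.
print(model.wv.most_similar("23", topn=2))
```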


Proceedings ArticleDOI
07 Jun 2020
TL;DR: This research work presents conceptual considerations and quantitative evaluations into how integrating computation offloading to edge computing servers would offer a paradigm shift for an effective deployment of autonomous drones.
Abstract: This research work presents conceptual considerations and quantitative evaluations into how integrating computation offloading to edge computing servers would offer a paradigm shift for the effective deployment of autonomous drones. The specific mission that has been considered is collaborative autonomous navigation and mapping in a 3D environment by a small drone network. Specifically, in order to achieve this mission, each drone is required to complete a low-latency, highly compute-intensive task in a timely manner. The proposed model decides for each task, while considering the impact on performance and mission requirements, whether to (i) compute locally, (ii) offload to the edge server, or (iii) offload to the ground station. Extensive simulation work was performed to assess the effectiveness of the proposed scheme compared to other models.

17 citations
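
The paper's exact decision model is not given in the abstract; the essence of such offloading schemes is comparing estimated completion times per placement. A hedged sketch under simple assumptions (compute time plus data transfer, ignoring queueing and energy):

```python
def choose_placement(task_cycles, data_bytes, cpu_rates, link_rates):
    """Pick local / edge / ground execution by estimated completion time.
    cpu_rates: cycles/s per site; link_rates: bytes/s to each remote site."""
    est = {"local": task_cycles / cpu_rates["local"]}
    for site in ("edge", "ground"):
        est[site] = task_cycles / cpu_rates[site] + data_bytes / link_rates[site]
    return min(est, key=est.get), est

choice, est = choose_placement(
    task_cycles=5e9, data_bytes=2e6,
    cpu_rates={"local": 1e9, "edge": 20e9, "ground": 8e9},
    link_rates={"edge": 50e6, "ground": 10e6})
print(choice, est)   # -> edge (0.25 s compute + 0.04 s transfer)
```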


Proceedings ArticleDOI
15 Mar 2020
TL;DR: Measurement and simulation of the 60 GHz in-street backhaul propagation channel show strong channel sparsity when antennas are located 3 meters above the ground, caused by the many in-street obstacles.
Abstract: The large gains provided by millimeter-wave (mmWave) frequencies in terms of available bandwidth have made them a popular choice to be included in different standards like the 5G-NR and IEEE 802.11ay. Although mmWave frequencies offer an opportunity for large capacities, they face many challenges related to the propagation channel such as strong blockage or attenuation losses. In this paper, the 60 GHz in-street backhaul propagation channel is measured and evaluated along with ray-based simulations in two different scenarios: urban canyon and residential. The channel sounder allows for bi-directional path-loss measurements with highly-directive beamforming at both sides, and the simulator benefits from highly accurate LiDAR point cloud data in order to identify the obstacles and compute losses along the direct and indirect paths. Both the measurements and simulations show strong channel sparsity when antennas are located 3 meters above the ground, caused by the many in-street obstacles.

16 citations
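
For orientation, the deterministic floor of any such link budget is the free-space path loss, on top of which the measured blockage and obstacle losses are added. A minimal sketch of the Friis free-space term at 60 GHz:

```python
from math import log10, pi

C = 299_792_458.0  # speed of light in m/s

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB."""
    return 20.0 * log10(4.0 * pi * distance_m * freq_hz / C)

print(round(fspl_db(100.0, 60e9), 1))  # ~108.0 dB over a 100 m street link
```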


Proceedings ArticleDOI
01 Dec 2020
TL;DR: In this paper, a flexible scheme for direct and low-latency state transfer leveraging Software-Defined Networking (SDN) is proposed, which reduces the service migration time by 80%.
Abstract: It is vital for the Tactile Internet to constantly maintain a low-latency control loop between sensors, actuators, and their controlling software applications. Mobile Edge Computing (MEC) is an essential technology that brings the elasticity of cloud computing to run controlling applications in close proximity to controlled objects, thus reducing latency. To support the mobility of the objects, the underlying network has to be capable of migrating MEC applications seamlessly to guarantee this close proximity. However, it is challenging to migrate application states quickly and flexibly without interrupting the control loop. We propose FAST, a flexible scheme for direct and low-latency state transfer leveraging Software-Defined Networking (SDN). Evaluation results show that, compared to the state of the art, FAST reduces the service migration time by 80%.

15 citations


Journal ArticleDOI
TL;DR: The paper addresses the different aspects and levels of knowledge representation and management, from cognitive theories and modeling processes through notation up to processing, tooling and implementation, and presents the ISO 23903 Interoperability and Integration Reference Architecture.
Abstract: Multidisciplinary and highly dynamic pHealth ecosystems according to the 5P Medicine paradigm require careful consideration of systems integration and interoperability within the domain's knowledge space. The paper addresses the different aspects and levels of knowledge representation (KR) and management (KM), from cognitive theories (theories of knowledge) and modeling processes through notation up to processing, tooling and implementation. Thereby, it discusses language and grammar challenges and constraints, but also development process aspects and solutions, thus demonstrating the limitations of data-level considerations. Finally, it presents the ISO 23903 Interoperability and Integration Reference Architecture to solve the addressed problems and to correctly deploy existing standards and work products at any representational level, including data models as well as data model integration and interoperability.

Proceedings ArticleDOI
27 May 2020
TL;DR: This application contains generic interfaces that allow for easy deployment of various augmented/mixed reality clients using the same server implementation; the system uses 6DoF head movement prediction techniques, the WebRTC protocol and hardware video encoding to ensure low latency in the processing chain.
Abstract: Volumetric video is an emerging technology for immersive representation of 3D spaces that captures objects from all directions using multiple cameras and creates a dynamic 3D model of the scene. However, processing volumetric content requires high amounts of processing power and is still a very demanding task for today's mobile devices. To mitigate this, we propose a volumetric video streaming system that offloads the rendering to a powerful cloud/edge server and only sends the rendered 2D view to the client instead of the full volumetric content. We use 6DoF head movement prediction techniques, the WebRTC protocol and hardware video encoding to ensure low latency in different parts of the processing chain. We demonstrate our system using both a browser-based client and a Microsoft HoloLens client. Our application contains generic interfaces that allow for easy deployment of various augmented/mixed reality clients using the same server implementation.

Journal ArticleDOI
TL;DR: In this article, a textile-integrated force-sensor array based on the cross coupling of optical fibers is presented that provides multi-touch capability; the force sensing relies on cross coupling between two fibers aligned orthogonally to each other.
Abstract: A $3\times 3$ textile-integrated force-sensor array is presented that is based on the cross coupling of optical fibers and provides multi-touch capability. The force sensing relies on cross coupling between two fibers that are aligned orthogonally to each other. The actual coupling is improved by an increased contact area and higher scattering at the fiber crossing. A systematic analysis of different parameters is done for the driving and the sensing fiber, which involves materials, size, fabrication parameters as well as the cross-section shape. The results reveal that the combination of a rather stiff driving fiber and a flexible sensing fiber made from an elastomer (TPU) leads to the highest sensitivities, in the range of 45.5 pW/N. The application of non-circular cross sections can improve the coupling efficiency by directing the side-emitted light better towards the sensing fiber. A trilobal-shaped driving fiber led to an increase of 5 dB, whereas the shaping of the sensing fiber improved the efficiency by another 7.5 dB.

Journal ArticleDOI
TL;DR: This work validates the concept of superheterodyne architecture for integration in a beyond-5G network, supplying important guidelines that have to be taken into account in the design steps of a future wireless system.
Abstract: A superheterodyne transmission scheme is adopted and analyzed in a 300 GHz wireless point-to-point link. This was realized using two different intermediate frequency (IF) systems. The first uses fast digital synthesis which provides an IF signal centered around a carrier frequency of 10 GHz. The second involves the usage of commercially available mixers, which work as direct up- and down-converters, to generate the IF input and output. The radio frequency components are based on millimeter-wave monolithic integrated circuits at a center frequency of 300 GHz. Transmission experiments over distances up to 10 m are carried out. Data rates of up to 60 Gbps using the first IF option and up to 24 Gbps using the second IF option are achieved. Modulation formats up to 32-QAM are successfully transmitted. The linearity of this link and of its components is analyzed in detail. Two local oscillators (LOs), a photonics-based source and a commercially available electronic source, are employed and compared. This work validates the concept of the superheterodyne architecture for integration in a beyond-5G network, supplying important guidelines that have to be taken into account in the design steps of a future wireless system.

Journal ArticleDOI
TL;DR: An enhanced version of the soft context formation (SCF) coder is presented; compared to FLIF, FP8v3, and HEVC it achieves savings of about 33%, 4%, and 11% on average, with the largest gains for images with less than 8,000 colours.
Abstract: The compression of screen content has attracted the interest of researchers in recent years as the market for transferring data from computer displays is growing. It has already been shown that methods which are able to predict the probability distribution of next pixel values can compress screen content especially effectively. This prediction is typically based on a kind of learning process. The predictor learns the relationship between probable pixel colours and the surrounding texture. Recently, an effective method called ‘soft context formation’ (SCF) was proposed which achieves much lower bitrates for images with less than 8,000 colours than other state-of-the-art compression schemes. This paper presents an enhanced version of SCF. The average lossless compression performance has increased by about 5% for images with less than 8,000 colours and by about 10% for images with up to 90,000 colours. In comparison to FLIF, FP8v3, and HEVC (HM-16.20 + SCM-8.8), it achieves savings of about 33%, 4%, and 11% on average. The improvements compared to the original version result from various modifications. The largest contribution is achieved by the local estimation of the probability distribution for unpredictable colours in stage II of the compression scheme.


Journal ArticleDOI
TL;DR: An example of a synchronization architecture applicable to fifth generation mobile networks, based on the requirements published in the latest ITU-T Recommendations, is presented; the technology of optical time transfer (OTT) is proposed, which allows dissemination of an accurate timescale such as a UTC realization to selected nodes of a coherent network primary reference time clock.
Abstract: Any mobile telecommunications network requires syntonization among its various elements. The need to exploit the available radio spectrum efficiently and to provide new kinds of services renders frequency syntonization insufficient. Phase and time-of-day synchronization will be necessary in the future. Thus, network operators are searching for efficient and reliable synchronization architectures. In this article, an example of such an architecture, applicable to fifth generation mobile networks, is presented, based on the requirements published in the latest ITU-T Recommendations. For supervision of the performance of the synchronization network, the technology of optical time transfer (OTT) is proposed, which allows dissemination of an accurate timescale such as a UTC realization to selected nodes of a coherent network primary reference time clock. The OTT realized so far and its current and future role in the network of Deutsche Telekom are discussed, and representative measurement results are shown.

Proceedings ArticleDOI
16 Jun 2020
TL;DR: A structured approach to transforming healthcare towards personalized, preventive, predictive, participative precision (P5) medicine is introduced, together with the related organizational, methodological and technological requirements.
Abstract: The paper introduces a structured approach to transforming healthcare towards personalized, preventive, predictive, participative precision (P5) medicine and the related organizational, methodological and technological requirements. Thereby, the deployment of autonomous systems and artificial intelligence is inevitable. The paper discusses opportunities and challenges of those technologies from a humanistic and ethical perspective. It briefly introduces the essential concepts and principles, and critically discusses some relevant projects. Finally, it offers ways for correctly representing, specifying, implementing and deploying autonomous and intelligent systems under an ethical perspective.

Book ChapterDOI
01 Jan 2020
TL;DR: In the era of digital transformation, computing within the network is the key enabler for new services offering increased security, lower latency, increased resilience, and many additional features the authors describe in this chapter.
Abstract: In this chapter, we describe the transformation of current communication networks to future communication systems. Communication networks are always prone to transformation due to requests for new services by their users. Initially, communication networks addressed voice services. Later, data services were added. The digital transformation requires a more disruptive change, supporting machine-to-machine and later human-to-machine type communications. State-of-the-art communication systems solely convey information in an agnostic fashion between two places, where a very limited number of applications is hosted. Communication links are often referred to as dumb pipes. Future communication networks are becoming intelligent as information is increasingly processed within the communication network, rather than solely in the end points, for a massive number of heterogeneous applications. Once computing is introduced into networks, the role of the network operator will change dramatically. In the era of digital transformation, computing within the network is the key enabler for new services offering increased security, lower latency, increased resilience, and many additional features we describe in this chapter.

Proceedings ArticleDOI
01 Aug 2020
TL;DR: The paper describes the ongoing activities in the EU-Japan project ThoR towards suitable propagation and channel models applicable for the simulation and planning of 300 GHz wireless links, based on ray tracing models.
Abstract: The need for high data rates of several Gbit/s in 5G and beyond wireless networks will require capacities of 100 Gbit/s in the backhaul and fronthaul links. 300 GHz wireless links are promising candidates to provide these capacities. The paper describes the ongoing activities in the EU-Japan project ThoR towards suitable propagation and channel models applicable for the simulation and planning of such links. The corresponding modelling activities are based on ray tracing models, which are enhanced by taking into account atmospheric propagation effects, measured characteristics of building materials and the effect of wind on the poles on which the antennas are mounted. First results applied to realistic simulation scenarios are presented.

Proceedings ArticleDOI
04 Dec 2020
TL;DR: The presented interactive XR experience showcases photorealistic volumetric representations of two humans; as the user moves in the scene, one of the virtual humans follows the user with his head, conveying the impression of a true conversation.
Abstract: This demo presents a mixed reality (MR) application that enables free-viewpoint rendering of interactive high-quality volumetric video (VV) content on Nreal Light MR glasses, web browsers via WebXR and Android devices via ARCore. The application uses a novel technique for animation of VV content of humans and a split rendering framework for real-time streaming of volumetric content over 5G edge-cloud servers. The presented interactive XR experience showcases photorealistic volumetric representations of two humans. As the user moves in the scene, one of the virtual humans follows the user with his head, conveying the impression of a true conversation.

Journal ArticleDOI
TL;DR: This work proposes novel low-complexity coordinated resource allocation methods based on standard linear precoding schemes that not only maximize the sum-SE and protect the primary users from harmful interference, but also satisfy the quality-of-service demands of the mobile users.
Abstract: 5G cellular networks will heavily rely on the use of techniques that increase the spectral efficiency (SE) to meet the stringent capacity requirements of the envisioned services. To this end, the use of coordinated multi-point (CoMP) as an enabler of underlay spectrum sharing promises substantial SE gains. In this work, we propose novel low-complexity coordinated resource allocation methods based on standard linear precoding schemes that not only maximize the sum-SE and protect the primary users from harmful interference, but also satisfy the quality-of-service demands of the mobile users. Furthermore, we devise coordinated caching strategies that create joint transmission (JT) opportunities, thus overcoming the mobile backhaul/fronthaul throughput and latency constraints associated with the application of this CoMP variant. Additionally, we present a family of caching schemes that significantly outperform the “de facto standard” least recently used (LRU) technique in terms of the achieved cache hit rate while presenting smaller computational complexity. Numerical simulations indicate that the proposed resource allocation methods perform close to their interference-unconstrained counterparts, illustrate that the considered caching strategies facilitate JT, highlight the performance gains of the presented caching schemes over LRU, and shed light on the effect of various parameters on the performance.
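
For context, the LRU baseline that the proposed caching schemes are compared against can be simulated in a few lines; a minimal sketch with unit-size objects:

```python
from collections import OrderedDict

def lru_hit_rate(requests, capacity):
    """Cache hit rate of classic LRU over a request trace (unit-size objects)."""
    cache, hits = OrderedDict(), 0
    for obj in requests:
        if obj in cache:
            hits += 1
            cache.move_to_end(obj)          # mark as most recently used
        else:
            if len(cache) >= capacity:
                cache.popitem(last=False)   # evict least recently used
            cache[obj] = True
    return hits / len(requests)

print(lru_hit_rate(["a", "b", "a", "c", "a", "b"], capacity=2))  # -> 0.333...
```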

Patent
Kurt Bischinger
01 Dec 2020
TL;DR: A communication system for communication in a communication network having a first subnetwork and a second subnetwork includes a first identification entity, assigned to the first subnetwork, configured to receive an identity of a communication terminal and identify the terminal on the basis of the identity for communication over the first subnetwork, and a corresponding second identification entity assigned to the second subnetwork.
Abstract: A communication system for communication in a communication network having a first subnetwork and a second subnetwork includes a first identification entity assigned to the first subnetwork and configured to receive an identity of a communication terminal and identify the communication terminal on the basis of the identity for communication over the first subnetwork. The communication system additionally includes a second identification entity assigned to the second subnetwork and configured to receive the identity of the communication terminal and identify the communication terminal on the basis of the identity for communication over the second subnetwork. The communication system further includes a management entity configured to authenticate the communication terminal for communication over a particular subnetwork.

22 Jun 2020
TL;DR: This work shows that a 2-dimensional (2D) knapsack solution covers arbitrary request patterns, selecting dynamically changing content that yields maximum caching value for any predefined request sequence, and summarizes a comprehensive picture of the demands and efficiency criteria for web caching, including updating speed and overheads.
Abstract: Caching strategies have been evaluated and compared in many studies, most often via simulation, but also with analytic methods. Knapsack solutions provide a general analytical approach for upper bounds on web caching performance. They assume objects of maximum value/size ratio being selected as cache content, with flexibility to define the caching value. Therefore, the popularity, cost, size, time-to-live restrictions, etc. per object can be included in an overall caching goal, e.g., for reducing delay and/or transport path length in content delivery. The independent request model (IRM) leads to basic knapsack bounds for static optimum cache content. We show that a 2-dimensional (2D) knapsack solution covers arbitrary request patterns, selecting dynamically changing content that yields maximum caching value for any predefined request sequence. Moreover, Belady’s optimum strategy for clairvoyant caching is identified as a special case of our 2D-knapsack solution when all objects are unique. We also summarize a comprehensive picture of the demands and efficiency criteria for web caching, including updating speed and overheads. Our evaluations confirm significant performance gaps from LRU to advanced GreedyDual and score-based web caching methods and to the knapsack bounds.
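
Belady's clairvoyant policy mentioned above is easy to state in code: on a miss with a full cache, evict the object whose next request lies furthest in the future. A minimal sketch for unit-size objects (the full 2D-knapsack bound with sizes and values is more involved):

```python
def belady_hit_rate(requests, capacity):
    """Hit rate of Belady's optimal clairvoyant policy, unit-size objects."""
    cache, hits = set(), 0
    for i, obj in enumerate(requests):
        if obj in cache:
            hits += 1
            continue
        if len(cache) >= capacity:
            future = requests[i + 1:]
            # Next-use distance per cached object; never reused -> infinity.
            nxt = {o: (future.index(o) if o in future else float("inf"))
                   for o in cache}
            cache.remove(max(nxt, key=nxt.get))
        cache.add(obj)
    return hits / len(requests)

print(belady_hit_rate(["a", "b", "c", "a", "b", "c"], capacity=2))  # -> 0.333...
```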

Proceedings ArticleDOI
20 Oct 2020
TL;DR: DeepFlow is a system that processes complete ingress traffic flow data on a carrier scale and produces forecasts for all traffic flows using Machine Learning techniques; its viability is shown by comparing different prediction methods on recent, real-world data covering three years from 2016 to 2019.
Abstract: Describing incoming web traffic – as seen from large eyeball networks, i.e. ingress traffic – and estimating it into the future are necessary operations for network service providers who need to efficiently organize the essential tasks of more dynamic network planning and capacity management. For that, a network-wide view on ingress traffic processes and their predictions is necessary. We propose DeepFlow, a system that processes complete ingress traffic flow data on a carrier scale and produces forecasts for all traffic flows using Machine Learning techniques. The viability of DeepFlow is shown by comparing different prediction methods on recent, real-world data that covers three years from 2016 to 2019. We use neural and non-neural methods that produce accurate results in predicting the three largest ingress traffic flows. Furthermore, we investigate the case where the traffic time series data has high volatility. We also use a VAR model to generate directed acyclic graphs to gain insights into the relationships between the different ASes. DeepFlow is currently deployed in a lab environment of a large European service provider. The initial evaluation results demonstrate the feasibility of realizing system-wide, continuous, near real-time and configurable traffic flow prediction at large scale.
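
The abstract does not disclose DeepFlow's concrete models beyond "neural and non-neural"; as a reference point, the standard non-neural baseline for strongly seasonal traffic is the seasonal-naive forecast. A minimal sketch with synthetic data:

```python
import numpy as np

def seasonal_naive(series, season, horizon):
    """Forecast by repeating the last full season of the series."""
    last = np.asarray(series, dtype=float)[-season:]
    reps = int(np.ceil(horizon / season))
    return np.tile(last, reps)[:horizon]

# Two weeks of synthetic daily traffic volumes with weekly seasonality
week = np.array([80, 85, 84, 86, 88, 60, 55], dtype=float)
history = np.concatenate([week, week * 1.05])
print(seasonal_naive(history, season=7, horizon=7))  # repeats the newer week
```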

Proceedings ArticleDOI
28 Sep 2020
TL;DR: This paper examines deployment options for converged optical-wireless access networks in urban areas regarding their cost efficiency and implementation flexibility, and presents results of a case study for a coordinated planning of optical fixed access and wireless backhaul solutions for 5G networks.
Abstract: Currently, a lot of effort is being put into designing and implementing a high-capacity, flexible and highly efficient 5G network infrastructure. A very important consideration that should be kept in mind when designing future 5G networks is the development of an adequate network architecture with high-capacity, future-proof backhaul and fronthaul links. Especially novel network concepts such as the cloud radio access network (C-RAN) set very high requirements on the backhaul and fronthaul connectivity and call for a solution that ensures high capacity, flexibility, practicability and future readiness while providing an easy and economical migration path. This paper examines deployment options for converged optical-wireless access networks in urban areas regarding their cost efficiency and implementation flexibility. It presents results of a case study for a coordinated planning of optical fixed access and wireless backhaul solutions for 5G networks by considering the following options: i) roll-out of a converged fiber-to-the-home (FTTH) network infrastructure that provides both fixed internet access and integrated cellular backhaul, ii) use of the existing fiber-to-the-curb (FTTC) network infrastructure to provide the backhaul for cellular networks, and iii) deployment of a dedicated optical fiber network to act as backhaul of cellular networks. A techno-economic study was carried out to compare the economic viability of the three deployment options under realistic assumptions.

Proceedings ArticleDOI
01 May 2020
TL;DR: This work proposes a new routing approach called SourceShift to resiliently handle dynamic networks in the absence of current network status information; it requires less than half the airtime of state-of-the-art routing protocols in more than 60% of the evaluated cases.
Abstract: Wireless networks have to support an increasing number of devices with increasing demands on mobility and resilience. Mesh network routing protocols provide an elegant solution to the problem of connecting mobile nodes, due to their ability to adapt to topology changes. However, with an increasing number of nodes and increasing mobility of the nodes, maintaining sufficiently recent routing information becomes increasingly challenging. Existing routing protocols fail to operate reliably in case of sudden link or node failures. In this work, we propose a new routing approach called SourceShift to resiliently handle dynamic networks in the absence of current network status information. SourceShift uses opportunistic routing and network coding, like MORE, but also makes use of link-local feedback, like ExOR. We evaluate SourceShift in random network topologies with link and node failures and compare the results with the state of the art. The evaluation shows that SourceShift can ensure the delivery of the message when feasible. Additionally, the use of local feedback can improve the airtime efficiency compared to other routing protocols, even in cases without link or node failures. As a result, SourceShift requires less than half the airtime of state-of-the-art routing protocols in more than 60% of the evaluated cases.

Proceedings ArticleDOI
01 Dec 2020
TL;DR: In this paper, an optimization problem to maximize the minimum throughput of all aircraft is formulated and solved using realistic aircraft and base station positions, also modelling physical limitations such as the maximum number of antennas per aircraft and interference.
Abstract: While connectivity is available almost anytime and anywhere on the ground, aircraft in flight still lack high-throughput communication. We investigate air-to-ground networks consisting of direct air-to-ground, air-to-air and satellite links for providing high throughput to aircraft. We formulate an optimization problem to maximize the minimum throughput of all aircraft. We solve the problem using realistic aircraft and base station positions and also model physical limitations such as the maximum number of antennas per aircraft and interference. We investigate different scenarios and parameters and analyze their influence on the max-min throughput per aircraft. We show that the satellite and direct air-to-ground links are the bottleneck, as all throughput can be distributed among aircraft. Furthermore, we show that air-to-air communication is dispensable for achieving a high throughput when direct air-to-ground coverage is available.
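
The max-min formulation described here is a small linear program: maximize a floor variable t subject to every aircraft receiving at least t and every link respecting its capacity. A toy sketch with two aircraft, per-aircraft direct air-to-ground links and one shared satellite link (all capacities are illustrative, not from the paper):

```python
from scipy.optimize import linprog

c_a2g = [50.0, 20.0]   # direct air-to-ground capacity per aircraft, Mbit/s
c_sat = 40.0           # shared satellite capacity, Mbit/s

# Variables: x1, x2 (A2G rates), s1, s2 (satellite shares), t (min throughput).
c = [0, 0, 0, 0, -1]                 # linprog minimizes, so minimize -t
A_ub = [[0, 0, 1, 1, 0],             # s1 + s2 <= c_sat
        [-1, 0, -1, 0, 1],           # t <= x1 + s1
        [0, -1, 0, -1, 1]]           # t <= x2 + s2
b_ub = [c_sat, 0, 0]
bounds = [(0, c_a2g[0]), (0, c_a2g[1]), (0, None), (0, None), (0, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x[-1])  # -> 55.0: the satellite capacity is split to equalize both aircraft
```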