Publications

2022

Abstract

Wireless jammer activity from malicious or malfunctioning devices causes significant disruption to mobile network services and degrades user QoE. In practice, detecting such activity is manually intensive and costly, taking days or even weeks after jammer activation. We present a novel data-driven jammer detection framework termed JADE that leverages continually collected operator-side cell-level KPIs to automate this process. As part of this framework, we develop two deep learning based semi-supervised anomaly detection methods tailored for the jammer detection use case. JADE features further innovations, including an adaptive thresholding mechanism and transfer learning based training, to efficiently scale for operation in real-world mobile networks. Using a real-world 4G RAN dataset from a multinational mobile network operator, we demonstrate the efficacy of the proposed jammer detection methods relative to commonly used anomaly detection methods. We also demonstrate the robustness of our proposed methods in accurately detecting jammer activity across multiple frequency bands and diverse types of jammers. We present real-world validation results from applying our methods in the operator’s network for online jammer detection. We also present promising results on pinpointing jammer locations by combining detected jammer activity with cell site location data.
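The adaptive thresholding idea can be illustrated with a minimal sketch: flag a KPI anomaly when a model's error score jumps above a rolling statistical threshold. This is a hypothetical illustration of the general technique, not JADE's actual mechanism; the window size, multiplier and data are assumptions.

```python
import statistics

def adaptive_threshold(errors, window=24, k=3.0):
    """Flag an anomaly when the newest error exceeds a rolling
    mean + k * stdev threshold over the trailing window.
    (Illustrative sketch; not JADE's actual thresholding rule.)"""
    flags = []
    for i, e in enumerate(errors):
        history = errors[max(0, i - window):i]
        if len(history) < 2:
            flags.append(False)  # not enough history yet
            continue
        mu = statistics.mean(history)
        sigma = statistics.pstdev(history)
        flags.append(e > mu + k * sigma)
    return flags

# Normal-looking per-hour KPI error scores with a jammer-like spike at the end.
errors = [0.10, 0.12, 0.11, 0.09, 0.13, 0.10, 0.95]
print(adaptive_threshold(errors))  # only the final spike is flagged
```

An adaptive threshold of this shape tracks slow drifts in the KPI statistics while still reacting to abrupt jumps, which is the behaviour the abstract attributes to the mechanism.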

Abstract

We propose XRC, an explicit rate control algorithm that overcomes the poor performance of commonly used TCP variants in cellular networks. XRC exploits explicit feedback from the radio access network that is aware of the physical, network and transport layer information of all UEs as well as resource distribution policies for users with different traffic characteristics. XRC co-exists fairly with other XRC and non-XRC flows at the wireless and non-wireless bottlenecks while it strictly controls queuing delay within a small threshold. We implement XRC in NS-3 and examine its performance across a range of network loads and dynamics. When competing with CUBIC at a wireless bottleneck, XRC achieves a Jain’s fairness index of 99.7% while providing a 3x lower median queuing delay compared to when CUBIC competes with CUBIC in the same setup.
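Jain's fairness index, the metric quoted above, is straightforward to compute; a minimal sketch (the example throughputs are illustrative):

```python
def jains_fairness(throughputs):
    """Jain's fairness index: (sum x)^2 / (n * sum x^2).
    Ranges from 1/n (one flow takes all) to 1.0 (perfectly fair)."""
    n = len(throughputs)
    total = sum(throughputs)
    return total * total / (n * sum(x * x for x in throughputs))

# Two flows sharing a bottleneck somewhat unevenly (Mbps).
print(round(jains_fairness([40, 60]), 4))  # → 0.9615
```

An index of 99.7%, as reported for XRC competing with CUBIC, thus corresponds to a nearly equal split of the bottleneck capacity.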

Abstract

Mobile network traffic data offers unprecedented opportunities for innovative studies within and beyond networking. However, progress is hindered by the very limited access that the research community at large has to the real-world mobile network data that is needed to develop and dependably test mobile traffic data-driven solutions. As a contribution to overcome this barrier, we propose CartaGenie, a generator of realistic mobile traffic snapshots at city scale. Taking a deep generative modeling approach and through a tailored conditional generator design, CartaGenie can synthesize high-fidelity and artifact-free spatial traffic snapshots using only contextual information about the target geographical region that is easily found in public repositories. Hence, CartaGenie allows researchers to create their own realistic datasets of spatial traffic from open data about their region of interest. Experiments with real-world mobile traffic measurements collected in multiple metropolitan areas show that CartaGenie can produce dependable network traffic loads for areas where no prior traffic information is available, significantly outperforming a comprehensive set of benchmarks. Moreover, tests with practical case studies demonstrate that the synthetic data generated by CartaGenie is as good as real data in supporting diverse research-oriented mobile traffic data-driven applications.

Abstract

Light detection and ranging (LIDAR) has become a cost-effective and accessible sensor for a broad range of embedded devices including mobile phones and drones. Vision applications of these embedded devices require fast and accurate inferences to drive them, while at the same time power consumption should be kept low. Achieving both these requirements is hard due to the size of high quality LIDAR point cloud data streams – significantly larger than vision inputs such as images. The complexity of point cloud segmentation adds further difficulty for achieving high-quality, real-time LIDAR data driven vision applications on battery powered embedded devices. We consider edge offloading as a potential approach to reconcile these conflicting requirements. Specifically, we present an experimental characterization study exploring the benefit of edge-assisted LIDAR point cloud segmentation, considering a diverse set of embedded devices and state-of-the-art semantic segmentation models. Our results indicate that edge offloading is always beneficial from a device energy efficiency perspective and can also significantly reduce inference latency, especially with compressive edge offloading. These latency improvements however fall short of meeting real-time requirements. We outline a number of potential follow-on research directions to enable edge-assisted, accurate and real-time LIDAR point cloud segmentation.

2021

Abstract

City-scale spatiotemporal mobile network traffic data can support numerous applications in and beyond networking. However, operators are very reluctant to share their data, which is curbing innovation and research reproducibility. To remedy this status quo, we propose SpectraGAN, a novel deep generative model that, upon training with real-world network traffic measurements, can produce high-fidelity synthetic mobile traffic data for new, arbitrary sized geographical regions over long periods. To this end, the model only requires publicly available context information about the target region, such as population census data. SpectraGAN is an original conditional GAN design with the defining feature of generating spectra of mobile traffic at all locations of the target region based on their contextual features. Evaluations with mobile traffic measurement datasets collected by different operators in 13 cities across two European countries demonstrate that SpectraGAN can synthesize more dependable traffic than a range of representative baselines from the literature. We also show that synthetic data generated with SpectraGAN yields results similar to those with real data when used in applications like radio access network infrastructure power savings and resource allocation, or dynamic population mapping.

Abstract

Networked storage applications cannot fully benefit from fast persistent memory (PM), because of data management overheads incurred to implement storage properties, such as integrity, consistency, search efficiency and flexibility. To address this problem, we explore a new approach that turns networking overheads into assets, repurposing the transport protocol and network stack features, some of which can be offloaded to the NIC hardware, for implementing the storage properties particularly for the PM devices.

Abstract

Given the wide interest in mobile core systems and their pivotal role in the operations of current and future mobile network services, we focus on the issue of their effective evaluation, considering the radio access network (RAN) emulation methodology. While there exist a number of different RAN emulators, following different paradigms, they are limited in their scalability and flexibility, and moreover there is no one commonly accepted RAN emulator. Motivated by this, we present Nervion, a scalable and flexible RAN emulator for mobile core system evaluation that takes a novel cloud-native approach. Nervion embeds innovations to enable scalability via abstractions and RAN element containerization, and additionally supports an even more scalable control-plane only mode. It also offers ample flexibility in terms of realizing arbitrary RAN emulation scenarios, mapping them to compute clusters, and evaluating diverse core system designs. We develop a prototype implementation of Nervion that supports 4G and 5G standard compliant RAN emulation and integrate it into the Powder platform to benefit the research community. Our experimental evaluations validate its correctness and demonstrate its scalability relative to a representative set of existing RAN emulators. We also present multiple case studies using Nervion that highlight its flexibility to support diverse types of mobile core evaluations.

Abstract

RAN energy consumption is a major OPEX source for mobile telecom operators, and 5G is expected to increase these costs severalfold. Moreover, paradigm-shifting aspects of the 5G RAN architecture like RAN disaggregation, virtualization and cloudification introduce new traffic-dependent resource management decisions that make the problem of energy-efficient 5G RAN orchestration harder. To address such a challenge, we present a first comprehensive virtualized RAN (vRAN) system model aligned with 5G RAN specifications, which embeds realistic and dynamic models for computational load and energy consumption costs. We then formulate the vRAN energy consumption optimization as an integer quadratic programming problem, whose NP-hard nature leads us to develop GreenRAN, a novel, computationally efficient and distributed solution that leverages Lagrangian decomposition and simulated annealing. Evaluations with real-world mobile traffic data for a large metropolitan area are another novel aspect of this work, and show that our approach yields energy efficiency gains of up to 25% and 42% over state-of-the-art and baseline traditional RAN approaches, respectively.

Abstract

Resource provisioning in multi-tenant stream processing systems faces the dual challenges of keeping resource utilization high (without over-provisioning), and ensuring performance isolation. In our common production use cases, where streaming workloads have to meet latency targets and avoid breaching service-level agreements, existing solutions are incapable of handling the wide variability of user needs. Our framework called Cameo uses fine-grained stream processing (inspired by actor computation models), and is able to provide high resource utilization while meeting latency targets. Cameo dynamically calculates and propagates priorities of events based on user latency targets and query semantics. Experiments on Microsoft Azure show that compared to state-of-the-art, the Cameo framework: i) reduces query latency by 3X in single-tenant settings, ii) reduces query latency by 5X in multi-tenant scenarios, and iii) weathers transient spikes of workload.

Abstract

Object storage systems, which store data in a flat name space over multiple storage nodes, are essential components for providing data-intensive services such as video streaming or cloud backup. Their bottleneck is usually either the compute or the network bandwidth of customer-facing frontend machines, despite much more such capacity being available at backend machines and in the network core. Prism addresses this problem by combining the flexibility and security of traditional frontend proxy architectures with the performance and resilience of modern key-value stores that optimize for small I/O patterns and typically use custom, UDP-based protocols inside a datacenter. Prism uses a novel connection hand-off protocol that takes advantage of a modern Linux kernel feature and a programmable switch, supports both unencrypted TCP and TLS, and provides a corresponding API for easy integration into applications. Prism can improve throughput by a factor of up to 3.4 with TLS and by up to 3.7 with TCP, when compared to a traditional frontend proxy architecture.

Abstract

Spurred by the recent advances in deep learning to harness rich information hidden in large volumes of data and to tackle problems that are hard to model/solve (e.g., resource allocation problems), there is currently tremendous excitement in the mobile networks domain around the transformative potential of data-driven AI/ML based network automation, control and analytics for 5G and beyond. In this article, we present a cautionary perspective on the use of AI/ML in the 5G context by highlighting the adversarial dimension spanning multiple types of ML (supervised/unsupervised/RL) and support this through three case studies. We also discuss approaches to mitigate this adversarial ML risk, offer guidelines for evaluating the robustness of ML models, and call attention to issues surrounding ML-oriented research in 5G more generally.

2020

Abstract

This study is a first attempt to experimentally explore the range of performance bottlenecks that 5G mobile networks can experience. To this end, we leverage a wide range of measurements obtained with a prototype testbed that captures the key aspects of a cloudified mobile network. We investigate the relevance of the metrics and a number of approaches to accurately and efficiently identify bottlenecks across the different locations of the network and layers of the system architecture. Our findings validate the complexity of this task in the multi-layered architecture and highlight the need for novel monitoring approaches that intelligently fuse metrics across network layers and functions. In particular, we find that distributed analytics performs reasonably well both in terms of bottleneck identification accuracy and incurred computational and communication overhead.

Abstract

When using distributed machine learning (ML) systems to train models on a cluster of worker machines, users must configure a large number of parameters: hyper-parameters (e.g. the batch size and the learning rate) affect model convergence; system parameters (e.g. the number of workers and their communication topology) impact training performance. In current systems, adapting such parameters during training is ill-supported. Users must set system parameters at deployment time, and provide fixed adaptation schedules for hyper-parameters in the training program.

We describe KungFu, a distributed ML library for TensorFlow that is designed to enable adaptive training. KungFu allows users to express high-level Adaptation Policies (APs) that describe how to change hyper- and system parameters during training. APs take real-time monitored metrics (e.g. signal-to-noise ratios and noise scale) as input and trigger control actions (e.g. cluster rescaling or synchronisation strategy updates). For execution, APs are translated into monitoring and control operators, which are embedded in the dataflow graph. APs exploit an efficient asynchronous collective communication layer, which ensures concurrency and consistency of monitoring and adaptation operations.
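The shape of an Adaptation Policy can be sketched as follows. This is a hypothetical illustration in the spirit of the description above, not KungFu's real API; the class name, method names and thresholds are all assumptions.

```python
class AdaptationPolicy:
    """Hypothetical sketch of an AP: map a monitored training metric
    to a control action. Not the actual KungFu interface."""

    def __init__(self, min_workers=1, max_workers=16):
        self.min_workers = min_workers
        self.max_workers = max_workers

    def on_metric(self, num_workers, noise_scale):
        # A large gradient noise scale suggests a bigger effective batch
        # would help: scale the cluster out. A small one suggests scaling in.
        if noise_scale > 100 and num_workers < self.max_workers:
            return ("rescale", num_workers * 2)
        if noise_scale < 10 and num_workers > self.min_workers:
            return ("rescale", max(self.min_workers, num_workers // 2))
        return ("keep", num_workers)

ap = AdaptationPolicy()
print(ap.on_metric(4, 250))  # → ('rescale', 8)
print(ap.on_metric(8, 3))    # → ('rescale', 4)
```

The key point the abstract makes is that such policies run *inside* the training system, driven by live metrics, rather than as fixed schedules supplied at deployment time.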

Abstract

We address the challenge of backhaul connectivity for rural and developing regions, which is essential for universal fixed/mobile Internet access. To this end, we propose to exploit the TV white space (TVWS) spectrum for its attractive properties: low cost, abundance in under-served regions and favorable propagation characteristics. Specifically, we propose a system called WhiteHaul for the efficient aggregation of the TVWS spectrum tailored for the backhaul use case. At the core of WhiteHaul are two key innovations: (i) a TVWS conversion substrate that can efficiently handle multiple non-contiguous chunks of TVWS spectrum using multiple low cost 802.11n/ac cards but with a single antenna; (ii) novel use of MPTCP as a link-level tunnel abstraction and its use for efficiently aggregating multiple chunks of the TVWS spectrum via a novel uncoupled, cross-layer congestion control algorithm. Through extensive evaluations using a prototype implementation of WhiteHaul, we show that: (a) WhiteHaul can aggregate almost the whole of the TV band with 3 interfaces and achieve nearly 600Mbps TCP throughput; (b) the WhiteHaul MPTCP congestion control algorithm provides an order of magnitude improvement over state-of-the-art algorithms for typical TVWS backhaul links. We also present additional measurement and simulation based results to evaluate other aspects of the WhiteHaul design.

Abstract

We consider the problem of monitoring in the context of emerging and future mobile networks, which are shaping up to feature a diverse set of services composed of customized chains of virtual network functions (VNFs) realized over (edge) cloud environments. In such a setting, not only is monitoring a critical component for service quality assurance, but it also needs to be efficient, adaptable and flexible. Informed by the experience analyzing state-of-the-art management and orchestration (MANO) platforms and monitoring solutions for softwarized mobile networks, we present our monitoring system design termed PliMon that aims to meet the above requirements by exploiting diverse temporal variability characteristics across different metrics (measurement features) and VNFs, and by grouping such metrics into tiers based on their relative significance. Using an experimental testbed, we verify the hypothesis that different measurement features and VNFs exhibit diversity in their variability and crucially show substantial reduction in monitoring overhead compared to a representative monitoring solution from the literature. Additionally, we integrate PliMon with OSM, a well-known open-source MANO platform, and demonstrate the salient aspects of our approach using the integrated PliMon-OSM system.

Abstract

Narrowband IoT (NB-IoT) is a new and attractive low power wide area cellular technology for low-capability, low-cost IoT devices that is starting to see real-world deployments. NB-IoT devices are expected to operate unattended, potentially in inaccessible and signal-challenged locations, for at least 10 years on a single battery charge, making NB-IoT device energy consumption critically important. Despite the importance of their function, the communication protocols have largely been copied from older generations of cellular networks to preserve interoperability, without considering the specific characteristics and needs of NB-IoT devices. In this paper, we perform a detailed energy consumption analysis for NB-IoT devices, which we use as a basis to develop an energy consumption model for realistic energy consumption assessment. Finally, we take the insights from our analysis and propose optimizations that significantly reduce the energy consumption of NB-IoT devices under different traffic conditions; these optimizations are complementary to current 3GPP optimizations towards the 10-year battery goal, and we assess their performance.

2019

Abstract

Emergence of shared spectrum, such as the 3.5-GHz citizen broadband radio service (CBRS) band in the U.S., promises to broaden the mobile operator ecosystem and lead to proliferation of small cell deployments. We consider the inter-operator interference problem that arises when multiple small cell networks access the shared spectrum. Towards this end, we take a novel communication-free approach that seeks implicit coordination between operators without explicit communication. The key idea is for each operator to sense the spectrum through its mobiles to be able to model the channel vacancy distribution and extrapolate it for the next epoch. We use reproducing kernel Hilbert space kernel embedding of channel vacancy and predict it by vector-valued regression. This predicted value is then relied on by each operator to perform independent but optimal channel assignment to its base stations taking traffic load into account. Via numerical results, we show that our approach, aided by the above channel vacancy forecasting, adapts the spectrum allocation over time as per the traffic demands and, more crucially, yields as good as or better performance than a coordination-based approach, even without accounting for the overhead of the latter.
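As a much-simplified, closed-form stand-in for the kernel-based vacancy forecasting described above, a Nadaraya-Watson kernel regression over past epochs conveys the basic idea of kernel-weighted prediction; the data, bandwidth and one-dimensional setup are illustrative assumptions, not the paper's vector-valued method.

```python
import math

def kernel_forecast(history, query_t, gamma=0.1):
    """Nadaraya-Watson kernel regression: predict channel vacancy at
    query_t as a Gaussian-kernel-weighted average of past observations.
    (Simplified stand-in for RKHS embedding + vector-valued regression.)"""
    weights = [math.exp(-gamma * (query_t - t) ** 2) for t, _ in history]
    return sum(w * v for w, (_, v) in zip(weights, history)) / sum(weights)

# Per-epoch fraction of vacant channels sensed by one operator's mobiles.
history = [(0, 0.80), (1, 0.75), (2, 0.70), (3, 0.66)]
print(round(kernel_forecast(history, 4), 3))  # → 0.702
```

The forecast weights recent epochs most heavily, so a declining vacancy trend pulls the prediction down; each operator could then feed such a prediction into its own channel assignment without talking to the others.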

Abstract

We consider indoor mobile access, a vital use case for current and future mobile networks. For this key use case, we outline a vision that combines a neutral-host based shared small-cell infrastructure with a common pool of spectrum for dynamic sharing as a way forward to proliferate indoor small-cell deployments and open up the mobile operator ecosystem. Towards this vision, we focus on the challenges pertaining to managing access to shared spectrum (e.g., 3.5GHz US CBRS spectrum). We propose Iris, a practical shared spectrum access architecture for indoor neutral-host small-cells. At the core of Iris is a deep reinforcement learning based dynamic pricing mechanism that efficiently mediates access to shared spectrum for diverse operators in a way that provides incentives for operators and the neutral-host alike. We then present the Iris system architecture that embeds this dynamic pricing mechanism alongside cloud-RAN and RAN slicing design principles in a practical neutral-host design tailored for the indoor small-cell environment. Using a prototype implementation of the Iris system, we present extensive experimental evaluation results that not only offer insight into the Iris dynamic pricing process and its superiority over alternative approaches but also demonstrate its deployment feasibility.

Abstract

We investigate spatial patterns in mobile service consumption that emerge at national scale. Our investigation focuses on a representative case study, i.e., France, where we find that: (i) the demand for popular mobile services is fairly uniform across the whole country, and only a reduced set of peculiar services (mainly operating system updates and long-lived video streaming) yields geographic diversity; (ii) even for such distinguishing services, the spatial heterogeneity of demands is limited, and a small set of consumption behaviors is sufficient to characterize most of the mobile service usage across the country; (iii) the spatial distribution of these behaviors correlates well with the urbanization level, ultimately suggesting that the adoption of geographically-diverse mobile applications is linked to a dichotomy of cities and rural areas. We derive our results through the analysis of substantial measurement data collected by a major mobile network operator, leveraging an approach rooted in information theory that can be readily applied to other scenarios.

Abstract

Knowledge of cell tower locations enables multiple applications including identifying unserved or poorly served regions. We consider the problem of estimating the locations of cell towers using crowdsourced measurements, which is challenging due to the uncontrolled nature of the sample collection process. Using large-scale crowdsourced datasets from OpenCelliD with ground-truth cell tower locations, we find that none of the several commonly used localization algorithms (e.g., Weighted Centroid), nor the state-of-the-art Filtered Weighted Centroid (FWC) approach that filters out less predictive measurements, manages to deliver robust localization performance. We propose a novel supervised machine learning based approach termed Adaptive Algorithm Selection (AAS) that adaptively selects the localization algorithm likely to provide the most accurate localization performance for a given cell and its crowdsourced samples. We show that AAS not only significantly outperforms the state-of-the-art FWC approach, with median error improvement over 65%, but also achieves localization performance within 20% of an idealized Oracle solution. We validate the applicability of AAS in new and different settings (including WLAN AP localization) before presenting case studies in three different African countries that demonstrate the use of AAS based cell tower localization to reliably infer mobile infrastructure in developing countries.
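The Weighted Centroid baseline mentioned above can be sketched in a few lines: the tower estimate is the average of sample positions weighted by received signal strength. The dBm-to-weight mapping and the sample data below are illustrative assumptions, not the paper's exact formulation.

```python
def weighted_centroid(samples):
    """Estimate a cell tower location as the signal-strength-weighted
    centroid of crowdsourced measurement positions. The RSS (dBm) to
    linear-weight mapping is one common, illustrative choice."""
    weights = [10 ** (rss / 20) for _, _, rss in samples]
    total = sum(weights)
    lat = sum(w * s[0] for w, s in zip(weights, samples)) / total
    lon = sum(w * s[1] for w, s in zip(weights, samples)) / total
    return lat, lon

# (lat, lon, RSS dBm) crowdsourced samples around a hypothetical tower.
samples = [(51.500, -0.120, -60), (51.502, -0.118, -70), (51.498, -0.122, -80)]
print(weighted_centroid(samples))  # estimate pulled toward the strongest sample
```

The weakness the abstract highlights follows directly from this structure: the estimate is only as good as the spatial spread and quality of the crowdsourced samples, which motivates adaptively choosing among algorithms per cell.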

2018

Abstract

While experimental work in the context of 5G has gained significant traction over the past few years, the focus has mainly been on testing the features and capabilities of novel designs and architectures using very simple testbed setups. However, with the emergence of network slicing as a key feature of 5G, creating larger scale infrastructures capable of supporting virtualized end-to-end mobile network services is of paramount importance for experimentation. In this work, we describe our experience in building such a prototype cross-domain testbed targeting 5G use cases, by enabling multi-tenancy through the virtualization of the underlying infrastructure. The capabilities of the testbed are demonstrated through the use case of neutral-host indoor small-cell deployments, followed by a discussion on the challenges we faced while building the testbed, which open up new research opportunities in this space.

Abstract

Mobile coverage maps increasingly rely on user-side measurements such as those collected from crowdsourced mobile apps. These measurements inherently span a multitude of devices, differing in models and vendors, with different radio signal reception characteristics. We show measurement based evidence on the significant deviations in received signal strength distribution seen by different devices, all other factors being equal. More crucially, we examine the accuracy of coarse-grained/fine-grained measurement based mobile coverage maps as seen from a device’s perspective. Our key finding is that mobile coverage maps based on measurements from a diversity of devices are still fairly reliable from a device’s perspective so long as it is among the set of devices used to collect measurements. Our study also offers guidelines on ways towards reliable measurement based mobile coverage maps in the presence of device diversity.

Abstract

Emergence of shared spectrum, such as the 3.5-GHz citizen broadband radio service (CBRS) band in the U.S., promises to broaden the mobile operator ecosystem and lead to proliferation of small cell deployments. We consider the inter-operator interference problem that arises when multiple small cell networks access the shared spectrum. Towards this end, we take a novel communication-free approach that seeks implicit coordination between operators without explicit communication. The key idea is for each operator to sense the spectrum through its mobiles to be able to model the channel vacancy distribution and extrapolate it for the next epoch. We use reproducing kernel Hilbert space kernel embedding of channel vacancy and predict it by vector-valued regression. This predicted value is then relied on by each operator to perform independent but optimal channel assignment to its base stations taking traffic load into account. Via numerical results, we show that our approach, aided by the above channel vacancy forecasting, adapts the spectrum allocation over time as per the traffic demands and, more crucially, yields as good as or better performance than a coordination-based approach, even without accounting for the overhead of the latter.

Abstract

Narrowband IoT (NB-IoT) is a new cellular network technology that has been designed for low capability, low power consumption devices that are expected to operate for more than 10 years on a single battery. These types of devices will be inexpensive (less than $5) and deployed on massive scales. This long life expectancy will lead to the need for occasional software updates to very large groups of devices. While a new multicast mechanism has recently been proposed for the efficient multicast transmission of such updates, it assumes that devices can be grouped together and synchronized in order to receive the multicast data. In this paper, we explore three different approaches to achieve device grouping, with different trade-offs between bandwidth usage, energy consumption and compliance with the NB-IoT standard. To assess the performance of each approach, we conducted a thorough experimental evaluation under realistic operating conditions.

Abstract

Mobile phones and innovative data oriented mobile services have the potential to bridge the digital divide in Internet access and have transformative developmental impact. However, as things stand, economics stands in the way of traditional mobile operators reaching out to provide high-end services to under-served regions. We propose a do-it-yourself (DIY) model for deploying mobile networks in such regions that is in the spirit of earlier community cellular networks but aimed at provisioning high-end (4G and beyond) mobile services. Our proposed model captures and incorporates some of the key trends underlying 5G mobile networks and looks to expand their scope beyond urban areas to reach all by empowering small-scale local operators and communities to build and operate modern mobile networks themselves. We showcase a particular instance of the proposed deployment model through a trial deployment in rural UK to demonstrate its practical feasibility.

Abstract

Various notions of privacy preservation have been proposed for mobile trajectory data sharing/publication. The privacy guarantees provided by these approaches are theoretically very different and cannot be directly compared against each other. They are motivated by different adversary models, making varying assumptions about the adversary’s background knowledge and intention. A clear comparison between existing mechanisms is missing, making it difficult for a data aggregator/owner to pick a mechanism for a given application scenario. We seek to fill this gap by proposing a measure called STRAP that allows comparison of different trajectory privacy mechanisms on a common scale. We also study the trade-off between privacy and utility, i.e., how different mechanisms perform when utility constraints are imposed over them. Using STRAP over two real mobile trajectory datasets, we compare state-of-the-art mechanisms for trajectory data privacy and demonstrate the value of the proposed measure.

Abstract

Narrowband-Internet of Things (NB-IoT) has been released by 3GPP to provide extended coverage and low energy consumption for low-cost machine-type devices. Since it requires only a reasonably low-cost hardware update to already deployed Long Term Evolution (LTE) base stations, and is compatible with the current core network and with enhanced core solutions that aim to reduce battery consumption and minimize signaling, NB-IoT deployments are quickly increasing, making NB-IoT a dominating technology for low-power wide area networks. To this aim, in this paper, we focus on group communications (i.e., multicast) in NB-IoT to efficiently support the transmission of firmware, software, task updates, or commands toward a large set of devices. We discuss the architectural and procedural enhancements needed to support the unique features of group communications in machine-type environments, such as customer-driven group formation. We also extend the NB-IoT frame to include a channel for multicast transmissions. Finally, we propose two transmission strategies for multicast content delivery and evaluate their performance considering the impact on the downlink background traffic and the channel occupancy.

Abstract

The attractiveness of TV white space (TVWS) spectrum for last mile access in rural and developing regions has been recognized before. In this paper, we complement this existing work and draw attention to the potential of TVWS spectrum for enabling low cost middle mile connectivity to the Internet backbone. In particular, we examine the amount and nature of TVWS spectrum available towards this end, considering a representative rural setting in the UK, TV transmitter locations and their configuration, terrain information and antenna type. We introduce a new notion of receiver side usable spectrum that differs from the commonly considered available spectrum at transmitter side obtained from consulting a geolocation database. We find that cumulative interference from multiple nearby TV transmitters can severely reduce the amount of usable TVWS spectrum and also heavily fragments it. However, the use of directional antennas, as would be the case for TVWS backhaul links, negates this effect and suggests the possibility of high speed TVWS backhaul links via spectrum aggregation.

Abstract

The availability of multiple collocated wireless networks using heterogeneous technologies and the multiaccess support of contemporary mobile devices have allowed wireless connectivity optimization, enabled through vertical handover (VHO) operations. However, this comes at the cost of high energy consumption on the mobile device due to the inherently expensive nature of some of the involved operations. This work proposes exploiting short-range cooperation among collocated mobile devices to improve the energy efficiency of vertical handover operations. The proactive exchange of handover-related information through low-energy short-range communication technologies, like Bluetooth, can help eliminate expensive signaling steps when the need for a VHO arises. A model is developed for capturing the mean energy expenditure of such an optimized VHO scheme in terms of relevant factors by means of closed-form expressions. The descriptive power of the model is demonstrated by investigating various typical usage scenarios and is validated through simulations. It is shown that the proposed scheme has superior performance in several realistic usage scenarios considering important relevant factors, including network availability, the local density of mobile devices, and the range of the cooperation technology. Finally, the paper explores cost/benefit trade-offs associated with the short-range cooperation protocol. It is demonstrated that the protocol may be parametrized so that the trade-off becomes nearly optimal and the cost remains affordable for a wide range of operational scenarios.
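The flavor of such a closed-form energy model can be conveyed with a deliberately simplified sketch (invented symbols and numbers, not the paper's actual expressions): with some probability a nearby peer already holds the handover-related information and shares it over a low-energy short-range link; otherwise the device pays for a full interface scan.

```python
def expected_vho_energy(p_hit, e_short_range, e_full_scan):
    """Expected energy per vertical handover: with probability p_hit a
    cooperating peer supplies the needed information over Bluetooth,
    otherwise the device performs the expensive scan itself."""
    return p_hit * e_short_range + (1 - p_hit) * e_full_scan

# A denser crowd of cooperating devices raises p_hit and lowers the
# expected cost; energies (in joules) are picked arbitrarily.
for p_hit in (0.0, 0.5, 0.9):
    print(p_hit, expected_vho_energy(p_hit, e_short_range=0.05, e_full_scan=1.2))
```

The hypothetical parameters `p_hit`, `e_short_range` and `e_full_scan` stand in for the relevant factors the model captures, such as network availability and the local density of mobile devices.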

2017

Abstract

Emerging 5G mobile networks are envisioned to become multi-service environments, enabling the dynamic deployment of services with a diverse set of performance requirements, accommodating the needs of mobile network operators, verticals and over-the-top (OTT) service providers. Virtualizing the mobile network in a flexible way is of paramount importance for a cost-effective realization of this vision. While virtualization has been extensively studied in the case of the mobile core, virtualizing the radio access network (RAN) is still in its infancy. In this paper, we present Orion, a novel RAN slicing system that enables the dynamic on-the-fly virtualization of base stations and the flexible customization of slices to meet their respective service needs, and that can be used in an end-to-end network slicing setting. Orion guarantees the functional and performance isolation of slices, while allowing for the efficient use of RAN resources among them. We present a concrete prototype implementation of Orion for LTE, with experimental results, considering alternative RAN slicing approaches, indicating its efficiency and highlighting its isolation capabilities. We also present an extension to Orion for accommodating the needs of OTT providers.

Abstract

The accuracy of measurement-driven mobile coverage maps depends on the quality, density and pattern of the signal strength observations. Thus, identifying an efficient measurement data collection methodology is essential, especially when considering the cost associated with the measurement collection approaches (e.g., drive tests, crowdsourcing approaches). We propose ZipWeave, a novel measurement data collection and fusion framework for building efficient and reliable measurement-based mobile coverage maps. ZipWeave incorporates a novel non-uniform sampling strategy to achieve reliable coverage maps with reduced sample size. Assuming prior knowledge of the propagation characteristics of the region of interest, we first examine the potential gains of this non-uniform sampling strategy in different cases via a measurement-based statistical analysis methodology; this involves irregular spatial tessellation of the region of interest into sub-regions with internally similar radio propagation characteristics and sampling based on these sub-regions. We then present a practical form of the ZipWeave non-uniform sampling strategy that can be used even without any prior information. In all our evaluations, we show that the ZipWeave non-uniform sampling approach reduces the required number of samples by half compared to common systematic-random sampling, while maintaining similar accuracy. Moreover, we show that ZipWeave's other key feature, combining high-quality controlled measurements (that have a limited geographic footprint, similar to drive tests) with crowdsourced measurements (that cover a wider footprint), leads to more reliable mobile coverage maps overall.
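The intuition behind such non-uniform sampling can be sketched as follows (invented sub-regions and variances, not ZipWeave's actual tessellation or estimator): allocate a fixed measurement budget across sub-regions in proportion to how variable their radio propagation is, in the spirit of stratified sampling.

```python
def allocate_samples(subregion_variance, budget):
    """Split a measurement budget across sub-regions proportionally to
    each sub-region's signal variance, with at least one sample each."""
    total = sum(subregion_variance.values())
    return {name: max(1, round(budget * var / total))
            for name, var in subregion_variance.items()}

# Three hypothetical sub-regions from an irregular tessellation of the
# region of interest: a variable urban canyon gets most of the budget.
variances = {"urban": 9.0, "suburban": 4.0, "rural": 1.0}
print(allocate_samples(variances, budget=140))
```

A uniform scheme would spread the 140 samples evenly; the non-uniform allocation concentrates them where extra observations reduce map error the most.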

Abstract

Wearables and mobile devices see the world through the lens of half a dozen low-power sensors, such as barometers, accelerometers, microphones and proximity detectors. But differences between sensors, ranging from sampling rates to discrete versus continuous data to the data type itself, make principled approaches to integrating these streams challenging. How, for example, is barometric pressure best combined with an audio sample to infer if a user is in a car, plane or bike? Critically for applications, how successfully sensor devices are able to maximize the information contained across these multi-modal sensor streams often dictates the fidelity at which they can track user behaviors and context changes. This paper studies the benefits of adopting deep learning algorithms for interpreting user activity and context as captured by multi-sensor systems. Specifically, we focus on four variations of deep neural networks that are based either on fully-connected Deep Neural Networks (DNNs) or Convolutional Neural Networks (CNNs). Two of these architectures follow conventional deep models by performing feature representation learning from a concatenation of sensor types. This classic approach is contrasted with a promising deep model variant characterized by modality-specific partitions of the architecture to maximize intra-modality learning. Our exploration represents the first time these architectures have been evaluated for multimodal deep learning under wearable data — and for convolutional layers within this architecture, it represents a novel architecture entirely. Experiments show these generic multimodal neural network models compete well with a rich variety of conventional hand-designed shallow methods (including feature extraction and classifier construction) and task-specific modeling pipelines, across a wide range of sensor types and inference tasks (four different datasets).
Although the training and inference overhead of these multimodal deep approaches is in some cases appreciable, we also demonstrate that on-device mobile and wearable execution is feasible and not a barrier to adoption. This study is carefully constructed to focus on the multimodal aspects of wearable data modeling for deep learning by providing a wide range of empirical observations, which we expect to have considerable value in the community. We summarize our observations into a series of practitioner rules-of-thumb and lessons learned that can guide the usage of multimodal deep learning for activity and context detection.
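The two architecture families can be contrasted in a minimal sketch (the shapes, layer sizes and random weights below are invented; this is not any of the paper's evaluated models): early fusion concatenates all sensor features before shared layers, while modality-specific partitions give each sensor its own layers before merging.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu_layer(x, n_out):
    """One fully-connected ReLU layer with fresh random weights."""
    w = rng.standard_normal((x.shape[1], n_out)) * 0.1
    return np.maximum(x @ w, 0.0)

accel = rng.standard_normal((1, 32))  # accelerometer feature vector
audio = rng.standard_normal((1, 64))  # microphone feature vector

# (a) Early fusion: concatenate modalities, then a shared layer.
early = relu_layer(np.concatenate([accel, audio], axis=1), 16)

# (b) Modality-specific partitions: per-sensor layers maximize
# intra-modality learning before the merged representation is formed.
h_accel = relu_layer(accel, 8)
h_audio = relu_layer(audio, 8)
late = relu_layer(np.concatenate([h_accel, h_audio], axis=1), 16)

print(early.shape, late.shape)  # both yield a 16-dimensional representation
```

Both variants end in the same representation size for a downstream classifier; the difference is whether cross-modality interactions are learned immediately or only after per-sensor structure has been extracted.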

2016

Abstract

Although the radio access network (RAN) part of mobile networks offers a significant opportunity for benefiting from the use of SDN ideas, this opportunity is largely untapped due to the lack of a software-defined RAN (SD-RAN) platform. We fill this void with FlexRAN, a flexible and programmable SD-RAN platform that separates the RAN control and data planes through a new, custom-tailored southbound API. Aided by virtualized control functions and control delegation features, FlexRAN provides a flexible control plane designed with support for real-time RAN control applications, flexibility to realize various degrees of coordination among RAN infrastructure entities, and programmability to adapt control over time and ease future evolution following SDN/NFV principles. We implement FlexRAN as an extension to a modified version of the OpenAirInterface LTE platform, with evaluation results indicating the feasibility of using FlexRAN under the stringent time constraints posed by the RAN. To demonstrate the effectiveness of FlexRAN as an SD-RAN platform and highlight its applicability for a diverse set of use cases, we present three network services deployed over FlexRAN focusing on interference management, mobile edge computing and RAN sharing.

Abstract

OFDM is currently the most popular PHY-layer carrier modulation technique, used in the latest generations of cellular, Wi-Fi and TV standards. OFDM systems use a cyclic prefix to mitigate inter-symbol interference. However, most existing systems over-provision the size of the cyclic prefix considering worst-case scenarios which rarely occur. We propose a novel OFDM PHY receiver design, called CPRecycle, that exploits the redundant cyclic prefix to reduce the effects of interference from neighboring nodes. CPRecycle is based on the key observation that the starting position of the FFT window within the cyclic prefix at the OFDM receiver does not affect the received signal but can substantially reduce interference from concurrent transmissions. We further develop an algorithm that is able to find the optimal starting position of the FFT window for each subcarrier using a Gaussian kernel density function and a fixed sphere maximum likelihood detector. Through implementation and extensive evaluations using USRP and off-the-shelf IEEE 802.11g transmitters/interferers, we show the effectiveness of CPRecycle in significantly mitigating interference. CPRecycle only requires local modifications at the receiver and does not require changes in standards, making it incrementally deployable.
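The cyclic-prefix property underlying this observation can be verified in a few lines (illustrative N=64, CP=16 parameters; this is not the paper's interference-mitigation algorithm): for a clean symbol, starting the FFT window anywhere inside the cyclic prefix recovers the same subcarrier data up to a known per-subcarrier phase rotation, which leaves the choice of start position free to fight interference.

```python
import numpy as np

rng = np.random.default_rng(1)
N, CP = 64, 16
data = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), size=N)  # QPSK
time = np.fft.ifft(data)
tx = np.concatenate([time[-CP:], time])  # prepend the cyclic prefix

shift = 5                                # start the FFT window 5 samples early
window = tx[CP - shift: CP - shift + N]
rx = np.fft.fft(window)

# The early start circularly delays the symbol, multiplying subcarrier k
# by exp(-j*2*pi*k*shift/N); undo that known rotation.
k = np.arange(N)
recovered = rx * np.exp(2j * np.pi * k * shift / N)

print(np.allclose(recovered, data))  # the data symbols are intact
```

In the presence of interference, different window start positions fold the interfering signal in differently, which is the degree of freedom CPRecycle optimizes per subcarrier.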

Abstract

Licensed Shared Access (LSA) is a new shared spectrum access model that is gaining traction for unlocking incumbent spectrum to mobile network operators in a form similar to licensed spectrum, thus having the potential to alleviate the spectrum crunch below 6 GHz. Short-term spectrum auctions can pave the way for dynamic LSA in the future and create incentives for incumbents to voluntarily participate in the LSA model, thereby increasing spectrum availability. Different from existing auction schemes that are mostly based on the sealed-bid auction format, we consider an ascending bid format which is theoretically equivalent to a sealed bid format but comes with better behavioral properties. We develop a novel auction mechanism called GAVEL that follows the ascending bid auction format and is well-suited for the dynamic LSA context. GAVEL, besides being strategy-proof, satisfies the three additional desirable properties of supporting heterogeneous spectrum, fine-grained spectrum sharing and bidder privacy protection. In fact, GAVEL is the first mechanism to satisfy all these properties. Through simulation-based evaluations, GAVEL is shown to outperform two recently proposed schemes in terms of revenue, social welfare, number of winners and achieving high spectrum utilization while at the same time performing close to the LP-based optimal solution.
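A generic ascending-clock auction conveys the format (invented valuations; GAVEL itself additionally handles heterogeneous spectrum, fine-grained sharing and bidder privacy, none of which this toy reproduces): the price rises each round, bidders exit once it exceeds their valuation, and the clock stops when demand no longer exceeds supply.

```python
def ascending_auction(valuations, units, increment=1.0):
    """valuations: {bidder: per-unit value}; units: identical spectrum
    units on offer. Returns (winners, closing price)."""
    price = 0.0
    active = set(valuations)
    while len(active) > units:
        price += increment
        active = {b for b in active if valuations[b] >= price}
    return sorted(active), price

winners, price = ascending_auction({"op1": 10.0, "op2": 7.0, "op3": 4.0}, units=2)
print(winners, price)  # op3 drops out once the price passes its valuation
```

The behavioral appeal mentioned in the abstract comes from this format: bidders need only decide, round by round, whether the current price is still worth paying, rather than committing to a sealed bid up front.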

Abstract

The coexistence of LTE-Unlicensed (LTE-U) and WiFi in unlicensed spectrum is studied in the context of airtime sharing. We consider the core problem where a set of LTE-U cells from different operators share the same channel as a co-located WiFi access point (AP). We assume that LTE-U cells utilize Listen-Before-Talk (LBT) as the default channel access mechanism. Principally, we deal with the following question: how should an operator’s LTE-U cell adjust its contention window in order to provide fair coexistence both with WiFi and with the co-located LTE-U cells of other operators? We consider that LTE-U cells behave altruistically both among themselves and toward WiFi. Cooperation of LTE-U cells is studied using a coalition formation game framework based on the well-known Shapley value. We define a payoff configuration scheme in the coalition game which involves altruism. We prove that the coalitional game is always zero-monotonic, and that the Shapley value is also max-min fair. We compare the airtime sharing performance of the Shapley value with weighted proportional fairness via numerical results and show that the Shapley value provides much better fairness than proportional fairness, as determined by entropy and Jain’s index metrics, while having roughly equal average airtime.
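The Shapley value itself is straightforward to compute for small games; the toy below uses an invented characteristic function for a three-cell airtime game (the paper's payoff configuration and altruism terms are not reproduced here). Each cell's payoff is its average marginal contribution over all orders in which the coalition could form.

```python
from itertools import permutations

def shapley(players, v):
    """Shapley value by enumerating all join orders (fine for small games)."""
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            totals[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    return {p: t / len(orders) for p, t in totals.items()}

# Toy characteristic function: cooperating LTE-U cells avoid LBT
# collisions, so the coalition's joint airtime is superadditive.
airtime = {frozenset(): 0.0,
           frozenset("A"): 0.2, frozenset("B"): 0.2, frozenset("C"): 0.2,
           frozenset("AB"): 0.5, frozenset("AC"): 0.5, frozenset("BC"): 0.5,
           frozenset("ABC"): 0.9}
phi = shapley(["A", "B", "C"], lambda s: airtime[s])
print(phi)  # the game is symmetric, so each cell receives an equal share
```

In this symmetric example every cell receives the same airtime share, illustrating the fairness properties the paper establishes for its (richer, altruism-aware) payoff scheme.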

Abstract

Using the plethora of apps on smartphones and tablets entails giving them access to different types of privacy-sensitive information, including the device’s location. This can potentially compromise user privacy when app providers share user data with third parties (e.g., advertisers) for monetization purposes. In this paper, we focus on the interface for data sharing between app providers and third parties, and devise an attack that can break the strongest form of the commonly used anonymization method for protecting the privacy of users. More specifically, we develop a mechanism called Comber that, given completely anonymized mobility data (without any pseudonyms) as input, is able to identify different users and their respective paths in the data. Comber exploits the observation that the distribution of speeds is typically similar among different users and incorporates a generic, empirically derived histogram of user speeds to identify the users and disentangle their paths. Comber also benefits from two optimizations that allow it to reduce the path inference time for large datasets. We use two real datasets with mobile user location traces (Mobile Data Challenge and GeoLife) for evaluating the effectiveness of Comber and show that it can infer paths with greater than 90% accuracy on both datasets.
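The core intuition can be sketched as follows (invented points and a crude unimodal stand-in for the paper's empirically derived speed histogram): link each anonymized location sample to the candidate path under which its implied speed is most typical, starting a new path when no candidate is plausible.

```python
def implied_speed(p, q):
    """Speed implied by moving between two (t, x, y) samples."""
    (t1, x1, y1), (t2, x2, y2) = p, q
    dist = ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
    return dist / max(t2 - t1, 1e-9)

def speed_likelihood(s, typical=1.4, spread=1.0):
    """Crude stand-in for a speed histogram peaking near walking
    speed (1.4 m/s); the paper uses an empirical histogram instead."""
    return 1.0 / (1.0 + ((s - typical) / spread) ** 2)

def link_points(samples):
    """samples: time-ordered, unlabeled (t, x, y) tuples from ALL users.
    Greedily extend the path whose last point makes the sample's
    implied speed most likely; otherwise start a new path."""
    paths = []
    for sample in samples:
        best, best_score = None, 0.05  # plausibility floor for extending
        for path in paths:
            score = speed_likelihood(implied_speed(path[-1], sample))
            if score > best_score:
                best, best_score = path, score
        if best is None:
            paths.append([sample])
        else:
            best.append(sample)
    return paths

# Two interleaved walkers 1 km apart; implied speeds disentangle them.
samples = [(0, 0, 0), (0, 1000, 0), (10, 14, 0), (10, 1014, 0)]
print([len(p) for p in link_points(samples)])  # two paths of two points each
```

Assigning either point at t=10 to the wrong walker would imply an implausible ~100 m/s speed, so the speed prior alone separates the two paths, which is the disentangling effect Comber exploits at scale.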