
Showing papers on "Paging published in 2015"


Proceedings ArticleDOI
13 Jun 2015
TL;DR: Redundant Memory Mappings (RMM) is proposed, which leverages ranges of pages to provide an efficient, alternative representation of many virtual-to-physical mappings, reducing the overhead of virtual memory to less than 1% on average.
Abstract: Page-based virtual memory improves programmer productivity, security, and memory utilization, but incurs performance overheads due to costly page table walks after TLB misses. This overhead can reach 50% for modern workloads that access increasingly vast memory with stagnating TLB sizes. To reduce the overhead of virtual memory, this paper proposes Redundant Memory Mappings (RMM), which leverages ranges of pages to provide an efficient, alternative representation of many virtual-to-physical mappings. We define a range to be a subset of a process's pages that are virtually and physically contiguous. RMM translates each range with a single range table entry, enabling a modest number of entries to translate most of the process's address space. RMM operates in parallel with standard paging and uses a software range table and hardware range TLB with arbitrarily large reach. We modify the operating system to automatically detect ranges and to increase their likelihood with eager page allocation. RMM is thus transparent to applications. We prototype RMM software in Linux and emulate the hardware. RMM performs substantially better than paging alone and huge pages, and improves a wider variety of workloads than direct segments (one range per program), reducing the overhead of virtual memory to less than 1% on average.
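
The range translation the abstract describes can be sketched in a few lines. This is an illustrative Python model, not the paper's hardware: range-TLB entries are assumed to be (virt_base, virt_limit, phys_offset) triples, and the linear scan here stands in for a lookup that real hardware would perform in parallel with ordinary paging.

```python
# Illustrative sketch of RMM-style translation. A range maps every virtual
# address V with virt_base <= V < virt_limit to V + phys_offset, so one
# entry covers an arbitrarily large contiguous region; a miss falls back
# to the standard per-page table.
PAGE_SIZE = 4096

def translate(vaddr, range_tlb, page_table):
    # Range TLB lookup (hardware would do this in parallel with paging).
    for virt_base, virt_limit, phys_offset in range_tlb:
        if virt_base <= vaddr < virt_limit:
            return vaddr + phys_offset
    # Fall back to standard per-page translation on a range-TLB miss.
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    return page_table[vpn] * PAGE_SIZE + offset

range_tlb = [(0x10000, 0x50000, 0x200000)]   # one entry, 256 KiB of reach
page_table = {0x100: 0x7ab}                  # VPN 0x100 -> PFN 0x7ab

print(hex(translate(0x12345, range_tlb, page_table)))   # hits the range
print(hex(translate(0x100fff, range_tlb, page_table)))  # falls back to paging
```

A single entry here translates sixty-four pages; this is the effect that lets a modest range table cover most of an address space.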

150 citations


Proceedings ArticleDOI
Yutao Liu1, Tianyu Zhou1, Kexin Chen1, Haibo Chen1, Yubin Xia1 
12 Oct 2015
TL;DR: SeCage retrofits commodity hardware virtualization extensions to support efficient isolation of sensitive code manipulating critical secrets from the remaining code, and is designed to work under a strong adversary model where a victim application or even the OS may be controlled by the adversary, while supporting large-scale software with small deployment cost.
Abstract: Exploiting memory disclosure vulnerabilities like the HeartBleed bug may cause arbitrary reading of a victim's memory, leading to leakage of critical secrets such as crypto keys, personal identity and financial information. While isolating code that manipulates critical secrets into an isolated execution environment is a promising countermeasure, existing approaches are either too coarse-grained to prevent intra-domain attacks, or require excessive intervention from low-level software (e.g., hypervisor or OS), or both. Further, few of them are applicable to large-scale software with millions of lines of code. This paper describes a new approach, namely SeCage, which retrofits commodity hardware virtualization extensions to support efficient isolation of sensitive code manipulating critical secrets from the remaining code. SeCage is designed to work under a strong adversary model where a victim application or even the OS may be controlled by the adversary, while supporting large-scale software with small deployment cost. SeCage combines static and dynamic analysis to decompose monolithic software into several compartments, each of which may contain different secrets and their corresponding code. Following the idea of separating control and data plane, SeCage retrofits the VMFUNC mechanism and nested paging in Intel processors to transparently provide different memory views for different compartments, while allowing low-cost and transparent invocation across domains without hypervisor intervention. We have implemented SeCage in KVM on a commodity Intel machine. To demonstrate the effectiveness of SeCage, we deploy it to the Nginx and OpenSSH servers with the OpenSSL library, as well as CryptoLoop, with small effort. Security evaluation shows that SeCage can prevent the disclosure of private keys from HeartBleed attacks and memory scanning from rootkits. The evaluation shows that SeCage only incurs small performance and space overhead.

132 citations


Patent
06 Aug 2015
TL;DR: In this paper, a user equipment (UE) may determine when to monitor for downlink communications such as paging messages based on both a received extended idle discontinuous reception (eI-DRX) cycle and a change in a downlink channel reliability condition of the UE.
Abstract: A user equipment (UE) may determine when to monitor for downlink (DL) communications such as paging messages based on both a received extended idle discontinuous reception (eI-DRX) cycle and a change in a downlink channel reliability condition of the UE. A base station may also adjust its transmission of paging information to a UE based on an eI-DRX cycle.

41 citations


Patent
28 Sep 2015
TL;DR: In this article, the authors present a computer-implemented apparatus for controlling a power saving mode characteristic of a device on a network, which includes a non-transitory memory with instructions for controlling the power saving mode characteristic of the device and a processor operably coupled with the memory.
Abstract: The present application is directed to a computer-implemented apparatus for controlling a power saving mode characteristic of a device on a network. The apparatus includes a non-transitory memory with instructions for controlling the power saving mode characteristic of the device and a processor operably coupled thereto. The processor performs the step of receiving a request to update the characteristic of the device. The processor also performs the step of updating the characteristic of the device based upon the request. The processor further performs the step of sending an acknowledgment that the characteristic has been updated. The application is also directed to a computer-implemented apparatus on a network for supporting buffering and data handling for a power saving mode of a device on the network.

38 citations


Proceedings Article
08 Jul 2015
TL;DR: This paper presents the design and implementation of SecPod, a practical and extensible framework for virtualization-based security systems that can provide both strong isolation and the compatibility with modern hardware.
Abstract: The OS kernel is critical to the security of a computer system. Many systems have been proposed to improve its security. A fundamental weakness of those systems is that page tables, the data structures that control the memory protection, are not isolated from the vulnerable kernel, and are thus subject to tampering. To address that, researchers have relied on virtualization for reliable kernel memory protection. Unfortunately, such memory protection requires monitoring every update to the guest's page tables. This fundamentally conflicts with the recent advances in hardware virtualization support. In this paper, we propose SecPod, an extensible framework for virtualization-based security systems that can provide both strong isolation and compatibility with modern hardware. SecPod has two key techniques: paging delegation delegates and audits the kernel's paging operations to a secure space; execution trapping intercepts the (compromised) kernel's attempts to subvert SecPod by misusing privileged instructions. We have implemented a prototype of SecPod based on KVM. Our experiments show that SecPod is both effective and efficient.

34 citations


Journal ArticleDOI
TL;DR: A dynamic resource allocation (DRA) scheme which dynamically adjusts the reserved RAOs for group paging based on the estimated number of contending users in each RA slot is presented.
Abstract: Group paging is one of the solutions proposed to deal with the radio access network (RAN) overload problem resulting from bursty machine-type communications (MTC) traffic in long-term evolution-advanced (LTE-A) networks. In group paging, the base station normally reserves a fixed amount of random access opportunities (RAOs) for the grouped users to perform random access (RA) during a paging access interval. However, the number of contending users decreases quickly, and thus static allocation of RAOs is not efficient. This paper presents a dynamic resource allocation (DRA) scheme which dynamically adjusts the reserved RAOs for group paging based on the estimated number of contending users in each RA slot. Simulation results demonstrate that, compared with the traditional static allocation scheme, the proposed DRA scheme can improve the utilization of RAOs by 9% under a target access success probability constraint of 90%.
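
The dynamic-allocation idea can be sketched as follows. The traffic model (each contender picks one of R reserved RAOs uniformly; a RAO succeeds when exactly one user picks it) and the constants (54 preambles, 90% target) are illustrative assumptions, not the paper's exact DRA scheme.

```python
# Illustrative sketch of dynamic RAO allocation for group paging: reserve
# RAOs in each RA slot in proportion to the estimated number of contenders,
# instead of a fixed reservation for the whole paging access interval.
import math

def reserve_raos(estimated_users, target_success=0.9, max_raos=54):
    # With N users on R RAOs, per-user success prob. ~ exp(-N/R);
    # requiring exp(-N/R) >= p gives R >= N / ln(1/p).
    if estimated_users <= 0:
        return 0
    needed = math.ceil(estimated_users / math.log(1.0 / target_success))
    return min(needed, max_raos)

def expected_successes(users, raos):
    # Expected singleton RAOs (chosen by exactly one user) ~ N * e^(-N/R).
    return users * math.exp(-users / raos) if raos else 0.0

# As contenders drain, the reservation shrinks with them.
remaining = 100.0
for slot in range(5):
    r = reserve_raos(round(remaining))
    done = expected_successes(remaining, r)
    print(f"slot {slot}: users~{remaining:5.1f}, RAOs={r}, successes~{done:.1f}")
    remaining -= done
```

The loop shows why static allocation wastes RAOs: after the first few RA slots, far fewer opportunities are needed than at the start.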

32 citations


Journal ArticleDOI
Seongwook Jin1, Jeongseob Ahn1, Jinho Seol1, Sanghoon Cha1, Jaehyuk Huh1, Seungryoul Maeng1 
TL;DR: A HW-based approach is proposed to protect guest VMs even under an untrusted hypervisor: memory isolation is provided by secure hardware, which is much less vulnerable than the software hypervisor.
Abstract: With increasing demands on cloud computing, protecting guest virtual machines (VMs) from malicious attackers has become critical to provide secure services. The current cloud security model with software-based virtualization relies on the invulnerability of the software hypervisor and its trustworthy administrator with the root permission. However, compromising the hypervisor with remote attacks or root permission grants the attackers full access to the memory and context of a guest VM. This paper proposes a HW-based approach to protect guest VMs even under an untrusted hypervisor. With the proposed mechanism, memory isolation is provided by the secure hardware, which is much less vulnerable than the software hypervisor. The proposed mechanism extends the current hardware support for memory virtualization based on nested paging with a small extra hardware cost. The hypervisor can still flexibly allocate physical memory pages to virtual machines for efficient resource management. In addition to the system design for secure virtualization, this paper presents a prototype implementation using system management mode. Although the current system management mode is not intended for security functions and thus limits the performance and complete protection, the prototype implementation proves the feasibility of the proposed design.

31 citations


Patent
28 Sep 2015
TL;DR: In this paper, a processor including logic to execute an instruction to synchronize a mapping from a physical address of a guest of a virtualization based system (guest physical address) to a host physical address (host physical address), and stored in a translation lookaside buffer (TLB), with a corresponding mapping stored in an extended paging table (EPT) of the virtualization-based system.
Abstract: A processor including logic to execute an instruction to synchronize a mapping from a physical address of a guest of a virtualization based system (guest physical address) to a physical address of the host of the virtualization based system (host physical address), and stored in a translation lookaside buffer (TLB), with a corresponding mapping stored in an extended paging table (EPT) of the virtualization based system.

29 citations


Patent
Chih-Yuan Tsai1, Chi-Chen Lee1
17 Nov 2015
TL;DR: In this article, a wireless communication method for Multi-SIM dual standby (DSDS) technology is applied on Multi-SIM user equipment (UE) which is capable of carrier aggregation (CA) or dual connectivity (DuCo), which includes the steps of determining whether a packet switch (PS) or circuit switch (CS) paging is received on a second SIM card when a first PS call is ongoing on a first SIM card.
Abstract: A wireless communication method and Multi-SIM user equipment are provided. The wireless communication method for Multi-SIM dual standby (DSDS) technology is applied on Multi-SIM user equipment (UE) which is capable of carrier aggregation (CA) or dual connectivity (DuCo). The wireless communication method includes the steps of determining whether a packet switch (PS) or circuit switch (CS) paging is received on a second SIM card when a first PS call is ongoing in a first SIM card; and suspending the first PS call which is ongoing on the first SIM card if the packet switch (PS) or circuit switch (CS) paging is received on the second SIM card.

26 citations


Journal ArticleDOI
TL;DR: Results show that the proposed pre-backoff method can effectively enhance the performance of group paging and reduce the collision probability of random access requests.
Abstract: Group paging can simultaneously activate hundreds of user equipments (UEs) using a single paging message. Upon receiving the group paging message, all UEs should immediately transmit their paging response messages through the random access channels (RACHs). Simultaneous channel access from a huge group of UEs may result in severe collisions on the RACHs during a very short period of time. In this paper, we propose a pre-backoff method to reduce the collision probability of random access requests. We develop an analytical model to investigate the performance and optimize the setting of the pre-backoff method. The accuracy of the analytical model is verified through computer simulation. Results show that the proposed pre-backoff method can effectively enhance the performance of group paging.
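
The effect can be reproduced with a small Monte-Carlo sketch. The collision model (a preamble fails when two or more UEs pick it in the same slot) and all parameters below are illustrative assumptions, not the paper's analytical model.

```python
# Sketch of the pre-backoff idea: instead of all paged UEs transmitting in
# the first RA slot, each UE first draws a uniform random delay, spreading
# the initial load across several slots.
import random

def first_slot_collision_rate(n_ues, n_preambles, pre_backoff_slots,
                              trials=2000, seed=7):
    rng = random.Random(seed)
    collided = total = 0
    for _ in range(trials):
        # UEs that end up active in slot 0 after their pre-backoff draw.
        active = sum(1 for _ in range(n_ues)
                     if rng.randrange(pre_backoff_slots) == 0)
        picks = [rng.randrange(n_preambles) for _ in range(active)]
        for p in set(picks):                 # each used preamble...
            total += 1
            if picks.count(p) > 1:           # ...fails if picked twice+
                collided += 1
    return collided / total if total else 0.0

no_bo = first_slot_collision_rate(100, 54, 1)    # everyone in slot 0
with_bo = first_slot_collision_rate(100, 54, 8)  # spread over 8 slots
print(f"collision rate without pre-backoff: {no_bo:.2f}")
print(f"collision rate with pre-backoff:    {with_bo:.2f}")
```

With 100 UEs and 54 preambles the first-slot collision rate drops sharply once the load is spread, which is the mechanism the paper analyzes and optimizes.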

25 citations


Patent
24 Aug 2015
TL;DR: In this article, various mechanisms for paging link-budget-limited (LBL) devices are disclosed, where the network node informs a base station of the device's LBL status as part of a paging message, and the device is paged with a dedicated P-RNTI and/or with particular boosting.
Abstract: Various mechanisms for paging link-budget-limited (LBL) devices are disclosed. The network node informs a base station of the device's LBL status as part of a paging message, and the device is paged with a dedicated P-RNTI and/or with particular boosting.

Patent
05 Aug 2015
TL;DR: In this article, the authors describe an extended DRX (e-DRX) operation using hyper frame extension signaling, which may extend the system frame number (SFN) range while maintaining backward compatibility for legacy devices not configured to use the extended SFN range.
Abstract: Extended DRX (e-DRX) operation using hyper frame extension signaling is described. The hyper frame extension signaling may extend the system frame number (SFN) range while maintaining backward compatibility for legacy devices not configured to use the extended SFN range. The hyper-SFN extension signaling may include an index to a hyper-SFN transmitted as part of system information different from that used for transmission of the SFN. UEs configured to use the hyper-SFN may effectively use a longer or extended SFN range that includes the legacy SFN range and the hyper-SFN range. The hyper-SFN extension may be used in an extended idle DRX (eI-DRX) mode which may coexist with the existing I-DRX mode on the same paging resources. Additionally or alternatively, paging may be differentiated for eI-DRX mode UEs using separate paging occasions or a new paging radio network temporary identifier (RNTI).
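
The frame-numbering arithmetic behind this can be illustrated directly. The constants follow LTE numbering (a 1024-frame SFN range, 10 ms radio frames), but the paging-frame condition shown is a simplified stand-in, not the actual 3GPP formula.

```python
# Sketch of the hyper-SFN extension: the legacy SFN wraps every 1024 radio
# frames (10.24 s), so any DRX cycle longer than that needs an extra field
# (the hyper-SFN) to tell the wraps apart.
SFN_RANGE = 1024          # legacy SFN wraps at 1024 frames
FRAME_MS = 10             # one radio frame = 10 ms

def extended_sfn(hyper_sfn, sfn):
    # Legacy UEs see only `sfn`; eI-DRX UEs combine both fields.
    return hyper_sfn * SFN_RANGE + sfn

def is_paging_frame(hyper_sfn, sfn, edrx_cycle_frames, ue_offset):
    # Simplified illustration of a per-UE paging-frame condition.
    return extended_sfn(hyper_sfn, sfn) % edrx_cycle_frames == ue_offset

# A 40.96 s eI-DRX cycle spans 4096 frames, i.e. four full legacy SFN wraps:
# with sfn=0 in both cases, only the hyper-SFN distinguishes the frames.
cycle = 4096
print(is_paging_frame(hyper_sfn=3, sfn=0, edrx_cycle_frames=cycle, ue_offset=0))
print(is_paging_frame(hyper_sfn=4, sfn=0, edrx_cycle_frames=cycle, ue_offset=0))
```

The two prints differ even though the legacy SFN is identical, which is exactly why legacy devices can keep using the short SFN while eI-DRX UEs use the extended range.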


Journal ArticleDOI
TL;DR: A new online variant of caching, called caching with rejection, is studied; deterministic and randomized algorithms are designed for this problem, and a lower bound of 2k+1 is presented on the competitive ratio of any deterministic algorithm for the variant with rejection.
Abstract: In the file caching problem, the input is a sequence of requests for files out of a slow memory. A file has two attributes, a positive retrieval cost and an integer size. An algorithm is required to maintain a cache of size k such that the total size of files stored in the cache never exceeds k. Given a request for a file that is not present in the cache at the time of request, the file must be brought from the slow memory into the cache, possibly evicting other files from the cache. This incurs a cost equal to the retrieval cost of the requested file. Well-known special cases include paging (all costs and sizes are equal to 1), the cost model, also known as weighted paging (all sizes are equal to 1), the fault model (all costs are equal to 1), and the bit model (the cost of a file is equal to its size). If bypassing is allowed, a miss for a file still results in an access to this file in the slow memory, but its subsequent insertion into the cache is optional. We study a new online variant of caching, called caching with rejection. In this variant, each request for a file has a rejection penalty associated with the request. The penalty of a request is given to the algorithm together with the request. When a file that is not present in the cache is requested, the algorithm must either bring the file into the cache, paying the retrieval cost of the file, or reject the file, paying the rejection penalty of the request. The objective function is the sum of the total rejection penalty and the total retrieval cost. This problem generalizes both caching and caching with bypassing. We design deterministic and randomized algorithms for this problem. The competitive ratio of the randomized algorithm is O(log k), and this is optimal up to a constant factor. In the deterministic case, a k-competitive algorithm for caching, and a (k+1)-competitive algorithm for caching with bypassing are known. Moreover, these are the best possible competitive ratios.
In contrast, we present a lower bound of 2k+1 on the competitive ratio of any deterministic algorithm for the variant with rejection. The lower bound is valid already for paging. We design a (2k+2)-competitive algorithm for caching with rejection. We also design a different (2k+1)-competitive algorithm, that can be used for paging and for caching in the bit and fault models.
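
The setting can be made concrete with a toy simulator for the unit-size case (the paging special case). The rule below — reject when the penalty is below the retrieval cost, otherwise fetch and evict the least-recently-used page — is a simple illustrative heuristic, not the (2k+1)-competitive algorithm from the paper.

```python
# Toy model of caching with rejection, unit sizes. Each request carries its
# own rejection penalty; on a miss the algorithm either pays the file's
# retrieval cost to cache it, or pays the penalty to reject the request.
from collections import OrderedDict

def serve(requests, k):
    """requests: list of (file_id, retrieval_cost, rejection_penalty)."""
    cache = OrderedDict()        # file_id -> None, ordered by recency
    cost = 0
    for fid, retrieval, penalty in requests:
        if fid in cache:
            cache.move_to_end(fid)      # hit: free
            continue
        if penalty < retrieval:
            cost += penalty             # cheaper to reject this request
            continue
        cost += retrieval               # fetch into cache...
        if len(cache) >= k:
            cache.popitem(last=False)   # ...evicting the LRU file if full
        cache[fid] = None
    return cost

reqs = [("a", 5, 1), ("b", 2, 9), ("b", 2, 9), ("a", 5, 1)]
print(serve(reqs, k=1))  # rejects "a" twice (1+1), fetches "b" once (2) -> 4
```

Note how rejection changes the structure of the problem: unlike plain paging, a good strategy may deliberately never cache a file whose requests are cheap to decline, which is what pushes the deterministic lower bound from k up to 2k+1.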

Patent
Zheng Yu1, Fang Nan1
24 Aug 2015
TL;DR: In this paper, the authors present a paging optimization method, apparatus, and system to ensure that a terminal normally receives paging messages sent by a system: an access network node receives a first paging message, used to page a terminal, from a core network node; determines, in response, a transmission parameter according to channel loss information of the terminal; and sends control information with enhanced coverage to the terminal according to the transmission parameter.
Abstract: Embodiments of the present invention, relating to the communications field, disclose a paging optimization method, apparatus, and system, so as to ensure that a terminal normally receives a paging message sent by a system. A specific solution carried out by an access network node is: receiving a first paging message sent by a core network node, where the first paging message is used to page a terminal; in response to the first paging message, determining a transmission parameter according to channel loss information of the terminal; and sending control information with enhanced coverage to the terminal according to the transmission parameter, wherein the control information with enhanced coverage is used to schedule a second paging message with enhanced coverage. The present invention is used in a paging optimization process.

Proceedings ArticleDOI
08 Jun 2015
TL;DR: Numerical results demonstrate that TSFGP highly improves the performance of GP in terms of several performance metrics, such as success probability, collision probability, and access delay.
Abstract: Machine-Type-Communication (MTC) is a promising service of the envisioned 5G mobile networks. However, deploying a massive number of MTC devices in these networks remains a challenge due to the overload that may appear at the Radio Access Network (RAN), hence degrading the Quality of Service (QoS) for both MTC and non-MTC devices. One of the methods used to address the congestion problem in the RAN is Group Paging (GP), wherein a single message is used to activate a group of devices. Whilst the GP method has several advantages, its performance quickly decreases when the number of MTC devices increases. In this paper, we devise a new method, namely Traffic Scattering For Group Paging (TSFGP), to improve the performance of the GP method for massive deployment of MTC devices. Numerical results demonstrate that TSFGP highly improves the performance of GP in terms of several performance metrics, such as success probability, collision probability, and access delay.

Journal ArticleDOI
TL;DR: The nested segmentation, flat page tables, and speculative shadowing improve a state-of-the-art 2D page walker by 10, 7, and 14 percent respectively.
Abstract: Recently, there have been several improvements in architectural support for two-level address translation in virtualized systems. However, those improvements, including HW-based two-dimensional (2D) page walkers, have extended the traditional multi-level page tables without considering the memory management characteristics of virtual machines. This paper exploits the unique behaviors of the hypervisor, and proposes three new nested address translation schemes for virtualized systems. The first scheme, called nested segmentation, is designed for static memory allocation, and uses HW segmentation to map the VM memory directly to large contiguous memory regions. The second scheme proposes to use a flat nested page table for each VM, reducing memory accesses by the current 2D page walkers. The third scheme uses speculative inverted shadow paging, backed by non-speculative flat nested page tables. The speculative mechanism provides direct translation with a single memory reference for common cases without page table synchronization overheads. We evaluate the proposed schemes with the Xen hypervisor running on a full system simulator. Nested segmentation can reduce the overheads of two-level translation significantly for a certain cloud computing model. The nested segmentation, flat page tables, and speculative shadowing improve a state-of-the-art 2D page walker by 10, 7, and 14 percent respectively.
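
The memory-reference counts behind these schemes can be worked out directly. With g guest page-table levels and h nested (host) levels, a 2D walk costs (g+1)(h+1)-1 = g·h + g + h references, because every guest-level pointer and the final guest physical address must each be translated through all host levels; a flat nested table corresponds to h = 1.

```python
# Worked count of memory references in a two-dimensional page walk,
# assuming radix page tables at both levels (the x86-64 case is g = h = 4).
def walk_refs(guest_levels, host_levels):
    # Each of the g guest references, plus the final guest physical
    # address, is itself walked through the h host levels.
    return guest_levels * host_levels + guest_levels + host_levels

print(walk_refs(4, 4))  # conventional 2D walk on x86-64: 24 references
print(walk_refs(4, 1))  # flat (single-level) nested table: 9 references
```

Shadow paging needs a single reference in the common case, which is why the paper backs its speculative inverted shadow scheme with flat nested tables rather than a full 2D walk.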

Patent
14 Jan 2015
TL;DR: In this paper, a method, computer program, network control node, user equipment and base station are disclosed which allow a wireless communication network to support different types of user equipment which have particular signalling requirements in particular low complexity devices that require signals having low transport block sizes and those that require a coverage enhanced mode where messages are repeated.
Abstract: A method, computer program, network control node, user equipment and base station are disclosed which allow a wireless communication network to support different types of user equipment which have particular signalling requirements In particular, low complexity devices that require signals having low transport block sizes and those that require a coverage enhanced mode where messages are repeated are supported Information regarding their particular capabilities are transmitted to and stored in the network control node which then transmits this information as paging information with any paging request

Patent
19 Aug 2015
TL;DR: In this paper, a page file corresponding to a page to be displayed indicates the page elements to be loaded and their priorities: a first page element, inside a visual area or a setting display area, is loaded with higher priority than a second page element outside that area.
Abstract: The invention provides a method and apparatus for loading pages. The method comprises acquiring a page file corresponding to a page to be displayed, loading a first page element and a second page element according to the page file, and outputting the loaded page to be displayed. The page file is used for indicating page elements to be loaded in the page to be displayed. The priority of the first page element is higher than that of the second page element; the first page element represents a page element to be loaded in a visual area or a setting display area, and the second page element represents a page element to be loaded out of the visual area or the setting display area. By adopting the method and apparatus for loading pages, the technical issue of slow loading of page elements in the visual area or setting display area in present page loading schemes is solved.

Patent
14 Jul 2015
TL;DR: In this article, the authors proposed a method and system for influencing operation of a plurality of UEs in an LTE wireless network, which comprises partitioning the UEs into a plurality and determining a non-paging group including one or more of the plurality of groups which include UEs that will not be paged in an upcoming paging occasion.
Abstract: The present invention provides a method and system for influencing operation of a plurality of UEs in an LTE wireless network. The method comprises partitioning the plurality of UEs into a plurality of groups; determining a non-paging group including one or more of the plurality of groups which include UEs that will not be paged in an upcoming paging occasion; transmitting a message to the plurality of UEs indicative of said non-paging group; and for UEs belonging to said non-paging group, entering a sleep mode upon successful reception of the message.

Book ChapterDOI
24 Jan 2015
TL;DR: The isolation properties of a hypervisor design that uses direct paging are verified; formalization and proof are done in the HOL4 theorem prover, allowing re-use of the existing HOL4 ARMv7 model developed in Cambridge.
Abstract: In order to host a general purpose operating system, hypervisors need to virtualize the CPU memory subsystem. This entails dynamically changing MMU resources, in particular the page tables, to allow a hosted OS to reconfigure its own memory. In this paper we present the verification of the isolation properties of a hypervisor design that uses direct paging. This virtualization approach makes it possible to host commodity OSs without requiring either shadow data structures or specialized hardware support. Our verification targets a system consisting of a commodity CPU for embedded devices (ARMv7), a hypervisor and an untrusted guest running Linux. The verification involves three steps: (i) formalization of an ARMv7 CPU that includes the MMU; (ii) formalization of a system behavior that includes the hypervisor and the untrusted guest; (iii) verification of the isolation properties. Formalization and proof are done in the HOL4 theorem prover, thus allowing re-use of the existing HOL4 ARMv7 model developed in Cambridge.

Patent
02 Nov 2015
TL;DR: In this paper, a method for improved tracking area planning and handling is proposed, comprising assigning a single tracking area code to a plurality of eNodeBs at a messaging concentrator gateway, the gateway situated in a network between the plurality of eNodeBs and the core network, and storing, at the gateway, at least one indicator of a last known location of a user equipment (UE).
Abstract: A method is disclosed for improved tracking area planning and handling, comprising: assigning a single tracking area code to a plurality of eNodeBs at a messaging concentrator gateway, the messaging concentrator gateway situated in a network between the plurality of eNodeBs and the core network; storing, at the messaging concentrator gateway, at least one indicator of a last known location of a user equipment (UE) other than the single tracking area code; receiving a paging message from the core network at the messaging concentrator gateway for a UE; and performing a paging sequence using the at least one indicator to identify a set of eNodeBs to page the UE, thereby allowing larger tracking area list sizes to be used without increasing signaling traffic between the radio access network and the core network.

Patent
14 Jan 2015
TL;DR: In this paper, the paging schedule for a public land mobile network (PLMN) search is disclosed, where the UE can initiate a search for a second PLMN between consecutive paging occasions, and may read information blocks on a broadcast channel of a cell of the second PLMN.
Abstract: In various aspects, the disclosure provides user equipment (UE) capable of conducting a public land mobile network (PLMN) search by determining a paging schedule for a serving cell of the UE, the serving cell being associated with a first PLMN and the paging schedule defining one or more paging occasions. The UE may initiate a search for a second PLMN between consecutive paging occasions, and may read information blocks on a broadcast channel of a cell of the second PLMN. The UE may discontinue reading a partially-read information block when the partially-read information block is scheduled for transmission at least partially concurrently with a paging occasion on the serving cell if the partially-read information block does not include information for identifying the second PLMN. The UE may ignore the first paging occasion when the partially-read information block includes the information for identifying the second PLMN.

Patent
Sangbum Kim1, Jaehyuk Jang1, Kyeong-In Jeong1, Gert Jan Van Lieshout1, Soeng-Hun Kim1 
23 Jun 2015
TL;DR: In this paper, a method for receiving a paging message in a wireless communication system supporting MTC comprises the steps of determining whether a terminal is within normal coverage (NC) or extended coverage (EC), and transmitting, to a network, state information including EC function support information and/or area display information of the NC or the EC according to the determination result.
Abstract: The present invention relates to a method and a device for machine type communication in a wireless communication system. According to one embodiment of the present invention, a method for receiving a paging message in a wireless communication system supporting MTC comprises the steps of: determining whether a terminal is within normal coverage (NC) or extended coverage (EC); transmitting, to a network, state information including EC function support information and/or area display information of the NC or the EC according to the determination result; determining a paging receiving time according to an operation mode related to the state information; and receiving the paging message according to the determined paging receiving time.

Patent
18 Feb 2015
TL;DR: In this article, a set of memory pages from a working set of a program process are compressed into a compressed store prior to being written to a page file, after which the memory pages can be repurposed by a memory manager.
Abstract: A set of memory pages from a working set of a program process, such as at least some of the memory pages that have been modified, are compressed into a compressed store prior to being written to a page file, after which the memory pages can be repurposed by a memory manager. The compressed store is made up of multiple memory pages, and the compressed store memory pages can be repurposed by the memory manager after being written to the page file. Subsequent requests from the memory manager for memory pages that have been compressed into a compressed store are satisfied by accessing the compressed store memory pages (including retrieving the compressed store memory pages from the page file if written to the page file), decompressing the requested memory pages, and returning the requested memory pages to the memory manager.
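
The compress-before-pagefile flow can be sketched as below. zlib and the class structure are illustrative stand-ins, not the patent's implementation; the point is only that a reclaim later in time is served by decompression instead of a page-file read.

```python
# Minimal sketch of a compressed store: modified working-set pages are
# compressed into an in-memory store before any page-file write, so the
# original page frames can be repurposed immediately and later faults are
# satisfied by decompression rather than slow-storage I/O.
import zlib

class CompressedStore:
    def __init__(self):
        self._store = {}                  # page_number -> compressed bytes

    def compress_in(self, page_number, data: bytes):
        self._store[page_number] = zlib.compress(data)
        # At this point the original page frame could be repurposed
        # by the memory manager.

    def page_in(self, page_number) -> bytes:
        # Satisfy a subsequent request from the store, not the page file.
        return zlib.decompress(self._store.pop(page_number))

store = CompressedStore()
page = b"A" * 4096                        # a highly compressible page
store.compress_in(7, page)
print(len(store._store[7]))               # far smaller than the 4096-byte page
assert store.page_in(7) == page           # round-trips losslessly
```

In the patent's scheme the compressed store's own memory pages can in turn be written to the page file and repurposed; the sketch above keeps only the in-memory half of that pipeline.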

Patent
Yuanyuan Zhang1, Yu-Syuan Jheng, Feifei Sun, Li Chen, I-Kang Fu1 
08 May 2015
TL;DR: In this article, a paging area is used for CE UEs requiring coverage extension/coverage enhancement, where the UE reports its CE status to the MME, and the eNB stores UE CE information and forwards it to neighboring eNBs in the same paging area.
Abstract: Methods and apparatus are provided for paging transmission and reception for UEs requiring coverage extension/coverage enhancement. In one novel aspect, the UE reports the CE status to the MME. CE level related information and the corresponding cell ID are provided from eNB to MME. MME sends paging information including the repetition number to all eNBs in the corresponding tracking area when paging the UE. In another novel aspect, a paging area is used for CE UEs. The UE receives paging area information, notifies the network, and updates the stored paging area information upon detecting changes between the received and the stored paging areas. In another embodiment, the UE reports its CE status upon detecting CE status changes. The eNB stores UE CE information and forwards it to neighboring eNBs in the same paging area. The eNB pages UEs on its CE UE list with repetition while paging other UEs normally.

Patent
29 Jul 2015
TL;DR: In this article, a method and a device for transmitting a paging message are described, comprising the steps of determining the total number M of repeat transmissions of the paging message within a paging period T and the number n of repeat transmissions of the paging message on each wireless frame; determining an initial reference wireless frame time difference configuration parameter in an initial reference wireless frame calculation formula; and notifying the determined parameter to the user equipment (UE).
Abstract: The invention provides a method and a device for transmitting a paging message. The method comprises the steps of determining the total number M of repeat transmissions of the paging message within a paging period T and the number n of repeat transmissions of the paging message on each wireless frame; determining an initial reference wireless frame time difference configuration parameter in an initial reference wireless frame calculation formula and notifying the determined parameter to the user equipment (UE), wherein the time difference corresponding to this parameter is not less than ceiling(M/n) wireless frames; determining an initial reference wireless frame of the paging message according to the determined parameter; determining the sub-frames in which the paging message is transmitted according to the determined initial reference wireless frame, M, and n; and transmitting the paging message at each determined sub-frame position on a repeated physical downlink shared channel (PDSCH). According to the invention, the problem of repeat transmission of the paging message under coverage enhancement is solved, and system scheduling is simplified.
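The arithmetic behind the ceiling(M/n) bound is simple to work through: M total repetitions at n repetitions per wireless frame occupy ceiling(M/n) consecutive frames, so the configured time difference must cover at least that span. A minimal sketch, with illustrative function names:

```python
import math

def min_time_difference(M, n):
    """Lower bound on the configured time difference, in wireless frames.

    M repetitions at n per frame fill ceiling(M/n) frames, so the time
    difference must be at least that many frames (per the abstract).
    """
    return math.ceil(M / n)

def paging_frames(initial_ref_frame, M, n):
    """Wireless frames occupied by the M repetitions (illustrative layout)."""
    return [initial_ref_frame + k for k in range(math.ceil(M / n))]

assert min_time_difference(10, 4) == 3           # ceil(10/4) = 3
assert paging_frames(100, 10, 4) == [100, 101, 102]
```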

Patent
29 Jun 2015
TL;DR: In this paper, an evolved Node-B (eNB) may transmit multiple EC paging messages to user equipment (UE) over at least one paging cycle, where each EC paging message may contain the same paging information.
Abstract: Devices and methods of enhanced coverage (EC) paging are generally described. An evolved Node-B (eNB) may transmit multiple EC paging messages to user equipment (UE) over at least one paging cycle. Each EC paging message may contain the same paging information. The UE may combine the individual EC paging messages to achieve a predetermined link budget and subsequently may decode the EC combined paging message to determine whether the combined paging message is directed to the UE. The EC paging messages may contain information for more than one UE and a legacy P-RNTI or a specific P-RNTI for EC mode UEs. The EC paging messages may be transmitted in legacy occasions over several paging cycles or non-legacy paging occasions over one or more paging cycles. The EC paging messages may be transmitted in continuous or non-continuous subframes in a particular paging cycle.
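The combining step the abstract relies on (identical repetitions accumulated at the UE to close the link budget) is often done by averaging soft received values before a hard decision. A minimal sketch under that assumption; the sample values are invented:

```python
def combine_copies(copies):
    """Average soft values from repeated transmissions of the same message."""
    n = len(copies)
    return [sum(vals) / n for vals in zip(*copies)]

def hard_decide(soft):
    # Positive soft value -> bit 1, otherwise bit 0.
    return [1 if s > 0 else 0 for s in soft]

# Three noisy copies of the soft values for bits [1, 0, 1]:
copies = [[0.9, -1.1, 0.2],
          [1.2,  0.3, 0.8],   # the middle sample of this copy looks flipped
          [0.6, -0.7, 1.1]]
combined = combine_copies(copies)
assert hard_decide(combined) == [1, 0, 1]   # averaging recovers the bits
```

Averaging N independent copies improves the effective SNR, which is why more repetitions extend coverage.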

Patent
Lixue Zhang1, Zhenxing Hu1
05 May 2015
TL;DR: In this paper, a UE paging method, a base station, and a UE are disclosed, which includes receiving a paging message that is used for paging the UE and delivered by a core network side.
Abstract: A UE paging method, a base station, and a UE are disclosed. The method includes receiving, by the base station, a paging message that is used for paging the UE and delivered by a core network side, where the paging message includes an eDRX cycle that serves as a first parameter, and a second parameter identifying the number of super frames for which the normal state lasts in the eDRX cycle; calculating, by the base station according to a UE identifier of the UE, the first parameter, and the second parameter, a super frame used for paging the UE; and if it is determined that the current super frame matches the super frame used for paging the UE, that is, the current super frame is in the normal state, delivering, by the base station, the paging message to the UE in that super frame.
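The matching calculation can be sketched as below. The abstract does not give the exact formula, so this uses a common UE-ID-modulo pattern purely as an illustration; the parameter names mirror the abstract's "first parameter" (eDRX cycle) and "second parameter" (super frames in the normal state):

```python
def is_paging_superframe(current_sf, ue_id, edrx_cycle, normal_count):
    """True if the current super frame is one in which this UE is paged.

    edrx_cycle   -- first parameter: eDRX cycle length, in super frames
    normal_count -- second parameter: super frames in the normal state
    (the modulo formula is illustrative, not taken from the patent)
    """
    offset = ue_id % edrx_cycle            # UE-specific position in the cycle
    phase = (current_sf - offset) % edrx_cycle
    return phase < normal_count            # within the normal-state window

assert is_paging_superframe(current_sf=35, ue_id=3, edrx_cycle=16, normal_count=2)
assert not is_paging_superframe(current_sf=40, ue_id=3, edrx_cycle=16, normal_count=2)
```

The base station delivers the paging message only when this check succeeds; otherwise the UE is assumed to be sleeping for the remainder of the eDRX cycle.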

Patent
22 Sep 2015
TL;DR: In this paper, a neighbor aware network (NAN) is considered, in which a message is transmitted from a first electronic device to a second electronic device of the NAN via a first communication channel of a plurality of communication channels during a discovery window.
Abstract: A method of communication includes transmitting a message from a first electronic device of a neighbor aware network (NAN) to a second electronic device of the NAN via a first communication channel of a plurality of communication channels during a discovery window. The message indicates that the first electronic device is available to communicate. The method also includes monitoring a second communication channel of the plurality of communication channels during a first paging window of a transmission window. The first paging window includes a beginning portion of the transmission window, and electronic devices of the NAN are in an active state during the first paging window.
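The timing rule in the abstract (the paging window is the beginning portion of each transmission window, during which devices stay active) can be sketched as a simple check; the window lengths are invented for illustration:

```python
TRANSMISSION_WINDOW_MS = 512  # illustrative transmission-window length
PAGING_WINDOW_MS = 64         # beginning portion of each transmission window

def in_paging_window(t_ms):
    """True while NAN devices must be in the active state to receive paging."""
    offset = t_ms % TRANSMISSION_WINDOW_MS
    return offset < PAGING_WINDOW_MS

assert in_paging_window(0)        # start of a transmission window
assert in_paging_window(63)
assert not in_paging_window(64)   # past the paging window: device may sleep
assert in_paging_window(512)      # next transmission window begins
```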