
Showing papers by "Deutsche Telekom published in 2008"


Proceedings ArticleDOI
26 May 2008
TL;DR: An efficient distributed algorithm is presented to construct multiple disjoint sensor barriers on long strip areas of irregular shape, without any constraint on crossing paths, so that a long boundary region can be covered.
Abstract: Constructing sensor barriers to detect intruders crossing a randomly-deployed sensor network is an important problem. Early results have shown how to construct sensor barriers to detect intruders moving along restricted crossing paths in rectangular areas. We present a complete solution to this problem for sensors that are distributed according to a Poisson point process. In particular, we present an efficient distributed algorithm to construct sensor barriers on long strip areas of irregular shape without any constraint on crossing paths. Our approach is as follows: We first show that in a rectangular area of width w and length l with w = Ω(log l), if the sensor density reaches a certain value, then there exist, with high probability, multiple disjoint sensor barriers across the entire length of the area such that intruders cannot cross the area undetected. On the other hand, if w = o(log l), then with high probability there is a crossing path not covered by any sensor regardless of the sensor density. We then devise, based on this result, an efficient distributed algorithm to construct multiple disjoint barriers in a large sensor network to cover a long boundary area of an irregular shape. Our algorithm approximates the area by dividing it into horizontal rectangular segments interleaved by vertical thin strips. Each segment and vertical strip independently computes the barriers in its own area. Constructing "horizontal" barriers in each segment connected by "vertical" barriers in neighboring vertical strips, we achieve continuous barrier coverage for the whole region. Our approach significantly reduces delay, communication overhead, and computation costs compared to centralized approaches. Finally, we implement our algorithm and carry out a number of experiments to demonstrate the effectiveness of constructing barrier coverage.
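The segment-level test the algorithm relies on — whether the sensing disks deployed in a rectangle form a barrier spanning its length — reduces to left-to-right connectivity in the graph of overlapping disks. A toy sketch of that reduction (the function name, coordinates, and radius below are illustrative, not from the paper):

```python
import math
from collections import deque

def has_horizontal_barrier(sensors, length, radius):
    # A barrier across x = 0..length exists iff the graph of pairwise
    # overlapping sensing disks connects a disk touching the left edge
    # to a disk touching the right edge.
    n = len(sensors)
    left = [i for i, (x, _) in enumerate(sensors) if x - radius <= 0]
    right = {i for i, (x, _) in enumerate(sensors) if x + radius >= length}
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            (xi, yi), (xj, yj) = sensors[i], sensors[j]
            if math.hypot(xi - xj, yi - yj) <= 2 * radius:
                adj[i].append(j)
                adj[j].append(i)
    seen, queue = set(left), deque(left)
    while queue:                      # BFS from the left-edge disks
        u = queue.popleft()
        if u in right:
            return True
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False
```

Running this independently in each horizontal segment, and analogously in the vertical strips, is the distributed step the abstract describes.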

285 citations


Proceedings ArticleDOI
09 Dec 2008
TL;DR: LISP-DHT is designed to take full advantage of the DHT architecture in order to build an efficient and secure mapping lookup system while preserving the locality of the mappings.
Abstract: Recent activities in the IRTF (Internet Research Task Force), and in particular in the Routing Research Group (RRG), focus on defining a new Internet architecture in order to solve scalability issues related to interdomain routing. The research community has agreed that separating the end-systems' addressing space (the identifiers) from the routing locators' space will alleviate the routing burden of the Default Free Zone. Nevertheless, such an approach, by adding a new level of indirection, implies the need to store and distribute mappings between identifiers and routing locators. In this paper we present LISP-DHT, a mapping distribution system based on Distributed Hash Tables (DHTs). LISP-DHT is designed to take full advantage of the DHT architecture in order to build an efficient and secure mapping lookup system while preserving the locality of the mappings. The paper describes the overall architecture of LISP-DHT, explaining its main points and how it works.
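The core idea of storing identifier-to-locator mappings in a DHT can be sketched with a Chord-style ring, where each mapping lives on the successor of the hashed identifier. This is a toy sketch only: the `MappingRing` class and node IDs are hypothetical, and LISP-DHT's locality-preserving assignment of node IDs is not modeled.

```python
import hashlib
from bisect import bisect_right

def ring_id(key, bits=16):
    # hash an identifier onto a 2**bits circular ID space
    return int(hashlib.sha1(key.encode()).hexdigest(), 16) % (1 << bits)

class MappingRing:
    # each EID-to-RLOC mapping lives on the successor of hash(EID)
    def __init__(self, node_ids):
        self.nodes = sorted(node_ids)
        self.store = {n: {} for n in self.nodes}

    def successor(self, key_id):
        i = bisect_right(self.nodes, key_id) % len(self.nodes)  # wrap around
        return self.nodes[i]

    def register(self, eid, rloc):
        self.store[self.successor(ring_id(eid))][eid] = rloc

    def lookup(self, eid):
        return self.store[self.successor(ring_id(eid))].get(eid)
```

Because register and lookup hash the same identifier, any node can resolve a mapping without a central server — the property the paper builds on.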

119 citations


Journal ArticleDOI
TL;DR: The psychological underpinnings of users' willingness to expend effort on personalising ICT are examined, with the aim of designing personalisation features that promote the acceptance and adoption of information and communication technology (ICT).

105 citations


Proceedings ArticleDOI
01 Dec 2008
TL;DR: It is shown that for regimes with symmetric users who share the same level of willingness to pay, the optimal revenue is concave and increasing in the number of users in the network.
Abstract: We study the problem of pricing uplink power in wide-band cognitive radio networks under the objective of revenue maximization for the service provider and while ensuring incentive compatibility for the users. User utility is modeled as a concave function of the signal-to-noise ratio (SNR) at the base station, and the problem is formulated as a Stackelberg game. Namely, the service provider imposes differentiated prices per unit of transmitting power and the users consequently update their power levels to maximize their net utilities. We devise a pricing policy and give conditions for its optimality when all the users are to be accommodated in the network. We show that there exist infinitely many Nash equilibrium points that reward the service provider with the same revenue. The pricing policy charges more from users that have better channel conditions and more willingness to pay for the provided service. We then study properties of the optimal revenue with respect to different parameters in the network. We show that for regimes with symmetric users who share the same level of willingness to pay, the optimal revenue is concave and increasing in the number of users in the network. We analytically obtain achievable SNRs for this special case, and finally present a numerical study in support of our results.
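The user side of the Stackelberg game can be illustrated with a simple concave utility: for u(p) = w·log(1 + h·p) − c·p, setting the derivative w·h/(1 + h·p) = c gives the best response p* = max(0, w/c − 1/h). A sketch under that assumed utility (w, h, and the function names are illustrative stand-ins for the paper's SNR-based model):

```python
def best_response_power(w, h, price):
    # user maximizes w*log(1 + h*p) - price*p over p >= 0;
    # the first-order condition w*h/(1 + h*p) = price gives
    # p* = w/price - 1/h, clipped at zero
    return max(0.0, w / price - 1.0 / h)

def revenue(users, prices):
    # provider revenue: sum of price_i * p_i* over users (w_i, h_i),
    # the quantity the leader optimizes in the Stackelberg game
    return sum(c * best_response_power(w, h, c)
               for (w, h), c in zip(users, prices))
```

A user with a higher willingness to pay w or better channel h transmits more at a given price, which is why the optimal policy can charge such users more.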

95 citations


Proceedings ArticleDOI
02 Jun 2008
TL;DR: This work assists operators of network intrusion detection systems in tuning the trade-off between detection accuracy and resource requirements by understanding and predicting the CPU and memory consumption of such systems.
Abstract: When installing network intrusion detection systems (NIDSs), operators are faced with a large number of parameters and analysis options for tuning trade-offs between detection accuracy versus resource requirements. In this work we set out to assist this process by understanding and predicting the CPU and memory consumption of such systems.

65 citations


Proceedings ArticleDOI
13 Apr 2008
TL;DR: It is shown that ISP-aided P2P locality benefits both P2P users and ISPs, measured in terms of improved content download times, increased network locality of query responses and desired content, and an overall reduction in P2P traffic.
Abstract: Despite recent improvements, P2P systems are still plagued by fundamental issues such as overlay/underlay topological and routing mismatch, which affects their performance and causes traffic strains on the ISPs. In this work, we aim to improve overall system performance for ISPs as well as P2P systems by means of traffic localization through improved collaboration between ISPs and P2P systems. More specifically, we study the effects of different ISP/P2P topologies as well as a broad range of influential user behavior characteristics, namely content availability, churn, and query patterns, on end-user and ISP experience. We show that ISP-aided P2P locality benefits both P2P users and ISPs, measured in terms of improved content download times, increased network locality of query responses and desired content, and overall reduction in P2P traffic.
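The locality idea can be sketched as a biased peer-ranking step: before connecting, a peer sorts candidates so that peers in its own AS come first, optionally followed by an ISP-supplied preference order. The oracle-style ranking and AS numbers below are hypothetical.

```python
def rank_peers(peers, my_asn, oracle_pref=None):
    # same-AS peers first, then peers ranked by a (hypothetical)
    # ISP-provided preference table; unknown ASes come last
    pref = oracle_pref or {}
    return sorted(peers, key=lambda p: (p["asn"] != my_asn,
                                        pref.get(p["asn"], float("inf"))))
```

Keeping traffic within the AS is what simultaneously shortens download paths for users and reduces transit costs for the ISP.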

62 citations


Proceedings ArticleDOI
13 Apr 2008
TL;DR: It is shown that the widely observed unfairness of the protocol in small network topologies does not always persist in large topologies, and in situations where the protocol is long-term fair, a characterization of its short-term fairness is provided.
Abstract: We characterize the fairness of decentralized medium access control protocols based on CSMA/CA, such as IEEE 802.11, in large multi-hop wireless networks. In particular, we show that the widely observed unfairness of the protocol in small network topologies does not always persist in large topologies. This unfairness is essentially due to the unfair advantage of nodes at the border of the network, which have a restricted neighborhood and thus a higher probability to access the communication channel. In large one-dimensional networks these border effects do not propagate inside the network, and nodes sufficiently far away from the border have equal access to the channel; as a result the protocol is long-term fair. In two-dimensional networks, we observe a phase transition. If the access intensity of the protocol is small, the border effects remain local and the protocol behaves similarly to one-dimensional networks. However, if the access intensity of the protocol is large enough, the border effects persist independently of the size of the network and the protocol is strongly unfair. Finally, in situations where the protocol is long-term fair, we provide a characterization of its short-term fairness.
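Fairness of the kind characterized here is commonly quantified with Jain's fairness index over per-node throughputs; a minimal sketch (a standard metric, not the paper's own characterization):

```python
def jain_index(throughputs):
    # (sum x)^2 / (n * sum x^2): equals 1.0 for perfectly equal shares
    # and approaches 1/n when a single node monopolizes the channel
    n = len(throughputs)
    s = sum(throughputs)
    s2 = sum(x * x for x in throughputs)
    return s * s / (n * s2)
```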

54 citations


Proceedings ArticleDOI
22 Dec 2008
TL;DR: Based on a model of a typical operator network, the power consumption of different broadband access technologies and architectures, especially DSL, FTTN + VDSL and FTTH, is compared, finding a clear advantage for FTTH with respect to energy efficiency.
Abstract: Based on a model of a typical operator network, we have compared the power consumption of different broadband access technologies and architectures, especially DSL, FTTN + VDSL and FTTH. Even though power management improves performance, FTTH retains a clear advantage with respect to energy efficiency.

53 citations


Journal ArticleDOI
TL;DR: The pool allocation mechanism of the Microsoft Windows operating system is analyzed for the first time, and a test arrangement is described that makes it possible to obtain a time series of physical memory images while reducing the effect on the observed operating system.

53 citations


Proceedings ArticleDOI
20 Oct 2008
TL;DR: A stratified model is built and an EM algorithm is specified for estimating its parameters; the estimates suggest that a very significant number of links are missing from standard route monitor measurements of the AS-graph.
Abstract: Study of the Internet's high-level structure has for some time intrigued scientists. The AS-graph (showing interconnections between Autonomous Systems) has been measured, studied, modelled and discussed in many papers over the last decade. However, the quality of the measurement data has always been in question. It is by now well known that most measurements of the AS-graph are missing some set of links. Many efforts have been undertaken to correct this, primarily by increasing the set of measurements, but the issue remains: how much is enough? When will we know that we have enough measurements to be sure we can see all (or almost all) of the links? This paper aims to address the problem of estimating how many links are missing from our measurements. We use techniques pioneered in biostatistics and epidemiology for estimating the size of populations (for instance of fish or disease carriers). It is rarely possible to observe entire populations, and so sampling techniques are used. We extend those techniques to the domain of the AS-graph. The key difference between our work and the biological literature is that not all links are the same, and so we build a stratified model and specify an EM algorithm for estimating its parameters. Our estimates suggest that a very significant number of links (many thousands) are missing from standard route monitor measurements of the AS-graph. Finally, we use the model to derive the number of monitors that would be needed to see a complete AS-graph with high probability. We estimate that 700 route monitors would see 99.9% of links.
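The textbook two-sample capture-recapture estimator that the paper's stratified EM model generalizes can be sketched in a few lines: treat two route monitors as two "capture" samples of the link population.

```python
def lincoln_petersen(seen_a, seen_b):
    # Lincoln-Petersen estimate N_hat = |A| * |B| / |A ∩ B|:
    # two monitors each observe a sample of links, and the size of
    # the overlap indicates how complete the samples are
    a, b = set(seen_a), set(seen_b)
    overlap = len(a & b)
    if overlap == 0:
        raise ValueError("no recaptures; population size is unidentifiable")
    return len(a) * len(b) / overlap
```

The paper's key refinement is that AS links are not equally observable (peering links hide from most monitors), which is exactly what the stratified model corrects for.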

53 citations


Journal ArticleDOI
TL;DR: This work argues for using deviation tests, discounting, passing on only first-hand information, introducing secondary response, and stressing the importance of identity in reputation systems.
Abstract: Self-organized networks such as mobile ad-hoc, Internet-based peer-to-peer, wireless mesh and Fourth generation (4G) wireless networks depend on cooperation of nodes. Reputation systems help nodes decide with whom to cooperate and which nodes to avoid. They have been studied and applied almost separately in diverse disciplines such as economics, computer science, and social science, resulting in effort duplication and inconsistent terminology. We aim to bring together these efforts by outlining features and fundamental questions common to reputation systems in general. We derive methodologies to address these questions for both reputation system design and research from our own experiences and evaluations by simulation and analytical modeling. We argue for using deviation tests, discounting, passing on only first-hand information, introducing secondary response, and stressing the importance of identity.
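The deviation test and discounting argued for above can be sketched in a few lines; the weight and threshold values are illustrative, not from the paper.

```python
def update_reputation(own, reported, weight=0.2, dev_threshold=0.3):
    # deviation test: ignore second-hand reports that deviate too far
    # from first-hand experience (a defense against liars);
    # discounting: accepted reports get only a small weight
    if abs(own - reported) > dev_threshold:
        return own
    return (1 - weight) * own + weight * reported
```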

Proceedings ArticleDOI
13 Jan 2008
TL;DR: In this paper, the authors investigated the use of virtual globes and found that the most common virtual globe tasks include only the "what", not the "why", of spatial distribution; they developed a multi-touch virtual globe, derived from an adapted virtual globe paradigm, to support both.
Abstract: Virtual globes have progressed from little-known technology to broadly popular software in a mere few years. We investigated this phenomenon through a survey and discovered that, while virtual globes are en vogue, their use is restricted to a small set of tasks so simple that they do not involve any spatial thinking. Spatial thinking requires that users ask "what is where" and "why"; the most common virtual globe tasks only include the "what". Based on the results of this survey, we have developed a multi-touch virtual globe derived from an adapted virtual globe paradigm designed to widen the potential uses of the technology by helping its users to inquire about both the "what is where" and "why" of spatial distribution. We do not seek to provide users with full GIS (geographic information system) functionality, but rather we aim to facilitate the asking and answering of simple "why" questions about general topics that appeal to a wide virtual globe user base.

Patent
20 May 2008
TL;DR: In this article, an ontological-content-based method for filtering and ranking the relevancy of items is presented, which finds general use in the fields of information filtering and publishing, specifically the production of electronic newspapers.
Abstract: The invention is an ontological-content-based method for filtering and ranking the relevancy of items. The filtering method of the invention utilizes a hierarchical ontology, which considers the distance, or similarity, between concepts representing each user and concepts representing each item, according to the position of the related concepts in the hierarchical ontology. Based on that, the filtering algorithm computes the similarity between the items and users and rank-orders the items according to their relevancy to each user. The method finds general use in the fields of information filtering and publishing, specifically the production of electronic newspapers, for which the invention provides methods of filtering and ranking the relevance of news content to specific readers in order to allow the production of personalized electronic newspapers.
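A common hierarchy-based similarity of the kind the patent describes is the Wu-Palmer measure, which scores two concepts by the depth of their lowest common ancestor relative to their own depths. The patent's exact measure is not specified here; this is a generic stand-in with a made-up toy ontology.

```python
def path_to_root(node, parent):
    # walk up the hierarchy until a node with no parent (the root);
    # `parent` maps each concept to its parent concept
    path = [node]
    while node in parent:
        node = parent[node]
        path.append(node)
    return path

def wu_palmer(a, b, parent):
    # similarity = 2*depth(lca) / (depth(a) + depth(b))
    pa, pb = path_to_root(a, parent), path_to_root(b, parent)
    ancestors_b = set(pb)
    lca = next(n for n in pa if n in ancestors_b)  # lowest common ancestor
    depth = lambda n: len(path_to_root(n, parent)) - 1
    da, db = depth(a), depth(b)
    return 1.0 if da + db == 0 else 2 * depth(lca) / (da + db)
```

Items whose concepts score high against a user's profile concepts would be ranked first in the personalized feed.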

Patent
14 Oct 2008
TL;DR: In this paper, the location of a driver owning a mobile communication device is determined in order to be able to check whether the driver is located in a parking zone (20) within which he is permitted to park.
Abstract: The invention relates to a method for performing a parking procedure with the help of a mobile communication device (30). According to the invention, the location of a driver owning a mobile communication device (30) is determined in order to be able to check whether the driver is located in a parking zone (20) within which he is permitted to park. The determination as to whether the driver is located within a permitted parking zone (20) is made based on GSM coordinates or a mobile communication network cell identifier, which define the current location of the driver.

Book ChapterDOI
16 Jun 2008
TL;DR: The results show that standardized questionnaires are applicable only to a limited extent and indicate the need for the development of a valid and reliable questionnaire covering the usability and quality of multimodal systems.
Abstract: Different questionnaires assessing the usability of two multimodal systems and one unimodal system were compared. Each participant (N=21) performed several tasks with each device and was afterwards asked to rate the system by filling out different questionnaires. The results show that standardized questionnaires are applicable only to a limited extent. Despite some concordance, the results differ considerably and thus indicate the need for the development of a valid and reliable questionnaire covering the usability and quality of multimodal systems.

Proceedings ArticleDOI
05 Apr 2008
TL;DR: Initial user testing showed that this form of tactile interaction was easy to understand and handy to interact with, even for inexperienced users.
Abstract: In this paper, we introduce the change of a mobile phone's hardware shape as a means of tactile interaction. The alteration of shape is implemented in a hardware prototype using a dynamic knob as an interaction device for the user. The knob alters the phone's shape according to different events and states, like incoming calls, new voice mail, or missed calls. Therefore, the user can explore the phone's status by touching it -- ambiently, even through the pocket. Initial user testing showed that this form of tactile interaction was easy to understand and handy to interact with, even for inexperienced users.

Book ChapterDOI
01 Jan 2008
TL;DR: The physical and mathematical background of several methods for multichannel sound field reproduction is presented; these methods aim at the physically correct synthesis of acoustic wave fields with a large number of loudspeakers.
Abstract: Multichannel sound field reproduction aims at the physically correct synthesis of acoustical wave fields with a large number of loudspeakers. It goes beyond stereophony by extending or eliminating the so-called sweet spot. This chapter presents the physical and mathematical background of several methods for this purpose.

Proceedings ArticleDOI
08 Dec 2008
TL;DR: The CARMEN project is presented, the vision of which is to extend operators' infrastructure by providing carrier grade services through a heterogeneous wireless mesh, thereby generating major benefits to operators and users.
Abstract: Current Internet use is evolving; users are becoming mobile and are expecting data services on the go. This fact presents big challenges and opportunities to operators, which see the increase in data services as a big market still to be exploited. However, current cellular technologies cannot accommodate the demand that will arise when the true mobile Internet evolves. Addressing these challenges, we present the CARMEN project, the vision of which is to extend operators' infrastructure by providing carrier grade services through a heterogeneous wireless mesh. The CARMEN architecture will provide enough bandwidth to cope with users' expectations at a reduced cost, thereby generating major benefits to operators and users.

Proceedings ArticleDOI
02 Jun 2008
TL;DR: This analysis highlights P2P video multicast characteristics such as high bandwidth requirements, high peer churn, low peer persistence in the P2P multicast system, significant variance in the media stream quality delivered to peers, relatively large channel start times, and flash crowd effects of popular video content.
Abstract: We evaluate the performance of a large-scale live P2P video multicast session comprising more than 120,000 peers on the Internet. Our analysis highlights P2P video multicast characteristics such as high bandwidth requirements, high peer churn, low peer persistence in the P2P multicast system, significant variance in the media stream quality delivered to peers, relatively large channel start times, and flash crowd effects of popular video content. Our analysis also indicates that peers are widely spread across the IP address space, spanning dozens of countries and hundreds of ISPs and Internet ASes. As part of the P2P multicast evaluation, we measure several QoS metrics, such as the fraction of stream blocks correctly received, the number of consecutive stream blocks lost, and the channel startup time across peers. We correlate the observed quality with the underlying network and with peer behavior, suggesting several avenues for optimization and research in P2P video multicast systems.

Patent
22 May 2008
TL;DR: A distributed system for detecting eThreats that propagate in a network is presented: a database of propagation graphs describes the typical spread over time of each eThreat class or legitimate executable class, agents distributed across hosts report newly observed suspect executables and their first-detection times to a Central Decision Maker (CDM), and the CDM classifies each executable by comparing its observed propagation graph against the stored graphs.
Abstract: The invention relates to a distributed system for detecting eThreats that propagate in a network, which comprises: (a) graphs database storing at least one propagation graph, each graph describing the typical propagation over time of one eThreat class or a legitimate executable class within the network; (b) plurality of agents that are distributed in corresponding plurality of hosts within the network, each of said agents continuously monitoring the corresponding host and reporting to a Central Decision Maker (CDM) the identity of any new suspected executable, and the time in which said suspected executable has been first detected by said agent; (c) a CDM for: (c.1) receiving all said reports from said plurality of agents; (c.2) creating from said reports for each suspected executable a corresponding propagation graph which reflects the propagation characteristics over time of said suspected executable within the network, and (c.3) comparing each of said created graphs with said stored at least one propagation graph; (c.4) upon finding a similarity above a predefined threshold between a created graph and one of the stored graphs, concluding respectively that said executable belongs to the class as defined by said stored graph; and (c.5) conveying said conclusion to said agents, for optionally taking an appropriate action.

Proceedings Article
01 Jan 2008
TL;DR: Many mobile devices, specifically mobile phones, come equipped with a microphone, which serves as an additional source of input to the developing field of mobile phone performance.
Abstract: Many mobile devices, specifically mobile phones, come equipped with a microphone. Microphones are high-fidelity sensors that can pick up sounds relating to a range of physical phenomena. Using simple feature extraction methods, parameters can be found that map sensibly to synthesis algorithms to allow expressive and interactive performance. For example, blowing noise can be used as a wind instrument excitation source. Other types of interaction, such as striking, can also be detected via microphones. Hence the microphone, in addition to allowing literal recording, serves as an additional source of input to the developing field of mobile phone performance.

Proceedings ArticleDOI
26 Oct 2008
TL;DR: Real-time CD cover recognition using a cameraphone is demonstrated; fast and reliable image matching against a database of 10,000 CD covers is accomplished using a scalable vocabulary tree.
Abstract: Automatic CD cover recognition has interesting applications for comparison shopping and music sampling. We demonstrate a real-time CD cover recognition using a cameraphone. By snapping a picture of a CD cover with her cameraphone, a user can conveniently retrieve information related to the CD. Robust image feature extraction is applied to overcome the image distortions in the query photo. To limit the amount of data transmitted over a wireless network, we compress the query image or features extracted from the query image. On the database side, fast and reliable image matching against a database of 10,000 CD covers is accomplished using a scalable vocabulary tree.
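Lookup in a scalable vocabulary tree amounts to descending from the root, choosing the nearest child centroid at each level, until a leaf (a visual word) is reached; matching then compares word histograms rather than raw features. A toy sketch of the descent (the tree layout and centroid values are made up for illustration):

```python
def quantize(vec, node):
    # descend the vocabulary tree: at each level move to the child with
    # the nearest centroid (squared Euclidean distance); the leaf
    # reached is the visual word for this feature vector
    while "children" in node:
        node = min(node["children"],
                   key=lambda c: sum((a - b) ** 2
                                     for a, b in zip(vec, c["centroid"])))
    return node["word"]
```

Because each descriptor touches only branching-factor × depth centroids, quantization stays fast even for a 10,000-image database.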

Journal ArticleDOI
TL;DR: The proposed architecture can be deployed by an IPTV provider over heterogeneous access networks (mobile, wireless, and fixed) as a part of standardized NGN solutions.
Abstract: This article presents an architecture to support IPTV services in an IMS-based NGN. The architecture extends the current IMS specification with the required functionality to meet additional requirements of IPTV services. The proposed architecture can be deployed by an IPTV provider over heterogeneous access networks (mobile, wireless, and fixed) as a part of standardized NGN solutions. After presenting an overview of the IPTV standardization activities in DVB, ITU-T, ETSI, ATIS, 3GPP, and OMA, this article focuses on the ETSI TISPAN IPTV standardization. IMS-based IPTV architectural functions and possible IPTV evolutionary steps are discussed, and then the article presents an implementation example.

Proceedings ArticleDOI
28 Oct 2008
TL;DR: The use of spatial proximity regions around mobile devices on a table is proposed to significantly reduce the effort of proposing and exploring content within a group of collocated people; in a study, the approach reduced task completion time by 43% and was rated superior to other established techniques.
Abstract: Negotiation and coordination of activities involving a number of people can be a difficult and time-consuming process, even when all participants are collocated. We propose the use of spatial proximity regions around mobile devices on a table to significantly reduce the effort of proposing and exploring content within a group of collocated people. In order to determine the location of devices on ordinary tables, we developed a tracking mechanism for a camera-projector system that uses dynamic visual markers displayed on the screen of a device. We evaluated our spatial proximity region based approach using a photo-sharing application for people seated around a table. The tabletop provides a frame of reference in which the spatial arrangement of devices signals the coordination state to the users. The results from the study indicate that the proposed approach facilitates coordination in several ways, for example, by allowing for simultaneous user activity and by reducing the effort required to achieve a common goal. Our approach reduced the task completion time by 43% and was rated as superior in comparison to other established techniques.

Journal ArticleDOI
TL;DR: A normalized log-likelihood measure, computed between perceptual features extracted from synthesized speech and a gender-dependent HMM reference model, is proposed and shown to be a reliable parameter for multidimensional TTS quality diagnosis.
Abstract: In this letter, the first steps toward the development of a signal-based instrumental quality measure for text-to-speech (TTS) systems are described. Hidden Markov models (HMM), trained on naturally-produced speech, serve as artificial text- and speaker-independent reference models against which synthesized speech signals are assessed. A normalized log-likelihood measure, computed between perceptual features extracted from synthesized speech and a gender-dependent HMM reference model, is proposed and shown to be a reliable parameter for multidimensional TTS quality diagnosis. Experiments with subjectively scored synthesized speech data show that the proposed measure attains promising estimation performance for quality dimensions labeled overall impression, listening effort, naturalness, continuity/fluency, and acceptance.
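The normalization idea — a per-frame average log-likelihood of the perceptual features under a reference model — can be sketched with a single one-dimensional Gaussian standing in for the paper's gender-dependent HMM reference:

```python
import math

def normalized_loglik(frames, mean, var):
    # average per-frame log-likelihood under a 1-D Gaussian reference;
    # the paper uses HMMs trained on natural speech, so this single
    # Gaussian only illustrates the frame-count normalization
    def ll(x):
        return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)
    return sum(ll(x) for x in frames) / len(frames)
```

Dividing by the number of frames makes the measure comparable across utterances of different lengths, which is what lets it serve as a quality score.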

Journal ArticleDOI
TL;DR: A simple method to reduce the distortions in SBS-based slow light systems, based on broadening and adapting the gain bandwidth, is shown; the resulting loss of fractional delay can be compensated by additional loss spectra.
Abstract: We show a simple method to reduce the distortions in SBS-based slow light systems. The distortion reduction is simply based on a broadening and adaptation of the gain bandwidth. However, a broadened gain reduces the achievable fractional delay, which cannot be compensated by higher pump powers. Here we show that this compensation can be done by additional loss spectra. With the presented method, low distortions at high fractional pulse delays are possible. We show the theory and experimental verification of our method. For Gaussian pulses with a fractional delay of 1 bit, we achieved a distortion reduction of around 23%.

Journal ArticleDOI
TL;DR: It is shown in theory and experiment that in an SBS-based delay line, pulses can be delayed by more than a bit period without broadening; the method could also have the potential to compensate fiber dispersion.
Abstract: We show in theory and experiment that in an SBS-based delay line, pulses can be delayed by more than a bit period without broadening. Zero broadening is possible since the broadening due to the narrow Brillouin gain bandwidth can be compensated by the group velocity dispersion that accompanies the pulse delay. We achieve compensation by superposing a broad gain with two narrow losses at its wings. In our experiments, 1.9 ns pulses were delayed by around 1.5 bit periods while their FWHM width was compressed to 80%. Therefore, besides slowing down pulses, the method could have the potential to compensate fiber dispersion.

Proceedings ArticleDOI
06 May 2008
TL;DR: Both the well-known blind and supervised adaptive filtering algorithms turn out to be special cases of a generic framework, called TRINICON, which yields various new insights and synergy effects for the development of new and improved adaptation algorithms.
Abstract: In recent years broadband signal acquisition by sensor arrays, e.g., for speech and audio signals in a hands-free scenario, has become a popular research field, aiming to separate certain desired source signals from competing or interfering source signals ((blind) source separation or interference cancellation) and to possibly dereverberate them (blind deconvolution). In various practical scenarios, some or even all interfering source signals may be directly accessible and/or some side information on the propagation path is known. In these cases we can tackle the separation problem by supervised adaptation algorithms, e.g., the popular LMS- or RLS-type algorithms, rather than the more involved blind adaptation algorithms. In contrast, for blind estimation, such as in the blind source separation (BSS) scenario where both the propagation paths and the original source signals are unknown, the method of independent component analysis (ICA) is typically applied. Traditionally, the ICA method and supervised adaptation algorithms have been treated as different research areas. In this paper, we establish a conceptually simple, yet fundamental relation between these two worlds. This is made possible using the previously introduced generic broadband adaptive filtering framework, called TRINICON. As we will demonstrate, not only do the well-known blind and supervised adaptive filtering algorithms turn out to be special cases of this generic framework, but we also gain various new insights and synergy effects for the development of new and improved adaptation algorithms.

Proceedings ArticleDOI
12 Mar 2008
TL;DR: The physical background of an ideal ANC system for noise sources outside and within the ANC system is discussed, and the versatile framework of eigenspace and wave-domain adaptive filtering is proposed as a solution to the fundamental limitations of typical adaptation algorithms.
Abstract: Multichannel acoustic active noise control (ANC) systems are increasingly being developed in order to enlarge the spatial extent of the quiet zone. This paper discusses the physical background of an ideal ANC system for noise sources outside and within the ANC system. This idealized system can be realized reasonably well with a high number of channels (massive multichannel ANC). It is shown that the typically applied adaptation algorithms have fundamental limitations in the context of massive multichannel ANC. The versatile framework of eigenspace and wave-domain adaptive filtering is proposed as a solution to these limitations. Simulation of a massive multichannel ANC system illustrates the successful application of the proposed concepts.

Book ChapterDOI
07 Sep 2008
TL;DR: Decentralized detection in the parallel configuration is reviewed; previous work has focused on separately optimizing the quantization rules at the sensors and the fusion rule at the fusion center, or on asymptotic results when the number of sensors is very large and the observations are conditionally independent and identically distributed given each hypothesis.
Abstract: Decentralized detection has been an active area of research since the late 1970s. Its earlier application area has been distributed radar systems, and more recently it has found applications in sensor networks and intrusion detection. The most popular decentralized detection network structure is the parallel configuration, where a number of sensors are directly connected to a fusion center. The sensors receive measurements related to an event and then send summaries of their observations to the fusion center. Previous work has focused on separate optimization of the quantization rules at the sensors and the fusion rule at the fusion center or on asymptotic results when the number of sensors is very large and the observations are conditionally independent and identically distributed given each hypothesis.