
Showing papers by "Alberto Sangiovanni-Vincentelli published in 2008"


Journal ArticleDOI
TL;DR: A distributed estimation algorithm for sensor networks is proposed, in which each node computes its estimate as a weighted sum of its own and its neighbors' measurements and estimates, and an upper bound of the error variance in each node is derived.
Abstract: A distributed estimation algorithm for sensor networks is proposed. A noisy time-varying signal is jointly tracked by a network of sensor nodes, in which each node computes its estimate as a weighted sum of its own and its neighbors' measurements and estimates. The weights are adaptively updated to minimize the variance of the estimation error. Both the estimation and the parameter optimization are distributed; no central coordination of the nodes is required. An upper bound on the error variance in each node is derived. This bound decreases with the number of neighboring nodes. The estimation properties of the algorithm are illustrated via computer simulations, which compare the performance of our estimator with distributed schemes previously proposed in the literature. The results of the paper allow trading off communication constraints, computing effort, and estimation quality for a class of distributed filtering problems.
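
As an illustration of the update rule described in the abstract, the sketch below shows a single node combining its own and its neighbors' measurements and estimates through a weighted sum. This is a minimal sketch assuming the weights are given; the adaptive weight update that minimizes the error variance is the paper's contribution and is not reproduced here, and all names are illustrative.

```python
import numpy as np

def node_update(own_measurement, neighbor_measurements,
                own_estimate, neighbor_estimates, weights):
    """One estimation step at a single node: a weighted sum of the node's
    own measurement and estimate and those received from its neighbors.

    weights is a dict with arrays 'meas' and 'est' (entries summing to 1);
    in the paper these weights are adapted online to minimize the
    estimation error variance, which is not shown here."""
    measurements = np.concatenate(([own_measurement], neighbor_measurements))
    estimates = np.concatenate(([own_estimate], neighbor_estimates))
    return weights["meas"] @ measurements + weights["est"] @ estimates
```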

133 citations


Proceedings ArticleDOI
10 Mar 2008
TL;DR: A generic and retargetable tool flow is presented that enables the export of timing data from software running on a cycle-accurate Virtual Prototype to a concurrent functional simulator, which runs the annotated source code much faster than the VP while preserving timing accuracy.
Abstract: A generic and retargetable tool flow is presented that enables the export of timing data from software running on a cycle-accurate Virtual Prototype (VP) to a concurrent functional simulator. First, an annotation framework takes information gathered from running an application on the VP and automatically annotates the line-level delays back to the original source code. Then, a SystemC-based timed functional simulator runs the annotated source code much faster than the VP while preserving timing accuracy. This simulator is API-compatible with the multiprocessor's operating system. Therefore, it can compile and run unmodified applications on the host PC. This flow has been implemented for MuSIC (Multiple SIMD Cores) [6], a heterogeneous multiprocessor developed at Infineon to support Software Defined Radio (SDR). When compared with an optimized cycle-accurate VP of MuSIC on a variety of tests, including a multiprocessor JPEG encoder, the accuracy is within 20%, with speedups from 10x to 1000x.
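
A minimal sketch of the back-annotation idea, assuming per-line delays have already been measured on the VP: each source line is followed by a call that advances simulated time in the functional model. The function below and the CONSUME_CYCLES macro are hypothetical placeholders, not the actual tool API.

```python
def annotate_source(source_lines, line_delays):
    """Insert timing annotations after each measured source line.

    source_lines: list of original C source lines.
    line_delays: dict mapping line number -> delay (in cycles) observed
                 on the cycle-accurate virtual prototype."""
    annotated = []
    for lineno, line in enumerate(source_lines, start=1):
        annotated.append(line)
        delay = line_delays.get(lineno, 0)
        if delay:
            # Placeholder for the timed-simulation primitive that advances
            # SystemC time in the concurrent functional simulator.
            annotated.append(f"CONSUME_CYCLES({delay});")
    return annotated
```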

63 citations


Proceedings ArticleDOI
13 Mar 2008
TL;DR: A healthcare monitoring platform is created upon which applications for posture recognition as well as physical therapy are implemented, and it is shown how body sensor network resources can be dynamically allocated to support heterogeneous application requirements without hardware reconfiguration.
Abstract: To aid development of Body Sensor Network (BSN) applications, we define a framework that manages common tasks for healthcare monitoring applications. The framework allows for dynamic configuration and control of signal processing and sensing functions on the sensor nodes. Using this framework, we create a healthcare monitoring platform upon which we implement applications for posture recognition as well as physical therapy. We show how body sensor network resources can be dynamically allocated to support heterogeneous application requirements without hardware reconfiguration.

53 citations


Journal ArticleDOI
TL;DR: An algebra of tag structures is introduced to define heterogeneous parallel composition formally and Morphisms between tag structures are used to define relationships between heterogeneous models at different levels of abstraction.
Abstract: We present a compositional theory of heterogeneous reactive systems. The approach is based on the concept of tags marking the events of the signals of a system. Tags can be used for multiple purposes from indexing evolution in time (time stamping) to expressing relations among signals, like coordination (e.g., synchrony and asynchrony) and causal dependencies. The theory provides flexibility in system modeling because it can be used both as a unifying mathematical framework to relate heterogeneous models of computations and as a formal vehicle to implement complex systems by combining heterogeneous components. In particular, we introduce an algebra of tag structures to define heterogeneous parallel composition formally. Morphisms between tag structures are used to define relationships between heterogeneous models at different levels of abstraction. In particular, they can be used to represent design transformations from tightly synchronized specifications to loosely-synchronized implementations. The theory has an important application in the correct-by-construction deployment of synchronous design on distributed architectures.
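
As a rough sketch of the underlying setting (notation borrowed from the tagged-signal literature and simplified, so it may not match the paper's exact symbols): an event pairs a tag with a value, and heterogeneous composition relates behaviors through morphisms into a common tag structure.

```latex
% Illustrative notation only. An event pairs a tag with a value; a signal
% is a set of events over a tag structure T and a value set V:
e \in \mathcal{T} \times V, \qquad s \subseteq \mathcal{T} \times V .
% Heterogeneous composition of processes over tag structures T_1 and T_2
% uses morphisms \rho_i : \mathcal{T}_i \to \mathcal{T} into a common tag
% structure, keeping the pairs of behaviors that agree once mapped there:
P_1 \parallel_{\mathcal{T}} P_2 \;=\; \{\, (b_1, b_2) \mid \rho_1(b_1) = \rho_2(b_2) \,\}.
```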

43 citations


Journal ArticleDOI
TL;DR: This paper presents a finite-state machine (FSM) reengineering method that enhances the FSM synthesis by reconstructing a functionally equivalent but topologically different FSM based on the optimization objective, and it maintains the quality of the synthesis solutions.
Abstract: This paper presents a finite-state machine (FSM) reengineering method that enhances the FSM synthesis by reconstructing a functionally equivalent but topologically different FSM based on the optimization objective. This method enables the FSM synthesis algorithms to explore a set of functionally equivalent FSMs and obtain better solutions than those in the original FSM. To demonstrate the effectiveness of the proposed method, we apply it to popular power- and area-driven FSM synthesis algorithms, respectively. Our method achieves an average of 5.5% power reduction and 2.7% area reduction, respectively, on 25 Microelectronics Center of North Carolina (MCNC) FSM benchmarks, where the proposed method is applicable. This is a significant performance improvement for the power- and area-driven FSM synthesis algorithms being used. Our method has a negligible run-time overhead, and it maintains the quality of the synthesis solutions.

42 citations


Journal ArticleDOI
TL;DR: This work presents a design flow that enables the efficient exploration of redundancy/cost tradeoffs in fault-tolerant data flow, a novel model of computation that simplifies the integration of formal validation techniques, and reports on the application of the design flow to two case studies from the automotive industry.
Abstract: Safety-critical feedback-control applications may suffer faults in the controlled plant as well as in the execution platform, i.e., the controller. Control theorists design the control laws to be robust with respect to the former kind of faults while assuming an idealized scenario for the latter. The execution platforms supporting modern real-time embedded systems, however, are distributed architectures made of heterogeneous components that may incur transient or permanent faults. Making the platform fault tolerant involves the introduction of design redundancy with obvious impact on the final cost. We present a design flow that enables the efficient exploration of redundancy/cost tradeoffs. After providing a system-level specification of the target platform and the fault model, designers can rely on the synthesis of the low-level fault-tolerance mechanisms. This is performed automatically as part of the embedded software deployment through the combination of the following three steps: replication, mapping, and scheduling. Our approach has a sound foundation in fault-tolerant data flow, a novel model of computation that simplifies the integration of formal validation techniques. Finally, we report on the application of our design flow to two case studies from the automotive industry: a steer-by-wire system from General Motors and a drive-by-wire system from BMW.

40 citations


Journal ArticleDOI
TL;DR: The problem of reachability analysis of hybrid automata to decide safety properties is discussed, and the algorithm used in Ariadne to compute over-approximations of reachable sets is described.

37 citations


Proceedings ArticleDOI
11 Jun 2008
TL;DR: The methodology is applied to the synthesis of wireless networks for an essential step in any control algorithm in a distributed environment: the estimation of control variables such as temperature and air-flow in buildings.
Abstract: We present a methodology and a software framework for the automatic design exploration of the communication network among sensors, actuators and controllers in building automation systems. Given 1) a set of end-to-end latency, throughput and packet error rate constraints between nodes, 2) the building geometry, and 3) a library of communication components together with their performance and cost characterization, a synthesis algorithm produces a network implementation that satisfies all end-to-end constraints and that is optimal with respect to installation and maintenance cost. The methodology is applied to the synthesis of wireless networks for an essential step in any control algorithm in a distributed environment: the estimation of control variables such as temperature and air-flow in buildings.

35 citations


Proceedings ArticleDOI
16 Jun 2008
TL;DR: Experimental results show that Breath meets the latency and reliability requirements, and that it exhibits a good distribution of the working load, thus ensuring a long lifetime of the network.
Abstract: The novel cross-layer protocol Breath for wireless sensor networks is designed, implemented, and experimentally evaluated. The Breath protocol is based on randomized routing, MAC and duty-cycling, which allow it to minimize the energy consumption of the network while ensuring a desired packet delivery end-to-end reliability and delay. The system model includes a set of source nodes that transmit packets via multi-hop communication to the destination. A constrained optimization problem, for which the objective function is the network energy consumption and the constraints are the packet latency and reliability, is posed and solved. It is shown that the communication layers can be jointly optimized for energy efficiency. The optimal working point of the network is achieved with a simple algorithm, which adapts to traffic variations with negligible overhead. The protocol was implemented on a test-bed with off-the-shelf wireless sensor nodes. It is compared with a standard IEEE 802.15.4 solution. Experimental results show that Breath meets the latency and reliability requirements, and that it exhibits a good distribution of the working load, thus ensuring a long lifetime of the network.
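
In sketch form, the cross-layer problem described above has the following shape (the symbols are illustrative, not the paper's notation):

```latex
% Minimize total network energy over the protocol parameters x
% (routing, MAC and duty-cycle settings), subject to end-to-end
% reliability and delay requirements:
\min_{x} \; E_{\mathrm{tot}}(x)
\quad \text{s.t.} \quad
R_{\mathrm{e2e}}(x) \ge R_{\min}, \qquad D_{\mathrm{e2e}}(x) \le D_{\max}.
```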

34 citations


Proceedings ArticleDOI
10 Mar 2008
TL;DR: To what degree the existing AUTOSAR standard can support the development of safety- and time-critical software and what is required to move toward the desirable goal of timing isolation when integrating multiple applications into the same execution platform are discussed.
Abstract: System-level integration requires an overall understanding of the interplay of the sub-systems to enable component-based development with portability, reconfigurability and extensibility, together with guaranteed reliability and performance levels. Integration by simple interfaces and plug-and-play of sub-systems, which is the main objective of AUTOSAR, requires solving essential technical problems. We discuss to what degree the existing AUTOSAR standard can support the development of safety- and time-critical software and what is required to move toward the desirable goal of timing isolation when integrating multiple applications into the same execution platform.

30 citations


Journal Article
TL;DR: In this article, an approach for schedulability analysis based solely on Petri net structure is proposed, which shows that unschedulability can be caused by a structural relation among transitions modelling non-deterministic choices.
Abstract: A schedule of a Petri Net (PN) represents a set of firing sequences that can be infinitely repeated within a bounded state space, regardless of the outcomes of the nondeterministic choices. Schedulability analysis for a given PN answers the question whether a schedule exists in the reachability space of this net. This paper suggests a novel approach for schedulability analysis based solely on PN structure. It shows that unschedulability can be caused by a structural relation among transitions modelling nondeterministic choices. A method based on linear programming for checking this relation is proposed. This paper also presents a necessary condition for schedulability based on the rank of the incidence matrix of the underlying PN. These results shed light on the sources of unschedulability often found in PN models of embedded multimedia systems.
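
The structural arguments rest on the standard Petri net state equation; in the (standard, though possibly not the paper's exact) notation below, C is the incidence matrix and \sigma the firing count vector of a firing sequence.

```latex
% Reachability via the state equation:
M' = M + C\,\sigma, \qquad \sigma \in \mathbb{N}^{|T|}.
% A firing sequence that returns to its starting marking has a count
% vector that is a T-invariant, i.e. a nonnegative, nonzero solution of
C\,\sigma = 0,
% whose existence is constrained by the rank of C -- the flavor of the
% rank-based necessary condition and of the linear-programming check
% mentioned in the abstract.
```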

Proceedings ArticleDOI
10 Mar 2008
TL;DR: A reliability analysis that checks if the given short-term reliability of a program variable update in an implementation is sufficient to meet the logical reliability requirement in the long run and a notion of design by refinement where a task can be refined by another task that writes to program variables with less logical reliability.
Abstract: We propose the notion of logical reliability for real-time program tasks that interact through periodically updated program variables. We describe a reliability analysis that checks if the given short-term (e.g., single-period) reliability of a program variable update in an implementation is sufficient to meet the logical reliability requirement (of the program variable) in the long run. We then present a notion of design by refinement where a task can be refined by another task that writes to program variables with less logical reliability. The resulting analysis can be combined with an incremental schedulability analysis for interacting real-time tasks proposed earlier for the Hierarchical Timing Language (HTL), a coordination language for distributed real-time systems. We implemented a logical-reliability-enhanced prototype of the compiler and runtime infrastructure for HTL.

Proceedings ArticleDOI
08 Dec 2008
TL;DR: An optimization problem where the objective function is the total energy consumption in transmit, receive, listen and sleep states, subject to constraints of delay and reliability of the packet delivery and the decision variables are the sleep and wake time of the receivers is solved.
Abstract: We present a novel approach for minimizing the energy consumption of medium access control (MAC) protocols developed for duty-cycled wireless sensor networks (WSN) for the unslotted IEEE 802.15.4 standard while guaranteeing delay and reliability constraints. The main challenge in this optimization is the random access associated with the existing IEEE 802.15.4 hardware and MAC specification, which prevents controlling the exact transmission time of the packets. Data traffic, network topology, MAC, and the key parameters of duty cycles (sleep and wake time) determine the amount of random access, which in turn determines delay, reliability and energy consumption. We formulate and solve an optimization problem whose objective function is the total energy consumption in the transmit, receive, listen and sleep states, whose constraints are the delay and reliability of the packet delivery, and whose decision variables are the sleep and wake times of the receivers. The optimal solution can be easily implemented on existing IEEE 802.15.4 hardware platforms by storing lightweight look-up tables in the receiver nodes. Numerical results show that the protocol significantly outperforms existing solutions.
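
In sketch form, the optimization minimizes the power-weighted time the radio spends in each state; the symbols below are placeholders, not the paper's notation.

```latex
% Total energy as a sum over radio states, minimized over the receivers'
% sleep and wake times, subject to delay and reliability requirements:
E_{\mathrm{tot}} = P_{\mathrm{tx}} t_{\mathrm{tx}} + P_{\mathrm{rx}} t_{\mathrm{rx}}
                 + P_{\mathrm{listen}} t_{\mathrm{listen}} + P_{\mathrm{sleep}} t_{\mathrm{sleep}},
\qquad
\min_{t_{\mathrm{sleep}},\, t_{\mathrm{wake}}} \; E_{\mathrm{tot}}
\quad \text{s.t.} \quad D \le D_{\max}, \;\; R \ge R_{\min}.
```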

Journal ArticleDOI
TL;DR: This article presents a software framework for communication infrastructure synthesis of distributed systems, which is critical for overall system performance in communication-based design.
Abstract: This article presents a software framework for communication infrastructure synthesis of distributed systems, which is critical for overall system performance in communication-based design. Particular emphasis is given to on-chip interconnect synthesis of multicore designs.

Journal ArticleDOI
TL;DR: The principles of the design of embedded electronic systems from the perspective of the entire system, not restricting this perspective to the electrical domain, can help bring system-level design to a new level of efficiency.
Abstract: This article describes the principles of the design of embedded electronic systems from the perspective of the entire system. By not restricting this perspective to the electrical domain, a more disciplined methodology can help bring system-level design to a new level of efficiency.

Journal ArticleDOI
TL;DR: The paper addresses the problem of designing a component that combined with a known part of a system, called the context FSM, is a reduction of a given specification FSM by providing two different algorithms to compute a largest regular compositionally progressive solution.
Abstract: The paper addresses the problem of designing a component that, when combined with a known part of a system, called the context FSM, is a reduction of a given specification FSM. We study compositionally progressive solutions of synchronous FSM equations. Such solutions, when combined with the context, do not block any input that may occur in the specification, so they are of practical use. We show that, if a synchronous FSM equation has a compositionally progressive solution, then there is a largest regular compositionally progressive solution including all of them. We provide two different algorithms to compute a largest regular compositionally progressive solution: one deletes all compositionally non-progressive strings from a largest solution, the other splits states of a largest solution and then removes those inducing a non-progressive composition.

Posted Content
TL;DR: In this article, a distributed adaptive algorithm to estimate a time-varying signal, measured by a wireless sensor network, is designed and analyzed, where each node of the network locally computes adaptive weights that minimize the estimation error variance.
Abstract: A distributed adaptive algorithm to estimate a time-varying signal, measured by a wireless sensor network, is designed and analyzed. One of the major features of the algorithm is that no central coordination among the nodes needs to be assumed. The measurements taken by the nodes of the network are affected by noise, and the communication among the nodes is subject to packet losses. Nodes exchange local estimates and measurements with neighboring nodes. Each node of the network locally computes adaptive weights that minimize the estimation error variance. Decentralized conditions on the weights, needed for the convergence of the estimation error throughout the overall network, are presented. A Lipschitz optimization problem is posed to guarantee stability and the minimization of the variance. An efficient strategy to distribute the computation of the optimal solution is investigated. A theoretical performance analysis of the distributed algorithm is carried out both in the presence of perfect and lossy links. Numerical simulations illustrate performance for various network topologies and packet loss probabilities.

Proceedings ArticleDOI
10 Mar 2008
TL;DR: Methods and tools for the evaluation of the function performance and its timing correctness by simulation or by worst case static analysis are reviewed.
Abstract: Automotive systems are increasingly distributed and complex. Reduced time-to-market, cost and safety concerns require advance validation of the integrated system and its components, from the functional, timing, and reliability standpoints. In particular, function correctness and performance may depend on communication and computation delays imposed by the selected architecture platform. Hence the need for methods and tools capable of predicting the system-level timing behaviour (latencies and jitter) resulting from the HW platform selection, the synchronization between tasks and messages, and the synchronization and queuing policies of the middleware and RTOS levels. In this paper, we review methods and tools for the evaluation of the function performance and its timing correctness by simulation or by worst case static analysis.

Proceedings ArticleDOI
15 Apr 2008
TL;DR: This paper analyzes the interference effects and proposes a statistical approach to estimate the interference distortion products accurately, and demonstrates how to adjust the two-tone technique to provide accurate distortion estimations for MB-OFDM UWB systems.
Abstract: The inter-modulation and cross-modulation products of interferences introduced by nonlinearities of the receiver could significantly degrade system performance, and should be properly estimated when determining system design specifications. In MB-OFDM UWB systems, the traditional two-tone technique is still widely used to estimate nonlinear effects. However, this technique is not accurate enough. In this paper, we analyze the interference effects and propose a statistical approach to estimate the interference distortion products accurately. The analytical expressions for various interference scenarios are derived and then validated by simulation. Based on this analysis, we demonstrate how to adjust the two-tone technique to provide accurate distortion estimations for MB-OFDM UWB systems.
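
For context, this is the classical two-tone relation that the statistical approach refines (a standard textbook result, not taken from the paper):

```latex
% A memoryless nonlinearity driven by two interferer tones,
y(t) = a_1 x(t) + a_2 x^2(t) + a_3 x^3(t),
\qquad x(t) = A\cos(2\pi f_1 t) + B\cos(2\pi f_2 t),
% produces third-order intermodulation products at 2f_1 - f_2 and
% 2f_2 - f_1 with amplitudes
\tfrac{3}{4}\, a_3 A^2 B \quad \text{and} \quad \tfrac{3}{4}\, a_3 A B^2 .
```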

01 Jan 2008
TL;DR: An approach for automatically annotating timing information obtained from a cycle-level model back to the original application source code is developed, and the annotated source code can then be simulated without the underlying architecture and still maintain good timing accuracy.
Abstract: The combination of increasing design complexity, increasing concurrency, growing heterogeneity, and shrinking time-to-market windows has caused a crisis for embedded system developers. To deal with this problem, dedicated hardware is being replaced by a growing number of microprocessors in these systems, making software a dominant factor in design time and cost. The use of higher-level models for design space exploration and early software development is critical. Much progress has been made on increasing the speed of cycle-level simulators for microprocessors, but they may still be too slow for large-scale systems and are too low-level (i.e., they require a detailed implementation) for effective design space exploration. Furthermore, constructing such optimized simulators is a significant task because the particularities of the hardware must be accounted for. For this reason, these simulators offer little flexibility. This thesis focuses on modeling the performance of software executing on embedded processors in the context of a heterogeneous multi-processor system on chip in a more flexible and scalable manner than current approaches. We contend that such systems need to be modeled at a higher level of abstraction and, to ensure accuracy, the higher level must have a connection to lower levels. First, we describe different levels of abstraction for modeling such systems and how their speed and accuracy relate. Next, the high-level modeling of both individual processing elements and a bus-based microprocessor system is presented. Finally, an approach for automatically annotating timing information obtained from a cycle-level model back to the original application source code is developed. The annotated source code can then be simulated without the underlying architecture and still maintain good timing accuracy. These methods are driven by execution traces produced by lower-level models and were developed for ARM microprocessors and MuSIC, a heterogeneous multiprocessor for Software Defined Radio from Infineon. The annotated source code executed between one and three orders of magnitude faster than equivalent cycle-level models, with good accuracy for most applications tested.

01 Jan 2008
TL;DR: Stochastic analysis frameworks that calculate the probability distributions of response times for software tasks and messages, and end-to-end latencies in a Controller Area Network based system for the performance evaluation of automotive distributed architectures are presented.
Abstract: Probabilistic Timing Analysis of Distributed Real-time Automotive Systems, by Haibo Zeng. Doctor of Philosophy in Engineering-Electrical Engineering and Computer Sciences, University of California, Berkeley; Professor Alberto L. Sangiovanni-Vincentelli, Chair. Distributed architectures supporting the execution of real-time applications are common in automotive systems. Many applications, including most of those developed for active safety and chassis systems, do not impose hard real-time deadlines. Nevertheless, they are sensitive to the latencies of the end-to-end computations from sensors to actuators. We believe a characterization of the timing metrics that not only provides the worst-case bound but also assigns a probability to each possible latency value is very desirable to estimate the quality of an architecture configuration. In this dissertation, we present stochastic analysis frameworks that calculate the probability distributions of response times for software tasks and messages, and of end-to-end latencies in a Controller Area Network based system, for the performance evaluation of automotive distributed architectures. Also, a regression technique is used to quickly characterize the message response time probability distribution, which is suitable when only part of the message set is known, as in the early design stage. The applicability of the analysis frameworks is validated by either simulation or trace data extracted

Proceedings ArticleDOI
26 May 2008
TL;DR: This work presents an optimized receiver front-end design obtained by a systematic design space exploration technique based on the platform-based design (PBD) methodology and shows how to map the system-level performance requirements to circuit-level platforms through an optimization process.
Abstract: The design of an MB-OFDM ultra-wideband receiver is challenging when we target power consumption minimization while providing enough robustness against the nearby wireless interference. We present an optimized receiver front-end design obtained by a systematic design space exploration technique based on the platform-based design (PBD) methodology. At the system level, we investigate the interference effects and propose an approach to estimate the inter-modulation products introduced by receiver nonlinearities. We show how we map the system-level performance requirements to circuit-level platforms through an optimization process. We obtain an RF front-end consuming 10.8 mW in a 0.13 μm CMOS technology, which achieves 22.3% power savings compared with a manually optimized design.

Proceedings ArticleDOI
10 Mar 2008
TL;DR: The design of innovative chip architectures, new upcoming standards for high-bandwidth and deterministic communication (FlexRay) and sensors are the domains of interest, with emphasis on reliability and support for advanced active safety functions.
Abstract: This section will provide insight into new developments and advances in automotive electronics architectures. The design of innovative chip architectures, new upcoming standards for high-bandwidth and deterministic communication (FlexRay) and sensors are the domains of interest, with emphasis on reliability and support for advanced active safety functions.


01 Jan 2008
TL;DR: This dissertation demonstrates the feasibility of an MILP-based optimization approach that provides the minimum memory implementation of a set of communication channels within the deadline constraints of the tasks.
Abstract: A fundamental asset of a model-based development process is the capability of providing an automatic implementation of the model that preserves its semantics and, at the same time, makes efficient use of the execution platform resources. Synchronous Reactive (SR) models are increasingly used in model-based design flows for the development of embedded control applications. The implementation of communication links between functional blocks in an SR model requires buffering schemes and access procedures implemented at the kernel level. Platform-based design methodology is introduced to synthesize a real-time operating system when implementing SR models. Previous research has proposed two methods for sizing the communication buffer. This dissertation demonstrates how it is possible to improve on the state of the art, providing not only tighter bounds by leveraging task timing information, but also an approach that is capable of dealing with a more general model and implementation platform configuration. To achieve rigorous model semantics, this dissertation presents semantics-preserving implementations of SR communication for multi-rate systems on single-processor architectures. The implemented protocols define the assignment of indices of shared buffers to writer and reader tasks at activation time, rather than at execution time. Two constant-time portable solutions are developed in the C language and with the automotive OSEK OS standard. Run-time complexity and memory requirements are discussed for the two protocol implementations, and tradeoffs are analyzed. This dissertation completes the SR model-based design flow by supporting automatic code generation for the double buffer and the dynamic buffering protocols. To support software portability and reusability, the ePICos18, compliant with the OSEK OS standard, is used. The generated code is validated by emulation on the PIC18F452 microcontroller through the MPLAB IDE simulator. An implementation of communication links with a minimum buffer size is often desirable, but it may require a longer access time and it may also lead to the violation of deadline constraints in real-time applications. This dissertation demonstrates the feasibility of an MILP-based optimization approach that provides the minimum memory implementation of a set of communication channels within the deadline constraints of the tasks.
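
The key point of the buffering protocols described above, namely binding the reader's buffer index at task activation rather than at execution time, can be illustrated with a minimal double-buffer sketch. Class and method names are hypothetical; this is not the dissertation's actual implementation.

```python
class DoubleBuffer:
    """Minimal sketch of a double-buffered SR communication link.

    The writer fills the non-published slot and then flips the published
    index; a reader latches the published index when it is activated, so a
    write completing later cannot change the value that reader job observes
    (the property needed to preserve SR semantics)."""

    def __init__(self, init_value):
        self.slots = [init_value, init_value]
        self.current = 0                  # slot holding the published value

    def write(self, value):
        back = 1 - self.current           # fill the non-published slot
        self.slots[back] = value
        self.current = back               # publish by flipping the index

    def bind_reader(self):
        # Called at reader-task activation time: latch the slot index.
        return self.current

    def read(self, bound_index):
        # Called during reader execution, using the latched index.
        return self.slots[bound_index]
```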

Journal ArticleDOI
TL;DR: This roundtable examines issues and attempts to provide a definite picture of where ESL design is today and where it might be in the next five to 10 years.
Abstract: This is the first of two roundtables on electronic system-level design in this issue of IEEE Design & Test. ESL design and tools have been present in the design landscape for many years. Significant ESL innovations are now part of most advanced design methodologies, spanning the domains of modeling, simulation, and synthesis. Techniques such as transaction-level modeling, automatic interconnection generation, behavioral synthesis, automatic instruction-set customization, retargetable compilers, and many others are currently used in the design of multimillion-gate chips. Yet, ESL design still seems to struggle to live up to the promise of providing increased productivity and design quality. This roundtable examines these issues and attempts to provide a definite picture of where ESL design is today and where it might be in the next five to 10 years. The participants in this roundtable include well-known experts in ESL design from the user side, universities, and tool providers. IEEE Design & Test thanks the roundtable participants: moderator Reinaldo Bergamaschi (CadComponents), Luca Benini (University of Bologna), Krisztian Flautner (ARM UK), Wido Kruijtzer (NXP Semiconductors), Alberto Sangiovanni-Vincentelli (University of California, Berkeley), and Kazutoshi Wakabayashi (NEC Japan). D&T gratefully acknowledges the help of Roundtables Editor Bill Joyner (Semiconductor Research Corp.), who organized the event.


01 Jan 2008
TL;DR: The COmmunication Synthesis Infrastructure (COSI), a public-domain design framework for the design exploration and synthesis of interconnection networks, is presented, with a focus on a design flow for on-chip interconnect design.
Abstract: Alessandro Pinto, Luca P. Carloni and Alberto L. Sangiovanni-Vincentelli. The COmmunication Synthesis Infrastructure (COSI), a public-domain design framework for the design exploration and synthesis of interconnection networks, is presented. The framework embodies a methodology based on the platform-based design principles and is used to define specific design flows for a variety of applications. In this paper, we focus on a design flow for on-chip interconnect design.

01 Jan 2008
TL;DR: A generic and retargetable tool flow is presented that enables the export of timing data from software running on a cycle-accurate Virtual Prototype (VP) to a concurrent functional simulator while preserving timing accuracy.
Abstract: A generic and retargetable tool flow is presented that enables the export of timing data from software running on a cycle-accurate Virtual Prototype (VP) to a concurrent functional simulator. First, an annotation framework takes information gathered from running an application on the VP and automatically annotates the line-level delays back to the original source code. Then, a SystemC-based timed functional simulator runs the annotated source code much faster than the VP while preserving timing accuracy. This simulator is API-compatible with the multiprocessor's operating system. Therefore, it can compile and run unmodified applications on the host PC. This flow has been implemented for MuSIC (Multiple SIMD Cores) [6], a heterogeneous multiprocessor developed at Infineon to support Software Defined Radio (SDR). When compared with an optimized cycle-accurate VP of MuSIC on a variety of tests, including a multiprocessor JPEG encoder, the accuracy is within 20%, with speedups from 10x to 1000x.

Proceedings ArticleDOI
08 Dec 2008
TL;DR: Two solutions of the maximization problem with the simplified outage probability constraint are proposed: one solves the problem using mixed integer-real programming and the other relaxes the constraints that rates be integers yielding a standard convex programming optimization that can be solved much faster.
Abstract: The problem of maximizing the sum of the transmit rates while limiting the outage probability below an appropriate threshold is investigated for networks where the nodes have limited processing capabilities. We focus on CDMA wireless networks whose rates are characterized under mixed Rayleigh-lognormal fading. The outage probability is given implicitly by a complex function, so that solving the optimization problem requires substantial computation. In this paper, we propose a novel explicit approximation of this function that allows solving the problem in an affordable manner. We propose two solutions of the maximization problem with the simplified outage probability constraint: one solves the problem using mixed integer-real programming; the other relaxes the constraint that rates be integers, yielding a standard convex programming optimization that can be solved much faster. Numerical results show that our approaches perform well for average values of the outage requirements.
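
In sketch form, the rate-allocation problem described above reads as follows (illustrative notation):

```latex
% Maximize the sum of transmit rates subject to an outage constraint;
% each r_i is an integer in the exact formulation and is relaxed to a
% nonnegative real in the faster convex variant mentioned in the abstract:
\max_{r_1, \dots, r_n} \; \sum_{i} r_i
\quad \text{s.t.} \quad
P_{\mathrm{out}}(r_1, \dots, r_n) \le \varepsilon .
```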